Oct 8 20:02:04.071887 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024
Oct 8 20:02:04.071914 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 20:02:04.071924 kernel: BIOS-provided physical RAM map:
Oct 8 20:02:04.071933 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 8 20:02:04.071938 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Oct 8 20:02:04.071944 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Oct 8 20:02:04.071955 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Oct 8 20:02:04.071964 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Oct 8 20:02:04.071971 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Oct 8 20:02:04.071980 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Oct 8 20:02:04.071986 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Oct 8 20:02:04.071993 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Oct 8 20:02:04.072002 kernel: printk: bootconsole [earlyser0] enabled
Oct 8 20:02:04.072008 kernel: NX (Execute Disable) protection: active
Oct 8 20:02:04.072020 kernel: APIC: Static calls initialized
Oct 8 20:02:04.072028 kernel: efi: EFI v2.7 by Microsoft
Oct 8 20:02:04.072036 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Oct 8 20:02:04.072046 kernel: SMBIOS 3.1.0 present.
Oct 8 20:02:04.072053 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Oct 8 20:02:04.072060 kernel: Hypervisor detected: Microsoft Hyper-V
Oct 8 20:02:04.072071 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Oct 8 20:02:04.072078 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Oct 8 20:02:04.072086 kernel: Hyper-V: Nested features: 0x1e0101
Oct 8 20:02:04.072095 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Oct 8 20:02:04.072104 kernel: Hyper-V: Using hypercall for remote TLB flush
Oct 8 20:02:04.072114 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Oct 8 20:02:04.072122 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Oct 8 20:02:04.072132 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Oct 8 20:02:04.072140 kernel: tsc: Detected 2593.905 MHz processor
Oct 8 20:02:04.072151 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 8 20:02:04.072159 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 8 20:02:04.072168 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Oct 8 20:02:04.072177 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 8 20:02:04.072188 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 8 20:02:04.072197 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Oct 8 20:02:04.072204 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Oct 8 20:02:04.072213 kernel: Using GB pages for direct mapping
Oct 8 20:02:04.072221 kernel: Secure boot disabled
Oct 8 20:02:04.072228 kernel: ACPI: Early table checksum verification disabled
Oct 8 20:02:04.072239 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Oct 8 20:02:04.072250 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 8 20:02:04.072263 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 8 20:02:04.072270 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Oct 8 20:02:04.072279 kernel: ACPI: FACS 0x000000003FFFE000 000040
Oct 8 20:02:04.072289 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 8 20:02:04.072299 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 8 20:02:04.072307 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 8 20:02:04.072321 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 8 20:02:04.072330 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 8 20:02:04.072341 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 8 20:02:04.072350 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 8 20:02:04.072360 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Oct 8 20:02:04.072370 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Oct 8 20:02:04.072380 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Oct 8 20:02:04.072388 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Oct 8 20:02:04.072400 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Oct 8 20:02:04.072409 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Oct 8 20:02:04.072419 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Oct 8 20:02:04.072429 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Oct 8 20:02:04.072438 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Oct 8 20:02:04.072447 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Oct 8 20:02:04.072454 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Oct 8 20:02:04.072465 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Oct 8 20:02:04.072472 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Oct 8 20:02:04.072484 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Oct 8 20:02:04.072494 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Oct 8 20:02:04.072503 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Oct 8 20:02:04.072513 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Oct 8 20:02:04.072523 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Oct 8 20:02:04.072532 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Oct 8 20:02:04.072544 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Oct 8 20:02:04.072556 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Oct 8 20:02:04.072568 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Oct 8 20:02:04.072584 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Oct 8 20:02:04.072596 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Oct 8 20:02:04.072605 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Oct 8 20:02:04.072616 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Oct 8 20:02:04.072629 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Oct 8 20:02:04.072643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Oct 8 20:02:04.072655 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Oct 8 20:02:04.072669 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Oct 8 20:02:04.072683 kernel: Zone ranges:
Oct 8 20:02:04.072700 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 8 20:02:04.072713 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Oct 8 20:02:04.072728 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Oct 8 20:02:04.072742 kernel: Movable zone start for each node
Oct 8 20:02:04.072757 kernel: Early memory node ranges
Oct 8 20:02:04.072771 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Oct 8 20:02:04.072786 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Oct 8 20:02:04.072818 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Oct 8 20:02:04.072833 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Oct 8 20:02:04.072851 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Oct 8 20:02:04.072865 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 8 20:02:04.072880 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Oct 8 20:02:04.072895 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Oct 8 20:02:04.072909 kernel: ACPI: PM-Timer IO Port: 0x408
Oct 8 20:02:04.072924 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Oct 8 20:02:04.072939 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Oct 8 20:02:04.072954 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 8 20:02:04.072968 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 8 20:02:04.072986 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Oct 8 20:02:04.072999 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Oct 8 20:02:04.073011 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Oct 8 20:02:04.073023 kernel: Booting paravirtualized kernel on Hyper-V
Oct 8 20:02:04.073038 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 8 20:02:04.073050 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 8 20:02:04.073061 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Oct 8 20:02:04.073073 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Oct 8 20:02:04.073086 kernel: pcpu-alloc: [0] 0 1
Oct 8 20:02:04.073100 kernel: Hyper-V: PV spinlocks enabled
Oct 8 20:02:04.073111 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 8 20:02:04.073124 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 20:02:04.073137 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 20:02:04.073149 kernel: random: crng init done
Oct 8 20:02:04.073160 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 8 20:02:04.073173 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 20:02:04.073186 kernel: Fallback order for Node 0: 0
Oct 8 20:02:04.073203 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Oct 8 20:02:04.073227 kernel: Policy zone: Normal
Oct 8 20:02:04.073244 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 20:02:04.073258 kernel: software IO TLB: area num 2.
Oct 8 20:02:04.073273 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 310124K reserved, 0K cma-reserved)
Oct 8 20:02:04.073287 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 8 20:02:04.073301 kernel: ftrace: allocating 37784 entries in 148 pages
Oct 8 20:02:04.073315 kernel: ftrace: allocated 148 pages with 3 groups
Oct 8 20:02:04.073330 kernel: Dynamic Preempt: voluntary
Oct 8 20:02:04.073343 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 20:02:04.073358 kernel: rcu: RCU event tracing is enabled.
Oct 8 20:02:04.073375 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 8 20:02:04.073389 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 20:02:04.073403 kernel: Rude variant of Tasks RCU enabled.
Oct 8 20:02:04.073416 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 20:02:04.073428 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 20:02:04.073445 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 8 20:02:04.073459 kernel: Using NULL legacy PIC
Oct 8 20:02:04.073473 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Oct 8 20:02:04.073488 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 20:02:04.073502 kernel: Console: colour dummy device 80x25
Oct 8 20:02:04.073516 kernel: printk: console [tty1] enabled
Oct 8 20:02:04.073530 kernel: printk: console [ttyS0] enabled
Oct 8 20:02:04.073544 kernel: printk: bootconsole [earlyser0] disabled
Oct 8 20:02:04.073560 kernel: ACPI: Core revision 20230628
Oct 8 20:02:04.073575 kernel: Failed to register legacy timer interrupt
Oct 8 20:02:04.073591 kernel: APIC: Switch to symmetric I/O mode setup
Oct 8 20:02:04.073605 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Oct 8 20:02:04.073617 kernel: Hyper-V: Using IPI hypercalls
Oct 8 20:02:04.073630 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Oct 8 20:02:04.073642 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Oct 8 20:02:04.073655 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Oct 8 20:02:04.073670 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Oct 8 20:02:04.073683 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Oct 8 20:02:04.073694 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Oct 8 20:02:04.073711 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Oct 8 20:02:04.073725 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Oct 8 20:02:04.073739 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Oct 8 20:02:04.073753 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 8 20:02:04.073766 kernel: Spectre V2 : Mitigation: Retpolines
Oct 8 20:02:04.073779 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 8 20:02:04.073808 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 8 20:02:04.073824 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Oct 8 20:02:04.073839 kernel: RETBleed: Vulnerable
Oct 8 20:02:04.073858 kernel: Speculative Store Bypass: Vulnerable
Oct 8 20:02:04.073874 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 8 20:02:04.073892 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 8 20:02:04.073905 kernel: GDS: Unknown: Dependent on hypervisor status
Oct 8 20:02:04.073919 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 8 20:02:04.073932 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 8 20:02:04.073951 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 8 20:02:04.073968 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Oct 8 20:02:04.073981 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Oct 8 20:02:04.073994 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Oct 8 20:02:04.074008 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 8 20:02:04.074026 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Oct 8 20:02:04.074040 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Oct 8 20:02:04.074055 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Oct 8 20:02:04.074069 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Oct 8 20:02:04.074084 kernel: Freeing SMP alternatives memory: 32K
Oct 8 20:02:04.074097 kernel: pid_max: default: 32768 minimum: 301
Oct 8 20:02:04.074112 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 8 20:02:04.074127 kernel: landlock: Up and running.
Oct 8 20:02:04.074142 kernel: SELinux: Initializing.
Oct 8 20:02:04.074157 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 8 20:02:04.074171 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 8 20:02:04.074185 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Oct 8 20:02:04.074202 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 20:02:04.074216 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 20:02:04.074230 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 20:02:04.074245 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Oct 8 20:02:04.074260 kernel: signal: max sigframe size: 3632
Oct 8 20:02:04.074274 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 20:02:04.074290 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 20:02:04.074305 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 8 20:02:04.074319 kernel: smp: Bringing up secondary CPUs ...
Oct 8 20:02:04.074335 kernel: smpboot: x86: Booting SMP configuration:
Oct 8 20:02:04.074348 kernel: .... node #0, CPUs: #1
Oct 8 20:02:04.074363 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Oct 8 20:02:04.074376 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Oct 8 20:02:04.074391 kernel: smp: Brought up 1 node, 2 CPUs
Oct 8 20:02:04.074404 kernel: smpboot: Max logical packages: 1
Oct 8 20:02:04.074418 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Oct 8 20:02:04.074432 kernel: devtmpfs: initialized
Oct 8 20:02:04.074450 kernel: x86/mm: Memory block size: 128MB
Oct 8 20:02:04.074465 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Oct 8 20:02:04.074479 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 20:02:04.074492 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 8 20:02:04.074508 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 20:02:04.074522 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 20:02:04.074537 kernel: audit: initializing netlink subsys (disabled)
Oct 8 20:02:04.074552 kernel: audit: type=2000 audit(1728417722.027:1): state=initialized audit_enabled=0 res=1
Oct 8 20:02:04.074566 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 20:02:04.074585 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 8 20:02:04.074601 kernel: cpuidle: using governor menu
Oct 8 20:02:04.074615 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 20:02:04.074630 kernel: dca service started, version 1.12.1
Oct 8 20:02:04.074646 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Oct 8 20:02:04.074661 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 8 20:02:04.074675 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 20:02:04.074691 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 20:02:04.074705 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 20:02:04.074724 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 20:02:04.074739 kernel: ACPI: Added _OSI(Module Device)
Oct 8 20:02:04.074754 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 20:02:04.074769 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 20:02:04.074783 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 20:02:04.074820 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 20:02:04.074835 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 8 20:02:04.074850 kernel: ACPI: Interpreter enabled
Oct 8 20:02:04.074865 kernel: ACPI: PM: (supports S0 S5)
Oct 8 20:02:04.074883 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 8 20:02:04.074899 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 8 20:02:04.074914 kernel: PCI: Ignoring E820 reservations for host bridge windows
Oct 8 20:02:04.074928 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Oct 8 20:02:04.074943 kernel: iommu: Default domain type: Translated
Oct 8 20:02:04.074958 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 8 20:02:04.074973 kernel: efivars: Registered efivars operations
Oct 8 20:02:04.074988 kernel: PCI: Using ACPI for IRQ routing
Oct 8 20:02:04.075002 kernel: PCI: System does not support PCI
Oct 8 20:02:04.075020 kernel: vgaarb: loaded
Oct 8 20:02:04.075035 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Oct 8 20:02:04.075050 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 20:02:04.075065 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 20:02:04.075081 kernel: pnp: PnP ACPI init
Oct 8 20:02:04.075095 kernel: pnp: PnP ACPI: found 3 devices
Oct 8 20:02:04.075110 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 8 20:02:04.075125 kernel: NET: Registered PF_INET protocol family
Oct 8 20:02:04.075140 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 8 20:02:04.075158 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 8 20:02:04.075173 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 20:02:04.075188 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 20:02:04.075203 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Oct 8 20:02:04.075217 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 8 20:02:04.075233 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 8 20:02:04.075248 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 8 20:02:04.075262 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 20:02:04.075277 kernel: NET: Registered PF_XDP protocol family
Oct 8 20:02:04.075295 kernel: PCI: CLS 0 bytes, default 64
Oct 8 20:02:04.075310 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 8 20:02:04.075326 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Oct 8 20:02:04.075341 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct 8 20:02:04.075355 kernel: Initialise system trusted keyrings
Oct 8 20:02:04.075370 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Oct 8 20:02:04.075385 kernel: Key type asymmetric registered
Oct 8 20:02:04.075399 kernel: Asymmetric key parser 'x509' registered
Oct 8 20:02:04.075414 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 8 20:02:04.075432 kernel: io scheduler mq-deadline registered
Oct 8 20:02:04.075447 kernel: io scheduler kyber registered
Oct 8 20:02:04.075462 kernel: io scheduler bfq registered
Oct 8 20:02:04.075476 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 8 20:02:04.075491 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 20:02:04.075507 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 8 20:02:04.075521 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Oct 8 20:02:04.075536 kernel: i8042: PNP: No PS/2 controller found.
Oct 8 20:02:04.075722 kernel: rtc_cmos 00:02: registered as rtc0
Oct 8 20:02:04.076911 kernel: rtc_cmos 00:02: setting system clock to 2024-10-08T20:02:03 UTC (1728417723)
Oct 8 20:02:04.077036 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Oct 8 20:02:04.077055 kernel: intel_pstate: CPU model not supported
Oct 8 20:02:04.077070 kernel: efifb: probing for efifb
Oct 8 20:02:04.077084 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Oct 8 20:02:04.077098 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Oct 8 20:02:04.077112 kernel: efifb: scrolling: redraw
Oct 8 20:02:04.077127 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 8 20:02:04.077145 kernel: Console: switching to colour frame buffer device 128x48
Oct 8 20:02:04.077159 kernel: fb0: EFI VGA frame buffer device
Oct 8 20:02:04.077174 kernel: pstore: Using crash dump compression: deflate
Oct 8 20:02:04.077188 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 8 20:02:04.077203 kernel: NET: Registered PF_INET6 protocol family
Oct 8 20:02:04.077217 kernel: Segment Routing with IPv6
Oct 8 20:02:04.077231 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 20:02:04.077245 kernel: NET: Registered PF_PACKET protocol family
Oct 8 20:02:04.077259 kernel: Key type dns_resolver registered
Oct 8 20:02:04.077276 kernel: IPI shorthand broadcast: enabled
Oct 8 20:02:04.077290 kernel: sched_clock: Marking stable (844044600, 46552300)->(1108376400, -217779500)
Oct 8 20:02:04.077304 kernel: registered taskstats version 1
Oct 8 20:02:04.077319 kernel: Loading compiled-in X.509 certificates
Oct 8 20:02:04.077333 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f'
Oct 8 20:02:04.077347 kernel: Key type .fscrypt registered
Oct 8 20:02:04.077361 kernel: Key type fscrypt-provisioning registered
Oct 8 20:02:04.077376 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 20:02:04.077393 kernel: ima: Allocated hash algorithm: sha1
Oct 8 20:02:04.077407 kernel: ima: No architecture policies found
Oct 8 20:02:04.077422 kernel: clk: Disabling unused clocks
Oct 8 20:02:04.077436 kernel: Freeing unused kernel image (initmem) memory: 42828K
Oct 8 20:02:04.077451 kernel: Write protecting the kernel read-only data: 36864k
Oct 8 20:02:04.077465 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K
Oct 8 20:02:04.077480 kernel: Run /init as init process
Oct 8 20:02:04.077494 kernel: with arguments:
Oct 8 20:02:04.077512 kernel: /init
Oct 8 20:02:04.077527 kernel: with environment:
Oct 8 20:02:04.077541 kernel: HOME=/
Oct 8 20:02:04.077555 kernel: TERM=linux
Oct 8 20:02:04.077568 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 20:02:04.077585 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 20:02:04.077604 systemd[1]: Detected virtualization microsoft.
Oct 8 20:02:04.077618 systemd[1]: Detected architecture x86-64.
Oct 8 20:02:04.077634 systemd[1]: Running in initrd.
Oct 8 20:02:04.077653 systemd[1]: No hostname configured, using default hostname.
Oct 8 20:02:04.077669 systemd[1]: Hostname set to .
Oct 8 20:02:04.077685 systemd[1]: Initializing machine ID from random generator.
Oct 8 20:02:04.077700 systemd[1]: Queued start job for default target initrd.target.
Oct 8 20:02:04.077715 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 20:02:04.077731 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 20:02:04.077747 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 20:02:04.077762 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 20:02:04.077780 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 20:02:04.078818 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 20:02:04.078842 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 20:02:04.078852 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 20:02:04.078864 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 20:02:04.078872 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:02:04.078884 systemd[1]: Reached target paths.target - Path Units.
Oct 8 20:02:04.078898 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 20:02:04.078909 systemd[1]: Reached target swap.target - Swaps.
Oct 8 20:02:04.078918 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 20:02:04.078929 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 20:02:04.078938 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 20:02:04.078950 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 20:02:04.078959 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 20:02:04.078970 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 20:02:04.078980 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 20:02:04.078993 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 20:02:04.079003 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 20:02:04.079011 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 20:02:04.079023 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 20:02:04.079032 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 20:02:04.079040 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 20:02:04.079049 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 20:02:04.079059 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 20:02:04.079096 systemd-journald[176]: Collecting audit messages is disabled.
Oct 8 20:02:04.079119 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:02:04.079130 systemd-journald[176]: Journal started
Oct 8 20:02:04.079155 systemd-journald[176]: Runtime Journal (/run/log/journal/2078fdb7b59d4d43971a1d4a90045f5d) is 8.0M, max 158.8M, 150.8M free.
Oct 8 20:02:04.097752 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 20:02:04.098456 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 20:02:04.101871 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 20:02:04.103948 systemd-modules-load[177]: Inserted module 'overlay'
Oct 8 20:02:04.110601 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 20:02:04.115454 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:02:04.131009 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 20:02:04.141232 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 20:02:04.159396 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 20:02:04.167590 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 20:02:04.170925 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 20:02:04.185815 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 20:02:04.186501 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:02:04.193999 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 20:02:04.200325 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:02:04.208596 kernel: Bridge firewalling registered
Oct 8 20:02:04.208588 systemd-modules-load[177]: Inserted module 'br_netfilter'
Oct 8 20:02:04.213030 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 20:02:04.215809 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 20:02:04.223137 dracut-cmdline[206]: dracut-dracut-053
Oct 8 20:02:04.228835 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 20:02:04.244944 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 20:02:04.258417 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:02:04.272973 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 20:02:04.314670 systemd-resolved[237]: Positive Trust Anchors:
Oct 8 20:02:04.314692 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 20:02:04.314734 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 20:02:04.339513 systemd-resolved[237]: Defaulting to hostname 'linux'.
Oct 8 20:02:04.342865 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 20:02:04.348819 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 20:02:04.362815 kernel: SCSI subsystem initialized
Oct 8 20:02:04.373814 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 20:02:04.384819 kernel: iscsi: registered transport (tcp)
Oct 8 20:02:04.404822 kernel: iscsi: registered transport (qla4xxx)
Oct 8 20:02:04.404877 kernel: QLogic iSCSI HBA Driver
Oct 8 20:02:04.441619 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 20:02:04.453921 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 20:02:04.485136 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 20:02:04.485215 kernel: device-mapper: uevent: version 1.0.3
Oct 8 20:02:04.489814 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 20:02:04.527840 kernel: raid6: avx512x4 gen() 18496 MB/s
Oct 8 20:02:04.546810 kernel: raid6: avx512x2 gen() 18385 MB/s
Oct 8 20:02:04.565803 kernel: raid6: avx512x1 gen() 18549 MB/s
Oct 8 20:02:04.584809 kernel: raid6: avx2x4 gen() 18449 MB/s
Oct 8 20:02:04.603808 kernel: raid6: avx2x2 gen() 18440 MB/s
Oct 8 20:02:04.623663 kernel: raid6: avx2x1 gen() 14063 MB/s
Oct 8 20:02:04.623700 kernel: raid6: using algorithm avx512x1 gen() 18549 MB/s
Oct 8 20:02:04.644806 kernel: raid6: .... xor() 26970 MB/s, rmw enabled
Oct 8 20:02:04.644833 kernel: raid6: using avx512x2 recovery algorithm
Oct 8 20:02:04.667825 kernel: xor: automatically using best checksumming function avx
Oct 8 20:02:04.812822 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 20:02:04.822533 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 20:02:04.832978 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:02:04.846180 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Oct 8 20:02:04.850593 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:02:04.866165 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 8 20:02:04.877351 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Oct 8 20:02:04.903166 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 20:02:04.915958 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 20:02:04.958893 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:02:04.974127 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 8 20:02:04.996258 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 8 20:02:05.010746 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:02:05.017899 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:02:05.025835 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 20:02:05.038310 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 8 20:02:05.060824 kernel: cryptd: max_cpu_qlen set to 1000 Oct 8 20:02:05.072837 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:02:05.097583 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 20:02:05.101713 kernel: hv_vmbus: Vmbus version:5.2 Oct 8 20:02:05.097987 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:02:05.108839 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:02:05.111732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:02:05.112004 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:02:05.115142 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:02:05.147714 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 8 20:02:05.157589 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 8 20:02:05.157625 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 8 20:02:05.157650 kernel: PTP clock support registered Oct 8 20:02:05.173200 kernel: hv_vmbus: registering driver hyperv_keyboard Oct 8 20:02:05.173238 kernel: hv_utils: Registering HyperV Utility Driver Oct 8 20:02:05.175921 kernel: AVX2 version of gcm_enc/dec engaged. Oct 8 20:02:05.175954 kernel: hv_vmbus: registering driver hv_utils Oct 8 20:02:05.175969 kernel: hv_utils: Heartbeat IC version 3.0 Oct 8 20:02:05.183182 kernel: hv_utils: Shutdown IC version 3.2 Oct 8 20:02:05.915434 kernel: hv_utils: TimeSync IC version 4.0 Oct 8 20:02:05.915472 kernel: AES CTR mode by8 optimization enabled Oct 8 20:02:05.914139 systemd-resolved[237]: Clock change detected. Flushing caches. Oct 8 20:02:05.931169 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 8 20:02:05.931195 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Oct 8 20:02:05.931214 kernel: hv_vmbus: registering driver hv_netvsc Oct 8 20:02:05.921697 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:02:05.940934 kernel: hv_vmbus: registering driver hv_storvsc Oct 8 20:02:05.944933 kernel: hv_vmbus: registering driver hid_hyperv Oct 8 20:02:05.949277 kernel: scsi host0: storvsc_host_t Oct 8 20:02:05.949585 kernel: scsi host1: storvsc_host_t Oct 8 20:02:05.950982 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Oct 8 20:02:05.951024 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Oct 8 20:02:05.951189 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Oct 8 20:02:05.949975 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Oct 8 20:02:05.991393 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Oct 8 20:02:06.012612 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Oct 8 20:02:06.012864 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 8 20:02:06.018575 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Oct 8 20:02:06.018832 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Oct 8 20:02:06.025659 kernel: sd 1:0:0:0: [sda] Write Protect is off Oct 8 20:02:06.025891 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Oct 8 20:02:06.026088 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Oct 8 20:02:06.026938 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Oct 8 20:02:06.028507 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:02:06.034282 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:02:06.039221 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Oct 8 20:02:06.146751 kernel: hv_netvsc 000d3ad8-07a2-000d-3ad8-07a2000d3ad8 eth0: VF slot 1 added Oct 8 20:02:06.154951 kernel: hv_vmbus: registering driver hv_pci Oct 8 20:02:06.160420 kernel: hv_pci caab7c06-a659-4141-ac8f-04fb112b79ca: PCI VMBus probing: Using version 0x10004 Oct 8 20:02:06.160629 kernel: hv_pci caab7c06-a659-4141-ac8f-04fb112b79ca: PCI host bridge to bus a659:00 Oct 8 20:02:06.166002 kernel: pci_bus a659:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Oct 8 20:02:06.169147 kernel: pci_bus a659:00: No busn resource found for root bus, will use [bus 00-ff] Oct 8 20:02:06.174083 kernel: pci a659:00:02.0: [15b3:1016] type 00 class 0x020000 Oct 8 20:02:06.177978 kernel: pci a659:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Oct 8 20:02:06.182229 kernel: pci a659:00:02.0: enabling Extended Tags Oct 8 20:02:06.193306 kernel: pci a659:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a659:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 
link) Oct 8 20:02:06.200206 kernel: pci_bus a659:00: busn_res: [bus 00-ff] end is updated to 00 Oct 8 20:02:06.200515 kernel: pci a659:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Oct 8 20:02:06.364380 kernel: mlx5_core a659:00:02.0: enabling device (0000 -> 0002) Oct 8 20:02:06.368943 kernel: mlx5_core a659:00:02.0: firmware version: 14.30.1284 Oct 8 20:02:06.590467 kernel: hv_netvsc 000d3ad8-07a2-000d-3ad8-07a2000d3ad8 eth0: VF registering: eth1 Oct 8 20:02:06.590822 kernel: mlx5_core a659:00:02.0 eth1: joined to eth0 Oct 8 20:02:06.594770 kernel: mlx5_core a659:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Oct 8 20:02:06.604950 kernel: mlx5_core a659:00:02.0 enP42585s1: renamed from eth1 Oct 8 20:02:06.661315 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Oct 8 20:02:06.687934 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (441) Oct 8 20:02:06.703003 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Oct 8 20:02:06.765910 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Oct 8 20:02:06.788960 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (450) Oct 8 20:02:06.802646 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Oct 8 20:02:06.806068 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Oct 8 20:02:06.826132 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 8 20:02:06.838974 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:02:06.844928 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:02:07.851698 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:02:07.853344 disk-uuid[597]: The operation has completed successfully. 
Oct 8 20:02:07.921468 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 8 20:02:07.921585 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 8 20:02:07.952061 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 8 20:02:07.960040 sh[683]: Success Oct 8 20:02:07.996513 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Oct 8 20:02:08.241208 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 8 20:02:08.258960 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 8 20:02:08.264246 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 8 20:02:08.280882 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec Oct 8 20:02:08.280958 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:02:08.284315 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 8 20:02:08.287108 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 8 20:02:08.289575 kernel: BTRFS info (device dm-0): using free space tree Oct 8 20:02:08.688180 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 8 20:02:08.693490 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 8 20:02:08.709084 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 8 20:02:08.715174 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Oct 8 20:02:08.728278 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:02:08.733946 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:02:08.733998 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:02:08.757936 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:02:08.772713 kernel: BTRFS info (device sda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:02:08.772311 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 8 20:02:08.783285 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 8 20:02:08.794130 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 8 20:02:08.816693 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:02:08.828143 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 20:02:08.847693 systemd-networkd[867]: lo: Link UP Oct 8 20:02:08.847703 systemd-networkd[867]: lo: Gained carrier Oct 8 20:02:08.853071 systemd-networkd[867]: Enumeration completed Oct 8 20:02:08.854359 systemd-networkd[867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:02:08.854364 systemd-networkd[867]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:02:08.855447 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 20:02:08.868919 systemd[1]: Reached target network.target - Network. 
Oct 8 20:02:08.924940 kernel: mlx5_core a659:00:02.0 enP42585s1: Link up Oct 8 20:02:08.957742 kernel: hv_netvsc 000d3ad8-07a2-000d-3ad8-07a2000d3ad8 eth0: Data path switched to VF: enP42585s1 Oct 8 20:02:08.957175 systemd-networkd[867]: enP42585s1: Link UP Oct 8 20:02:08.957324 systemd-networkd[867]: eth0: Link UP Oct 8 20:02:08.957577 systemd-networkd[867]: eth0: Gained carrier Oct 8 20:02:08.957593 systemd-networkd[867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:02:08.961191 systemd-networkd[867]: enP42585s1: Gained carrier Oct 8 20:02:08.982948 systemd-networkd[867]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16 Oct 8 20:02:10.240234 ignition[830]: Ignition 2.19.0 Oct 8 20:02:10.240246 ignition[830]: Stage: fetch-offline Oct 8 20:02:10.240288 ignition[830]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:02:10.240299 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 8 20:02:10.240414 ignition[830]: parsed url from cmdline: "" Oct 8 20:02:10.240419 ignition[830]: no config URL provided Oct 8 20:02:10.240425 ignition[830]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 20:02:10.240436 ignition[830]: no config at "/usr/lib/ignition/user.ign" Oct 8 20:02:10.240444 ignition[830]: failed to fetch config: resource requires networking Oct 8 20:02:10.240733 ignition[830]: Ignition finished successfully Oct 8 20:02:10.261053 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:02:10.270161 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Oct 8 20:02:10.288245 ignition[876]: Ignition 2.19.0 Oct 8 20:02:10.288257 ignition[876]: Stage: fetch Oct 8 20:02:10.288491 ignition[876]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:02:10.288504 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 8 20:02:10.288608 ignition[876]: parsed url from cmdline: "" Oct 8 20:02:10.288611 ignition[876]: no config URL provided Oct 8 20:02:10.288616 ignition[876]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 20:02:10.288623 ignition[876]: no config at "/usr/lib/ignition/user.ign" Oct 8 20:02:10.288643 ignition[876]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Oct 8 20:02:10.380547 ignition[876]: GET result: OK Oct 8 20:02:10.380665 ignition[876]: config has been read from IMDS userdata Oct 8 20:02:10.380705 ignition[876]: parsing config with SHA512: 99bc1de9f7cf200134af589a05bfb821e1cb3a1650f157c8f1aed3253e4fbdba14277bf883ba8706d4a1ab97eede90dffd7494b500b4e6df95627612e538358e Oct 8 20:02:10.388310 unknown[876]: fetched base config from "system" Oct 8 20:02:10.388325 unknown[876]: fetched base config from "system" Oct 8 20:02:10.389224 ignition[876]: fetch: fetch complete Oct 8 20:02:10.388333 unknown[876]: fetched user config from "azure" Oct 8 20:02:10.389232 ignition[876]: fetch: fetch passed Oct 8 20:02:10.393395 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 8 20:02:10.389289 ignition[876]: Ignition finished successfully Oct 8 20:02:10.409240 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 8 20:02:10.425508 ignition[882]: Ignition 2.19.0 Oct 8 20:02:10.425529 ignition[882]: Stage: kargs Oct 8 20:02:10.428580 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Oct 8 20:02:10.425765 ignition[882]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:02:10.425777 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 8 20:02:10.426602 ignition[882]: kargs: kargs passed Oct 8 20:02:10.426653 ignition[882]: Ignition finished successfully Oct 8 20:02:10.449109 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 8 20:02:10.465760 ignition[888]: Ignition 2.19.0 Oct 8 20:02:10.465770 ignition[888]: Stage: disks Oct 8 20:02:10.468345 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 8 20:02:10.465995 ignition[888]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:02:10.466008 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 8 20:02:10.466853 ignition[888]: disks: disks passed Oct 8 20:02:10.466896 ignition[888]: Ignition finished successfully Oct 8 20:02:10.479437 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 8 20:02:10.485891 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 8 20:02:10.493793 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 20:02:10.498456 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 20:02:10.503559 systemd[1]: Reached target basic.target - Basic System. Oct 8 20:02:10.507911 systemd-networkd[867]: enP42585s1: Gained IPv6LL Oct 8 20:02:10.516078 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 8 20:02:10.584403 systemd-fsck[896]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Oct 8 20:02:10.590604 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 8 20:02:10.605092 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 8 20:02:10.694932 kernel: EXT4-fs (sda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none. 
Oct 8 20:02:10.695308 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 8 20:02:10.698152 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 8 20:02:10.749074 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 20:02:10.754085 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 8 20:02:10.763936 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (907) Oct 8 20:02:10.764181 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Oct 8 20:02:10.787089 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 8 20:02:10.787876 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:02:10.806006 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:02:10.806050 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:02:10.806070 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:02:10.807001 systemd-networkd[867]: eth0: Gained IPv6LL Oct 8 20:02:10.811195 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 8 20:02:10.821088 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:02:10.821817 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 20:02:10.830082 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Oct 8 20:02:11.765608 coreos-metadata[909]: Oct 08 20:02:11.765 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Oct 8 20:02:11.772425 coreos-metadata[909]: Oct 08 20:02:11.772 INFO Fetch successful Oct 8 20:02:11.774933 coreos-metadata[909]: Oct 08 20:02:11.772 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Oct 8 20:02:11.791635 coreos-metadata[909]: Oct 08 20:02:11.791 INFO Fetch successful Oct 8 20:02:11.816809 coreos-metadata[909]: Oct 08 20:02:11.816 INFO wrote hostname ci-4081.1.0-a-b9ef23c535 to /sysroot/etc/hostname Oct 8 20:02:11.820979 initrd-setup-root[935]: cut: /sysroot/etc/passwd: No such file or directory Oct 8 20:02:11.824266 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 8 20:02:11.831221 initrd-setup-root[943]: cut: /sysroot/etc/group: No such file or directory Oct 8 20:02:11.835751 initrd-setup-root[950]: cut: /sysroot/etc/shadow: No such file or directory Oct 8 20:02:11.840512 initrd-setup-root[957]: cut: /sysroot/etc/gshadow: No such file or directory Oct 8 20:02:13.903186 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 8 20:02:13.915053 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 8 20:02:13.922089 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 8 20:02:13.928739 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 8 20:02:13.934928 kernel: BTRFS info (device sda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:02:13.965046 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Oct 8 20:02:13.972948 ignition[1025]: INFO : Ignition 2.19.0 Oct 8 20:02:13.972948 ignition[1025]: INFO : Stage: mount Oct 8 20:02:13.972948 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:02:13.972948 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 8 20:02:13.987139 ignition[1025]: INFO : mount: mount passed Oct 8 20:02:13.987139 ignition[1025]: INFO : Ignition finished successfully Oct 8 20:02:13.976486 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 8 20:02:13.997083 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 8 20:02:14.005691 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 20:02:14.024313 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1039) Oct 8 20:02:14.024362 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:02:14.027489 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:02:14.029910 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:02:14.034949 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:02:14.036644 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 8 20:02:14.060062 ignition[1055]: INFO : Ignition 2.19.0 Oct 8 20:02:14.060062 ignition[1055]: INFO : Stage: files Oct 8 20:02:14.079959 ignition[1055]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:02:14.079959 ignition[1055]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 8 20:02:14.079959 ignition[1055]: DEBUG : files: compiled without relabeling support, skipping Oct 8 20:02:14.118817 ignition[1055]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 8 20:02:14.118817 ignition[1055]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 8 20:02:14.216219 ignition[1055]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 8 20:02:14.219928 ignition[1055]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 8 20:02:14.223686 unknown[1055]: wrote ssh authorized keys file for user: core Oct 8 20:02:14.226438 ignition[1055]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 8 20:02:14.226438 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 20:02:14.226438 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 8 20:02:14.293049 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 8 20:02:14.377311 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 8 
20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Oct 8 20:02:14.883234 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 8 20:02:15.282705 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 8 20:02:15.282705 ignition[1055]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 8 20:02:15.307098 ignition[1055]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:02:15.312640 ignition[1055]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:02:15.312640 ignition[1055]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 8 20:02:15.320809 ignition[1055]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 8 20:02:15.324515 ignition[1055]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 8 20:02:15.328173 ignition[1055]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:02:15.332624 ignition[1055]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:02:15.336883 ignition[1055]: INFO : files: files passed Oct 8 20:02:15.338795 ignition[1055]: INFO : Ignition finished successfully Oct 8 20:02:15.340257 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 8 20:02:15.351139 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 8 20:02:15.357288 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Oct 8 20:02:15.365664 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 8 20:02:15.365785 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 8 20:02:15.380587 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:02:15.380587 initrd-setup-root-after-ignition[1085]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:02:15.388850 initrd-setup-root-after-ignition[1089]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:02:15.389679 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:02:15.396560 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 8 20:02:15.410101 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 8 20:02:15.433330 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 8 20:02:15.433447 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 8 20:02:15.442446 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 8 20:02:15.445206 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 8 20:02:15.452944 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 8 20:02:15.462066 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 8 20:02:15.475138 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:02:15.489059 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 8 20:02:15.501667 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:02:15.501869 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Oct 8 20:02:15.502381 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 20:02:15.502775 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 20:02:15.502879 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 20:02:15.504090 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 20:02:15.504515 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 20:02:15.504945 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 20:02:15.505443 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 20:02:15.505875 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 20:02:15.506316 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 20:02:15.506720 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 20:02:15.507154 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 20:02:15.507557 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 20:02:15.507977 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 20:02:15.508367 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 20:02:15.508503 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 20:02:15.509227 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:02:15.509662 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 20:02:15.510037 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 20:02:15.549249 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 20:02:15.552774 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 20:02:15.558035 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 20:02:15.576214 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 20:02:15.581786 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 20:02:15.599997 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 20:02:15.605702 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 20:02:15.620226 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 8 20:02:15.620359 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 8 20:02:15.650135 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 20:02:15.652865 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 20:02:15.653052 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 20:02:15.671748 ignition[1109]: INFO : Ignition 2.19.0
Oct 8 20:02:15.671748 ignition[1109]: INFO : Stage: umount
Oct 8 20:02:15.691258 ignition[1109]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 20:02:15.691258 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 8 20:02:15.691258 ignition[1109]: INFO : umount: umount passed
Oct 8 20:02:15.691258 ignition[1109]: INFO : Ignition finished successfully
Oct 8 20:02:15.673405 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 20:02:15.676084 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 20:02:15.676286 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 20:02:15.679690 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 20:02:15.679832 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 20:02:15.684760 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 20:02:15.684845 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 20:02:15.699656 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 20:02:15.702212 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 20:02:15.706956 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 20:02:15.706998 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 20:02:15.707263 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 8 20:02:15.707298 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 8 20:02:15.707691 systemd[1]: Stopped target network.target - Network.
Oct 8 20:02:15.708510 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 20:02:15.708546 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 20:02:15.708962 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 20:02:15.709538 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 20:02:15.721656 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 20:02:15.766023 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 20:02:15.768349 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 20:02:15.770965 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 20:02:15.771024 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 20:02:15.773603 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 20:02:15.773645 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 20:02:15.788903 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 20:02:15.788984 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 20:02:15.793780 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 20:02:15.793838 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 20:02:15.804277 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 20:02:15.809493 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 20:02:15.815746 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 20:02:15.818756 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 20:02:15.821164 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 20:02:15.824985 systemd-networkd[867]: eth0: DHCPv6 lease lost
Oct 8 20:02:15.829730 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 20:02:15.829856 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 20:02:15.852590 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 20:02:15.852738 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 20:02:15.864185 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 20:02:15.864244 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 20:02:15.876085 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 20:02:15.878451 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 20:02:15.878507 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 20:02:15.881834 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 20:02:15.881898 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:02:15.884734 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 20:02:15.884777 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 20:02:15.889955 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 20:02:15.890010 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 20:02:15.893636 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:02:15.920333 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 20:02:15.920481 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:02:15.924324 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 20:02:15.924402 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 20:02:15.929770 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 20:02:15.929812 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 20:02:15.932614 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 20:02:15.932672 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 20:02:15.963709 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 20:02:15.967410 kernel: hv_netvsc 000d3ad8-07a2-000d-3ad8-07a2000d3ad8 eth0: Data path switched from VF: enP42585s1
Oct 8 20:02:15.963787 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 20:02:15.970293 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 20:02:15.970349 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:02:15.986157 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 20:02:15.989187 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 20:02:15.989257 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:02:15.992850 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 20:02:15.992904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:02:16.005656 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 20:02:16.005772 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 8 20:02:16.011825 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 8 20:02:16.011911 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 8 20:02:16.445961 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 8 20:02:16.446125 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 8 20:02:16.449284 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 8 20:02:16.453994 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 8 20:02:16.454062 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 8 20:02:16.468114 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 8 20:02:16.479778 systemd[1]: Switching root. Oct 8 20:02:16.581512 systemd-journald[176]: Journal stopped Oct 8 20:02:04.071887 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024 Oct 8 20:02:04.071914 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:02:04.071924 kernel: BIOS-provided physical RAM map: Oct 8 20:02:04.071933 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 8 20:02:04.071938 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Oct 8 20:02:04.071944 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Oct 8 20:02:04.071955 kernel: BIOS-e820: [mem 
0x000000003ff41000-0x000000003ff70fff] type 20 Oct 8 20:02:04.071964 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Oct 8 20:02:04.071971 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Oct 8 20:02:04.071980 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Oct 8 20:02:04.071986 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Oct 8 20:02:04.071993 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Oct 8 20:02:04.072002 kernel: printk: bootconsole [earlyser0] enabled Oct 8 20:02:04.072008 kernel: NX (Execute Disable) protection: active Oct 8 20:02:04.072020 kernel: APIC: Static calls initialized Oct 8 20:02:04.072028 kernel: efi: EFI v2.7 by Microsoft Oct 8 20:02:04.072036 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 Oct 8 20:02:04.072046 kernel: SMBIOS 3.1.0 present. Oct 8 20:02:04.072053 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Oct 8 20:02:04.072060 kernel: Hypervisor detected: Microsoft Hyper-V Oct 8 20:02:04.072071 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Oct 8 20:02:04.072078 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Oct 8 20:02:04.072086 kernel: Hyper-V: Nested features: 0x1e0101 Oct 8 20:02:04.072095 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Oct 8 20:02:04.072104 kernel: Hyper-V: Using hypercall for remote TLB flush Oct 8 20:02:04.072114 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Oct 8 20:02:04.072122 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Oct 8 20:02:04.072132 kernel: tsc: Marking TSC unstable due to running on Hyper-V Oct 8 20:02:04.072140 kernel: tsc: Detected 2593.905 MHz processor Oct 8 
20:02:04.072151 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 8 20:02:04.072159 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 8 20:02:04.072168 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Oct 8 20:02:04.072177 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Oct 8 20:02:04.072188 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 8 20:02:04.072197 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Oct 8 20:02:04.072204 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Oct 8 20:02:04.072213 kernel: Using GB pages for direct mapping Oct 8 20:02:04.072221 kernel: Secure boot disabled Oct 8 20:02:04.072228 kernel: ACPI: Early table checksum verification disabled Oct 8 20:02:04.072239 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Oct 8 20:02:04.072250 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 8 20:02:04.072263 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 8 20:02:04.072270 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Oct 8 20:02:04.072279 kernel: ACPI: FACS 0x000000003FFFE000 000040 Oct 8 20:02:04.072289 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 8 20:02:04.072299 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 8 20:02:04.072307 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 8 20:02:04.072321 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 8 20:02:04.072330 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 8 20:02:04.072341 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 8 20:02:04.072350 kernel: ACPI: FPDT 
0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 8 20:02:04.072360 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Oct 8 20:02:04.072370 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Oct 8 20:02:04.072380 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Oct 8 20:02:04.072388 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Oct 8 20:02:04.072400 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Oct 8 20:02:04.072409 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Oct 8 20:02:04.072419 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Oct 8 20:02:04.072429 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Oct 8 20:02:04.072438 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Oct 8 20:02:04.072447 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Oct 8 20:02:04.072454 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Oct 8 20:02:04.072465 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Oct 8 20:02:04.072472 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Oct 8 20:02:04.072484 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Oct 8 20:02:04.072494 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Oct 8 20:02:04.072503 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Oct 8 20:02:04.072513 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Oct 8 20:02:04.072523 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Oct 8 20:02:04.072532 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Oct 8 20:02:04.072544 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Oct 8 20:02:04.072556 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x100000000000-0x1fffffffffff] hotplug Oct 8 20:02:04.072568 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Oct 8 20:02:04.072584 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Oct 8 20:02:04.072596 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Oct 8 20:02:04.072605 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Oct 8 20:02:04.072616 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Oct 8 20:02:04.072629 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Oct 8 20:02:04.072643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Oct 8 20:02:04.072655 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Oct 8 20:02:04.072669 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Oct 8 20:02:04.072683 kernel: Zone ranges: Oct 8 20:02:04.072700 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 8 20:02:04.072713 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Oct 8 20:02:04.072728 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Oct 8 20:02:04.072742 kernel: Movable zone start for each node Oct 8 20:02:04.072757 kernel: Early memory node ranges Oct 8 20:02:04.072771 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 8 20:02:04.072786 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Oct 8 20:02:04.072818 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Oct 8 20:02:04.072833 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Oct 8 20:02:04.072851 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Oct 8 20:02:04.072865 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 8 20:02:04.072880 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 8 20:02:04.072895 kernel: On node 0, zone DMA32: 
190 pages in unavailable ranges Oct 8 20:02:04.072909 kernel: ACPI: PM-Timer IO Port: 0x408 Oct 8 20:02:04.072924 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Oct 8 20:02:04.072939 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Oct 8 20:02:04.072954 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 8 20:02:04.072968 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 8 20:02:04.072986 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Oct 8 20:02:04.072999 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 8 20:02:04.073011 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Oct 8 20:02:04.073023 kernel: Booting paravirtualized kernel on Hyper-V Oct 8 20:02:04.073038 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 8 20:02:04.073050 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Oct 8 20:02:04.073061 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Oct 8 20:02:04.073073 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Oct 8 20:02:04.073086 kernel: pcpu-alloc: [0] 0 1 Oct 8 20:02:04.073100 kernel: Hyper-V: PV spinlocks enabled Oct 8 20:02:04.073111 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 8 20:02:04.073124 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:02:04.073137 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Oct 8 20:02:04.073149 kernel: random: crng init done Oct 8 20:02:04.073160 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Oct 8 20:02:04.073173 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 8 20:02:04.073186 kernel: Fallback order for Node 0: 0 Oct 8 20:02:04.073203 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Oct 8 20:02:04.073227 kernel: Policy zone: Normal Oct 8 20:02:04.073244 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 8 20:02:04.073258 kernel: software IO TLB: area num 2. Oct 8 20:02:04.073273 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 310124K reserved, 0K cma-reserved) Oct 8 20:02:04.073287 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 8 20:02:04.073301 kernel: ftrace: allocating 37784 entries in 148 pages Oct 8 20:02:04.073315 kernel: ftrace: allocated 148 pages with 3 groups Oct 8 20:02:04.073330 kernel: Dynamic Preempt: voluntary Oct 8 20:02:04.073343 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 8 20:02:04.073358 kernel: rcu: RCU event tracing is enabled. Oct 8 20:02:04.073375 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 8 20:02:04.073389 kernel: Trampoline variant of Tasks RCU enabled. Oct 8 20:02:04.073403 kernel: Rude variant of Tasks RCU enabled. Oct 8 20:02:04.073416 kernel: Tracing variant of Tasks RCU enabled. Oct 8 20:02:04.073428 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 8 20:02:04.073445 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 8 20:02:04.073459 kernel: Using NULL legacy PIC Oct 8 20:02:04.073473 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Oct 8 20:02:04.073488 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Oct 8 20:02:04.073502 kernel: Console: colour dummy device 80x25 Oct 8 20:02:04.073516 kernel: printk: console [tty1] enabled Oct 8 20:02:04.073530 kernel: printk: console [ttyS0] enabled Oct 8 20:02:04.073544 kernel: printk: bootconsole [earlyser0] disabled Oct 8 20:02:04.073560 kernel: ACPI: Core revision 20230628 Oct 8 20:02:04.073575 kernel: Failed to register legacy timer interrupt Oct 8 20:02:04.073591 kernel: APIC: Switch to symmetric I/O mode setup Oct 8 20:02:04.073605 kernel: Hyper-V: enabling crash_kexec_post_notifiers Oct 8 20:02:04.073617 kernel: Hyper-V: Using IPI hypercalls Oct 8 20:02:04.073630 kernel: APIC: send_IPI() replaced with hv_send_ipi() Oct 8 20:02:04.073642 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Oct 8 20:02:04.073655 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Oct 8 20:02:04.073670 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Oct 8 20:02:04.073683 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Oct 8 20:02:04.073694 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Oct 8 20:02:04.073711 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Oct 8 20:02:04.073725 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Oct 8 20:02:04.073739 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Oct 8 20:02:04.073753 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 8 20:02:04.073766 kernel: Spectre V2 : Mitigation: Retpolines Oct 8 20:02:04.073779 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 8 20:02:04.073808 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 8 20:02:04.073824 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Oct 8 20:02:04.073839 kernel: RETBleed: Vulnerable Oct 8 20:02:04.073858 kernel: Speculative Store Bypass: Vulnerable Oct 8 20:02:04.073874 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Oct 8 20:02:04.073892 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 8 20:02:04.073905 kernel: GDS: Unknown: Dependent on hypervisor status Oct 8 20:02:04.073919 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 8 20:02:04.073932 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 8 20:02:04.073951 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 8 20:02:04.073968 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Oct 8 20:02:04.073981 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Oct 8 20:02:04.073994 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Oct 8 20:02:04.074008 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 8 20:02:04.074026 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Oct 8 20:02:04.074040 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Oct 8 20:02:04.074055 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Oct 8 20:02:04.074069 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Oct 8 20:02:04.074084 kernel: Freeing SMP alternatives memory: 32K Oct 8 20:02:04.074097 kernel: pid_max: default: 32768 minimum: 301 Oct 8 20:02:04.074112 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 8 20:02:04.074127 kernel: landlock: Up and running. Oct 8 20:02:04.074142 kernel: SELinux: Initializing. 
Oct 8 20:02:04.074157 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 8 20:02:04.074171 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 8 20:02:04.074185 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Oct 8 20:02:04.074202 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:02:04.074216 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:02:04.074230 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:02:04.074245 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Oct 8 20:02:04.074260 kernel: signal: max sigframe size: 3632 Oct 8 20:02:04.074274 kernel: rcu: Hierarchical SRCU implementation. Oct 8 20:02:04.074290 kernel: rcu: Max phase no-delay instances is 400. Oct 8 20:02:04.074305 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 8 20:02:04.074319 kernel: smp: Bringing up secondary CPUs ... Oct 8 20:02:04.074335 kernel: smpboot: x86: Booting SMP configuration: Oct 8 20:02:04.074348 kernel: .... node #0, CPUs: #1 Oct 8 20:02:04.074363 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Oct 8 20:02:04.074376 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Oct 8 20:02:04.074391 kernel: smp: Brought up 1 node, 2 CPUs Oct 8 20:02:04.074404 kernel: smpboot: Max logical packages: 1 Oct 8 20:02:04.074418 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Oct 8 20:02:04.074432 kernel: devtmpfs: initialized Oct 8 20:02:04.074450 kernel: x86/mm: Memory block size: 128MB Oct 8 20:02:04.074465 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Oct 8 20:02:04.074479 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 8 20:02:04.074492 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 8 20:02:04.074508 kernel: pinctrl core: initialized pinctrl subsystem Oct 8 20:02:04.074522 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 8 20:02:04.074537 kernel: audit: initializing netlink subsys (disabled) Oct 8 20:02:04.074552 kernel: audit: type=2000 audit(1728417722.027:1): state=initialized audit_enabled=0 res=1 Oct 8 20:02:04.074566 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 8 20:02:04.074585 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 8 20:02:04.074601 kernel: cpuidle: using governor menu Oct 8 20:02:04.074615 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 8 20:02:04.074630 kernel: dca service started, version 1.12.1 Oct 8 20:02:04.074646 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Oct 8 20:02:04.074661 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 8 20:02:04.074675 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 20:02:04.074691 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 20:02:04.074705 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 20:02:04.074724 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 20:02:04.074739 kernel: ACPI: Added _OSI(Module Device)
Oct 8 20:02:04.074754 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 20:02:04.074769 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 20:02:04.074783 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 20:02:04.074820 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 20:02:04.074835 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 8 20:02:04.074850 kernel: ACPI: Interpreter enabled
Oct 8 20:02:04.074865 kernel: ACPI: PM: (supports S0 S5)
Oct 8 20:02:04.074883 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 8 20:02:04.074899 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 8 20:02:04.074914 kernel: PCI: Ignoring E820 reservations for host bridge windows
Oct 8 20:02:04.074928 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Oct 8 20:02:04.074943 kernel: iommu: Default domain type: Translated
Oct 8 20:02:04.074958 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 8 20:02:04.074973 kernel: efivars: Registered efivars operations
Oct 8 20:02:04.074988 kernel: PCI: Using ACPI for IRQ routing
Oct 8 20:02:04.075002 kernel: PCI: System does not support PCI
Oct 8 20:02:04.075020 kernel: vgaarb: loaded
Oct 8 20:02:04.075035 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Oct 8 20:02:04.075050 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 20:02:04.075065 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 20:02:04.075081 kernel: pnp: PnP ACPI init
Oct 8 20:02:04.075095 kernel: pnp: PnP ACPI: found 3 devices
Oct 8 20:02:04.075110 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 8 20:02:04.075125 kernel: NET: Registered PF_INET protocol family
Oct 8 20:02:04.075140 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 8 20:02:04.075158 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 8 20:02:04.075173 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 20:02:04.075188 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 20:02:04.075203 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Oct 8 20:02:04.075217 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 8 20:02:04.075233 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 8 20:02:04.075248 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 8 20:02:04.075262 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 20:02:04.075277 kernel: NET: Registered PF_XDP protocol family
Oct 8 20:02:04.075295 kernel: PCI: CLS 0 bytes, default 64
Oct 8 20:02:04.075310 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 8 20:02:04.075326 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Oct 8 20:02:04.075341 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct 8 20:02:04.075355 kernel: Initialise system trusted keyrings
Oct 8 20:02:04.075370 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Oct 8 20:02:04.075385 kernel: Key type asymmetric registered
Oct 8 20:02:04.075399 kernel: Asymmetric key parser 'x509' registered
Oct 8 20:02:04.075414 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 8 20:02:04.075432 kernel: io scheduler mq-deadline registered
Oct 8 20:02:04.075447 kernel: io scheduler kyber registered
Oct 8 20:02:04.075462 kernel: io scheduler bfq registered
Oct 8 20:02:04.075476 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 8 20:02:04.075491 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 20:02:04.075507 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 8 20:02:04.075521 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Oct 8 20:02:04.075536 kernel: i8042: PNP: No PS/2 controller found.
Oct 8 20:02:04.075722 kernel: rtc_cmos 00:02: registered as rtc0
Oct 8 20:02:04.076911 kernel: rtc_cmos 00:02: setting system clock to 2024-10-08T20:02:03 UTC (1728417723)
Oct 8 20:02:04.077036 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Oct 8 20:02:04.077055 kernel: intel_pstate: CPU model not supported
Oct 8 20:02:04.077070 kernel: efifb: probing for efifb
Oct 8 20:02:04.077084 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Oct 8 20:02:04.077098 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Oct 8 20:02:04.077112 kernel: efifb: scrolling: redraw
Oct 8 20:02:04.077127 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 8 20:02:04.077145 kernel: Console: switching to colour frame buffer device 128x48
Oct 8 20:02:04.077159 kernel: fb0: EFI VGA frame buffer device
Oct 8 20:02:04.077174 kernel: pstore: Using crash dump compression: deflate
Oct 8 20:02:04.077188 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 8 20:02:04.077203 kernel: NET: Registered PF_INET6 protocol family
Oct 8 20:02:04.077217 kernel: Segment Routing with IPv6
Oct 8 20:02:04.077231 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 20:02:04.077245 kernel: NET: Registered PF_PACKET protocol family
Oct 8 20:02:04.077259 kernel: Key type dns_resolver registered
Oct 8 20:02:04.077276 kernel: IPI shorthand broadcast: enabled
Oct 8 20:02:04.077290 kernel: sched_clock: Marking stable (844044600, 46552300)->(1108376400, -217779500)
Oct 8 20:02:04.077304 kernel: registered taskstats version 1
Oct 8 20:02:04.077319 kernel: Loading compiled-in X.509 certificates
Oct 8 20:02:04.077333 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f'
Oct 8 20:02:04.077347 kernel: Key type .fscrypt registered
Oct 8 20:02:04.077361 kernel: Key type fscrypt-provisioning registered
Oct 8 20:02:04.077376 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 20:02:04.077393 kernel: ima: Allocated hash algorithm: sha1
Oct 8 20:02:04.077407 kernel: ima: No architecture policies found
Oct 8 20:02:04.077422 kernel: clk: Disabling unused clocks
Oct 8 20:02:04.077436 kernel: Freeing unused kernel image (initmem) memory: 42828K
Oct 8 20:02:04.077451 kernel: Write protecting the kernel read-only data: 36864k
Oct 8 20:02:04.077465 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K
Oct 8 20:02:04.077480 kernel: Run /init as init process
Oct 8 20:02:04.077494 kernel: with arguments:
Oct 8 20:02:04.077512 kernel: /init
Oct 8 20:02:04.077527 kernel: with environment:
Oct 8 20:02:04.077541 kernel: HOME=/
Oct 8 20:02:04.077555 kernel: TERM=linux
Oct 8 20:02:04.077568 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 20:02:04.077585 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 20:02:04.077604 systemd[1]: Detected virtualization microsoft.
Oct 8 20:02:04.077618 systemd[1]: Detected architecture x86-64.
Oct 8 20:02:04.077634 systemd[1]: Running in initrd.
Oct 8 20:02:04.077653 systemd[1]: No hostname configured, using default hostname.
Oct 8 20:02:04.077669 systemd[1]: Hostname set to .
Oct 8 20:02:04.077685 systemd[1]: Initializing machine ID from random generator.
Oct 8 20:02:04.077700 systemd[1]: Queued start job for default target initrd.target.
Oct 8 20:02:04.077715 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 20:02:04.077731 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 20:02:04.077747 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 20:02:04.077762 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 20:02:04.077780 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 20:02:04.078818 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 20:02:04.078842 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 20:02:04.078852 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 20:02:04.078864 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 20:02:04.078872 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:02:04.078884 systemd[1]: Reached target paths.target - Path Units.
Oct 8 20:02:04.078898 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 20:02:04.078909 systemd[1]: Reached target swap.target - Swaps.
Oct 8 20:02:04.078918 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 20:02:04.078929 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 20:02:04.078938 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 20:02:04.078950 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 20:02:04.078959 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 20:02:04.078970 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 20:02:04.078980 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 20:02:04.078993 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 20:02:04.079003 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 20:02:04.079011 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 20:02:04.079023 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 20:02:04.079032 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 20:02:04.079040 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 20:02:04.079049 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 20:02:04.079059 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 20:02:04.079096 systemd-journald[176]: Collecting audit messages is disabled.
Oct 8 20:02:04.079119 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:02:04.079130 systemd-journald[176]: Journal started
Oct 8 20:02:04.079155 systemd-journald[176]: Runtime Journal (/run/log/journal/2078fdb7b59d4d43971a1d4a90045f5d) is 8.0M, max 158.8M, 150.8M free.
Oct 8 20:02:04.097752 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 20:02:04.098456 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 20:02:04.101871 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 20:02:04.103948 systemd-modules-load[177]: Inserted module 'overlay'
Oct 8 20:02:04.110601 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 20:02:04.115454 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:02:04.131009 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 20:02:04.141232 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 20:02:04.159396 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 20:02:04.167590 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 20:02:04.170925 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 20:02:04.185815 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 20:02:04.186501 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:02:04.193999 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 20:02:04.200325 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:02:04.208596 kernel: Bridge firewalling registered
Oct 8 20:02:04.208588 systemd-modules-load[177]: Inserted module 'br_netfilter'
Oct 8 20:02:04.213030 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 20:02:04.215809 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 20:02:04.223137 dracut-cmdline[206]: dracut-dracut-053
Oct 8 20:02:04.228835 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 20:02:04.244944 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 20:02:04.258417 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:02:04.272973 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 20:02:04.314670 systemd-resolved[237]: Positive Trust Anchors:
Oct 8 20:02:04.314692 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 20:02:04.314734 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 20:02:04.339513 systemd-resolved[237]: Defaulting to hostname 'linux'.
Oct 8 20:02:04.342865 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 20:02:04.348819 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 20:02:04.362815 kernel: SCSI subsystem initialized
Oct 8 20:02:04.373814 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 20:02:04.384819 kernel: iscsi: registered transport (tcp)
Oct 8 20:02:04.404822 kernel: iscsi: registered transport (qla4xxx)
Oct 8 20:02:04.404877 kernel: QLogic iSCSI HBA Driver
Oct 8 20:02:04.441619 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 20:02:04.453921 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 20:02:04.485136 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 20:02:04.485215 kernel: device-mapper: uevent: version 1.0.3
Oct 8 20:02:04.489814 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 20:02:04.527840 kernel: raid6: avx512x4 gen() 18496 MB/s
Oct 8 20:02:04.546810 kernel: raid6: avx512x2 gen() 18385 MB/s
Oct 8 20:02:04.565803 kernel: raid6: avx512x1 gen() 18549 MB/s
Oct 8 20:02:04.584809 kernel: raid6: avx2x4 gen() 18449 MB/s
Oct 8 20:02:04.603808 kernel: raid6: avx2x2 gen() 18440 MB/s
Oct 8 20:02:04.623663 kernel: raid6: avx2x1 gen() 14063 MB/s
Oct 8 20:02:04.623700 kernel: raid6: using algorithm avx512x1 gen() 18549 MB/s
Oct 8 20:02:04.644806 kernel: raid6: .... xor() 26970 MB/s, rmw enabled
Oct 8 20:02:04.644833 kernel: raid6: using avx512x2 recovery algorithm
Oct 8 20:02:04.667825 kernel: xor: automatically using best checksumming function avx
Oct 8 20:02:04.812822 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 20:02:04.822533 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 20:02:04.832978 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:02:04.846180 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Oct 8 20:02:04.850593 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:02:04.866165 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 20:02:04.877351 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Oct 8 20:02:04.903166 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 20:02:04.915958 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 20:02:04.958893 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 20:02:04.974127 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 20:02:04.996258 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 20:02:05.010746 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 20:02:05.017899 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 20:02:05.025835 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 20:02:05.038310 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 20:02:05.060824 kernel: cryptd: max_cpu_qlen set to 1000
Oct 8 20:02:05.072837 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 20:02:05.097583 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 20:02:05.101713 kernel: hv_vmbus: Vmbus version:5.2
Oct 8 20:02:05.097987 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:02:05.108839 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 20:02:05.111732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 20:02:05.112004 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:02:05.115142 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:02:05.147714 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:02:05.157589 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 8 20:02:05.157625 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 8 20:02:05.157650 kernel: PTP clock support registered
Oct 8 20:02:05.173200 kernel: hv_vmbus: registering driver hyperv_keyboard
Oct 8 20:02:05.173238 kernel: hv_utils: Registering HyperV Utility Driver
Oct 8 20:02:05.175921 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 8 20:02:05.175954 kernel: hv_vmbus: registering driver hv_utils
Oct 8 20:02:05.175969 kernel: hv_utils: Heartbeat IC version 3.0
Oct 8 20:02:05.183182 kernel: hv_utils: Shutdown IC version 3.2
Oct 8 20:02:05.915434 kernel: hv_utils: TimeSync IC version 4.0
Oct 8 20:02:05.915472 kernel: AES CTR mode by8 optimization enabled
Oct 8 20:02:05.914139 systemd-resolved[237]: Clock change detected. Flushing caches.
Oct 8 20:02:05.931169 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 8 20:02:05.931195 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Oct 8 20:02:05.931214 kernel: hv_vmbus: registering driver hv_netvsc
Oct 8 20:02:05.921697 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:02:05.940934 kernel: hv_vmbus: registering driver hv_storvsc
Oct 8 20:02:05.944933 kernel: hv_vmbus: registering driver hid_hyperv
Oct 8 20:02:05.949277 kernel: scsi host0: storvsc_host_t
Oct 8 20:02:05.949585 kernel: scsi host1: storvsc_host_t
Oct 8 20:02:05.950982 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Oct 8 20:02:05.951024 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Oct 8 20:02:05.951189 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Oct 8 20:02:05.949975 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 20:02:05.991393 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Oct 8 20:02:06.012612 kernel: sr 1:0:0:2: [sr0] scsi-1 drive
Oct 8 20:02:06.012864 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 8 20:02:06.018575 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Oct 8 20:02:06.018832 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Oct 8 20:02:06.025659 kernel: sd 1:0:0:0: [sda] Write Protect is off
Oct 8 20:02:06.025891 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00
Oct 8 20:02:06.026088 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Oct 8 20:02:06.026938 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0
Oct 8 20:02:06.028507 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:02:06.034282 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 20:02:06.039221 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Oct 8 20:02:06.146751 kernel: hv_netvsc 000d3ad8-07a2-000d-3ad8-07a2000d3ad8 eth0: VF slot 1 added
Oct 8 20:02:06.154951 kernel: hv_vmbus: registering driver hv_pci
Oct 8 20:02:06.160420 kernel: hv_pci caab7c06-a659-4141-ac8f-04fb112b79ca: PCI VMBus probing: Using version 0x10004
Oct 8 20:02:06.160629 kernel: hv_pci caab7c06-a659-4141-ac8f-04fb112b79ca: PCI host bridge to bus a659:00
Oct 8 20:02:06.166002 kernel: pci_bus a659:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Oct 8 20:02:06.169147 kernel: pci_bus a659:00: No busn resource found for root bus, will use [bus 00-ff]
Oct 8 20:02:06.174083 kernel: pci a659:00:02.0: [15b3:1016] type 00 class 0x020000
Oct 8 20:02:06.177978 kernel: pci a659:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Oct 8 20:02:06.182229 kernel: pci a659:00:02.0: enabling Extended Tags
Oct 8 20:02:06.193306 kernel: pci a659:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a659:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Oct 8 20:02:06.200206 kernel: pci_bus a659:00: busn_res: [bus 00-ff] end is updated to 00
Oct 8 20:02:06.200515 kernel: pci a659:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Oct 8 20:02:06.364380 kernel: mlx5_core a659:00:02.0: enabling device (0000 -> 0002)
Oct 8 20:02:06.368943 kernel: mlx5_core a659:00:02.0: firmware version: 14.30.1284
Oct 8 20:02:06.590467 kernel: hv_netvsc 000d3ad8-07a2-000d-3ad8-07a2000d3ad8 eth0: VF registering: eth1
Oct 8 20:02:06.590822 kernel: mlx5_core a659:00:02.0 eth1: joined to eth0
Oct 8 20:02:06.594770 kernel: mlx5_core a659:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Oct 8 20:02:06.604950 kernel: mlx5_core a659:00:02.0 enP42585s1: renamed from eth1
Oct 8 20:02:06.661315 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Oct 8 20:02:06.687934 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (441)
Oct 8 20:02:06.703003 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Oct 8 20:02:06.765910 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Oct 8 20:02:06.788960 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (450)
Oct 8 20:02:06.802646 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Oct 8 20:02:06.806068 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Oct 8 20:02:06.826132 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 20:02:06.838974 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 20:02:06.844928 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 20:02:07.851698 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 20:02:07.853344 disk-uuid[597]: The operation has completed successfully.
Oct 8 20:02:07.921468 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 20:02:07.921585 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 20:02:07.952061 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 20:02:07.960040 sh[683]: Success
Oct 8 20:02:07.996513 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Oct 8 20:02:08.241208 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 20:02:08.258960 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 20:02:08.264246 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 20:02:08.280882 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec
Oct 8 20:02:08.280958 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 8 20:02:08.284315 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 20:02:08.287108 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 20:02:08.289575 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 20:02:08.688180 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 20:02:08.693490 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 20:02:08.709084 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 20:02:08.715174 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 20:02:08.728278 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 20:02:08.733946 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 20:02:08.733998 kernel: BTRFS info (device sda6): using free space tree
Oct 8 20:02:08.757936 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 8 20:02:08.772713 kernel: BTRFS info (device sda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 20:02:08.772311 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 20:02:08.783285 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 20:02:08.794130 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 20:02:08.816693 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 20:02:08.828143 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 20:02:08.847693 systemd-networkd[867]: lo: Link UP
Oct 8 20:02:08.847703 systemd-networkd[867]: lo: Gained carrier
Oct 8 20:02:08.853071 systemd-networkd[867]: Enumeration completed
Oct 8 20:02:08.854359 systemd-networkd[867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:02:08.854364 systemd-networkd[867]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 20:02:08.855447 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 20:02:08.868919 systemd[1]: Reached target network.target - Network.
Oct 8 20:02:08.924940 kernel: mlx5_core a659:00:02.0 enP42585s1: Link up
Oct 8 20:02:08.957742 kernel: hv_netvsc 000d3ad8-07a2-000d-3ad8-07a2000d3ad8 eth0: Data path switched to VF: enP42585s1
Oct 8 20:02:08.957175 systemd-networkd[867]: enP42585s1: Link UP
Oct 8 20:02:08.957324 systemd-networkd[867]: eth0: Link UP
Oct 8 20:02:08.957577 systemd-networkd[867]: eth0: Gained carrier
Oct 8 20:02:08.957593 systemd-networkd[867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:02:08.961191 systemd-networkd[867]: enP42585s1: Gained carrier
Oct 8 20:02:08.982948 systemd-networkd[867]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16
Oct 8 20:02:10.240234 ignition[830]: Ignition 2.19.0
Oct 8 20:02:10.240246 ignition[830]: Stage: fetch-offline
Oct 8 20:02:10.240288 ignition[830]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:02:10.240299 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 8 20:02:10.240414 ignition[830]: parsed url from cmdline: ""
Oct 8 20:02:10.240419 ignition[830]: no config URL provided
Oct 8 20:02:10.240425 ignition[830]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 20:02:10.240436 ignition[830]: no config at "/usr/lib/ignition/user.ign"
Oct 8 20:02:10.240444 ignition[830]: failed to fetch config: resource requires networking
Oct 8 20:02:10.240733 ignition[830]: Ignition finished successfully
Oct 8 20:02:10.261053 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 20:02:10.270161 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 8 20:02:10.288245 ignition[876]: Ignition 2.19.0
Oct 8 20:02:10.288257 ignition[876]: Stage: fetch
Oct 8 20:02:10.288491 ignition[876]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:02:10.288504 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 8 20:02:10.288608 ignition[876]: parsed url from cmdline: ""
Oct 8 20:02:10.288611 ignition[876]: no config URL provided
Oct 8 20:02:10.288616 ignition[876]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 20:02:10.288623 ignition[876]: no config at "/usr/lib/ignition/user.ign"
Oct 8 20:02:10.288643 ignition[876]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Oct 8 20:02:10.380547 ignition[876]: GET result: OK
Oct 8 20:02:10.380665 ignition[876]: config has been read from IMDS userdata
Oct 8 20:02:10.380705 ignition[876]: parsing config with SHA512: 99bc1de9f7cf200134af589a05bfb821e1cb3a1650f157c8f1aed3253e4fbdba14277bf883ba8706d4a1ab97eede90dffd7494b500b4e6df95627612e538358e
Oct 8 20:02:10.388310 unknown[876]: fetched base config from "system"
Oct 8 20:02:10.388325 unknown[876]: fetched base config from "system"
Oct 8 20:02:10.389224 ignition[876]: fetch: fetch complete
Oct 8 20:02:10.388333 unknown[876]: fetched user config from "azure"
Oct 8 20:02:10.389232 ignition[876]: fetch: fetch passed
Oct 8 20:02:10.393395 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 8 20:02:10.389289 ignition[876]: Ignition finished successfully
Oct 8 20:02:10.409240 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 20:02:10.425508 ignition[882]: Ignition 2.19.0
Oct 8 20:02:10.425529 ignition[882]: Stage: kargs
Oct 8 20:02:10.428580 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 20:02:10.425765 ignition[882]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:02:10.425777 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 8 20:02:10.426602 ignition[882]: kargs: kargs passed
Oct 8 20:02:10.426653 ignition[882]: Ignition finished successfully
Oct 8 20:02:10.449109 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 20:02:10.465760 ignition[888]: Ignition 2.19.0
Oct 8 20:02:10.465770 ignition[888]: Stage: disks
Oct 8 20:02:10.468345 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 20:02:10.465995 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:02:10.466008 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 8 20:02:10.466853 ignition[888]: disks: disks passed
Oct 8 20:02:10.466896 ignition[888]: Ignition finished successfully
Oct 8 20:02:10.479437 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 20:02:10.485891 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 20:02:10.493793 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 20:02:10.498456 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 20:02:10.503559 systemd[1]: Reached target basic.target - Basic System.
Oct 8 20:02:10.507911 systemd-networkd[867]: enP42585s1: Gained IPv6LL
Oct 8 20:02:10.516078 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 20:02:10.584403 systemd-fsck[896]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Oct 8 20:02:10.590604 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 20:02:10.605092 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 20:02:10.694932 kernel: EXT4-fs (sda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none.
Oct 8 20:02:10.695308 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 20:02:10.698152 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 20:02:10.749074 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 20:02:10.754085 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 20:02:10.763936 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (907)
Oct 8 20:02:10.764181 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 8 20:02:10.787089 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 20:02:10.787876 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 20:02:10.806006 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 20:02:10.806050 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 20:02:10.806070 kernel: BTRFS info (device sda6): using free space tree
Oct 8 20:02:10.807001 systemd-networkd[867]: eth0: Gained IPv6LL
Oct 8 20:02:10.811195 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 20:02:10.821088 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 8 20:02:10.821817 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 20:02:10.830082 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 20:02:11.765608 coreos-metadata[909]: Oct 08 20:02:11.765 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Oct 8 20:02:11.772425 coreos-metadata[909]: Oct 08 20:02:11.772 INFO Fetch successful Oct 8 20:02:11.774933 coreos-metadata[909]: Oct 08 20:02:11.772 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Oct 8 20:02:11.791635 coreos-metadata[909]: Oct 08 20:02:11.791 INFO Fetch successful Oct 8 20:02:11.816809 coreos-metadata[909]: Oct 08 20:02:11.816 INFO wrote hostname ci-4081.1.0-a-b9ef23c535 to /sysroot/etc/hostname Oct 8 20:02:11.820979 initrd-setup-root[935]: cut: /sysroot/etc/passwd: No such file or directory Oct 8 20:02:11.824266 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 8 20:02:11.831221 initrd-setup-root[943]: cut: /sysroot/etc/group: No such file or directory Oct 8 20:02:11.835751 initrd-setup-root[950]: cut: /sysroot/etc/shadow: No such file or directory Oct 8 20:02:11.840512 initrd-setup-root[957]: cut: /sysroot/etc/gshadow: No such file or directory Oct 8 20:02:13.903186 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 8 20:02:13.915053 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 8 20:02:13.922089 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 8 20:02:13.928739 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 8 20:02:13.934928 kernel: BTRFS info (device sda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:02:13.965046 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Oct 8 20:02:13.972948 ignition[1025]: INFO : Ignition 2.19.0 Oct 8 20:02:13.972948 ignition[1025]: INFO : Stage: mount Oct 8 20:02:13.972948 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:02:13.972948 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 8 20:02:13.987139 ignition[1025]: INFO : mount: mount passed Oct 8 20:02:13.987139 ignition[1025]: INFO : Ignition finished successfully Oct 8 20:02:13.976486 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 8 20:02:13.997083 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 8 20:02:14.005691 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 20:02:14.024313 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1039) Oct 8 20:02:14.024362 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:02:14.027489 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:02:14.029910 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:02:14.034949 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:02:14.036644 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 8 20:02:14.060062 ignition[1055]: INFO : Ignition 2.19.0 Oct 8 20:02:14.060062 ignition[1055]: INFO : Stage: files Oct 8 20:02:14.079959 ignition[1055]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:02:14.079959 ignition[1055]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 8 20:02:14.079959 ignition[1055]: DEBUG : files: compiled without relabeling support, skipping Oct 8 20:02:14.118817 ignition[1055]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 8 20:02:14.118817 ignition[1055]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 8 20:02:14.216219 ignition[1055]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 8 20:02:14.219928 ignition[1055]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 8 20:02:14.223686 unknown[1055]: wrote ssh authorized keys file for user: core Oct 8 20:02:14.226438 ignition[1055]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 8 20:02:14.226438 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 20:02:14.226438 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 8 20:02:14.293049 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 8 20:02:14.377311 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 8 
20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 8 20:02:14.383430 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Oct 8 20:02:14.883234 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 8 20:02:15.282705 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 8 20:02:15.282705 ignition[1055]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 8 20:02:15.307098 ignition[1055]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:02:15.312640 ignition[1055]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:02:15.312640 ignition[1055]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 8 20:02:15.320809 ignition[1055]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 8 20:02:15.324515 ignition[1055]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 8 20:02:15.328173 ignition[1055]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:02:15.332624 ignition[1055]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:02:15.336883 ignition[1055]: INFO : files: files passed Oct 8 20:02:15.338795 ignition[1055]: INFO : Ignition finished successfully Oct 8 20:02:15.340257 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 8 20:02:15.351139 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 8 20:02:15.357288 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Oct 8 20:02:15.365664 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 8 20:02:15.365785 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 8 20:02:15.380587 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:02:15.380587 initrd-setup-root-after-ignition[1085]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:02:15.388850 initrd-setup-root-after-ignition[1089]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:02:15.389679 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:02:15.396560 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 8 20:02:15.410101 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 8 20:02:15.433330 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 8 20:02:15.433447 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 8 20:02:15.442446 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 8 20:02:15.445206 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 8 20:02:15.452944 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 8 20:02:15.462066 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 8 20:02:15.475138 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:02:15.489059 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 8 20:02:15.501667 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:02:15.501869 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Oct 8 20:02:15.502381 systemd[1]: Stopped target timers.target - Timer Units. Oct 8 20:02:15.502775 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 8 20:02:15.502879 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:02:15.504090 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 8 20:02:15.504515 systemd[1]: Stopped target basic.target - Basic System. Oct 8 20:02:15.504945 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 8 20:02:15.505443 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:02:15.505875 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 8 20:02:15.506316 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 8 20:02:15.506720 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:02:15.507154 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 8 20:02:15.507557 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 8 20:02:15.507977 systemd[1]: Stopped target swap.target - Swaps. Oct 8 20:02:15.508367 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 8 20:02:15.508503 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:02:15.509227 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:02:15.509662 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:02:15.510037 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 8 20:02:15.549249 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:02:15.552774 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 8 20:02:15.558035 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Oct 8 20:02:15.576214 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 8 20:02:15.581786 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:02:15.599997 systemd[1]: ignition-files.service: Deactivated successfully. Oct 8 20:02:15.605702 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 8 20:02:15.620226 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Oct 8 20:02:15.620359 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 8 20:02:15.650135 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 8 20:02:15.652865 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 8 20:02:15.653052 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:02:15.671748 ignition[1109]: INFO : Ignition 2.19.0 Oct 8 20:02:15.671748 ignition[1109]: INFO : Stage: umount Oct 8 20:02:15.691258 ignition[1109]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:02:15.691258 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 8 20:02:15.691258 ignition[1109]: INFO : umount: umount passed Oct 8 20:02:15.691258 ignition[1109]: INFO : Ignition finished successfully Oct 8 20:02:15.673405 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 8 20:02:15.676084 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 8 20:02:15.676286 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:02:15.679690 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 8 20:02:15.679832 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 20:02:15.684760 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 8 20:02:15.684845 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Oct 8 20:02:15.699656 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 8 20:02:15.702212 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 8 20:02:15.706956 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 8 20:02:15.706998 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 8 20:02:15.707263 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 8 20:02:15.707298 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 8 20:02:15.707691 systemd[1]: Stopped target network.target - Network. Oct 8 20:02:15.708510 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 8 20:02:15.708546 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:02:15.708962 systemd[1]: Stopped target paths.target - Path Units. Oct 8 20:02:15.709538 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 8 20:02:15.721656 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:02:15.766023 systemd[1]: Stopped target slices.target - Slice Units. Oct 8 20:02:15.768349 systemd[1]: Stopped target sockets.target - Socket Units. Oct 8 20:02:15.770965 systemd[1]: iscsid.socket: Deactivated successfully. Oct 8 20:02:15.771024 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 20:02:15.773603 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 8 20:02:15.773645 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 20:02:15.788903 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 8 20:02:15.788984 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 8 20:02:15.793780 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 8 20:02:15.793838 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Oct 8 20:02:15.804277 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 8 20:02:15.809493 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 8 20:02:15.815746 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 8 20:02:15.818756 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 8 20:02:15.821164 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 8 20:02:15.824985 systemd-networkd[867]: eth0: DHCPv6 lease lost Oct 8 20:02:15.829730 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 8 20:02:15.829856 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 8 20:02:15.852590 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 8 20:02:15.852738 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 8 20:02:15.864185 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 8 20:02:15.864244 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:02:15.876085 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 8 20:02:15.878451 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 8 20:02:15.878507 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:02:15.881834 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 20:02:15.881898 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:02:15.884734 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 8 20:02:15.884777 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 8 20:02:15.889955 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 8 20:02:15.890010 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Oct 8 20:02:15.893636 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:02:15.920333 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 8 20:02:15.920481 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:02:15.924324 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 8 20:02:15.924402 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 8 20:02:15.929770 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 8 20:02:15.929812 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:02:15.932614 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 8 20:02:15.932672 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 8 20:02:15.963709 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 8 20:02:15.967410 kernel: hv_netvsc 000d3ad8-07a2-000d-3ad8-07a2000d3ad8 eth0: Data path switched from VF: enP42585s1 Oct 8 20:02:15.963787 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 8 20:02:15.970293 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 20:02:15.970349 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:02:15.986157 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 8 20:02:15.989187 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 8 20:02:15.989257 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:02:15.992850 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:02:15.992904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:02:16.005656 systemd[1]: network-cleanup.service: Deactivated successfully. 
Oct 8 20:02:16.005772 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 8 20:02:16.011825 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 8 20:02:16.011911 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 8 20:02:16.445961 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 8 20:02:16.446125 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 8 20:02:16.449284 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 8 20:02:16.453994 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 8 20:02:16.454062 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 8 20:02:16.468114 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 8 20:02:16.479778 systemd[1]: Switching root. Oct 8 20:02:16.581512 systemd-journald[176]: Journal stopped Oct 8 20:02:21.808631 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Oct 8 20:02:21.808660 kernel: SELinux: policy capability network_peer_controls=1 Oct 8 20:02:21.808672 kernel: SELinux: policy capability open_perms=1 Oct 8 20:02:21.808683 kernel: SELinux: policy capability extended_socket_class=1 Oct 8 20:02:21.808691 kernel: SELinux: policy capability always_check_network=0 Oct 8 20:02:21.808699 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 8 20:02:21.808710 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 8 20:02:21.808721 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 8 20:02:21.808732 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 8 20:02:21.808740 kernel: audit: type=1403 audit(1728417738.321:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 8 20:02:21.808752 systemd[1]: Successfully loaded SELinux policy in 194.736ms. Oct 8 20:02:21.808762 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.761ms. 
Oct 8 20:02:21.808774 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 20:02:21.808786 systemd[1]: Detected virtualization microsoft. Oct 8 20:02:21.808801 systemd[1]: Detected architecture x86-64. Oct 8 20:02:21.808811 systemd[1]: Detected first boot. Oct 8 20:02:21.808823 systemd[1]: Hostname set to . Oct 8 20:02:21.808833 systemd[1]: Initializing machine ID from random generator. Oct 8 20:02:21.808845 zram_generator::config[1152]: No configuration found. Oct 8 20:02:21.808858 systemd[1]: Populated /etc with preset unit settings. Oct 8 20:02:21.808870 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 8 20:02:21.808880 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 8 20:02:21.808892 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 8 20:02:21.808904 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 8 20:02:21.808923 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 8 20:02:21.808936 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 8 20:02:21.808949 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 8 20:02:21.808961 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 8 20:02:21.808971 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 8 20:02:21.808983 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 8 20:02:21.808993 systemd[1]: Created slice user.slice - User and Session Slice. 
Oct 8 20:02:21.809005 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:02:21.809015 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:02:21.809027 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 8 20:02:21.809042 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 8 20:02:21.809054 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 8 20:02:21.809064 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 20:02:21.809076 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 8 20:02:21.809086 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:02:21.809098 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 8 20:02:21.809112 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 8 20:02:21.809124 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 8 20:02:21.809139 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 8 20:02:21.809150 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:02:21.809162 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 20:02:21.809173 systemd[1]: Reached target slices.target - Slice Units. Oct 8 20:02:21.809184 systemd[1]: Reached target swap.target - Swaps. Oct 8 20:02:21.809195 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 8 20:02:21.809206 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 8 20:02:21.809220 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Oct 8 20:02:21.809232 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 20:02:21.809243 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:02:21.809255 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 8 20:02:21.809266 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 8 20:02:21.809281 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 8 20:02:21.809293 systemd[1]: Mounting media.mount - External Media Directory... Oct 8 20:02:21.809306 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:02:21.809317 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 8 20:02:21.809328 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 8 20:02:21.809340 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 8 20:02:21.809352 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 8 20:02:21.809363 systemd[1]: Reached target machines.target - Containers. Oct 8 20:02:21.809378 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 8 20:02:21.809391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:02:21.809401 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 20:02:21.809413 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 8 20:02:21.809424 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:02:21.809435 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Oct 8 20:02:21.809447 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:02:21.809458 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 8 20:02:21.809470 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:02:21.809485 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 8 20:02:21.809496 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 8 20:02:21.809508 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 8 20:02:21.809519 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 8 20:02:21.809531 systemd[1]: Stopped systemd-fsck-usr.service. Oct 8 20:02:21.809543 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 20:02:21.809555 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 20:02:21.809566 kernel: fuse: init (API version 7.39) Oct 8 20:02:21.809579 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 8 20:02:21.809590 kernel: loop: module loaded Oct 8 20:02:21.809601 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 8 20:02:21.809612 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 20:02:21.809624 systemd[1]: verity-setup.service: Deactivated successfully. Oct 8 20:02:21.809636 systemd[1]: Stopped verity-setup.service. Oct 8 20:02:21.809661 systemd-journald[1237]: Collecting audit messages is disabled. Oct 8 20:02:21.809688 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Oct 8 20:02:21.809700 systemd-journald[1237]: Journal started Oct 8 20:02:21.809724 systemd-journald[1237]: Runtime Journal (/run/log/journal/a66b44bd0ae645849cd6c499a3e31648) is 8.0M, max 158.8M, 150.8M free. Oct 8 20:02:21.103602 systemd[1]: Queued start job for default target multi-user.target. Oct 8 20:02:21.218137 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Oct 8 20:02:21.218524 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 8 20:02:21.818942 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 20:02:21.824015 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 8 20:02:21.826993 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 8 20:02:21.834267 systemd[1]: Mounted media.mount - External Media Directory. Oct 8 20:02:21.837818 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 8 20:02:21.844374 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 8 20:02:21.847426 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 8 20:02:21.850460 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:02:21.854091 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 8 20:02:21.854248 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 8 20:02:21.857508 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:02:21.857660 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:02:21.861088 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:02:21.861245 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:02:21.864696 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 8 20:02:21.864852 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Oct 8 20:02:21.868021 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:02:21.868175 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:02:21.871325 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 20:02:21.875887 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 8 20:02:21.883597 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 8 20:02:21.888414 kernel: ACPI: bus type drm_connector registered Oct 8 20:02:21.889228 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 20:02:21.889526 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 20:02:21.903701 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 8 20:02:21.916375 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 8 20:02:21.928146 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 8 20:02:21.937031 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 8 20:02:21.940612 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 8 20:02:21.940746 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 20:02:21.948523 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 8 20:02:21.960114 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 8 20:02:21.964439 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 8 20:02:21.967370 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:02:21.969386 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Oct 8 20:02:21.977028 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 20:02:21.980211 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 20:02:21.981496 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 20:02:21.984938 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 20:02:21.988097 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 20:02:21.997047 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 20:02:22.002452 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 20:02:22.007552 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 20:02:22.011228 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 20:02:22.020208 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 20:02:22.023945 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 20:02:22.027677 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 20:02:22.035356 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 20:02:22.044709 systemd-journald[1237]: Time spent on flushing to /var/log/journal/a66b44bd0ae645849cd6c499a3e31648 is 42.215ms for 959 entries.
Oct 8 20:02:22.044709 systemd-journald[1237]: System Journal (/var/log/journal/a66b44bd0ae645849cd6c499a3e31648) is 8.0M, max 2.6G, 2.6G free.
Oct 8 20:02:22.235168 systemd-journald[1237]: Received client request to flush runtime journal.
Oct 8 20:02:22.235258 kernel: loop0: detected capacity change from 0 to 142488
Oct 8 20:02:22.050127 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 20:02:22.056664 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 20:02:22.060981 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:02:22.072619 udevadm[1299]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 8 20:02:22.238693 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 20:02:22.262885 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 20:02:22.263523 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 20:02:22.290602 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 20:02:22.298095 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 20:02:22.352081 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Oct 8 20:02:22.352107 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Oct 8 20:02:22.356827 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:02:22.753947 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 20:02:22.790945 kernel: loop1: detected capacity change from 0 to 31056
Oct 8 20:02:23.282948 kernel: loop2: detected capacity change from 0 to 205544
Oct 8 20:02:23.343948 kernel: loop3: detected capacity change from 0 to 140768
Oct 8 20:02:23.396233 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 20:02:23.404190 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:02:23.429248 systemd-udevd[1313]: Using default interface naming scheme 'v255'.
Oct 8 20:02:23.714552 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:02:23.728095 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 20:02:23.765942 kernel: loop4: detected capacity change from 0 to 142488
Oct 8 20:02:23.786401 kernel: loop5: detected capacity change from 0 to 31056
Oct 8 20:02:23.785579 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 8 20:02:23.801961 kernel: loop6: detected capacity change from 0 to 205544
Oct 8 20:02:23.820553 kernel: loop7: detected capacity change from 0 to 140768
Oct 8 20:02:23.833998 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1328)
Oct 8 20:02:23.835284 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 20:02:23.844202 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1328)
Oct 8 20:02:23.856962 (sd-merge)[1332]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Oct 8 20:02:23.857566 (sd-merge)[1332]: Merged extensions into '/usr'.
Oct 8 20:02:23.867305 systemd[1]: Reloading requested from client PID 1288 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 20:02:23.867322 systemd[1]: Reloading...
Oct 8 20:02:24.010207 zram_generator::config[1383]: No configuration found.
Oct 8 20:02:24.021128 kernel: hv_vmbus: registering driver hv_balloon
Oct 8 20:02:24.025221 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Oct 8 20:02:24.037942 kernel: mousedev: PS/2 mouse device common for all mice
Oct 8 20:02:24.121945 kernel: hv_vmbus: registering driver hyperv_fb
Oct 8 20:02:24.129775 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Oct 8 20:02:24.129855 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Oct 8 20:02:24.163222 kernel: Console: switching to colour dummy device 80x25
Oct 8 20:02:24.163936 kernel: Console: switching to colour frame buffer device 128x48
Oct 8 20:02:24.195944 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1331)
Oct 8 20:02:24.318742 systemd-networkd[1320]: lo: Link UP
Oct 8 20:02:24.318755 systemd-networkd[1320]: lo: Gained carrier
Oct 8 20:02:24.321671 systemd-networkd[1320]: Enumeration completed
Oct 8 20:02:24.322123 systemd-networkd[1320]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:02:24.322128 systemd-networkd[1320]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 20:02:24.442940 kernel: mlx5_core a659:00:02.0 enP42585s1: Link up
Oct 8 20:02:24.452837 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:02:24.482936 kernel: hv_netvsc 000d3ad8-07a2-000d-3ad8-07a2000d3ad8 eth0: Data path switched to VF: enP42585s1
Oct 8 20:02:24.483171 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Oct 8 20:02:24.506957 systemd-networkd[1320]: enP42585s1: Link UP
Oct 8 20:02:24.508118 systemd-networkd[1320]: eth0: Link UP
Oct 8 20:02:24.508128 systemd-networkd[1320]: eth0: Gained carrier
Oct 8 20:02:24.508152 systemd-networkd[1320]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:02:24.511302 systemd-networkd[1320]: enP42585s1: Gained carrier
Oct 8 20:02:24.544993 systemd-networkd[1320]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16
Oct 8 20:02:24.585908 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Oct 8 20:02:24.590078 systemd[1]: Reloading finished in 722 ms.
Oct 8 20:02:24.618554 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 20:02:24.621833 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 20:02:24.625006 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 20:02:24.676190 systemd[1]: Starting ensure-sysext.service...
Oct 8 20:02:24.680758 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 20:02:24.686647 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 20:02:24.692352 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 20:02:24.698181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:02:24.712293 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 20:02:24.717897 systemd[1]: Reloading requested from client PID 1481 ('systemctl') (unit ensure-sysext.service)...
Oct 8 20:02:24.718949 systemd[1]: Reloading...
Oct 8 20:02:24.739464 systemd-tmpfiles[1484]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 20:02:24.741553 systemd-tmpfiles[1484]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 20:02:24.743142 systemd-tmpfiles[1484]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 20:02:24.743572 systemd-tmpfiles[1484]: ACLs are not supported, ignoring.
Oct 8 20:02:24.743661 systemd-tmpfiles[1484]: ACLs are not supported, ignoring.
Oct 8 20:02:24.766892 systemd-tmpfiles[1484]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 20:02:24.766906 systemd-tmpfiles[1484]: Skipping /boot
Oct 8 20:02:24.780766 systemd-tmpfiles[1484]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 20:02:24.780785 systemd-tmpfiles[1484]: Skipping /boot
Oct 8 20:02:24.827021 zram_generator::config[1521]: No configuration found.
Oct 8 20:02:24.955455 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:02:25.034066 systemd[1]: Reloading finished in 314 ms.
Oct 8 20:02:25.056406 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 20:02:25.061034 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 20:02:25.076238 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 20:02:25.097252 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 20:02:25.101761 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 20:02:25.106954 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 20:02:25.112220 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 20:02:25.117139 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 20:02:25.128351 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:02:25.128629 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:02:25.139227 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:02:25.144818 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:02:25.148283 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:02:25.148432 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:02:25.148530 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:02:25.149346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:02:25.149473 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:02:25.162409 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 20:02:25.166687 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:02:25.166979 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:02:25.169333 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:02:25.169500 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:02:25.169597 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:02:25.175448 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:02:25.175792 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:02:25.183272 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 20:02:25.188693 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:02:25.189012 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 20:02:25.189163 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:02:25.192005 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:02:25.192200 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:02:25.192749 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:02:25.192871 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:02:25.194024 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 20:02:25.194148 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 20:02:25.198337 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 20:02:25.206261 systemd[1]: Finished ensure-sysext.service.
Oct 8 20:02:25.212678 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:02:25.212888 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:02:25.218691 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 20:02:25.277389 systemd-resolved[1588]: Positive Trust Anchors:
Oct 8 20:02:25.277409 systemd-resolved[1588]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 20:02:25.277536 systemd-resolved[1588]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 20:02:25.283762 lvm[1586]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 20:02:25.299605 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 20:02:25.304138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:02:25.305559 systemd-resolved[1588]: Using system hostname 'ci-4081.1.0-a-b9ef23c535'.
Oct 8 20:02:25.308200 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 20:02:25.310000 augenrules[1618]: No rules
Oct 8 20:02:25.311475 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 20:02:25.314833 systemd[1]: Reached target network.target - Network.
Oct 8 20:02:25.317089 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 20:02:25.322553 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 20:02:25.326158 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:02:25.334457 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 20:02:25.348254 lvm[1627]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 20:02:25.374294 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 20:02:25.584233 systemd-networkd[1320]: enP42585s1: Gained IPv6LL
Oct 8 20:02:25.776138 systemd-networkd[1320]: eth0: Gained IPv6LL
Oct 8 20:02:25.778937 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 8 20:02:25.782776 systemd[1]: Reached target network-online.target - Network is Online.
Oct 8 20:02:26.221442 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 20:02:26.225639 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 20:02:28.743722 ldconfig[1283]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 20:02:28.760250 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 20:02:28.767152 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 20:02:28.787385 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 20:02:28.790827 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 20:02:28.793556 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 20:02:28.796635 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 20:02:28.800063 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 20:02:28.803007 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 20:02:28.806292 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 20:02:28.809533 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 20:02:28.809571 systemd[1]: Reached target paths.target - Path Units.
Oct 8 20:02:28.812006 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 20:02:28.814902 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 20:02:28.819144 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 20:02:28.834754 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 20:02:28.838191 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 20:02:28.841029 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 20:02:28.843544 systemd[1]: Reached target basic.target - Basic System.
Oct 8 20:02:28.845906 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 20:02:28.845963 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 20:02:28.858272 systemd[1]: Starting chronyd.service - NTP client/server...
Oct 8 20:02:28.864033 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 20:02:28.878125 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 8 20:02:28.884122 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 20:02:28.896034 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 20:02:28.902535 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 20:02:28.905348 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 20:02:28.905399 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Oct 8 20:02:28.908109 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Oct 8 20:02:28.908883 (chronyd)[1636]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Oct 8 20:02:28.913368 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Oct 8 20:02:28.915832 jq[1640]: false
Oct 8 20:02:28.925086 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:02:28.932113 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 20:02:28.939055 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 8 20:02:28.949697 KVP[1644]: KVP starting; pid is:1644
Oct 8 20:02:28.950450 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 20:02:28.957165 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 20:02:28.965101 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 20:02:28.967527 chronyd[1654]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Oct 8 20:02:28.972977 chronyd[1654]: Timezone right/UTC failed leap second check, ignoring
Oct 8 20:02:28.977092 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 20:02:28.973207 chronyd[1654]: Loaded seccomp filter (level 2)
Oct 8 20:02:28.979111 KVP[1644]: KVP LIC Version: 3.1
Oct 8 20:02:28.983079 kernel: hv_utils: KVP IC version 4.0
Oct 8 20:02:28.985653 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 20:02:28.987207 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 8 20:02:28.991103 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 20:02:29.000297 extend-filesystems[1643]: Found loop4
Oct 8 20:02:29.010118 extend-filesystems[1643]: Found loop5
Oct 8 20:02:29.010118 extend-filesystems[1643]: Found loop6
Oct 8 20:02:29.010118 extend-filesystems[1643]: Found loop7
Oct 8 20:02:29.010118 extend-filesystems[1643]: Found sda
Oct 8 20:02:29.010118 extend-filesystems[1643]: Found sda1
Oct 8 20:02:29.010118 extend-filesystems[1643]: Found sda2
Oct 8 20:02:29.010118 extend-filesystems[1643]: Found sda3
Oct 8 20:02:29.010118 extend-filesystems[1643]: Found usr
Oct 8 20:02:29.010118 extend-filesystems[1643]: Found sda4
Oct 8 20:02:29.010118 extend-filesystems[1643]: Found sda6
Oct 8 20:02:29.010118 extend-filesystems[1643]: Found sda7
Oct 8 20:02:29.010118 extend-filesystems[1643]: Found sda9
Oct 8 20:02:29.010118 extend-filesystems[1643]: Checking size of /dev/sda9
Oct 8 20:02:29.001884 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 20:02:29.087828 update_engine[1662]: I20241008 20:02:29.073719 1662 main.cc:92] Flatcar Update Engine starting
Oct 8 20:02:29.014985 systemd[1]: Started chronyd.service - NTP client/server.
Oct 8 20:02:29.032118 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 8 20:02:29.091323 jq[1663]: true
Oct 8 20:02:29.032332 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 8 20:02:29.042355 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 8 20:02:29.042573 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 8 20:02:29.066506 systemd[1]: motdgen.service: Deactivated successfully.
Oct 8 20:02:29.066767 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 8 20:02:29.070349 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 8 20:02:29.089826 (ntainerd)[1680]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 8 20:02:29.109885 jq[1679]: true
Oct 8 20:02:29.113297 extend-filesystems[1643]: Old size kept for /dev/sda9
Oct 8 20:02:29.117942 extend-filesystems[1643]: Found sr0
Oct 8 20:02:29.123062 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 8 20:02:29.123295 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 8 20:02:29.155348 dbus-daemon[1639]: [system] SELinux support is enabled
Oct 8 20:02:29.163284 tar[1669]: linux-amd64/helm
Oct 8 20:02:29.155581 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 20:02:29.169470 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 8 20:02:29.170998 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 8 20:02:29.178853 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 8 20:02:29.178967 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 8 20:02:29.188891 update_engine[1662]: I20241008 20:02:29.180902 1662 update_check_scheduler.cc:74] Next update check in 5m47s
Oct 8 20:02:29.186272 systemd[1]: Started update-engine.service - Update Engine.
Oct 8 20:02:29.190986 systemd-logind[1658]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 8 20:02:29.193287 systemd-logind[1658]: New seat seat0.
Oct 8 20:02:29.199810 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 8 20:02:29.206377 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 8 20:02:29.299112 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1721)
Oct 8 20:02:29.330018 bash[1716]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 20:02:29.321021 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 8 20:02:29.340046 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 8 20:02:29.351975 coreos-metadata[1638]: Oct 08 20:02:29.349 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Oct 8 20:02:29.357001 coreos-metadata[1638]: Oct 08 20:02:29.356 INFO Fetch successful
Oct 8 20:02:29.357154 coreos-metadata[1638]: Oct 08 20:02:29.357 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Oct 8 20:02:29.364180 coreos-metadata[1638]: Oct 08 20:02:29.364 INFO Fetch successful
Oct 8 20:02:29.369974 coreos-metadata[1638]: Oct 08 20:02:29.366 INFO Fetching http://168.63.129.16/machine/7f31c455-f984-4a2e-9ee1-426889c59e32/83d85c6c%2Dc1da%2D4523%2D9f04%2De5a00fc9bc21.%5Fci%2D4081.1.0%2Da%2Db9ef23c535?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Oct 8 20:02:29.374796 coreos-metadata[1638]: Oct 08 20:02:29.374 INFO Fetch successful
Oct 8 20:02:29.381668 coreos-metadata[1638]: Oct 08 20:02:29.381 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Oct 8 20:02:29.405119 coreos-metadata[1638]: Oct 08 20:02:29.403 INFO Fetch successful
Oct 8 20:02:29.481827 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 8 20:02:29.487928 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 8 20:02:29.593319 locksmithd[1715]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 8 20:02:29.740129 sshd_keygen[1691]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 8 20:02:29.773403 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 8 20:02:29.787030 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 8 20:02:29.799155 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Oct 8 20:02:29.814761 systemd[1]: issuegen.service: Deactivated successfully.
Oct 8 20:02:29.815104 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 8 20:02:29.828215 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 8 20:02:29.853593 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Oct 8 20:02:29.873784 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 8 20:02:29.893085 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 8 20:02:29.904693 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 8 20:02:29.910440 systemd[1]: Reached target getty.target - Login Prompts.
Oct 8 20:02:30.148074 tar[1669]: linux-amd64/LICENSE
Oct 8 20:02:30.148303 tar[1669]: linux-amd64/README.md
Oct 8 20:02:30.160810 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 8 20:02:30.405001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:02:30.422379 (kubelet)[1795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:02:30.766784 containerd[1680]: time="2024-10-08T20:02:30.766262800Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Oct 8 20:02:30.813431 containerd[1680]: time="2024-10-08T20:02:30.813172000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 8 20:02:30.815612 containerd[1680]: time="2024-10-08T20:02:30.815554400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:02:30.815612 containerd[1680]: time="2024-10-08T20:02:30.815601600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 8 20:02:30.815772 containerd[1680]: time="2024-10-08T20:02:30.815624800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 8 20:02:30.817039 containerd[1680]: time="2024-10-08T20:02:30.815825300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 8 20:02:30.817039 containerd[1680]: time="2024-10-08T20:02:30.815854300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 8 20:02:30.817039 containerd[1680]: time="2024-10-08T20:02:30.815971200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:02:30.817039 containerd[1680]: time="2024-10-08T20:02:30.815990600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 8 20:02:30.817039 containerd[1680]: time="2024-10-08T20:02:30.816216100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:02:30.817039 containerd[1680]: time="2024-10-08T20:02:30.816237400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 8 20:02:30.817039 containerd[1680]: time="2024-10-08T20:02:30.816257500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:02:30.817039 containerd[1680]: time="2024-10-08T20:02:30.816272600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 8 20:02:30.817039 containerd[1680]: time="2024-10-08T20:02:30.816368200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 8 20:02:30.817039 containerd[1680]: time="2024-10-08T20:02:30.816604700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 8 20:02:30.817039 containerd[1680]: time="2024-10-08T20:02:30.816763000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:02:30.817487 containerd[1680]: time="2024-10-08T20:02:30.816783700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 8 20:02:30.817487 containerd[1680]: time="2024-10-08T20:02:30.816870100Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 8 20:02:30.817487 containerd[1680]: time="2024-10-08T20:02:30.816944500Z" level=info msg="metadata content store policy set" policy=shared
Oct 8 20:02:30.834942 containerd[1680]: time="2024-10-08T20:02:30.834578300Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 8 20:02:30.834942 containerd[1680]: time="2024-10-08T20:02:30.834652800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 8 20:02:30.834942 containerd[1680]: time="2024-10-08T20:02:30.834675800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 8 20:02:30.834942 containerd[1680]: time="2024-10-08T20:02:30.834697800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 8 20:02:30.834942 containerd[1680]: time="2024-10-08T20:02:30.834720800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 8 20:02:30.835270 containerd[1680]: time="2024-10-08T20:02:30.835245500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 8 20:02:30.835696 containerd[1680]: time="2024-10-08T20:02:30.835660700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 8 20:02:30.835834 containerd[1680]: time="2024-10-08T20:02:30.835810800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 8 20:02:30.835893 containerd[1680]: time="2024-10-08T20:02:30.835835100Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 8 20:02:30.835893 containerd[1680]: time="2024-10-08T20:02:30.835854700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 8 20:02:30.835893 containerd[1680]: time="2024-10-08T20:02:30.835873800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 8 20:02:30.836016 containerd[1680]: time="2024-10-08T20:02:30.835893100Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 8 20:02:30.836016 containerd[1680]: time="2024-10-08T20:02:30.835911500Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 8 20:02:30.836016 containerd[1680]: time="2024-10-08T20:02:30.835945000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..."
type=io.containerd.service.v1 Oct 8 20:02:30.836016 containerd[1680]: time="2024-10-08T20:02:30.835964900Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 20:02:30.836016 containerd[1680]: time="2024-10-08T20:02:30.835984300Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 20:02:30.836016 containerd[1680]: time="2024-10-08T20:02:30.836001900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 20:02:30.836212 containerd[1680]: time="2024-10-08T20:02:30.836019100Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 20:02:30.836212 containerd[1680]: time="2024-10-08T20:02:30.836046300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 20:02:30.836212 containerd[1680]: time="2024-10-08T20:02:30.836066600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 20:02:30.836212 containerd[1680]: time="2024-10-08T20:02:30.836084800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 20:02:30.836212 containerd[1680]: time="2024-10-08T20:02:30.836103900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 8 20:02:30.836212 containerd[1680]: time="2024-10-08T20:02:30.836121400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 8 20:02:30.836212 containerd[1680]: time="2024-10-08T20:02:30.836140900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 20:02:30.836212 containerd[1680]: time="2024-10-08T20:02:30.836158200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Oct 8 20:02:30.836212 containerd[1680]: time="2024-10-08T20:02:30.836187400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 20:02:30.836212 containerd[1680]: time="2024-10-08T20:02:30.836207700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 20:02:30.836552 containerd[1680]: time="2024-10-08T20:02:30.836228000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 20:02:30.836552 containerd[1680]: time="2024-10-08T20:02:30.836245900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 20:02:30.836552 containerd[1680]: time="2024-10-08T20:02:30.836262800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 20:02:30.836552 containerd[1680]: time="2024-10-08T20:02:30.836280300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 20:02:30.836552 containerd[1680]: time="2024-10-08T20:02:30.836301600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 20:02:30.836552 containerd[1680]: time="2024-10-08T20:02:30.836330400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 20:02:30.836552 containerd[1680]: time="2024-10-08T20:02:30.836348200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 20:02:30.836552 containerd[1680]: time="2024-10-08T20:02:30.836363100Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 20:02:30.836552 containerd[1680]: time="2024-10-08T20:02:30.836418800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Oct 8 20:02:30.836552 containerd[1680]: time="2024-10-08T20:02:30.836441900Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 20:02:30.836552 containerd[1680]: time="2024-10-08T20:02:30.836462500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 20:02:30.836552 containerd[1680]: time="2024-10-08T20:02:30.836479900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 20:02:30.836552 containerd[1680]: time="2024-10-08T20:02:30.836494300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 20:02:30.836990 containerd[1680]: time="2024-10-08T20:02:30.836512300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 20:02:30.836990 containerd[1680]: time="2024-10-08T20:02:30.836525700Z" level=info msg="NRI interface is disabled by configuration." Oct 8 20:02:30.836990 containerd[1680]: time="2024-10-08T20:02:30.836543600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 8 20:02:30.837102 containerd[1680]: time="2024-10-08T20:02:30.836911300Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 20:02:30.837102 containerd[1680]: time="2024-10-08T20:02:30.837009500Z" level=info msg="Connect containerd service" Oct 8 20:02:30.837102 containerd[1680]: time="2024-10-08T20:02:30.837065100Z" level=info msg="using legacy CRI server" Oct 8 20:02:30.837102 containerd[1680]: time="2024-10-08T20:02:30.837075400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 20:02:30.837418 containerd[1680]: time="2024-10-08T20:02:30.837244500Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 20:02:30.838577 containerd[1680]: time="2024-10-08T20:02:30.838292500Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 20:02:30.839587 containerd[1680]: time="2024-10-08T20:02:30.839217100Z" level=info msg="Start subscribing containerd event" Oct 8 20:02:30.839587 containerd[1680]: time="2024-10-08T20:02:30.839276500Z" level=info msg="Start recovering state" Oct 8 20:02:30.839587 containerd[1680]: time="2024-10-08T20:02:30.839386600Z" level=info msg="Start event monitor" Oct 8 20:02:30.839587 containerd[1680]: time="2024-10-08T20:02:30.839402800Z" level=info msg="Start snapshots 
syncer" Oct 8 20:02:30.839587 containerd[1680]: time="2024-10-08T20:02:30.839411500Z" level=info msg="Start cni network conf syncer for default" Oct 8 20:02:30.839587 containerd[1680]: time="2024-10-08T20:02:30.839422100Z" level=info msg="Start streaming server" Oct 8 20:02:30.840399 containerd[1680]: time="2024-10-08T20:02:30.838926400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 20:02:30.842168 containerd[1680]: time="2024-10-08T20:02:30.842130000Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 20:02:30.842503 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 20:02:30.848141 containerd[1680]: time="2024-10-08T20:02:30.848116600Z" level=info msg="containerd successfully booted in 0.083727s" Oct 8 20:02:30.849528 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 20:02:30.858494 systemd[1]: Startup finished in 1.380s (firmware) + 1min 7.620s (loader) + 984ms (kernel) + 13.700s (initrd) + 12.730s (userspace) = 1min 36.416s. Oct 8 20:02:31.072148 kubelet[1795]: E1008 20:02:31.072019 1795 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:02:31.074328 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:02:31.074510 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:02:31.222882 login[1785]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Oct 8 20:02:31.224825 login[1786]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 8 20:02:31.237277 systemd-logind[1658]: New session 1 of user core. 
Oct 8 20:02:31.238697 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 20:02:31.244183 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 20:02:31.257789 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 20:02:31.267467 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 20:02:31.285020 (systemd)[1816]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 20:02:31.511114 systemd[1816]: Queued start job for default target default.target. Oct 8 20:02:31.523170 systemd[1816]: Created slice app.slice - User Application Slice. Oct 8 20:02:31.523203 systemd[1816]: Reached target paths.target - Paths. Oct 8 20:02:31.523220 systemd[1816]: Reached target timers.target - Timers. Oct 8 20:02:31.527045 systemd[1816]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 20:02:31.537777 systemd[1816]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 20:02:31.537842 systemd[1816]: Reached target sockets.target - Sockets. Oct 8 20:02:31.537860 systemd[1816]: Reached target basic.target - Basic System. Oct 8 20:02:31.537901 systemd[1816]: Reached target default.target - Main User Target. Oct 8 20:02:31.537962 systemd[1816]: Startup finished in 246ms. Oct 8 20:02:31.538349 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 20:02:31.547055 systemd[1]: Started session-1.scope - Session 1 of User core. 
Oct 8 20:02:31.740264 waagent[1783]: 2024-10-08T20:02:31.740156Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Oct 8 20:02:31.744161 waagent[1783]: 2024-10-08T20:02:31.744083Z INFO Daemon Daemon OS: flatcar 4081.1.0 Oct 8 20:02:31.746768 waagent[1783]: 2024-10-08T20:02:31.746709Z INFO Daemon Daemon Python: 3.11.9 Oct 8 20:02:31.749282 waagent[1783]: 2024-10-08T20:02:31.749208Z INFO Daemon Daemon Run daemon Oct 8 20:02:31.751718 waagent[1783]: 2024-10-08T20:02:31.751664Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.1.0' Oct 8 20:02:31.755793 waagent[1783]: 2024-10-08T20:02:31.755739Z INFO Daemon Daemon Using waagent for provisioning Oct 8 20:02:31.759930 waagent[1783]: 2024-10-08T20:02:31.758415Z INFO Daemon Daemon Activate resource disk Oct 8 20:02:31.759930 waagent[1783]: 2024-10-08T20:02:31.758607Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Oct 8 20:02:31.763212 waagent[1783]: 2024-10-08T20:02:31.763130Z INFO Daemon Daemon Found device: None Oct 8 20:02:31.763963 waagent[1783]: 2024-10-08T20:02:31.763902Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Oct 8 20:02:31.764368 waagent[1783]: 2024-10-08T20:02:31.764331Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Oct 8 20:02:31.766812 waagent[1783]: 2024-10-08T20:02:31.766767Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 8 20:02:31.767567 waagent[1783]: 2024-10-08T20:02:31.767530Z INFO Daemon Daemon Running default provisioning handler Oct 8 20:02:31.781801 waagent[1783]: 2024-10-08T20:02:31.781707Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Oct 8 20:02:31.787896 waagent[1783]: 2024-10-08T20:02:31.787845Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Oct 8 20:02:31.795997 waagent[1783]: 2024-10-08T20:02:31.788050Z INFO Daemon Daemon cloud-init is enabled: False Oct 8 20:02:31.795997 waagent[1783]: 2024-10-08T20:02:31.788497Z INFO Daemon Daemon Copying ovf-env.xml Oct 8 20:02:31.915110 waagent[1783]: 2024-10-08T20:02:31.913159Z INFO Daemon Daemon Successfully mounted dvd Oct 8 20:02:31.947255 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Oct 8 20:02:31.949971 waagent[1783]: 2024-10-08T20:02:31.949871Z INFO Daemon Daemon Detect protocol endpoint Oct 8 20:02:31.952444 waagent[1783]: 2024-10-08T20:02:31.952382Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 8 20:02:31.955411 waagent[1783]: 2024-10-08T20:02:31.955359Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Oct 8 20:02:31.958488 waagent[1783]: 2024-10-08T20:02:31.958437Z INFO Daemon Daemon Test for route to 168.63.129.16 Oct 8 20:02:31.961306 waagent[1783]: 2024-10-08T20:02:31.961255Z INFO Daemon Daemon Route to 168.63.129.16 exists Oct 8 20:02:31.963837 waagent[1783]: 2024-10-08T20:02:31.963789Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Oct 8 20:02:31.994692 waagent[1783]: 2024-10-08T20:02:31.994616Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Oct 8 20:02:32.003126 waagent[1783]: 2024-10-08T20:02:31.995284Z INFO Daemon Daemon Wire protocol version:2012-11-30 Oct 8 20:02:32.003126 waagent[1783]: 2024-10-08T20:02:31.996067Z INFO Daemon Daemon Server preferred version:2015-04-05 Oct 8 20:02:32.178148 waagent[1783]: 2024-10-08T20:02:32.177989Z INFO Daemon Daemon Initializing goal state during protocol detection Oct 8 20:02:32.181752 waagent[1783]: 2024-10-08T20:02:32.181679Z INFO Daemon Daemon Forcing an update of the goal state. 
Oct 8 20:02:32.187880 waagent[1783]: 2024-10-08T20:02:32.187821Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Oct 8 20:02:32.223409 login[1785]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 8 20:02:32.228138 systemd-logind[1658]: New session 2 of user core. Oct 8 20:02:32.236118 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 20:02:32.242297 waagent[1783]: 2024-10-08T20:02:32.236397Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Oct 8 20:02:32.242297 waagent[1783]: 2024-10-08T20:02:32.237191Z INFO Daemon Oct 8 20:02:32.242297 waagent[1783]: 2024-10-08T20:02:32.238083Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 78828939-74c4-4e5b-8450-b5b97cfdad21 eTag: 3844696392757372789 source: Fabric] Oct 8 20:02:32.242297 waagent[1783]: 2024-10-08T20:02:32.239354Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Oct 8 20:02:32.242297 waagent[1783]: 2024-10-08T20:02:32.240029Z INFO Daemon Oct 8 20:02:32.242297 waagent[1783]: 2024-10-08T20:02:32.240900Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Oct 8 20:02:32.246936 waagent[1783]: 2024-10-08T20:02:32.245828Z INFO Daemon Daemon Downloading artifacts profile blob Oct 8 20:02:32.330788 waagent[1783]: 2024-10-08T20:02:32.330701Z INFO Daemon Downloaded certificate {'thumbprint': '37854D03026B87B9704BE15355505E1A8ABEBDA2', 'hasPrivateKey': False} Oct 8 20:02:32.335797 waagent[1783]: 2024-10-08T20:02:32.335733Z INFO Daemon Downloaded certificate {'thumbprint': '17CE09731FDD673B5DC1375EA94C69F1B9EFB2AB', 'hasPrivateKey': True} Oct 8 20:02:32.341959 waagent[1783]: 2024-10-08T20:02:32.336357Z INFO Daemon Fetch goal state completed Oct 8 20:02:32.344729 waagent[1783]: 2024-10-08T20:02:32.344681Z INFO Daemon Daemon Starting provisioning Oct 8 20:02:32.359168 waagent[1783]: 2024-10-08T20:02:32.344930Z INFO Daemon Daemon Handle ovf-env.xml. 
Oct 8 20:02:32.359168 waagent[1783]: 2024-10-08T20:02:32.345984Z INFO Daemon Daemon Set hostname [ci-4081.1.0-a-b9ef23c535] Oct 8 20:02:32.359168 waagent[1783]: 2024-10-08T20:02:32.348299Z INFO Daemon Daemon Publish hostname [ci-4081.1.0-a-b9ef23c535] Oct 8 20:02:32.359168 waagent[1783]: 2024-10-08T20:02:32.349488Z INFO Daemon Daemon Examine /proc/net/route for primary interface Oct 8 20:02:32.359168 waagent[1783]: 2024-10-08T20:02:32.350027Z INFO Daemon Daemon Primary interface is [eth0] Oct 8 20:02:32.383277 systemd-networkd[1320]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:02:32.383291 systemd-networkd[1320]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:02:32.383343 systemd-networkd[1320]: eth0: DHCP lease lost Oct 8 20:02:32.384680 waagent[1783]: 2024-10-08T20:02:32.384588Z INFO Daemon Daemon Create user account if not exists Oct 8 20:02:32.400865 waagent[1783]: 2024-10-08T20:02:32.385015Z INFO Daemon Daemon User core already exists, skip useradd Oct 8 20:02:32.400865 waagent[1783]: 2024-10-08T20:02:32.385498Z INFO Daemon Daemon Configure sudoer Oct 8 20:02:32.400865 waagent[1783]: 2024-10-08T20:02:32.386728Z INFO Daemon Daemon Configure sshd Oct 8 20:02:32.400865 waagent[1783]: 2024-10-08T20:02:32.387505Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Oct 8 20:02:32.400865 waagent[1783]: 2024-10-08T20:02:32.388583Z INFO Daemon Daemon Deploy ssh public key. 
Oct 8 20:02:32.403033 systemd-networkd[1320]: eth0: DHCPv6 lease lost Oct 8 20:02:32.432977 systemd-networkd[1320]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16 Oct 8 20:02:33.510972 waagent[1783]: 2024-10-08T20:02:33.510877Z INFO Daemon Daemon Provisioning complete Oct 8 20:02:33.523813 waagent[1783]: 2024-10-08T20:02:33.523746Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Oct 8 20:02:33.530873 waagent[1783]: 2024-10-08T20:02:33.524136Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Oct 8 20:02:33.530873 waagent[1783]: 2024-10-08T20:02:33.525065Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Oct 8 20:02:33.649066 waagent[1870]: 2024-10-08T20:02:33.648974Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Oct 8 20:02:33.649435 waagent[1870]: 2024-10-08T20:02:33.649132Z INFO ExtHandler ExtHandler OS: flatcar 4081.1.0 Oct 8 20:02:33.649435 waagent[1870]: 2024-10-08T20:02:33.649214Z INFO ExtHandler ExtHandler Python: 3.11.9 Oct 8 20:02:33.690180 waagent[1870]: 2024-10-08T20:02:33.690073Z INFO ExtHandler ExtHandler Distro: flatcar-4081.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Oct 8 20:02:33.690448 waagent[1870]: 2024-10-08T20:02:33.690387Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 8 20:02:33.690561 waagent[1870]: 2024-10-08T20:02:33.690509Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 8 20:02:33.699568 waagent[1870]: 2024-10-08T20:02:33.699481Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Oct 8 20:02:33.704541 waagent[1870]: 2024-10-08T20:02:33.704489Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Oct 8 20:02:33.705006 waagent[1870]: 2024-10-08T20:02:33.704951Z INFO ExtHandler 
Oct 8 20:02:33.705081 waagent[1870]: 2024-10-08T20:02:33.705043Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1933deda-1248-46fe-9f69-32f393b5ff1d eTag: 3844696392757372789 source: Fabric] Oct 8 20:02:33.705396 waagent[1870]: 2024-10-08T20:02:33.705350Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Oct 8 20:02:33.705943 waagent[1870]: 2024-10-08T20:02:33.705878Z INFO ExtHandler Oct 8 20:02:33.706015 waagent[1870]: 2024-10-08T20:02:33.705980Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Oct 8 20:02:33.709397 waagent[1870]: 2024-10-08T20:02:33.709356Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Oct 8 20:02:33.777024 waagent[1870]: 2024-10-08T20:02:33.775867Z INFO ExtHandler Downloaded certificate {'thumbprint': '37854D03026B87B9704BE15355505E1A8ABEBDA2', 'hasPrivateKey': False} Oct 8 20:02:33.777024 waagent[1870]: 2024-10-08T20:02:33.776274Z INFO ExtHandler Downloaded certificate {'thumbprint': '17CE09731FDD673B5DC1375EA94C69F1B9EFB2AB', 'hasPrivateKey': True} Oct 8 20:02:33.777024 waagent[1870]: 2024-10-08T20:02:33.776628Z INFO ExtHandler Fetch goal state completed Oct 8 20:02:33.792816 waagent[1870]: 2024-10-08T20:02:33.792747Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1870 Oct 8 20:02:33.792970 waagent[1870]: 2024-10-08T20:02:33.792931Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Oct 8 20:02:33.794523 waagent[1870]: 2024-10-08T20:02:33.794466Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.1.0', '', 'Flatcar Container Linux by Kinvolk'] Oct 8 20:02:33.794893 waagent[1870]: 2024-10-08T20:02:33.794843Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Oct 8 20:02:33.849497 waagent[1870]: 2024-10-08T20:02:33.849439Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up 
waagent-network-setup.service Oct 8 20:02:33.849795 waagent[1870]: 2024-10-08T20:02:33.849733Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Oct 8 20:02:33.856911 waagent[1870]: 2024-10-08T20:02:33.856864Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Oct 8 20:02:33.863379 systemd[1]: Reloading requested from client PID 1885 ('systemctl') (unit waagent.service)... Oct 8 20:02:33.863396 systemd[1]: Reloading... Oct 8 20:02:33.950967 zram_generator::config[1920]: No configuration found. Oct 8 20:02:34.072471 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:02:34.151522 systemd[1]: Reloading finished in 287 ms. Oct 8 20:02:34.178943 waagent[1870]: 2024-10-08T20:02:34.177143Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Oct 8 20:02:34.185098 systemd[1]: Reloading requested from client PID 1976 ('systemctl') (unit waagent.service)... Oct 8 20:02:34.185113 systemd[1]: Reloading... Oct 8 20:02:34.273085 zram_generator::config[2006]: No configuration found. Oct 8 20:02:34.392834 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:02:34.471445 systemd[1]: Reloading finished in 285 ms. 
Oct 8 20:02:34.498279 waagent[1870]: 2024-10-08T20:02:34.496101Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Oct 8 20:02:34.499516 waagent[1870]: 2024-10-08T20:02:34.498583Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Oct 8 20:02:35.557239 waagent[1870]: 2024-10-08T20:02:35.557139Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Oct 8 20:02:35.558055 waagent[1870]: 2024-10-08T20:02:35.557979Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Oct 8 20:02:35.559008 waagent[1870]: 2024-10-08T20:02:35.558889Z INFO ExtHandler ExtHandler Starting env monitor service. Oct 8 20:02:35.559502 waagent[1870]: 2024-10-08T20:02:35.559435Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Oct 8 20:02:35.559893 waagent[1870]: 2024-10-08T20:02:35.559829Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 8 20:02:35.560382 waagent[1870]: 2024-10-08T20:02:35.560284Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Oct 8 20:02:35.560465 waagent[1870]: 2024-10-08T20:02:35.560356Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 8 20:02:35.560600 waagent[1870]: 2024-10-08T20:02:35.560557Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Oct 8 20:02:35.560964 waagent[1870]: 2024-10-08T20:02:35.560888Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Oct 8 20:02:35.561088 waagent[1870]: 2024-10-08T20:02:35.561041Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 8 20:02:35.561279 waagent[1870]: 2024-10-08T20:02:35.561202Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Oct 8 20:02:35.561906 waagent[1870]: 2024-10-08T20:02:35.561790Z INFO EnvHandler ExtHandler Configure routes Oct 8 20:02:35.561906 waagent[1870]: 2024-10-08T20:02:35.561850Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 8 20:02:35.562423 waagent[1870]: 2024-10-08T20:02:35.562362Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Oct 8 20:02:35.562540 waagent[1870]: 2024-10-08T20:02:35.562474Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Oct 8 20:02:35.563580 waagent[1870]: 2024-10-08T20:02:35.563057Z INFO EnvHandler ExtHandler Gateway:None Oct 8 20:02:35.563580 waagent[1870]: 2024-10-08T20:02:35.563143Z INFO EnvHandler ExtHandler Routes:None Oct 8 20:02:35.565166 waagent[1870]: 2024-10-08T20:02:35.565117Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Oct 8 20:02:35.565166 waagent[1870]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Oct 8 20:02:35.565166 waagent[1870]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Oct 8 20:02:35.565166 waagent[1870]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Oct 8 20:02:35.565166 waagent[1870]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Oct 8 20:02:35.565166 waagent[1870]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 8 20:02:35.565166 waagent[1870]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 8 20:02:35.575511 waagent[1870]: 2024-10-08T20:02:35.575464Z INFO ExtHandler ExtHandler Oct 8 20:02:35.575711 waagent[1870]: 2024-10-08T20:02:35.575675Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a557396a-bd3a-4156-a286-0b04382fa033 correlation 152fbad9-5b5d-4e33-8eec-7c9f78d33d92 created: 2024-10-08T20:00:42.900253Z] Oct 8 20:02:35.576284 waagent[1870]: 2024-10-08T20:02:35.576233Z INFO 
ExtHandler ExtHandler No extension handlers found, not processing anything. Oct 8 20:02:35.578151 waagent[1870]: 2024-10-08T20:02:35.577176Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Oct 8 20:02:35.635172 waagent[1870]: 2024-10-08T20:02:35.635002Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 2BACB6DC-0968-4032-979A-A562595FFC90;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Oct 8 20:02:35.685865 waagent[1870]: 2024-10-08T20:02:35.685781Z INFO MonitorHandler ExtHandler Network interfaces: Oct 8 20:02:35.685865 waagent[1870]: Executing ['ip', '-a', '-o', 'link']: Oct 8 20:02:35.685865 waagent[1870]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Oct 8 20:02:35.685865 waagent[1870]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d8:07:a2 brd ff:ff:ff:ff:ff:ff Oct 8 20:02:35.685865 waagent[1870]: 3: enP42585s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d8:07:a2 brd ff:ff:ff:ff:ff:ff\ altname enP42585p0s2 Oct 8 20:02:35.685865 waagent[1870]: Executing ['ip', '-4', '-a', '-o', 'address']: Oct 8 20:02:35.685865 waagent[1870]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Oct 8 20:02:35.685865 waagent[1870]: 2: eth0 inet 10.200.8.13/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Oct 8 20:02:35.685865 waagent[1870]: Executing ['ip', '-6', '-a', '-o', 'address']: Oct 8 20:02:35.685865 waagent[1870]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Oct 8 20:02:35.685865 waagent[1870]: 2: eth0 inet6 fe80::20d:3aff:fed8:7a2/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Oct 8 20:02:35.685865 waagent[1870]: 3: 
enP42585s1 inet6 fe80::20d:3aff:fed8:7a2/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Oct 8 20:02:35.729126 waagent[1870]: 2024-10-08T20:02:35.729071Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Oct 8 20:02:35.729126 waagent[1870]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Oct 8 20:02:35.729126 waagent[1870]: pkts bytes target prot opt in out source destination Oct 8 20:02:35.729126 waagent[1870]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 8 20:02:35.729126 waagent[1870]: pkts bytes target prot opt in out source destination Oct 8 20:02:35.729126 waagent[1870]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Oct 8 20:02:35.729126 waagent[1870]: pkts bytes target prot opt in out source destination Oct 8 20:02:35.729126 waagent[1870]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Oct 8 20:02:35.729126 waagent[1870]: 5 457 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 8 20:02:35.729126 waagent[1870]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 8 20:02:35.732449 waagent[1870]: 2024-10-08T20:02:35.732391Z INFO EnvHandler ExtHandler Current Firewall rules: Oct 8 20:02:35.732449 waagent[1870]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Oct 8 20:02:35.732449 waagent[1870]: pkts bytes target prot opt in out source destination Oct 8 20:02:35.732449 waagent[1870]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 8 20:02:35.732449 waagent[1870]: pkts bytes target prot opt in out source destination Oct 8 20:02:35.732449 waagent[1870]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Oct 8 20:02:35.732449 waagent[1870]: pkts bytes target prot opt in out source destination Oct 8 20:02:35.732449 waagent[1870]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Oct 8 20:02:35.732449 waagent[1870]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 8 20:02:35.732449 waagent[1870]: 0 0 DROP tcp -- * * 
0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 8 20:02:35.732837 waagent[1870]: 2024-10-08T20:02:35.732684Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Oct 8 20:02:41.325373 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 20:02:41.331196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:02:41.434508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:02:41.439350 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:02:42.073481 kubelet[2106]: E1008 20:02:42.073428 2106 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:02:42.076850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:02:42.077057 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:02:52.327635 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 8 20:02:52.334190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:02:52.467129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:02:52.471545 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:02:52.767142 chronyd[1654]: Selected source PHC0 Oct 8 20:02:53.047488 kubelet[2121]: E1008 20:02:53.047377 2121 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:02:53.049734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:02:53.049929 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:03:03.175522 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 8 20:03:03.182233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:03.272824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:03:03.277555 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:03:03.313548 kubelet[2136]: E1008 20:03:03.313485 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:03:03.315806 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:03:03.316008 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:03:12.155309 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Oct 8 20:03:13.425476 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 8 20:03:13.431163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:13.520592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:03:13.524806 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:03:14.069171 kubelet[2151]: E1008 20:03:14.069095 2151 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:03:14.071488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:03:16.888889 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2170) Oct 8 20:03:16.888987 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2171) Oct 8 20:03:16.889024 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2171) Oct 8 20:03:14.071662 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:03:16.889356 update_engine[1662]: I20241008 20:03:14.472070 1662 update_attempter.cc:509] Updating boot flags... Oct 8 20:03:20.536327 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 20:03:20.541201 systemd[1]: Started sshd@0-10.200.8.13:22-10.200.16.10:48856.service - OpenSSH per-connection server daemon (10.200.16.10:48856). 
Oct 8 20:03:21.258231 sshd[2253]: Accepted publickey for core from 10.200.16.10 port 48856 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:03:21.260067 sshd[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:21.264976 systemd-logind[1658]: New session 3 of user core. Oct 8 20:03:21.272072 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 20:03:21.835709 systemd[1]: Started sshd@1-10.200.8.13:22-10.200.16.10:48862.service - OpenSSH per-connection server daemon (10.200.16.10:48862). Oct 8 20:03:22.511521 sshd[2258]: Accepted publickey for core from 10.200.16.10 port 48862 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:03:22.513163 sshd[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:22.517007 systemd-logind[1658]: New session 4 of user core. Oct 8 20:03:22.524068 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 20:03:22.990835 sshd[2258]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:22.993843 systemd[1]: sshd@1-10.200.8.13:22-10.200.16.10:48862.service: Deactivated successfully. Oct 8 20:03:22.995667 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 20:03:22.997142 systemd-logind[1658]: Session 4 logged out. Waiting for processes to exit. Oct 8 20:03:22.998100 systemd-logind[1658]: Removed session 4. Oct 8 20:03:23.112813 systemd[1]: Started sshd@2-10.200.8.13:22-10.200.16.10:48870.service - OpenSSH per-connection server daemon (10.200.16.10:48870). Oct 8 20:03:23.784762 sshd[2265]: Accepted publickey for core from 10.200.16.10 port 48870 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:03:23.786555 sshd[2265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:23.791898 systemd-logind[1658]: New session 5 of user core. Oct 8 20:03:23.801071 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 8 20:03:24.175434 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Oct 8 20:03:24.181214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:24.260668 sshd[2265]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:24.265879 systemd[1]: sshd@2-10.200.8.13:22-10.200.16.10:48870.service: Deactivated successfully. Oct 8 20:03:24.268649 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 20:03:24.269696 systemd-logind[1658]: Session 5 logged out. Waiting for processes to exit. Oct 8 20:03:24.273860 systemd-logind[1658]: Removed session 5. Oct 8 20:03:24.284385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:03:24.293245 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:03:24.328447 kubelet[2279]: E1008 20:03:24.328395 2279 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:03:24.330686 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:03:24.330867 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:03:24.362645 systemd[1]: Started sshd@3-10.200.8.13:22-10.200.16.10:45708.service - OpenSSH per-connection server daemon (10.200.16.10:45708). Oct 8 20:03:24.992454 sshd[2287]: Accepted publickey for core from 10.200.16.10 port 45708 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:03:24.994240 sshd[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:24.999105 systemd-logind[1658]: New session 6 of user core. 
Oct 8 20:03:25.008070 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 8 20:03:25.440486 sshd[2287]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:25.444881 systemd[1]: sshd@3-10.200.8.13:22-10.200.16.10:45708.service: Deactivated successfully. Oct 8 20:03:25.447000 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 20:03:25.447751 systemd-logind[1658]: Session 6 logged out. Waiting for processes to exit. Oct 8 20:03:25.448687 systemd-logind[1658]: Removed session 6. Oct 8 20:03:25.554884 systemd[1]: Started sshd@4-10.200.8.13:22-10.200.16.10:45714.service - OpenSSH per-connection server daemon (10.200.16.10:45714). Oct 8 20:03:26.193751 sshd[2294]: Accepted publickey for core from 10.200.16.10 port 45714 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:03:26.195547 sshd[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:26.201039 systemd-logind[1658]: New session 7 of user core. Oct 8 20:03:26.211092 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 20:03:26.787869 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 20:03:26.788364 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:03:26.803241 sudo[2297]: pam_unix(sudo:session): session closed for user root Oct 8 20:03:26.908398 sshd[2294]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:26.912142 systemd[1]: sshd@4-10.200.8.13:22-10.200.16.10:45714.service: Deactivated successfully. Oct 8 20:03:26.914452 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 20:03:26.916259 systemd-logind[1658]: Session 7 logged out. Waiting for processes to exit. Oct 8 20:03:26.917487 systemd-logind[1658]: Removed session 7. Oct 8 20:03:27.022280 systemd[1]: Started sshd@5-10.200.8.13:22-10.200.16.10:45726.service - OpenSSH per-connection server daemon (10.200.16.10:45726). 
Oct 8 20:03:27.666670 sshd[2302]: Accepted publickey for core from 10.200.16.10 port 45726 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:03:27.668509 sshd[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:27.673954 systemd-logind[1658]: New session 8 of user core. Oct 8 20:03:27.680069 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 20:03:28.022379 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 20:03:28.022736 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:03:28.025870 sudo[2306]: pam_unix(sudo:session): session closed for user root Oct 8 20:03:28.030705 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 20:03:28.031098 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:03:28.043238 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 20:03:28.045352 auditctl[2309]: No rules Oct 8 20:03:28.046454 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 20:03:28.046682 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 20:03:28.048614 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 20:03:28.090568 augenrules[2327]: No rules Oct 8 20:03:28.091847 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 20:03:28.093042 sudo[2305]: pam_unix(sudo:session): session closed for user root Oct 8 20:03:28.197252 sshd[2302]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:28.201721 systemd[1]: sshd@5-10.200.8.13:22-10.200.16.10:45726.service: Deactivated successfully. Oct 8 20:03:28.203784 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 20:03:28.204675 systemd-logind[1658]: Session 8 logged out. 
Waiting for processes to exit. Oct 8 20:03:28.205719 systemd-logind[1658]: Removed session 8. Oct 8 20:03:28.306871 systemd[1]: Started sshd@6-10.200.8.13:22-10.200.16.10:45742.service - OpenSSH per-connection server daemon (10.200.16.10:45742). Oct 8 20:03:28.936148 sshd[2335]: Accepted publickey for core from 10.200.16.10 port 45742 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:03:28.937893 sshd[2335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:28.943096 systemd-logind[1658]: New session 9 of user core. Oct 8 20:03:28.949089 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 20:03:29.283106 sudo[2338]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 20:03:29.283475 sudo[2338]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:03:30.943226 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 20:03:30.943745 (dockerd)[2354]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 20:03:33.018663 dockerd[2354]: time="2024-10-08T20:03:33.018379956Z" level=info msg="Starting up" Oct 8 20:03:33.531222 dockerd[2354]: time="2024-10-08T20:03:33.530946225Z" level=info msg="Loading containers: start." Oct 8 20:03:33.691945 kernel: Initializing XFRM netlink socket Oct 8 20:03:33.837697 systemd-networkd[1320]: docker0: Link UP Oct 8 20:03:33.877066 dockerd[2354]: time="2024-10-08T20:03:33.877028609Z" level=info msg="Loading containers: done." 
Oct 8 20:03:33.947277 dockerd[2354]: time="2024-10-08T20:03:33.947227435Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 20:03:33.947526 dockerd[2354]: time="2024-10-08T20:03:33.947363136Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 8 20:03:33.947526 dockerd[2354]: time="2024-10-08T20:03:33.947496537Z" level=info msg="Daemon has completed initialization" Oct 8 20:03:34.007882 dockerd[2354]: time="2024-10-08T20:03:34.007822975Z" level=info msg="API listen on /run/docker.sock" Oct 8 20:03:34.008502 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 20:03:34.425362 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Oct 8 20:03:34.438817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:34.543991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:03:34.558289 (kubelet)[2499]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:03:35.104849 containerd[1680]: time="2024-10-08T20:03:35.104754051Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\"" Oct 8 20:03:35.141017 kubelet[2499]: E1008 20:03:35.111891 2499 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:03:35.113894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:03:35.114050 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:03:35.810748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1681929593.mount: Deactivated successfully. 
Oct 8 20:03:37.600888 containerd[1680]: time="2024-10-08T20:03:37.600820696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:37.604699 containerd[1680]: time="2024-10-08T20:03:37.604648530Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.0: active requests=0, bytes read=28066629" Oct 8 20:03:37.611775 containerd[1680]: time="2024-10-08T20:03:37.611613592Z" level=info msg="ImageCreate event name:\"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:37.617824 containerd[1680]: time="2024-10-08T20:03:37.617760747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:37.619379 containerd[1680]: time="2024-10-08T20:03:37.619151260Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.0\" with image id \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.0\", repo digest \"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\", size \"28063421\" in 2.514346009s" Oct 8 20:03:37.619379 containerd[1680]: time="2024-10-08T20:03:37.619204260Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\" returns image reference \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\"" Oct 8 20:03:37.621380 containerd[1680]: time="2024-10-08T20:03:37.621340479Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\"" Oct 8 20:03:39.220207 containerd[1680]: time="2024-10-08T20:03:39.220147128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:39.224210 containerd[1680]: time="2024-10-08T20:03:39.224141164Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.0: active requests=0, bytes read=24690930" Oct 8 20:03:39.230637 containerd[1680]: time="2024-10-08T20:03:39.230606221Z" level=info msg="ImageCreate event name:\"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:39.236062 containerd[1680]: time="2024-10-08T20:03:39.235997269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:39.237314 containerd[1680]: time="2024-10-08T20:03:39.237142980Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.0\" with image id \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\", size \"26240868\" in 1.615751999s" Oct 8 20:03:39.237314 containerd[1680]: time="2024-10-08T20:03:39.237183580Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\" returns image reference \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\"" Oct 8 20:03:39.238145 containerd[1680]: time="2024-10-08T20:03:39.237825786Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\"" Oct 8 20:03:40.482983 containerd[1680]: time="2024-10-08T20:03:40.482904582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:40.485827 containerd[1680]: time="2024-10-08T20:03:40.485762908Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.0: active requests=0, bytes read=18646766" Oct 8 20:03:40.489539 containerd[1680]: time="2024-10-08T20:03:40.489480741Z" level=info msg="ImageCreate event name:\"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:40.494935 containerd[1680]: time="2024-10-08T20:03:40.494882289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:40.496048 containerd[1680]: time="2024-10-08T20:03:40.495852398Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.0\" with image id \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\", size \"20196722\" in 1.257991311s" Oct 8 20:03:40.496048 containerd[1680]: time="2024-10-08T20:03:40.495892298Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\" returns image reference \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\"" Oct 8 20:03:40.496863 containerd[1680]: time="2024-10-08T20:03:40.496659405Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\"" Oct 8 20:03:41.739081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2224428041.mount: Deactivated successfully. 
Oct 8 20:03:42.312928 containerd[1680]: time="2024-10-08T20:03:42.312849097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:42.314908 containerd[1680]: time="2024-10-08T20:03:42.314829915Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.0: active requests=0, bytes read=30208889" Oct 8 20:03:42.321512 containerd[1680]: time="2024-10-08T20:03:42.321458477Z" level=info msg="ImageCreate event name:\"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:42.325864 containerd[1680]: time="2024-10-08T20:03:42.325812018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:42.329056 containerd[1680]: time="2024-10-08T20:03:42.326739027Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.0\" with image id \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\", repo tag \"registry.k8s.io/kube-proxy:v1.31.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\", size \"30207900\" in 1.82982302s" Oct 8 20:03:42.329056 containerd[1680]: time="2024-10-08T20:03:42.326785927Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\" returns image reference \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\"" Oct 8 20:03:42.330894 containerd[1680]: time="2024-10-08T20:03:42.330871266Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 20:03:42.905989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount147362395.mount: Deactivated successfully. 
Oct 8 20:03:44.346845 containerd[1680]: time="2024-10-08T20:03:44.346787398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:44.350136 containerd[1680]: time="2024-10-08T20:03:44.350078829Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Oct 8 20:03:44.357567 containerd[1680]: time="2024-10-08T20:03:44.357505599Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:44.362706 containerd[1680]: time="2024-10-08T20:03:44.362641747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:44.364246 containerd[1680]: time="2024-10-08T20:03:44.364206662Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.033206095s" Oct 8 20:03:44.364246 containerd[1680]: time="2024-10-08T20:03:44.364241462Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 8 20:03:44.365207 containerd[1680]: time="2024-10-08T20:03:44.365170571Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 8 20:03:44.940727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2414830810.mount: Deactivated successfully. 
Oct 8 20:03:44.965581 containerd[1680]: time="2024-10-08T20:03:44.965530909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:44.967729 containerd[1680]: time="2024-10-08T20:03:44.967674430Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Oct 8 20:03:44.973100 containerd[1680]: time="2024-10-08T20:03:44.973049780Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:44.978014 containerd[1680]: time="2024-10-08T20:03:44.977964626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:44.978888 containerd[1680]: time="2024-10-08T20:03:44.978736833Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 613.529862ms" Oct 8 20:03:44.978888 containerd[1680]: time="2024-10-08T20:03:44.978773334Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 8 20:03:44.979574 containerd[1680]: time="2024-10-08T20:03:44.979527641Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Oct 8 20:03:45.175281 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Oct 8 20:03:45.183142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:45.278244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:03:45.291255 (kubelet)[2632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:03:45.861980 kubelet[2632]: E1008 20:03:45.861907 2632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:03:45.863619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:03:45.863792 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:03:46.254641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134918191.mount: Deactivated successfully. Oct 8 20:03:48.594425 containerd[1680]: time="2024-10-08T20:03:48.594366390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:48.597608 containerd[1680]: time="2024-10-08T20:03:48.597544520Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56241748" Oct 8 20:03:48.604439 containerd[1680]: time="2024-10-08T20:03:48.604376684Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:48.609846 containerd[1680]: time="2024-10-08T20:03:48.609789735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:48.610949 containerd[1680]: time="2024-10-08T20:03:48.610884945Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id 
\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.631320304s" Oct 8 20:03:48.610949 containerd[1680]: time="2024-10-08T20:03:48.610946146Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Oct 8 20:03:51.413432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:03:51.419198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:51.451438 systemd[1]: Reloading requested from client PID 2719 ('systemctl') (unit session-9.scope)... Oct 8 20:03:51.451455 systemd[1]: Reloading... Oct 8 20:03:51.545966 zram_generator::config[2755]: No configuration found. Oct 8 20:03:51.690987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:03:51.778780 systemd[1]: Reloading finished in 326 ms. Oct 8 20:03:51.833407 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 8 20:03:51.833524 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 8 20:03:51.833829 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:03:51.835569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:52.076877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:03:52.083000 (kubelet)[2830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:03:52.120250 kubelet[2830]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:03:52.120250 kubelet[2830]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:03:52.120250 kubelet[2830]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:03:52.785284 kubelet[2830]: I1008 20:03:52.784743 2830 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:03:53.098894 kubelet[2830]: I1008 20:03:53.098767 2830 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 8 20:03:53.098894 kubelet[2830]: I1008 20:03:53.098798 2830 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:03:53.099340 kubelet[2830]: I1008 20:03:53.099306 2830 server.go:929] "Client rotation is on, will bootstrap in background" Oct 8 20:03:53.120558 kubelet[2830]: I1008 20:03:53.120006 2830 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:03:53.121002 kubelet[2830]: E1008 20:03:53.120830 2830 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.200.8.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:03:53.130214 kubelet[2830]: E1008 20:03:53.130175 2830 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 8 20:03:53.130214 kubelet[2830]: I1008 20:03:53.130209 2830 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 8 20:03:53.134936 kubelet[2830]: I1008 20:03:53.134445 2830 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 20:03:53.134936 kubelet[2830]: I1008 20:03:53.134544 2830 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 8 20:03:53.134936 kubelet[2830]: I1008 20:03:53.134674 2830 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:03:53.134936 kubelet[2830]: I1008 20:03:53.134701 2830 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.1.0-a-b9ef23c535","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 8 20:03:53.135239 kubelet[2830]: I1008 20:03:53.134860 2830 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:03:53.135239 kubelet[2830]: I1008 20:03:53.134867 2830 container_manager_linux.go:300] "Creating device plugin manager" Oct 8 20:03:53.135239 kubelet[2830]: I1008 20:03:53.135024 2830 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:03:53.137283 kubelet[2830]: I1008 20:03:53.137255 2830 kubelet.go:408] 
"Attempting to sync node with API server" Oct 8 20:03:53.137283 kubelet[2830]: I1008 20:03:53.137287 2830 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:03:53.137416 kubelet[2830]: I1008 20:03:53.137324 2830 kubelet.go:314] "Adding apiserver pod source" Oct 8 20:03:53.137416 kubelet[2830]: I1008 20:03:53.137339 2830 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:03:53.144357 kubelet[2830]: W1008 20:03:53.143987 2830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-a-b9ef23c535&limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Oct 8 20:03:53.144357 kubelet[2830]: E1008 20:03:53.144066 2830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-a-b9ef23c535&limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:03:53.145750 kubelet[2830]: W1008 20:03:53.145702 2830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Oct 8 20:03:53.145886 kubelet[2830]: E1008 20:03:53.145869 2830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:03:53.146071 kubelet[2830]: I1008 20:03:53.146058 2830 kuberuntime_manager.go:262] "Container 
runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 20:03:53.147924 kubelet[2830]: I1008 20:03:53.147893 2830 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:03:53.148082 kubelet[2830]: W1008 20:03:53.148071 2830 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 8 20:03:53.149041 kubelet[2830]: I1008 20:03:53.149025 2830 server.go:1269] "Started kubelet" Oct 8 20:03:53.150424 kubelet[2830]: I1008 20:03:53.150391 2830 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:03:53.151543 kubelet[2830]: I1008 20:03:53.151517 2830 server.go:460] "Adding debug handlers to kubelet server" Oct 8 20:03:53.155168 kubelet[2830]: I1008 20:03:53.154023 2830 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:03:53.155168 kubelet[2830]: I1008 20:03:53.154637 2830 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:03:53.155168 kubelet[2830]: I1008 20:03:53.154871 2830 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:03:53.158166 kubelet[2830]: E1008 20:03:53.155072 2830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.13:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.1.0-a-b9ef23c535.17fc92dc984beb1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.1.0-a-b9ef23c535,UID:ci-4081.1.0-a-b9ef23c535,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.1.0-a-b9ef23c535,},FirstTimestamp:2024-10-08 20:03:53.149000478 +0000 UTC m=+1.062425885,LastTimestamp:2024-10-08 
20:03:53.149000478 +0000 UTC m=+1.062425885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.1.0-a-b9ef23c535,}" Oct 8 20:03:53.160363 kubelet[2830]: E1008 20:03:53.160344 2830 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:03:53.160462 kubelet[2830]: I1008 20:03:53.160354 2830 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 8 20:03:53.162322 kubelet[2830]: I1008 20:03:53.162182 2830 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 8 20:03:53.162408 kubelet[2830]: E1008 20:03:53.162369 2830 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.1.0-a-b9ef23c535\" not found" Oct 8 20:03:53.162988 kubelet[2830]: E1008 20:03:53.162959 2830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-a-b9ef23c535?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="200ms" Oct 8 20:03:53.163282 kubelet[2830]: I1008 20:03:53.163265 2830 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:03:53.163440 kubelet[2830]: I1008 20:03:53.163422 2830 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:03:53.165063 kubelet[2830]: I1008 20:03:53.164415 2830 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 8 20:03:53.165063 kubelet[2830]: W1008 20:03:53.164754 2830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Oct 8 20:03:53.165063 kubelet[2830]: E1008 20:03:53.164806 2830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:03:53.165063 kubelet[2830]: I1008 20:03:53.164865 2830 reconciler.go:26] "Reconciler: start to sync state" Oct 8 20:03:53.165863 kubelet[2830]: I1008 20:03:53.165847 2830 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:03:53.197434 kubelet[2830]: I1008 20:03:53.197397 2830 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:03:53.199656 kubelet[2830]: I1008 20:03:53.198729 2830 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 20:03:53.199656 kubelet[2830]: I1008 20:03:53.198766 2830 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:03:53.199656 kubelet[2830]: I1008 20:03:53.198787 2830 kubelet.go:2321] "Starting kubelet main sync loop" Oct 8 20:03:53.199656 kubelet[2830]: E1008 20:03:53.198831 2830 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:03:53.200736 kubelet[2830]: W1008 20:03:53.200691 2830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Oct 8 20:03:53.201804 kubelet[2830]: E1008 20:03:53.201773 2830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:03:53.262908 kubelet[2830]: E1008 20:03:53.262802 2830 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.1.0-a-b9ef23c535\" not found" Oct 8 20:03:53.264828 kubelet[2830]: I1008 20:03:53.264800 2830 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:03:53.264828 kubelet[2830]: I1008 20:03:53.264822 2830 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:03:53.265013 kubelet[2830]: I1008 20:03:53.264847 2830 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:03:53.273566 kubelet[2830]: I1008 20:03:53.273536 2830 policy_none.go:49] "None policy: Start" Oct 8 20:03:53.274322 kubelet[2830]: I1008 20:03:53.274244 2830 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:03:53.274322 
kubelet[2830]: I1008 20:03:53.274275 2830 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:03:53.283699 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 8 20:03:53.296657 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 20:03:53.299207 kubelet[2830]: E1008 20:03:53.299177 2830 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 20:03:53.299580 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 8 20:03:53.305841 kubelet[2830]: I1008 20:03:53.305621 2830 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:03:53.305841 kubelet[2830]: I1008 20:03:53.305839 2830 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 8 20:03:53.305999 kubelet[2830]: I1008 20:03:53.305858 2830 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 20:03:53.306259 kubelet[2830]: I1008 20:03:53.306224 2830 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:03:53.308699 kubelet[2830]: E1008 20:03:53.308661 2830 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.1.0-a-b9ef23c535\" not found" Oct 8 20:03:53.364414 kubelet[2830]: E1008 20:03:53.364267 2830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-a-b9ef23c535?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="400ms" Oct 8 20:03:53.408576 kubelet[2830]: I1008 20:03:53.408539 2830 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:53.408997 kubelet[2830]: E1008 
20:03:53.408961 2830 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:53.511290 systemd[1]: Created slice kubepods-burstable-pod34b815dd2aa2e0ea8e5389db5451c148.slice - libcontainer container kubepods-burstable-pod34b815dd2aa2e0ea8e5389db5451c148.slice. Oct 8 20:03:53.537275 systemd[1]: Created slice kubepods-burstable-pod5e7058bc7388c6972e79cf05530149c7.slice - libcontainer container kubepods-burstable-pod5e7058bc7388c6972e79cf05530149c7.slice. Oct 8 20:03:53.542656 systemd[1]: Created slice kubepods-burstable-podd37f964d12d466217a4a315e331fceea.slice - libcontainer container kubepods-burstable-podd37f964d12d466217a4a315e331fceea.slice. Oct 8 20:03:53.566779 kubelet[2830]: I1008 20:03:53.566730 2830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d37f964d12d466217a4a315e331fceea-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.1.0-a-b9ef23c535\" (UID: \"d37f964d12d466217a4a315e331fceea\") " pod="kube-system/kube-apiserver-ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:53.566779 kubelet[2830]: I1008 20:03:53.566793 2830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/34b815dd2aa2e0ea8e5389db5451c148-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.1.0-a-b9ef23c535\" (UID: \"34b815dd2aa2e0ea8e5389db5451c148\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:53.566779 kubelet[2830]: I1008 20:03:53.566822 2830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34b815dd2aa2e0ea8e5389db5451c148-k8s-certs\") pod 
\"kube-controller-manager-ci-4081.1.0-a-b9ef23c535\" (UID: \"34b815dd2aa2e0ea8e5389db5451c148\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:53.567147 kubelet[2830]: I1008 20:03:53.566846 2830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34b815dd2aa2e0ea8e5389db5451c148-kubeconfig\") pod \"kube-controller-manager-ci-4081.1.0-a-b9ef23c535\" (UID: \"34b815dd2aa2e0ea8e5389db5451c148\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:53.567147 kubelet[2830]: I1008 20:03:53.566873 2830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e7058bc7388c6972e79cf05530149c7-kubeconfig\") pod \"kube-scheduler-ci-4081.1.0-a-b9ef23c535\" (UID: \"5e7058bc7388c6972e79cf05530149c7\") " pod="kube-system/kube-scheduler-ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:53.567147 kubelet[2830]: I1008 20:03:53.566896 2830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d37f964d12d466217a4a315e331fceea-ca-certs\") pod \"kube-apiserver-ci-4081.1.0-a-b9ef23c535\" (UID: \"d37f964d12d466217a4a315e331fceea\") " pod="kube-system/kube-apiserver-ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:53.567147 kubelet[2830]: I1008 20:03:53.566938 2830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d37f964d12d466217a4a315e331fceea-k8s-certs\") pod \"kube-apiserver-ci-4081.1.0-a-b9ef23c535\" (UID: \"d37f964d12d466217a4a315e331fceea\") " pod="kube-system/kube-apiserver-ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:53.567147 kubelet[2830]: I1008 20:03:53.566963 2830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/34b815dd2aa2e0ea8e5389db5451c148-ca-certs\") pod \"kube-controller-manager-ci-4081.1.0-a-b9ef23c535\" (UID: \"34b815dd2aa2e0ea8e5389db5451c148\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:53.567290 kubelet[2830]: I1008 20:03:53.566988 2830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34b815dd2aa2e0ea8e5389db5451c148-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.1.0-a-b9ef23c535\" (UID: \"34b815dd2aa2e0ea8e5389db5451c148\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:53.611870 kubelet[2830]: I1008 20:03:53.611831 2830 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:53.612244 kubelet[2830]: E1008 20:03:53.612211 2830 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:53.765834 kubelet[2830]: E1008 20:03:53.765770 2830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-a-b9ef23c535?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="800ms" Oct 8 20:03:53.835028 containerd[1680]: time="2024-10-08T20:03:53.834974815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.1.0-a-b9ef23c535,Uid:34b815dd2aa2e0ea8e5389db5451c148,Namespace:kube-system,Attempt:0,}" Oct 8 20:03:53.841581 containerd[1680]: time="2024-10-08T20:03:53.841548490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.1.0-a-b9ef23c535,Uid:5e7058bc7388c6972e79cf05530149c7,Namespace:kube-system,Attempt:0,}" Oct 8 
20:03:53.847084 containerd[1680]: time="2024-10-08T20:03:53.847045152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.1.0-a-b9ef23c535,Uid:d37f964d12d466217a4a315e331fceea,Namespace:kube-system,Attempt:0,}" Oct 8 20:03:54.014538 kubelet[2830]: I1008 20:03:54.014498 2830 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:54.014943 kubelet[2830]: E1008 20:03:54.014895 2830 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:54.081906 kubelet[2830]: W1008 20:03:54.081760 2830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-a-b9ef23c535&limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Oct 8 20:03:54.081906 kubelet[2830]: E1008 20:03:54.081838 2830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-a-b9ef23c535&limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:03:54.384897 kubelet[2830]: W1008 20:03:54.384720 2830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Oct 8 20:03:54.384897 kubelet[2830]: E1008 20:03:54.384804 2830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:03:54.417567 kubelet[2830]: W1008 20:03:54.417530 2830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Oct 8 20:03:54.417707 kubelet[2830]: E1008 20:03:54.417578 2830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:03:54.438857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount373514542.mount: Deactivated successfully. Oct 8 20:03:54.475689 containerd[1680]: time="2024-10-08T20:03:54.475636842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:03:54.478006 containerd[1680]: time="2024-10-08T20:03:54.477952768Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Oct 8 20:03:54.482512 containerd[1680]: time="2024-10-08T20:03:54.482478919Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:03:54.485707 containerd[1680]: time="2024-10-08T20:03:54.485676155Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:03:54.491388 
containerd[1680]: time="2024-10-08T20:03:54.491339919Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:03:54.497295 containerd[1680]: time="2024-10-08T20:03:54.497257786Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:03:54.501165 containerd[1680]: time="2024-10-08T20:03:54.500876926Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:03:54.504630 kubelet[2830]: W1008 20:03:54.504578 2830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Oct 8 20:03:54.504713 kubelet[2830]: E1008 20:03:54.504650 2830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:03:54.506199 containerd[1680]: time="2024-10-08T20:03:54.506142386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:03:54.507125 containerd[1680]: time="2024-10-08T20:03:54.506869194Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 659.749242ms" Oct 8 20:03:54.509033 containerd[1680]: time="2024-10-08T20:03:54.509003418Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 667.391228ms" Oct 8 20:03:54.509550 containerd[1680]: time="2024-10-08T20:03:54.509519624Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 674.451708ms" Oct 8 20:03:54.566423 kubelet[2830]: E1008 20:03:54.566368 2830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-a-b9ef23c535?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="1.6s" Oct 8 20:03:54.816910 kubelet[2830]: I1008 20:03:54.816823 2830 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:54.817235 kubelet[2830]: E1008 20:03:54.817203 2830 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:55.258894 kubelet[2830]: E1008 20:03:55.258849 2830 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.200.8.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.13:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:03:55.329771 containerd[1680]: time="2024-10-08T20:03:55.329481873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:03:55.329771 containerd[1680]: time="2024-10-08T20:03:55.329546273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:03:55.329771 containerd[1680]: time="2024-10-08T20:03:55.329580974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:55.329771 containerd[1680]: time="2024-10-08T20:03:55.329668775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:55.333472 containerd[1680]: time="2024-10-08T20:03:55.333274615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:03:55.333472 containerd[1680]: time="2024-10-08T20:03:55.333320416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:03:55.333472 containerd[1680]: time="2024-10-08T20:03:55.333335116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:55.333472 containerd[1680]: time="2024-10-08T20:03:55.333401517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:55.336669 containerd[1680]: time="2024-10-08T20:03:55.336375450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:03:55.336669 containerd[1680]: time="2024-10-08T20:03:55.336437651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:03:55.336669 containerd[1680]: time="2024-10-08T20:03:55.336478752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:55.336669 containerd[1680]: time="2024-10-08T20:03:55.336575953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:55.365109 systemd[1]: Started cri-containerd-1040cb632b5921b802df7d0fe2e4401809cb42fc87a8e67663283677943b7386.scope - libcontainer container 1040cb632b5921b802df7d0fe2e4401809cb42fc87a8e67663283677943b7386. Oct 8 20:03:55.371884 systemd[1]: Started cri-containerd-7aa8643ffaeed00195745316e2eba76c5266f1b14bdec92562ab0b987b8518e0.scope - libcontainer container 7aa8643ffaeed00195745316e2eba76c5266f1b14bdec92562ab0b987b8518e0. Oct 8 20:03:55.374832 systemd[1]: Started cri-containerd-e08988ac5a4875aa7a413873461421cf748c41991efd0f95c0c692f26d30adbd.scope - libcontainer container e08988ac5a4875aa7a413873461421cf748c41991efd0f95c0c692f26d30adbd. 
Oct 8 20:03:55.439343 containerd[1680]: time="2024-10-08T20:03:55.439036308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.1.0-a-b9ef23c535,Uid:d37f964d12d466217a4a315e331fceea,Namespace:kube-system,Attempt:0,} returns sandbox id \"7aa8643ffaeed00195745316e2eba76c5266f1b14bdec92562ab0b987b8518e0\"" Oct 8 20:03:55.447437 containerd[1680]: time="2024-10-08T20:03:55.446855397Z" level=info msg="CreateContainer within sandbox \"7aa8643ffaeed00195745316e2eba76c5266f1b14bdec92562ab0b987b8518e0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 20:03:55.460529 containerd[1680]: time="2024-10-08T20:03:55.460493850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.1.0-a-b9ef23c535,Uid:5e7058bc7388c6972e79cf05530149c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e08988ac5a4875aa7a413873461421cf748c41991efd0f95c0c692f26d30adbd\"" Oct 8 20:03:55.466671 containerd[1680]: time="2024-10-08T20:03:55.466130514Z" level=info msg="CreateContainer within sandbox \"e08988ac5a4875aa7a413873461421cf748c41991efd0f95c0c692f26d30adbd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 20:03:55.471639 containerd[1680]: time="2024-10-08T20:03:55.469574053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.1.0-a-b9ef23c535,Uid:34b815dd2aa2e0ea8e5389db5451c148,Namespace:kube-system,Attempt:0,} returns sandbox id \"1040cb632b5921b802df7d0fe2e4401809cb42fc87a8e67663283677943b7386\"" Oct 8 20:03:55.473386 containerd[1680]: time="2024-10-08T20:03:55.473355495Z" level=info msg="CreateContainer within sandbox \"1040cb632b5921b802df7d0fe2e4401809cb42fc87a8e67663283677943b7386\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 20:03:55.519668 containerd[1680]: time="2024-10-08T20:03:55.519564317Z" level=info msg="CreateContainer within sandbox 
\"7aa8643ffaeed00195745316e2eba76c5266f1b14bdec92562ab0b987b8518e0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7f928ca229fd56ab4a8ae12f2fd2d44eed1ca5bc9da066e9c4559b4dd5b75942\"" Oct 8 20:03:55.520803 containerd[1680]: time="2024-10-08T20:03:55.520233324Z" level=info msg="StartContainer for \"7f928ca229fd56ab4a8ae12f2fd2d44eed1ca5bc9da066e9c4559b4dd5b75942\"" Oct 8 20:03:55.541128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4209500324.mount: Deactivated successfully. Oct 8 20:03:55.557085 systemd[1]: Started cri-containerd-7f928ca229fd56ab4a8ae12f2fd2d44eed1ca5bc9da066e9c4559b4dd5b75942.scope - libcontainer container 7f928ca229fd56ab4a8ae12f2fd2d44eed1ca5bc9da066e9c4559b4dd5b75942. Oct 8 20:03:55.562010 containerd[1680]: time="2024-10-08T20:03:55.561963495Z" level=info msg="CreateContainer within sandbox \"e08988ac5a4875aa7a413873461421cf748c41991efd0f95c0c692f26d30adbd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a823d247111a6f8241c7d130de9fd0ff291a53292a1e0460634c9228d4db3ad1\"" Oct 8 20:03:55.562670 containerd[1680]: time="2024-10-08T20:03:55.562575802Z" level=info msg="StartContainer for \"a823d247111a6f8241c7d130de9fd0ff291a53292a1e0460634c9228d4db3ad1\"" Oct 8 20:03:55.584349 containerd[1680]: time="2024-10-08T20:03:55.584209946Z" level=info msg="CreateContainer within sandbox \"1040cb632b5921b802df7d0fe2e4401809cb42fc87a8e67663283677943b7386\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bf10ae919908515aaa863e7659970fec8dc1c29f6dcefe2de1f8b7efcb99be88\"" Oct 8 20:03:55.586027 containerd[1680]: time="2024-10-08T20:03:55.585832064Z" level=info msg="StartContainer for \"bf10ae919908515aaa863e7659970fec8dc1c29f6dcefe2de1f8b7efcb99be88\"" Oct 8 20:03:55.610283 systemd[1]: Started cri-containerd-a823d247111a6f8241c7d130de9fd0ff291a53292a1e0460634c9228d4db3ad1.scope - libcontainer container 
a823d247111a6f8241c7d130de9fd0ff291a53292a1e0460634c9228d4db3ad1. Oct 8 20:03:55.643113 systemd[1]: Started cri-containerd-bf10ae919908515aaa863e7659970fec8dc1c29f6dcefe2de1f8b7efcb99be88.scope - libcontainer container bf10ae919908515aaa863e7659970fec8dc1c29f6dcefe2de1f8b7efcb99be88. Oct 8 20:03:55.648524 containerd[1680]: time="2024-10-08T20:03:55.648460070Z" level=info msg="StartContainer for \"7f928ca229fd56ab4a8ae12f2fd2d44eed1ca5bc9da066e9c4559b4dd5b75942\" returns successfully" Oct 8 20:03:55.730949 containerd[1680]: time="2024-10-08T20:03:55.729474284Z" level=info msg="StartContainer for \"a823d247111a6f8241c7d130de9fd0ff291a53292a1e0460634c9228d4db3ad1\" returns successfully" Oct 8 20:03:55.734842 containerd[1680]: time="2024-10-08T20:03:55.734802944Z" level=info msg="StartContainer for \"bf10ae919908515aaa863e7659970fec8dc1c29f6dcefe2de1f8b7efcb99be88\" returns successfully" Oct 8 20:03:56.420318 kubelet[2830]: I1008 20:03:56.420284 2830 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:57.827551 kubelet[2830]: E1008 20:03:57.827507 2830 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.1.0-a-b9ef23c535\" not found" node="ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:57.902587 kubelet[2830]: E1008 20:03:57.902463 2830 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.1.0-a-b9ef23c535.17fc92dc984beb1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.1.0-a-b9ef23c535,UID:ci-4081.1.0-a-b9ef23c535,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.1.0-a-b9ef23c535,},FirstTimestamp:2024-10-08 20:03:53.149000478 +0000 UTC m=+1.062425885,LastTimestamp:2024-10-08 20:03:53.149000478 +0000 UTC 
m=+1.062425885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.1.0-a-b9ef23c535,}" Oct 8 20:03:57.962508 kubelet[2830]: E1008 20:03:57.962172 2830 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.1.0-a-b9ef23c535.17fc92dc98f8d85e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.1.0-a-b9ef23c535,UID:ci-4081.1.0-a-b9ef23c535,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081.1.0-a-b9ef23c535,},FirstTimestamp:2024-10-08 20:03:53.160333406 +0000 UTC m=+1.073758813,LastTimestamp:2024-10-08 20:03:53.160333406 +0000 UTC m=+1.073758813,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.1.0-a-b9ef23c535,}" Oct 8 20:03:58.637615 kubelet[2830]: E1008 20:03:58.635775 2830 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.1.0-a-b9ef23c535.17fc92dc9f279ee0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.1.0-a-b9ef23c535,UID:ci-4081.1.0-a-b9ef23c535,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081.1.0-a-b9ef23c535 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081.1.0-a-b9ef23c535,},FirstTimestamp:2024-10-08 20:03:53.264062176 +0000 UTC m=+1.177487583,LastTimestamp:2024-10-08 20:03:53.264062176 +0000 UTC m=+1.177487583,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.1.0-a-b9ef23c535,}" Oct 8 
20:03:58.640690 kubelet[2830]: I1008 20:03:58.640642 2830 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.1.0-a-b9ef23c535" Oct 8 20:03:59.635961 kubelet[2830]: I1008 20:03:59.635851 2830 apiserver.go:52] "Watching apiserver" Oct 8 20:03:59.665083 kubelet[2830]: I1008 20:03:59.665037 2830 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 8 20:03:59.850607 kubelet[2830]: W1008 20:03:59.850376 2830 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 8 20:04:00.566796 systemd[1]: Reloading requested from client PID 3100 ('systemctl') (unit session-9.scope)... Oct 8 20:04:00.566813 systemd[1]: Reloading... Oct 8 20:04:00.668940 zram_generator::config[3140]: No configuration found. Oct 8 20:04:00.798849 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:04:00.801392 kubelet[2830]: W1008 20:04:00.800905 2830 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 8 20:04:00.898757 systemd[1]: Reloading finished in 331 ms. Oct 8 20:04:00.942787 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:04:00.958315 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 20:04:00.958593 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:04:00.963217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:04:01.111119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:04:01.112614 (kubelet)[3207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:04:01.152627 kubelet[3207]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:04:01.152627 kubelet[3207]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:04:01.152627 kubelet[3207]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:04:01.153092 kubelet[3207]: I1008 20:04:01.152622 3207 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:04:01.159652 kubelet[3207]: I1008 20:04:01.159615 3207 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 8 20:04:01.159652 kubelet[3207]: I1008 20:04:01.159638 3207 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:04:01.159968 kubelet[3207]: I1008 20:04:01.159910 3207 server.go:929] "Client rotation is on, will bootstrap in background" Oct 8 20:04:01.161167 kubelet[3207]: I1008 20:04:01.161143 3207 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 8 20:04:01.163673 kubelet[3207]: I1008 20:04:01.163356 3207 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:04:01.167329 kubelet[3207]: E1008 20:04:01.167271 3207 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 8 20:04:01.167329 kubelet[3207]: I1008 20:04:01.167313 3207 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 8 20:04:01.172547 kubelet[3207]: I1008 20:04:01.172335 3207 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 20:04:01.172547 kubelet[3207]: I1008 20:04:01.172453 3207 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 8 20:04:01.172678 kubelet[3207]: I1008 20:04:01.172571 3207 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:04:01.172789 kubelet[3207]: I1008 20:04:01.172602 3207 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.1.0-a-b9ef23c535","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 8 20:04:01.172907 kubelet[3207]: I1008 20:04:01.172794 3207 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:04:01.172907 kubelet[3207]: I1008 20:04:01.172807 3207 container_manager_linux.go:300] "Creating device plugin manager" Oct 8 20:04:01.172907 kubelet[3207]: I1008 20:04:01.172843 3207 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:04:01.173057 kubelet[3207]: I1008 20:04:01.172982 3207 kubelet.go:408] 
"Attempting to sync node with API server" Oct 8 20:04:01.173057 kubelet[3207]: I1008 20:04:01.172997 3207 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:04:01.173057 kubelet[3207]: I1008 20:04:01.173025 3207 kubelet.go:314] "Adding apiserver pod source" Oct 8 20:04:01.173057 kubelet[3207]: I1008 20:04:01.173037 3207 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:04:01.173999 kubelet[3207]: I1008 20:04:01.173976 3207 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 20:04:01.174670 kubelet[3207]: I1008 20:04:01.174436 3207 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:04:01.175963 kubelet[3207]: I1008 20:04:01.175942 3207 server.go:1269] "Started kubelet" Oct 8 20:04:01.178706 kubelet[3207]: I1008 20:04:01.178685 3207 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:04:01.188601 kubelet[3207]: I1008 20:04:01.186448 3207 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:04:01.188601 kubelet[3207]: I1008 20:04:01.187501 3207 server.go:460] "Adding debug handlers to kubelet server" Oct 8 20:04:01.188601 kubelet[3207]: I1008 20:04:01.188500 3207 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:04:01.188781 kubelet[3207]: I1008 20:04:01.188721 3207 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:04:01.189047 kubelet[3207]: I1008 20:04:01.189028 3207 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 8 20:04:01.190674 kubelet[3207]: I1008 20:04:01.190654 3207 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 8 20:04:01.190926 kubelet[3207]: E1008 20:04:01.190892 3207 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.1.0-a-b9ef23c535\" not found" Oct 8 20:04:01.195050 kubelet[3207]: I1008 20:04:01.195032 3207 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 8 20:04:01.195275 kubelet[3207]: I1008 20:04:01.195177 3207 reconciler.go:26] "Reconciler: start to sync state" Oct 8 20:04:01.199198 kubelet[3207]: I1008 20:04:01.199152 3207 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:04:01.201155 kubelet[3207]: I1008 20:04:01.200998 3207 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 20:04:01.201155 kubelet[3207]: I1008 20:04:01.201033 3207 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:04:01.201959 kubelet[3207]: I1008 20:04:01.201441 3207 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:04:01.201959 kubelet[3207]: I1008 20:04:01.201560 3207 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:04:01.202086 kubelet[3207]: I1008 20:04:01.201966 3207 kubelet.go:2321] "Starting kubelet main sync loop" Oct 8 20:04:01.202086 kubelet[3207]: E1008 20:04:01.202017 3207 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:04:01.215939 kubelet[3207]: I1008 20:04:01.214243 3207 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:04:01.257042 kubelet[3207]: I1008 20:04:01.257010 3207 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:04:01.257042 kubelet[3207]: I1008 20:04:01.257027 3207 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:04:01.257042 kubelet[3207]: I1008 20:04:01.257048 3207 state_mem.go:36] 
"Initialized new in-memory state store" Oct 8 20:04:01.257249 kubelet[3207]: I1008 20:04:01.257207 3207 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 20:04:01.257249 kubelet[3207]: I1008 20:04:01.257219 3207 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 20:04:01.257249 kubelet[3207]: I1008 20:04:01.257241 3207 policy_none.go:49] "None policy: Start" Oct 8 20:04:01.257873 kubelet[3207]: I1008 20:04:01.257829 3207 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:04:01.257873 kubelet[3207]: I1008 20:04:01.257856 3207 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:04:01.258081 kubelet[3207]: I1008 20:04:01.258068 3207 state_mem.go:75] "Updated machine memory state" Oct 8 20:04:01.262456 kubelet[3207]: I1008 20:04:01.262101 3207 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:04:01.262456 kubelet[3207]: I1008 20:04:01.262277 3207 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 8 20:04:01.262456 kubelet[3207]: I1008 20:04:01.262290 3207 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 20:04:01.262649 kubelet[3207]: I1008 20:04:01.262489 3207 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:04:01.312205 kubelet[3207]: W1008 20:04:01.312169 3207 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 8 20:04:01.312362 kubelet[3207]: W1008 20:04:01.312251 3207 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 8 20:04:01.312362 kubelet[3207]: E1008 20:04:01.312303 3207 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.1.0-a-b9ef23c535\" already exists" 
pod="kube-system/kube-scheduler-ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:01.313102 kubelet[3207]: W1008 20:04:01.313065 3207 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 8 20:04:01.313221 kubelet[3207]: E1008 20:04:01.313157 3207 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.1.0-a-b9ef23c535\" already exists" pod="kube-system/kube-apiserver-ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:01.366034 kubelet[3207]: I1008 20:04:01.365995 3207 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:01.373668 kubelet[3207]: I1008 20:04:01.373633 3207 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:01.373797 kubelet[3207]: I1008 20:04:01.373710 3207 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:01.395483 kubelet[3207]: I1008 20:04:01.395451 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d37f964d12d466217a4a315e331fceea-ca-certs\") pod \"kube-apiserver-ci-4081.1.0-a-b9ef23c535\" (UID: \"d37f964d12d466217a4a315e331fceea\") " pod="kube-system/kube-apiserver-ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:01.395483 kubelet[3207]: I1008 20:04:01.395482 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d37f964d12d466217a4a315e331fceea-k8s-certs\") pod \"kube-apiserver-ci-4081.1.0-a-b9ef23c535\" (UID: \"d37f964d12d466217a4a315e331fceea\") " pod="kube-system/kube-apiserver-ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:01.395723 kubelet[3207]: I1008 20:04:01.395512 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/34b815dd2aa2e0ea8e5389db5451c148-k8s-certs\") pod \"kube-controller-manager-ci-4081.1.0-a-b9ef23c535\" (UID: \"34b815dd2aa2e0ea8e5389db5451c148\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:01.395723 kubelet[3207]: I1008 20:04:01.395536 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d37f964d12d466217a4a315e331fceea-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.1.0-a-b9ef23c535\" (UID: \"d37f964d12d466217a4a315e331fceea\") " pod="kube-system/kube-apiserver-ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:01.395723 kubelet[3207]: I1008 20:04:01.395563 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34b815dd2aa2e0ea8e5389db5451c148-ca-certs\") pod \"kube-controller-manager-ci-4081.1.0-a-b9ef23c535\" (UID: \"34b815dd2aa2e0ea8e5389db5451c148\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:01.395723 kubelet[3207]: I1008 20:04:01.395585 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/34b815dd2aa2e0ea8e5389db5451c148-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.1.0-a-b9ef23c535\" (UID: \"34b815dd2aa2e0ea8e5389db5451c148\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:01.395723 kubelet[3207]: I1008 20:04:01.395604 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34b815dd2aa2e0ea8e5389db5451c148-kubeconfig\") pod \"kube-controller-manager-ci-4081.1.0-a-b9ef23c535\" (UID: \"34b815dd2aa2e0ea8e5389db5451c148\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:01.395896 
kubelet[3207]: I1008 20:04:01.395624 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34b815dd2aa2e0ea8e5389db5451c148-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.1.0-a-b9ef23c535\" (UID: \"34b815dd2aa2e0ea8e5389db5451c148\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:01.395896 kubelet[3207]: I1008 20:04:01.395645 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e7058bc7388c6972e79cf05530149c7-kubeconfig\") pod \"kube-scheduler-ci-4081.1.0-a-b9ef23c535\" (UID: \"5e7058bc7388c6972e79cf05530149c7\") " pod="kube-system/kube-scheduler-ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:02.185427 kubelet[3207]: I1008 20:04:02.185361 3207 apiserver.go:52] "Watching apiserver" Oct 8 20:04:02.195766 kubelet[3207]: I1008 20:04:02.195733 3207 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 8 20:04:02.249080 kubelet[3207]: W1008 20:04:02.249047 3207 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 8 20:04:02.249248 kubelet[3207]: E1008 20:04:02.249110 3207 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.1.0-a-b9ef23c535\" already exists" pod="kube-system/kube-apiserver-ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:02.276099 kubelet[3207]: I1008 20:04:02.276031 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.1.0-a-b9ef23c535" podStartSLOduration=2.276006373 podStartE2EDuration="2.276006373s" podCreationTimestamp="2024-10-08 20:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-10-08 20:04:02.264897868 +0000 UTC m=+1.147434261" watchObservedRunningTime="2024-10-08 20:04:02.276006373 +0000 UTC m=+1.158542666" Oct 8 20:04:02.289731 kubelet[3207]: I1008 20:04:02.289159 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.1.0-a-b9ef23c535" podStartSLOduration=3.2890713959999998 podStartE2EDuration="3.289071396s" podCreationTimestamp="2024-10-08 20:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:04:02.276568878 +0000 UTC m=+1.159105171" watchObservedRunningTime="2024-10-08 20:04:02.289071396 +0000 UTC m=+1.171607689" Oct 8 20:04:02.301100 kubelet[3207]: I1008 20:04:02.301035 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.1.0-a-b9ef23c535" podStartSLOduration=1.301012809 podStartE2EDuration="1.301012809s" podCreationTimestamp="2024-10-08 20:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:04:02.290632111 +0000 UTC m=+1.173168404" watchObservedRunningTime="2024-10-08 20:04:02.301012809 +0000 UTC m=+1.183549102" Oct 8 20:04:05.030459 kubelet[3207]: I1008 20:04:05.030414 3207 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 20:04:05.031645 kubelet[3207]: I1008 20:04:05.031064 3207 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 20:04:05.031713 containerd[1680]: time="2024-10-08T20:04:05.030796615Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 8 20:04:05.888593 systemd[1]: Created slice kubepods-besteffort-pod6c3fd548_c65e_43f4_a54c_cd0f7039ed4f.slice - libcontainer container kubepods-besteffort-pod6c3fd548_c65e_43f4_a54c_cd0f7039ed4f.slice. Oct 8 20:04:06.022675 kubelet[3207]: I1008 20:04:06.022616 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c3fd548-c65e-43f4-a54c-cd0f7039ed4f-lib-modules\") pod \"kube-proxy-dhfw7\" (UID: \"6c3fd548-c65e-43f4-a54c-cd0f7039ed4f\") " pod="kube-system/kube-proxy-dhfw7" Oct 8 20:04:06.022675 kubelet[3207]: I1008 20:04:06.022681 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6c3fd548-c65e-43f4-a54c-cd0f7039ed4f-kube-proxy\") pod \"kube-proxy-dhfw7\" (UID: \"6c3fd548-c65e-43f4-a54c-cd0f7039ed4f\") " pod="kube-system/kube-proxy-dhfw7" Oct 8 20:04:06.022899 kubelet[3207]: I1008 20:04:06.022705 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c3fd548-c65e-43f4-a54c-cd0f7039ed4f-xtables-lock\") pod \"kube-proxy-dhfw7\" (UID: \"6c3fd548-c65e-43f4-a54c-cd0f7039ed4f\") " pod="kube-system/kube-proxy-dhfw7" Oct 8 20:04:06.022899 kubelet[3207]: I1008 20:04:06.022731 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzkjf\" (UniqueName: \"kubernetes.io/projected/6c3fd548-c65e-43f4-a54c-cd0f7039ed4f-kube-api-access-zzkjf\") pod \"kube-proxy-dhfw7\" (UID: \"6c3fd548-c65e-43f4-a54c-cd0f7039ed4f\") " pod="kube-system/kube-proxy-dhfw7" Oct 8 20:04:06.197972 containerd[1680]: time="2024-10-08T20:04:06.197097973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dhfw7,Uid:6c3fd548-c65e-43f4-a54c-cd0f7039ed4f,Namespace:kube-system,Attempt:0,}" Oct 8 20:04:06.259096 containerd[1680]: 
time="2024-10-08T20:04:06.258660526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:06.259096 containerd[1680]: time="2024-10-08T20:04:06.258730726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:06.259096 containerd[1680]: time="2024-10-08T20:04:06.258767627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:06.259096 containerd[1680]: time="2024-10-08T20:04:06.258887328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:06.280568 kubelet[3207]: W1008 20:04:06.279589 3207 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081.1.0-a-b9ef23c535" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.1.0-a-b9ef23c535' and this object Oct 8 20:04:06.280568 kubelet[3207]: E1008 20:04:06.280488 3207 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4081.1.0-a-b9ef23c535\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081.1.0-a-b9ef23c535' and this object" logger="UnhandledError" Oct 8 20:04:06.295023 systemd[1]: Created slice kubepods-besteffort-pod1089580d_b9c0_4a1f_bee8_5912a66c6f7f.slice - libcontainer container kubepods-besteffort-pod1089580d_b9c0_4a1f_bee8_5912a66c6f7f.slice. 
Oct 8 20:04:06.319344 systemd[1]: Started cri-containerd-3589c1378c74c09e4a5b840fad419364773890510529f2956f299886c1dafcb4.scope - libcontainer container 3589c1378c74c09e4a5b840fad419364773890510529f2956f299886c1dafcb4. Oct 8 20:04:06.328381 kubelet[3207]: I1008 20:04:06.328288 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1089580d-b9c0-4a1f-bee8-5912a66c6f7f-var-lib-calico\") pod \"tigera-operator-55748b469f-7w6kn\" (UID: \"1089580d-b9c0-4a1f-bee8-5912a66c6f7f\") " pod="tigera-operator/tigera-operator-55748b469f-7w6kn" Oct 8 20:04:06.328381 kubelet[3207]: I1008 20:04:06.328332 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7xzp\" (UniqueName: \"kubernetes.io/projected/1089580d-b9c0-4a1f-bee8-5912a66c6f7f-kube-api-access-f7xzp\") pod \"tigera-operator-55748b469f-7w6kn\" (UID: \"1089580d-b9c0-4a1f-bee8-5912a66c6f7f\") " pod="tigera-operator/tigera-operator-55748b469f-7w6kn" Oct 8 20:04:06.360681 containerd[1680]: time="2024-10-08T20:04:06.360616906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dhfw7,Uid:6c3fd548-c65e-43f4-a54c-cd0f7039ed4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3589c1378c74c09e4a5b840fad419364773890510529f2956f299886c1dafcb4\"" Oct 8 20:04:06.365273 containerd[1680]: time="2024-10-08T20:04:06.365201155Z" level=info msg="CreateContainer within sandbox \"3589c1378c74c09e4a5b840fad419364773890510529f2956f299886c1dafcb4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 20:04:06.432941 containerd[1680]: time="2024-10-08T20:04:06.431195854Z" level=info msg="CreateContainer within sandbox \"3589c1378c74c09e4a5b840fad419364773890510529f2956f299886c1dafcb4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"82f9adbc831c6c535141aa387d0dcc2f05eee51d29dd1762be19942734e159b1\"" Oct 8 
20:04:06.432941 containerd[1680]: time="2024-10-08T20:04:06.432105764Z" level=info msg="StartContainer for \"82f9adbc831c6c535141aa387d0dcc2f05eee51d29dd1762be19942734e159b1\"" Oct 8 20:04:06.467095 systemd[1]: Started cri-containerd-82f9adbc831c6c535141aa387d0dcc2f05eee51d29dd1762be19942734e159b1.scope - libcontainer container 82f9adbc831c6c535141aa387d0dcc2f05eee51d29dd1762be19942734e159b1. Oct 8 20:04:06.504762 containerd[1680]: time="2024-10-08T20:04:06.504554631Z" level=info msg="StartContainer for \"82f9adbc831c6c535141aa387d0dcc2f05eee51d29dd1762be19942734e159b1\" returns successfully" Oct 8 20:04:06.602613 containerd[1680]: time="2024-10-08T20:04:06.602571470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-7w6kn,Uid:1089580d-b9c0-4a1f-bee8-5912a66c6f7f,Namespace:tigera-operator,Attempt:0,}" Oct 8 20:04:06.666266 sudo[2338]: pam_unix(sudo:session): session closed for user root Oct 8 20:04:06.677703 containerd[1680]: time="2024-10-08T20:04:06.677532564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:06.677703 containerd[1680]: time="2024-10-08T20:04:06.677662865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:06.677703 containerd[1680]: time="2024-10-08T20:04:06.677694366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:06.678063 containerd[1680]: time="2024-10-08T20:04:06.677828067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:06.705110 systemd[1]: Started cri-containerd-7a0b1d470b976c92c51b4d2e8bad5b1e4e65355ded4e710978335fd34fca5128.scope - libcontainer container 7a0b1d470b976c92c51b4d2e8bad5b1e4e65355ded4e710978335fd34fca5128. 
Oct 8 20:04:06.746357 containerd[1680]: time="2024-10-08T20:04:06.746099691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-7w6kn,Uid:1089580d-b9c0-4a1f-bee8-5912a66c6f7f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7a0b1d470b976c92c51b4d2e8bad5b1e4e65355ded4e710978335fd34fca5128\"" Oct 8 20:04:06.748986 containerd[1680]: time="2024-10-08T20:04:06.748672618Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 8 20:04:06.769760 sshd[2335]: pam_unix(sshd:session): session closed for user core Oct 8 20:04:06.773142 systemd[1]: sshd@6-10.200.8.13:22-10.200.16.10:45742.service: Deactivated successfully. Oct 8 20:04:06.775177 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 20:04:06.775384 systemd[1]: session-9.scope: Consumed 4.512s CPU time, 155.5M memory peak, 0B memory swap peak. Oct 8 20:04:06.776892 systemd-logind[1658]: Session 9 logged out. Waiting for processes to exit. Oct 8 20:04:06.778167 systemd-logind[1658]: Removed session 9. Oct 8 20:04:08.546934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount493338545.mount: Deactivated successfully. 
Oct 8 20:04:08.878575 kubelet[3207]: I1008 20:04:08.877816 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dhfw7" podStartSLOduration=3.877797278 podStartE2EDuration="3.877797278s" podCreationTimestamp="2024-10-08 20:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:04:07.285168503 +0000 UTC m=+6.167704896" watchObservedRunningTime="2024-10-08 20:04:08.877797278 +0000 UTC m=+7.760333671" Oct 8 20:04:09.407493 containerd[1680]: time="2024-10-08T20:04:09.407450690Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:09.410309 containerd[1680]: time="2024-10-08T20:04:09.410257820Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136541" Oct 8 20:04:09.415214 containerd[1680]: time="2024-10-08T20:04:09.414061060Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:09.418720 containerd[1680]: time="2024-10-08T20:04:09.418653109Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:09.419883 containerd[1680]: time="2024-10-08T20:04:09.419329116Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.670619498s" Oct 8 20:04:09.419883 containerd[1680]: time="2024-10-08T20:04:09.419370417Z" level=info 
msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 8 20:04:09.421855 containerd[1680]: time="2024-10-08T20:04:09.421714541Z" level=info msg="CreateContainer within sandbox \"7a0b1d470b976c92c51b4d2e8bad5b1e4e65355ded4e710978335fd34fca5128\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 8 20:04:09.464373 containerd[1680]: time="2024-10-08T20:04:09.464332193Z" level=info msg="CreateContainer within sandbox \"7a0b1d470b976c92c51b4d2e8bad5b1e4e65355ded4e710978335fd34fca5128\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bcdad53f2db77ea3d9af01e08a0a2f36641ec923d66233d0d0c1b3df2a509f74\"" Oct 8 20:04:09.465278 containerd[1680]: time="2024-10-08T20:04:09.464911799Z" level=info msg="StartContainer for \"bcdad53f2db77ea3d9af01e08a0a2f36641ec923d66233d0d0c1b3df2a509f74\"" Oct 8 20:04:09.493076 systemd[1]: Started cri-containerd-bcdad53f2db77ea3d9af01e08a0a2f36641ec923d66233d0d0c1b3df2a509f74.scope - libcontainer container bcdad53f2db77ea3d9af01e08a0a2f36641ec923d66233d0d0c1b3df2a509f74. 
Oct 8 20:04:09.525142 containerd[1680]: time="2024-10-08T20:04:09.525099737Z" level=info msg="StartContainer for \"bcdad53f2db77ea3d9af01e08a0a2f36641ec923d66233d0d0c1b3df2a509f74\" returns successfully" Oct 8 20:04:12.585477 kubelet[3207]: I1008 20:04:12.585370 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-55748b469f-7w6kn" podStartSLOduration=3.913177749 podStartE2EDuration="6.585339263s" podCreationTimestamp="2024-10-08 20:04:06 +0000 UTC" firstStartedPulling="2024-10-08 20:04:06.747996011 +0000 UTC m=+5.630532304" lastFinishedPulling="2024-10-08 20:04:09.420157425 +0000 UTC m=+8.302693818" observedRunningTime="2024-10-08 20:04:10.274364676 +0000 UTC m=+9.156901069" watchObservedRunningTime="2024-10-08 20:04:12.585339263 +0000 UTC m=+11.467875656" Oct 8 20:04:12.599271 systemd[1]: Created slice kubepods-besteffort-pod5de0ac78_cd6f_4fec_b192_f3d95df51dee.slice - libcontainer container kubepods-besteffort-pod5de0ac78_cd6f_4fec_b192_f3d95df51dee.slice. Oct 8 20:04:12.733195 systemd[1]: Created slice kubepods-besteffort-pod68af1901_6c31_43ba_bfa0_bea661dcd695.slice - libcontainer container kubepods-besteffort-pod68af1901_6c31_43ba_bfa0_bea661dcd695.slice. 
Oct 8 20:04:12.776440 kubelet[3207]: I1008 20:04:12.776392 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-cni-log-dir\") pod \"calico-node-nr7ws\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " pod="calico-system/calico-node-nr7ws" Oct 8 20:04:12.776440 kubelet[3207]: I1008 20:04:12.776451 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5de0ac78-cd6f-4fec-b192-f3d95df51dee-typha-certs\") pod \"calico-typha-6dc748bd9b-nrl5f\" (UID: \"5de0ac78-cd6f-4fec-b192-f3d95df51dee\") " pod="calico-system/calico-typha-6dc748bd9b-nrl5f" Oct 8 20:04:12.776657 kubelet[3207]: I1008 20:04:12.776478 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wthvd\" (UniqueName: \"kubernetes.io/projected/5de0ac78-cd6f-4fec-b192-f3d95df51dee-kube-api-access-wthvd\") pod \"calico-typha-6dc748bd9b-nrl5f\" (UID: \"5de0ac78-cd6f-4fec-b192-f3d95df51dee\") " pod="calico-system/calico-typha-6dc748bd9b-nrl5f" Oct 8 20:04:12.776657 kubelet[3207]: I1008 20:04:12.776501 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-lib-modules\") pod \"calico-node-nr7ws\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " pod="calico-system/calico-node-nr7ws" Oct 8 20:04:12.776657 kubelet[3207]: I1008 20:04:12.776520 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/68af1901-6c31-43ba-bfa0-bea661dcd695-node-certs\") pod \"calico-node-nr7ws\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " pod="calico-system/calico-node-nr7ws" Oct 8 20:04:12.776657 kubelet[3207]: 
I1008 20:04:12.776538 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-policysync\") pod \"calico-node-nr7ws\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " pod="calico-system/calico-node-nr7ws" Oct 8 20:04:12.776657 kubelet[3207]: I1008 20:04:12.776558 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-var-lib-calico\") pod \"calico-node-nr7ws\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " pod="calico-system/calico-node-nr7ws" Oct 8 20:04:12.776861 kubelet[3207]: I1008 20:04:12.776576 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-cni-bin-dir\") pod \"calico-node-nr7ws\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " pod="calico-system/calico-node-nr7ws" Oct 8 20:04:12.776861 kubelet[3207]: I1008 20:04:12.776600 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-cni-net-dir\") pod \"calico-node-nr7ws\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " pod="calico-system/calico-node-nr7ws" Oct 8 20:04:12.776861 kubelet[3207]: I1008 20:04:12.776622 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68af1901-6c31-43ba-bfa0-bea661dcd695-tigera-ca-bundle\") pod \"calico-node-nr7ws\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " pod="calico-system/calico-node-nr7ws" Oct 8 20:04:12.776861 kubelet[3207]: I1008 20:04:12.776647 3207 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75rkh\" (UniqueName: \"kubernetes.io/projected/68af1901-6c31-43ba-bfa0-bea661dcd695-kube-api-access-75rkh\") pod \"calico-node-nr7ws\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " pod="calico-system/calico-node-nr7ws" Oct 8 20:04:12.776861 kubelet[3207]: I1008 20:04:12.776668 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-xtables-lock\") pod \"calico-node-nr7ws\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " pod="calico-system/calico-node-nr7ws" Oct 8 20:04:12.777075 kubelet[3207]: I1008 20:04:12.776692 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-var-run-calico\") pod \"calico-node-nr7ws\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " pod="calico-system/calico-node-nr7ws" Oct 8 20:04:12.777075 kubelet[3207]: I1008 20:04:12.776716 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5de0ac78-cd6f-4fec-b192-f3d95df51dee-tigera-ca-bundle\") pod \"calico-typha-6dc748bd9b-nrl5f\" (UID: \"5de0ac78-cd6f-4fec-b192-f3d95df51dee\") " pod="calico-system/calico-typha-6dc748bd9b-nrl5f" Oct 8 20:04:12.777075 kubelet[3207]: I1008 20:04:12.776738 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-flexvol-driver-host\") pod \"calico-node-nr7ws\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " pod="calico-system/calico-node-nr7ws" Oct 8 20:04:12.860006 kubelet[3207]: E1008 20:04:12.859866 3207 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2j8vr" podUID="0a5c8fac-8ac4-4f20-883d-6418322f8148" Oct 8 20:04:12.878101 kubelet[3207]: I1008 20:04:12.877126 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0a5c8fac-8ac4-4f20-883d-6418322f8148-socket-dir\") pod \"csi-node-driver-2j8vr\" (UID: \"0a5c8fac-8ac4-4f20-883d-6418322f8148\") " pod="calico-system/csi-node-driver-2j8vr" Oct 8 20:04:12.878101 kubelet[3207]: I1008 20:04:12.877171 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0a5c8fac-8ac4-4f20-883d-6418322f8148-registration-dir\") pod \"csi-node-driver-2j8vr\" (UID: \"0a5c8fac-8ac4-4f20-883d-6418322f8148\") " pod="calico-system/csi-node-driver-2j8vr" Oct 8 20:04:12.878101 kubelet[3207]: I1008 20:04:12.877255 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a5c8fac-8ac4-4f20-883d-6418322f8148-kubelet-dir\") pod \"csi-node-driver-2j8vr\" (UID: \"0a5c8fac-8ac4-4f20-883d-6418322f8148\") " pod="calico-system/csi-node-driver-2j8vr" Oct 8 20:04:12.878101 kubelet[3207]: I1008 20:04:12.877366 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn4f5\" (UniqueName: \"kubernetes.io/projected/0a5c8fac-8ac4-4f20-883d-6418322f8148-kube-api-access-vn4f5\") pod \"csi-node-driver-2j8vr\" (UID: \"0a5c8fac-8ac4-4f20-883d-6418322f8148\") " pod="calico-system/csi-node-driver-2j8vr" Oct 8 20:04:12.878101 kubelet[3207]: I1008 20:04:12.877467 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0a5c8fac-8ac4-4f20-883d-6418322f8148-varrun\") pod \"csi-node-driver-2j8vr\" (UID: \"0a5c8fac-8ac4-4f20-883d-6418322f8148\") " pod="calico-system/csi-node-driver-2j8vr" Oct 8 20:04:12.881678 kubelet[3207]: E1008 20:04:12.881644 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.881678 kubelet[3207]: W1008 20:04:12.881675 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.881839 kubelet[3207]: E1008 20:04:12.881698 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:12.882348 kubelet[3207]: E1008 20:04:12.882324 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.882348 kubelet[3207]: W1008 20:04:12.882346 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.883266 kubelet[3207]: E1008 20:04:12.882904 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:12.883266 kubelet[3207]: E1008 20:04:12.883203 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.883627 kubelet[3207]: W1008 20:04:12.883221 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.883627 kubelet[3207]: E1008 20:04:12.883589 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:12.886931 kubelet[3207]: E1008 20:04:12.884454 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.886931 kubelet[3207]: W1008 20:04:12.884572 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.887052 kubelet[3207]: E1008 20:04:12.887006 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.887052 kubelet[3207]: W1008 20:04:12.887028 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.887939 kubelet[3207]: E1008 20:04:12.887231 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.887939 kubelet[3207]: W1008 20:04:12.887245 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Oct 8 20:04:12.887939 kubelet[3207]: E1008 20:04:12.887287 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:12.888295 kubelet[3207]: E1008 20:04:12.888113 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:12.888295 kubelet[3207]: E1008 20:04:12.888139 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:12.888549 kubelet[3207]: E1008 20:04:12.888453 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.888549 kubelet[3207]: W1008 20:04:12.888468 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.889612 kubelet[3207]: E1008 20:04:12.889054 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.889612 kubelet[3207]: W1008 20:04:12.889079 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.889612 kubelet[3207]: E1008 20:04:12.889055 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:12.889612 kubelet[3207]: E1008 20:04:12.889129 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:12.891543 kubelet[3207]: E1008 20:04:12.890524 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.891543 kubelet[3207]: W1008 20:04:12.890539 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.891543 kubelet[3207]: E1008 20:04:12.890566 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:12.892192 kubelet[3207]: E1008 20:04:12.891817 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.892192 kubelet[3207]: W1008 20:04:12.891834 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.895931 kubelet[3207]: E1008 20:04:12.892967 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:12.895931 kubelet[3207]: E1008 20:04:12.893730 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.895931 kubelet[3207]: W1008 20:04:12.893742 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.895931 kubelet[3207]: E1008 20:04:12.893769 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:12.896363 kubelet[3207]: E1008 20:04:12.896245 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.896363 kubelet[3207]: W1008 20:04:12.896260 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.896363 kubelet[3207]: E1008 20:04:12.896296 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:12.896597 kubelet[3207]: E1008 20:04:12.896584 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.896787 kubelet[3207]: W1008 20:04:12.896681 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.896787 kubelet[3207]: E1008 20:04:12.896714 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:12.897970 kubelet[3207]: E1008 20:04:12.897953 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.898184 kubelet[3207]: W1008 20:04:12.898069 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.898184 kubelet[3207]: E1008 20:04:12.898174 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:12.898371 kubelet[3207]: E1008 20:04:12.898315 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.898371 kubelet[3207]: W1008 20:04:12.898327 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.898597 kubelet[3207]: E1008 20:04:12.898424 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:12.898597 kubelet[3207]: E1008 20:04:12.898531 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.898597 kubelet[3207]: W1008 20:04:12.898541 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.898597 kubelet[3207]: E1008 20:04:12.898562 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:12.898869 kubelet[3207]: E1008 20:04:12.898712 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.898869 kubelet[3207]: W1008 20:04:12.898721 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.898869 kubelet[3207]: E1008 20:04:12.898748 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:12.899087 kubelet[3207]: E1008 20:04:12.898910 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.899087 kubelet[3207]: W1008 20:04:12.898954 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.899087 kubelet[3207]: E1008 20:04:12.898981 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:12.987941 kubelet[3207]: E1008 20:04:12.987473 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.987941 kubelet[3207]: W1008 20:04:12.987481 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.987941 kubelet[3207]: E1008 20:04:12.987506 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:12.987941 kubelet[3207]: E1008 20:04:12.987818 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.987941 kubelet[3207]: W1008 20:04:12.987827 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.987941 kubelet[3207]: E1008 20:04:12.987846 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:12.988367 kubelet[3207]: E1008 20:04:12.988044 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.988367 kubelet[3207]: W1008 20:04:12.988055 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.988367 kubelet[3207]: E1008 20:04:12.988068 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:12.988497 kubelet[3207]: E1008 20:04:12.988396 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.988497 kubelet[3207]: W1008 20:04:12.988407 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.988497 kubelet[3207]: E1008 20:04:12.988419 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:12.988960 kubelet[3207]: E1008 20:04:12.988634 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.988960 kubelet[3207]: W1008 20:04:12.988647 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.988960 kubelet[3207]: E1008 20:04:12.988659 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:12.998522 kubelet[3207]: E1008 20:04:12.998500 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:12.998522 kubelet[3207]: W1008 20:04:12.998519 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:12.998640 kubelet[3207]: E1008 20:04:12.998534 3207 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:13.037230 containerd[1680]: time="2024-10-08T20:04:13.037183879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nr7ws,Uid:68af1901-6c31-43ba-bfa0-bea661dcd695,Namespace:calico-system,Attempt:0,}" Oct 8 20:04:13.102245 containerd[1680]: time="2024-10-08T20:04:13.102150191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:13.102245 containerd[1680]: time="2024-10-08T20:04:13.102204591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:13.102245 containerd[1680]: time="2024-10-08T20:04:13.102218991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:13.102658 containerd[1680]: time="2024-10-08T20:04:13.102564795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:13.129217 systemd[1]: Started cri-containerd-0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204.scope - libcontainer container 0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204. Oct 8 20:04:13.164516 containerd[1680]: time="2024-10-08T20:04:13.164483978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nr7ws,Uid:68af1901-6c31-43ba-bfa0-bea661dcd695,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204\"" Oct 8 20:04:13.166193 containerd[1680]: time="2024-10-08T20:04:13.166168594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 20:04:13.205748 containerd[1680]: time="2024-10-08T20:04:13.205688766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dc748bd9b-nrl5f,Uid:5de0ac78-cd6f-4fec-b192-f3d95df51dee,Namespace:calico-system,Attempt:0,}" Oct 8 20:04:13.296103 containerd[1680]: time="2024-10-08T20:04:13.294502503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:13.296103 containerd[1680]: time="2024-10-08T20:04:13.294762106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:13.296103 containerd[1680]: time="2024-10-08T20:04:13.294780506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:13.296103 containerd[1680]: time="2024-10-08T20:04:13.295052508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:13.338638 systemd[1]: Started cri-containerd-89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07.scope - libcontainer container 89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07. Oct 8 20:04:13.417362 containerd[1680]: time="2024-10-08T20:04:13.416778755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dc748bd9b-nrl5f,Uid:5de0ac78-cd6f-4fec-b192-f3d95df51dee,Namespace:calico-system,Attempt:0,} returns sandbox id \"89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07\"" Oct 8 20:04:14.202678 kubelet[3207]: E1008 20:04:14.202616 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2j8vr" podUID="0a5c8fac-8ac4-4f20-883d-6418322f8148" Oct 8 20:04:14.600824 containerd[1680]: time="2024-10-08T20:04:14.600703611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:14.603469 containerd[1680]: time="2024-10-08T20:04:14.603398936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 8 20:04:14.606948 containerd[1680]: time="2024-10-08T20:04:14.606874569Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:14.610328 containerd[1680]: time="2024-10-08T20:04:14.610274001Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:14.611269 containerd[1680]: time="2024-10-08T20:04:14.610874907Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.444411309s" Oct 8 20:04:14.611269 containerd[1680]: time="2024-10-08T20:04:14.610926207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 8 20:04:14.612048 containerd[1680]: time="2024-10-08T20:04:14.612026317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 20:04:14.613289 containerd[1680]: time="2024-10-08T20:04:14.613091527Z" level=info msg="CreateContainer within sandbox \"0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 20:04:14.667780 containerd[1680]: time="2024-10-08T20:04:14.667732642Z" level=info msg="CreateContainer within sandbox \"0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f\"" Oct 8 20:04:14.668506 containerd[1680]: time="2024-10-08T20:04:14.668391549Z" level=info msg="StartContainer for \"f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f\"" Oct 8 20:04:14.717059 systemd[1]: Started cri-containerd-f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f.scope - libcontainer 
container f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f. Oct 8 20:04:14.749218 containerd[1680]: time="2024-10-08T20:04:14.749110109Z" level=info msg="StartContainer for \"f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f\" returns successfully" Oct 8 20:04:14.756339 systemd[1]: cri-containerd-f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f.scope: Deactivated successfully. Oct 8 20:04:14.784702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f-rootfs.mount: Deactivated successfully. Oct 8 20:04:15.280147 containerd[1680]: time="2024-10-08T20:04:15.280108512Z" level=info msg="StopContainer for \"f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f\" with timeout 5 (s)" Oct 8 20:04:15.473456 containerd[1680]: time="2024-10-08T20:04:15.472957930Z" level=info msg="Stop container \"f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f\" with signal terminated" Oct 8 20:04:15.474365 containerd[1680]: time="2024-10-08T20:04:15.474295542Z" level=info msg="shim disconnected" id=f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f namespace=k8s.io Oct 8 20:04:15.474365 containerd[1680]: time="2024-10-08T20:04:15.474363843Z" level=warning msg="cleaning up after shim disconnected" id=f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f namespace=k8s.io Oct 8 20:04:15.474541 containerd[1680]: time="2024-10-08T20:04:15.474374543Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:04:15.497413 containerd[1680]: time="2024-10-08T20:04:15.497290959Z" level=info msg="StopContainer for \"f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f\" returns successfully" Oct 8 20:04:15.498413 containerd[1680]: time="2024-10-08T20:04:15.498337369Z" level=info msg="StopPodSandbox for \"0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204\"" Oct 8 20:04:15.498413 containerd[1680]: 
time="2024-10-08T20:04:15.498397569Z" level=info msg="Container to stop \"f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:04:15.500795 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204-shm.mount: Deactivated successfully. Oct 8 20:04:15.507858 systemd[1]: cri-containerd-0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204.scope: Deactivated successfully. Oct 8 20:04:15.529822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204-rootfs.mount: Deactivated successfully. Oct 8 20:04:15.539733 containerd[1680]: time="2024-10-08T20:04:15.539460456Z" level=info msg="shim disconnected" id=0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204 namespace=k8s.io Oct 8 20:04:15.539733 containerd[1680]: time="2024-10-08T20:04:15.539516557Z" level=warning msg="cleaning up after shim disconnected" id=0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204 namespace=k8s.io Oct 8 20:04:15.539733 containerd[1680]: time="2024-10-08T20:04:15.539528357Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:04:15.552024 containerd[1680]: time="2024-10-08T20:04:15.551986374Z" level=info msg="TearDown network for sandbox \"0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204\" successfully" Oct 8 20:04:15.552024 containerd[1680]: time="2024-10-08T20:04:15.552019774Z" level=info msg="StopPodSandbox for \"0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204\" returns successfully" Oct 8 20:04:15.620390 kubelet[3207]: I1008 20:04:15.619891 3207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-cni-log-dir\") pod \"68af1901-6c31-43ba-bfa0-bea661dcd695\" (UID: 
\"68af1901-6c31-43ba-bfa0-bea661dcd695\") " Oct 8 20:04:15.620390 kubelet[3207]: I1008 20:04:15.619961 3207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-cni-bin-dir\") pod \"68af1901-6c31-43ba-bfa0-bea661dcd695\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " Oct 8 20:04:15.620390 kubelet[3207]: I1008 20:04:15.619975 3207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "68af1901-6c31-43ba-bfa0-bea661dcd695" (UID: "68af1901-6c31-43ba-bfa0-bea661dcd695"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:04:15.620390 kubelet[3207]: I1008 20:04:15.619986 3207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-cni-net-dir\") pod \"68af1901-6c31-43ba-bfa0-bea661dcd695\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " Oct 8 20:04:15.620390 kubelet[3207]: I1008 20:04:15.620013 3207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-policysync\") pod \"68af1901-6c31-43ba-bfa0-bea661dcd695\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " Oct 8 20:04:15.620390 kubelet[3207]: I1008 20:04:15.620020 3207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "68af1901-6c31-43ba-bfa0-bea661dcd695" (UID: "68af1901-6c31-43ba-bfa0-bea661dcd695"). InnerVolumeSpecName "cni-bin-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:04:15.621141 kubelet[3207]: I1008 20:04:15.620033 3207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-var-lib-calico\") pod \"68af1901-6c31-43ba-bfa0-bea661dcd695\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " Oct 8 20:04:15.621141 kubelet[3207]: I1008 20:04:15.620041 3207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "68af1901-6c31-43ba-bfa0-bea661dcd695" (UID: "68af1901-6c31-43ba-bfa0-bea661dcd695"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:04:15.621141 kubelet[3207]: I1008 20:04:15.620062 3207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/68af1901-6c31-43ba-bfa0-bea661dcd695-node-certs\") pod \"68af1901-6c31-43ba-bfa0-bea661dcd695\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " Oct 8 20:04:15.621141 kubelet[3207]: I1008 20:04:15.620066 3207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-policysync" (OuterVolumeSpecName: "policysync") pod "68af1901-6c31-43ba-bfa0-bea661dcd695" (UID: "68af1901-6c31-43ba-bfa0-bea661dcd695"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:04:15.621141 kubelet[3207]: I1008 20:04:15.620088 3207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "68af1901-6c31-43ba-bfa0-bea661dcd695" (UID: "68af1901-6c31-43ba-bfa0-bea661dcd695"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:04:15.621465 kubelet[3207]: I1008 20:04:15.620088 3207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75rkh\" (UniqueName: \"kubernetes.io/projected/68af1901-6c31-43ba-bfa0-bea661dcd695-kube-api-access-75rkh\") pod \"68af1901-6c31-43ba-bfa0-bea661dcd695\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " Oct 8 20:04:15.621465 kubelet[3207]: I1008 20:04:15.620124 3207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68af1901-6c31-43ba-bfa0-bea661dcd695-tigera-ca-bundle\") pod \"68af1901-6c31-43ba-bfa0-bea661dcd695\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " Oct 8 20:04:15.621465 kubelet[3207]: I1008 20:04:15.620145 3207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-lib-modules\") pod \"68af1901-6c31-43ba-bfa0-bea661dcd695\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " Oct 8 20:04:15.621465 kubelet[3207]: I1008 20:04:15.620165 3207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-var-run-calico\") pod \"68af1901-6c31-43ba-bfa0-bea661dcd695\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " Oct 8 20:04:15.621465 kubelet[3207]: I1008 20:04:15.620187 3207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-xtables-lock\") pod \"68af1901-6c31-43ba-bfa0-bea661dcd695\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " Oct 8 20:04:15.621465 kubelet[3207]: I1008 20:04:15.620210 3207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-flexvol-driver-host\") pod \"68af1901-6c31-43ba-bfa0-bea661dcd695\" (UID: \"68af1901-6c31-43ba-bfa0-bea661dcd695\") " Oct 8 20:04:15.621718 kubelet[3207]: I1008 20:04:15.620261 3207 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-var-lib-calico\") on node \"ci-4081.1.0-a-b9ef23c535\" DevicePath \"\"" Oct 8 20:04:15.621718 kubelet[3207]: I1008 20:04:15.620275 3207 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-policysync\") on node \"ci-4081.1.0-a-b9ef23c535\" DevicePath \"\"" Oct 8 20:04:15.621718 kubelet[3207]: I1008 20:04:15.620301 3207 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-cni-bin-dir\") on node \"ci-4081.1.0-a-b9ef23c535\" DevicePath \"\"" Oct 8 20:04:15.621718 kubelet[3207]: I1008 20:04:15.620317 3207 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-cni-net-dir\") on node \"ci-4081.1.0-a-b9ef23c535\" DevicePath \"\"" Oct 8 20:04:15.621718 kubelet[3207]: I1008 20:04:15.620330 3207 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-cni-log-dir\") on node \"ci-4081.1.0-a-b9ef23c535\" DevicePath \"\"" Oct 8 20:04:15.621718 kubelet[3207]: I1008 20:04:15.620357 3207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "68af1901-6c31-43ba-bfa0-bea661dcd695" (UID: "68af1901-6c31-43ba-bfa0-bea661dcd695"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:04:15.623532 kubelet[3207]: I1008 20:04:15.620717 3207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68af1901-6c31-43ba-bfa0-bea661dcd695-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "68af1901-6c31-43ba-bfa0-bea661dcd695" (UID: "68af1901-6c31-43ba-bfa0-bea661dcd695"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 20:04:15.623532 kubelet[3207]: I1008 20:04:15.620762 3207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "68af1901-6c31-43ba-bfa0-bea661dcd695" (UID: "68af1901-6c31-43ba-bfa0-bea661dcd695"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:04:15.623532 kubelet[3207]: I1008 20:04:15.620783 3207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "68af1901-6c31-43ba-bfa0-bea661dcd695" (UID: "68af1901-6c31-43ba-bfa0-bea661dcd695"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:04:15.623532 kubelet[3207]: I1008 20:04:15.620805 3207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "68af1901-6c31-43ba-bfa0-bea661dcd695" (UID: "68af1901-6c31-43ba-bfa0-bea661dcd695"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:04:15.626376 kubelet[3207]: I1008 20:04:15.626321 3207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68af1901-6c31-43ba-bfa0-bea661dcd695-kube-api-access-75rkh" (OuterVolumeSpecName: "kube-api-access-75rkh") pod "68af1901-6c31-43ba-bfa0-bea661dcd695" (UID: "68af1901-6c31-43ba-bfa0-bea661dcd695"). InnerVolumeSpecName "kube-api-access-75rkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 20:04:15.628119 kubelet[3207]: I1008 20:04:15.628084 3207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68af1901-6c31-43ba-bfa0-bea661dcd695-node-certs" (OuterVolumeSpecName: "node-certs") pod "68af1901-6c31-43ba-bfa0-bea661dcd695" (UID: "68af1901-6c31-43ba-bfa0-bea661dcd695"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 8 20:04:15.628227 systemd[1]: var-lib-kubelet-pods-68af1901\x2d6c31\x2d43ba\x2dbfa0\x2dbea661dcd695-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d75rkh.mount: Deactivated successfully. Oct 8 20:04:15.628367 systemd[1]: var-lib-kubelet-pods-68af1901\x2d6c31\x2d43ba\x2dbfa0\x2dbea661dcd695-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Oct 8 20:04:15.720762 kubelet[3207]: I1008 20:04:15.720723 3207 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/68af1901-6c31-43ba-bfa0-bea661dcd695-node-certs\") on node \"ci-4081.1.0-a-b9ef23c535\" DevicePath \"\""
Oct 8 20:04:15.720762 kubelet[3207]: I1008 20:04:15.720766 3207 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-75rkh\" (UniqueName: \"kubernetes.io/projected/68af1901-6c31-43ba-bfa0-bea661dcd695-kube-api-access-75rkh\") on node \"ci-4081.1.0-a-b9ef23c535\" DevicePath \"\""
Oct 8 20:04:15.721272 kubelet[3207]: I1008 20:04:15.720783 3207 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68af1901-6c31-43ba-bfa0-bea661dcd695-tigera-ca-bundle\") on node \"ci-4081.1.0-a-b9ef23c535\" DevicePath \"\""
Oct 8 20:04:15.721272 kubelet[3207]: I1008 20:04:15.720795 3207 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-lib-modules\") on node \"ci-4081.1.0-a-b9ef23c535\" DevicePath \"\""
Oct 8 20:04:15.721272 kubelet[3207]: I1008 20:04:15.720807 3207 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-var-run-calico\") on node \"ci-4081.1.0-a-b9ef23c535\" DevicePath \"\""
Oct 8 20:04:15.721272 kubelet[3207]: I1008 20:04:15.720819 3207 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-xtables-lock\") on node \"ci-4081.1.0-a-b9ef23c535\" DevicePath \"\""
Oct 8 20:04:15.721272 kubelet[3207]: I1008 20:04:15.720830 3207 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/68af1901-6c31-43ba-bfa0-bea661dcd695-flexvol-driver-host\") on node \"ci-4081.1.0-a-b9ef23c535\" DevicePath \"\""
Oct 8 20:04:16.202652 kubelet[3207]: E1008 20:04:16.202462 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2j8vr" podUID="0a5c8fac-8ac4-4f20-883d-6418322f8148"
Oct 8 20:04:16.283683 kubelet[3207]: I1008 20:04:16.283640 3207 scope.go:117] "RemoveContainer" containerID="f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f"
Oct 8 20:04:16.288090 containerd[1680]: time="2024-10-08T20:04:16.287945309Z" level=info msg="RemoveContainer for \"f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f\""
Oct 8 20:04:16.293140 systemd[1]: Removed slice kubepods-besteffort-pod68af1901_6c31_43ba_bfa0_bea661dcd695.slice - libcontainer container kubepods-besteffort-pod68af1901_6c31_43ba_bfa0_bea661dcd695.slice.
Oct 8 20:04:16.301313 containerd[1680]: time="2024-10-08T20:04:16.301274134Z" level=info msg="RemoveContainer for \"f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f\" returns successfully"
Oct 8 20:04:16.301881 kubelet[3207]: I1008 20:04:16.301568 3207 scope.go:117] "RemoveContainer" containerID="f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f"
Oct 8 20:04:16.302003 containerd[1680]: time="2024-10-08T20:04:16.301805439Z" level=error msg="ContainerStatus for \"f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f\": not found"
Oct 8 20:04:16.302250 kubelet[3207]: E1008 20:04:16.302137 3207 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f\": not found" containerID="f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f"
Oct 8 20:04:16.302250 kubelet[3207]: I1008 20:04:16.302172 3207 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f"} err="failed to get container status \"f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7a899094fda98269707eb0c10eb8c16834efb6849a8ce61bbd16fd1a7645d6f\": not found"
Oct 8 20:04:16.342087 kubelet[3207]: E1008 20:04:16.341499 3207 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68af1901-6c31-43ba-bfa0-bea661dcd695" containerName="flexvol-driver"
Oct 8 20:04:16.342087 kubelet[3207]: I1008 20:04:16.341677 3207 memory_manager.go:354] "RemoveStaleState removing state" podUID="68af1901-6c31-43ba-bfa0-bea661dcd695" containerName="flexvol-driver"
Oct 8 20:04:16.360682 systemd[1]: Created slice kubepods-besteffort-pod83eae44c_000f_46b5_870f_172197d4096b.slice - libcontainer container kubepods-besteffort-pod83eae44c_000f_46b5_870f_172197d4096b.slice.
Oct 8 20:04:16.425868 kubelet[3207]: I1008 20:04:16.425368 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83eae44c-000f-46b5-870f-172197d4096b-xtables-lock\") pod \"calico-node-z9cjg\" (UID: \"83eae44c-000f-46b5-870f-172197d4096b\") " pod="calico-system/calico-node-z9cjg"
Oct 8 20:04:16.425868 kubelet[3207]: I1008 20:04:16.425441 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hcb4\" (UniqueName: \"kubernetes.io/projected/83eae44c-000f-46b5-870f-172197d4096b-kube-api-access-5hcb4\") pod \"calico-node-z9cjg\" (UID: \"83eae44c-000f-46b5-870f-172197d4096b\") " pod="calico-system/calico-node-z9cjg"
Oct 8 20:04:16.425868 kubelet[3207]: I1008 20:04:16.425476 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/83eae44c-000f-46b5-870f-172197d4096b-var-run-calico\") pod \"calico-node-z9cjg\" (UID: \"83eae44c-000f-46b5-870f-172197d4096b\") " pod="calico-system/calico-node-z9cjg"
Oct 8 20:04:16.425868 kubelet[3207]: I1008 20:04:16.425502 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/83eae44c-000f-46b5-870f-172197d4096b-var-lib-calico\") pod \"calico-node-z9cjg\" (UID: \"83eae44c-000f-46b5-870f-172197d4096b\") " pod="calico-system/calico-node-z9cjg"
Oct 8 20:04:16.425868 kubelet[3207]: I1008 20:04:16.425529 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/83eae44c-000f-46b5-870f-172197d4096b-flexvol-driver-host\") pod \"calico-node-z9cjg\" (UID: \"83eae44c-000f-46b5-870f-172197d4096b\") " pod="calico-system/calico-node-z9cjg"
Oct 8 20:04:16.426316 kubelet[3207]: I1008 20:04:16.425561 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/83eae44c-000f-46b5-870f-172197d4096b-cni-bin-dir\") pod \"calico-node-z9cjg\" (UID: \"83eae44c-000f-46b5-870f-172197d4096b\") " pod="calico-system/calico-node-z9cjg"
Oct 8 20:04:16.426316 kubelet[3207]: I1008 20:04:16.425589 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83eae44c-000f-46b5-870f-172197d4096b-lib-modules\") pod \"calico-node-z9cjg\" (UID: \"83eae44c-000f-46b5-870f-172197d4096b\") " pod="calico-system/calico-node-z9cjg"
Oct 8 20:04:16.426316 kubelet[3207]: I1008 20:04:16.425615 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/83eae44c-000f-46b5-870f-172197d4096b-cni-log-dir\") pod \"calico-node-z9cjg\" (UID: \"83eae44c-000f-46b5-870f-172197d4096b\") " pod="calico-system/calico-node-z9cjg"
Oct 8 20:04:16.426316 kubelet[3207]: I1008 20:04:16.425643 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83eae44c-000f-46b5-870f-172197d4096b-tigera-ca-bundle\") pod \"calico-node-z9cjg\" (UID: \"83eae44c-000f-46b5-870f-172197d4096b\") " pod="calico-system/calico-node-z9cjg"
Oct 8 20:04:16.426316 kubelet[3207]: I1008 20:04:16.425667 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/83eae44c-000f-46b5-870f-172197d4096b-node-certs\") pod \"calico-node-z9cjg\" (UID: \"83eae44c-000f-46b5-870f-172197d4096b\") " pod="calico-system/calico-node-z9cjg"
Oct 8 20:04:16.426604 kubelet[3207]: I1008 20:04:16.425692 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/83eae44c-000f-46b5-870f-172197d4096b-policysync\") pod \"calico-node-z9cjg\" (UID: \"83eae44c-000f-46b5-870f-172197d4096b\") " pod="calico-system/calico-node-z9cjg"
Oct 8 20:04:16.426604 kubelet[3207]: I1008 20:04:16.425717 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/83eae44c-000f-46b5-870f-172197d4096b-cni-net-dir\") pod \"calico-node-z9cjg\" (UID: \"83eae44c-000f-46b5-870f-172197d4096b\") " pod="calico-system/calico-node-z9cjg"
Oct 8 20:04:16.667487 containerd[1680]: time="2024-10-08T20:04:16.667361384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z9cjg,Uid:83eae44c-000f-46b5-870f-172197d4096b,Namespace:calico-system,Attempt:0,}"
Oct 8 20:04:16.736417 containerd[1680]: time="2024-10-08T20:04:16.736071131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:04:16.736417 containerd[1680]: time="2024-10-08T20:04:16.736241933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:04:16.736417 containerd[1680]: time="2024-10-08T20:04:16.736280433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:04:16.736816 containerd[1680]: time="2024-10-08T20:04:16.736390534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:04:16.766088 systemd[1]: Started cri-containerd-4119896361fe0a9a2375bd173f80b5ce0f39d0c0b0ec9e620f7edc4c02309d93.scope - libcontainer container 4119896361fe0a9a2375bd173f80b5ce0f39d0c0b0ec9e620f7edc4c02309d93.
Oct 8 20:04:16.821608 containerd[1680]: time="2024-10-08T20:04:16.821526836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z9cjg,Uid:83eae44c-000f-46b5-870f-172197d4096b,Namespace:calico-system,Attempt:0,} returns sandbox id \"4119896361fe0a9a2375bd173f80b5ce0f39d0c0b0ec9e620f7edc4c02309d93\""
Oct 8 20:04:16.825770 containerd[1680]: time="2024-10-08T20:04:16.825553774Z" level=info msg="CreateContainer within sandbox \"4119896361fe0a9a2375bd173f80b5ce0f39d0c0b0ec9e620f7edc4c02309d93\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Oct 8 20:04:16.870508 containerd[1680]: time="2024-10-08T20:04:16.870459997Z" level=info msg="CreateContainer within sandbox \"4119896361fe0a9a2375bd173f80b5ce0f39d0c0b0ec9e620f7edc4c02309d93\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ff33c31768a4c2d274b1eb4f7bf4e0586414f917f6a4ba229f3d1bddd3527b9a\""
Oct 8 20:04:16.871793 containerd[1680]: time="2024-10-08T20:04:16.871567808Z" level=info msg="StartContainer for \"ff33c31768a4c2d274b1eb4f7bf4e0586414f917f6a4ba229f3d1bddd3527b9a\""
Oct 8 20:04:16.926659 systemd[1]: Started cri-containerd-ff33c31768a4c2d274b1eb4f7bf4e0586414f917f6a4ba229f3d1bddd3527b9a.scope - libcontainer container ff33c31768a4c2d274b1eb4f7bf4e0586414f917f6a4ba229f3d1bddd3527b9a.
Oct 8 20:04:16.987958 containerd[1680]: time="2024-10-08T20:04:16.987869704Z" level=info msg="StartContainer for \"ff33c31768a4c2d274b1eb4f7bf4e0586414f917f6a4ba229f3d1bddd3527b9a\" returns successfully"
Oct 8 20:04:16.996001 systemd[1]: cri-containerd-ff33c31768a4c2d274b1eb4f7bf4e0586414f917f6a4ba229f3d1bddd3527b9a.scope: Deactivated successfully.
Oct 8 20:04:17.265355 kubelet[3207]: I1008 20:04:17.264251 3207 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68af1901-6c31-43ba-bfa0-bea661dcd695" path="/var/lib/kubelet/pods/68af1901-6c31-43ba-bfa0-bea661dcd695/volumes"
Oct 8 20:04:17.287210 containerd[1680]: time="2024-10-08T20:04:17.287146524Z" level=info msg="shim disconnected" id=ff33c31768a4c2d274b1eb4f7bf4e0586414f917f6a4ba229f3d1bddd3527b9a namespace=k8s.io
Oct 8 20:04:17.287210 containerd[1680]: time="2024-10-08T20:04:17.287206624Z" level=warning msg="cleaning up after shim disconnected" id=ff33c31768a4c2d274b1eb4f7bf4e0586414f917f6a4ba229f3d1bddd3527b9a namespace=k8s.io
Oct 8 20:04:17.287210 containerd[1680]: time="2024-10-08T20:04:17.287218624Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 20:04:17.331511 containerd[1680]: time="2024-10-08T20:04:17.331110538Z" level=warning msg="cleanup warnings time=\"2024-10-08T20:04:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Oct 8 20:04:18.072751 containerd[1680]: time="2024-10-08T20:04:18.072530324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:04:18.077014 containerd[1680]: time="2024-10-08T20:04:18.076095857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335"
Oct 8 20:04:18.087230 containerd[1680]: time="2024-10-08T20:04:18.087130661Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:04:18.094042 containerd[1680]: time="2024-10-08T20:04:18.093399520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:04:18.094328 containerd[1680]: time="2024-10-08T20:04:18.094290629Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.481238802s"
Oct 8 20:04:18.094414 containerd[1680]: time="2024-10-08T20:04:18.094335529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\""
Oct 8 20:04:18.112208 containerd[1680]: time="2024-10-08T20:04:18.112167197Z" level=info msg="CreateContainer within sandbox \"89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 8 20:04:18.154426 containerd[1680]: time="2024-10-08T20:04:18.154380995Z" level=info msg="CreateContainer within sandbox \"89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab\""
Oct 8 20:04:18.154993 containerd[1680]: time="2024-10-08T20:04:18.154963601Z" level=info msg="StartContainer for \"4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab\""
Oct 8 20:04:18.190083 systemd[1]: Started cri-containerd-4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab.scope - libcontainer container 4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab.
Oct 8 20:04:18.202726 kubelet[3207]: E1008 20:04:18.202671 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2j8vr" podUID="0a5c8fac-8ac4-4f20-883d-6418322f8148"
Oct 8 20:04:18.239732 containerd[1680]: time="2024-10-08T20:04:18.239581198Z" level=info msg="StartContainer for \"4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab\" returns successfully"
Oct 8 20:04:18.300243 containerd[1680]: time="2024-10-08T20:04:18.300196069Z" level=info msg="StopContainer for \"4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab\" with timeout 300 (s)"
Oct 8 20:04:18.301406 containerd[1680]: time="2024-10-08T20:04:18.301364980Z" level=info msg="Stop container \"4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab\" with signal terminated"
Oct 8 20:04:18.308947 containerd[1680]: time="2024-10-08T20:04:18.308904851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\""
Oct 8 20:04:18.326120 systemd[1]: cri-containerd-4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab.scope: Deactivated successfully.
Oct 8 20:04:18.402310 kubelet[3207]: I1008 20:04:18.402242 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6dc748bd9b-nrl5f" podStartSLOduration=1.725203161 podStartE2EDuration="6.40221643s" podCreationTimestamp="2024-10-08 20:04:12 +0000 UTC" firstStartedPulling="2024-10-08 20:04:13.41832067 +0000 UTC m=+12.300856963" lastFinishedPulling="2024-10-08 20:04:18.095333939 +0000 UTC m=+16.977870232" observedRunningTime="2024-10-08 20:04:18.360968242 +0000 UTC m=+17.243504535" watchObservedRunningTime="2024-10-08 20:04:18.40221643 +0000 UTC m=+17.284752823"
Oct 8 20:04:18.534321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab-rootfs.mount: Deactivated successfully.
Oct 8 20:04:18.871650 containerd[1680]: time="2024-10-08T20:04:18.871579853Z" level=info msg="shim disconnected" id=4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab namespace=k8s.io
Oct 8 20:04:18.871650 containerd[1680]: time="2024-10-08T20:04:18.871638753Z" level=warning msg="cleaning up after shim disconnected" id=4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab namespace=k8s.io
Oct 8 20:04:18.871650 containerd[1680]: time="2024-10-08T20:04:18.871650953Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 20:04:18.891770 containerd[1680]: time="2024-10-08T20:04:18.891729943Z" level=info msg="StopContainer for \"4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab\" returns successfully"
Oct 8 20:04:18.892245 containerd[1680]: time="2024-10-08T20:04:18.892214447Z" level=info msg="StopPodSandbox for \"89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07\""
Oct 8 20:04:18.892531 containerd[1680]: time="2024-10-08T20:04:18.892262248Z" level=info msg="Container to stop \"4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 20:04:18.895450 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07-shm.mount: Deactivated successfully.
Oct 8 20:04:18.901053 systemd[1]: cri-containerd-89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07.scope: Deactivated successfully.
Oct 8 20:04:18.922044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07-rootfs.mount: Deactivated successfully.
Oct 8 20:04:18.938681 containerd[1680]: time="2024-10-08T20:04:18.938622985Z" level=info msg="shim disconnected" id=89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07 namespace=k8s.io
Oct 8 20:04:18.938992 containerd[1680]: time="2024-10-08T20:04:18.938828786Z" level=warning msg="cleaning up after shim disconnected" id=89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07 namespace=k8s.io
Oct 8 20:04:18.938992 containerd[1680]: time="2024-10-08T20:04:18.938848387Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 20:04:18.951727 containerd[1680]: time="2024-10-08T20:04:18.951691908Z" level=info msg="TearDown network for sandbox \"89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07\" successfully"
Oct 8 20:04:18.951727 containerd[1680]: time="2024-10-08T20:04:18.951720908Z" level=info msg="StopPodSandbox for \"89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07\" returns successfully"
Oct 8 20:04:18.991953 kubelet[3207]: E1008 20:04:18.991762 3207 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5de0ac78-cd6f-4fec-b192-f3d95df51dee" containerName="calico-typha"
Oct 8 20:04:18.991953 kubelet[3207]: I1008 20:04:18.991855 3207 memory_manager.go:354] "RemoveStaleState removing state" podUID="5de0ac78-cd6f-4fec-b192-f3d95df51dee" containerName="calico-typha"
Oct 8 20:04:19.008126 systemd[1]: Created slice kubepods-besteffort-pod5dbc900f_ad54_414a_b5a4_08183d13295e.slice - libcontainer container kubepods-besteffort-pod5dbc900f_ad54_414a_b5a4_08183d13295e.slice.
Oct 8 20:04:19.144696 kubelet[3207]: I1008 20:04:19.143856 3207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5de0ac78-cd6f-4fec-b192-f3d95df51dee-tigera-ca-bundle\") pod \"5de0ac78-cd6f-4fec-b192-f3d95df51dee\" (UID: \"5de0ac78-cd6f-4fec-b192-f3d95df51dee\") "
Oct 8 20:04:19.144696 kubelet[3207]: I1008 20:04:19.144013 3207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wthvd\" (UniqueName: \"kubernetes.io/projected/5de0ac78-cd6f-4fec-b192-f3d95df51dee-kube-api-access-wthvd\") pod \"5de0ac78-cd6f-4fec-b192-f3d95df51dee\" (UID: \"5de0ac78-cd6f-4fec-b192-f3d95df51dee\") "
Oct 8 20:04:19.144696 kubelet[3207]: I1008 20:04:19.144051 3207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5de0ac78-cd6f-4fec-b192-f3d95df51dee-typha-certs\") pod \"5de0ac78-cd6f-4fec-b192-f3d95df51dee\" (UID: \"5de0ac78-cd6f-4fec-b192-f3d95df51dee\") "
Oct 8 20:04:19.144696 kubelet[3207]: I1008 20:04:19.144138 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5dbc900f-ad54-414a-b5a4-08183d13295e-tigera-ca-bundle\") pod \"calico-typha-55d694f95d-nvzp5\" (UID: \"5dbc900f-ad54-414a-b5a4-08183d13295e\") " pod="calico-system/calico-typha-55d694f95d-nvzp5"
Oct 8 20:04:19.144696 kubelet[3207]: I1008 20:04:19.144175 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5dbc900f-ad54-414a-b5a4-08183d13295e-typha-certs\") pod \"calico-typha-55d694f95d-nvzp5\" (UID: \"5dbc900f-ad54-414a-b5a4-08183d13295e\") " pod="calico-system/calico-typha-55d694f95d-nvzp5"
Oct 8 20:04:19.145146 kubelet[3207]: I1008 20:04:19.144222 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w7lb\" (UniqueName: \"kubernetes.io/projected/5dbc900f-ad54-414a-b5a4-08183d13295e-kube-api-access-4w7lb\") pod \"calico-typha-55d694f95d-nvzp5\" (UID: \"5dbc900f-ad54-414a-b5a4-08183d13295e\") " pod="calico-system/calico-typha-55d694f95d-nvzp5"
Oct 8 20:04:19.152842 systemd[1]: var-lib-kubelet-pods-5de0ac78\x2dcd6f\x2d4fec\x2db192\x2df3d95df51dee-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully.
Oct 8 20:04:19.158504 kubelet[3207]: I1008 20:04:19.158061 3207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de0ac78-cd6f-4fec-b192-f3d95df51dee-kube-api-access-wthvd" (OuterVolumeSpecName: "kube-api-access-wthvd") pod "5de0ac78-cd6f-4fec-b192-f3d95df51dee" (UID: "5de0ac78-cd6f-4fec-b192-f3d95df51dee"). InnerVolumeSpecName "kube-api-access-wthvd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 8 20:04:19.158558 systemd[1]: var-lib-kubelet-pods-5de0ac78\x2dcd6f\x2d4fec\x2db192\x2df3d95df51dee-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwthvd.mount: Deactivated successfully.
Oct 8 20:04:19.159230 kubelet[3207]: I1008 20:04:19.159066 3207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5de0ac78-cd6f-4fec-b192-f3d95df51dee-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "5de0ac78-cd6f-4fec-b192-f3d95df51dee" (UID: "5de0ac78-cd6f-4fec-b192-f3d95df51dee"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 8 20:04:19.159230 kubelet[3207]: I1008 20:04:19.159198 3207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5de0ac78-cd6f-4fec-b192-f3d95df51dee-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "5de0ac78-cd6f-4fec-b192-f3d95df51dee" (UID: "5de0ac78-cd6f-4fec-b192-f3d95df51dee"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 8 20:04:19.210264 systemd[1]: Removed slice kubepods-besteffort-pod5de0ac78_cd6f_4fec_b192_f3d95df51dee.slice - libcontainer container kubepods-besteffort-pod5de0ac78_cd6f_4fec_b192_f3d95df51dee.slice.
Oct 8 20:04:19.244710 kubelet[3207]: I1008 20:04:19.244656 3207 reconciler_common.go:288] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5de0ac78-cd6f-4fec-b192-f3d95df51dee-typha-certs\") on node \"ci-4081.1.0-a-b9ef23c535\" DevicePath \"\""
Oct 8 20:04:19.244710 kubelet[3207]: I1008 20:04:19.244711 3207 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5de0ac78-cd6f-4fec-b192-f3d95df51dee-tigera-ca-bundle\") on node \"ci-4081.1.0-a-b9ef23c535\" DevicePath \"\""
Oct 8 20:04:19.244710 kubelet[3207]: I1008 20:04:19.244730 3207 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wthvd\" (UniqueName: \"kubernetes.io/projected/5de0ac78-cd6f-4fec-b192-f3d95df51dee-kube-api-access-wthvd\") on node \"ci-4081.1.0-a-b9ef23c535\" DevicePath \"\""
Oct 8 20:04:19.310630 kubelet[3207]: I1008 20:04:19.310580 3207 scope.go:117] "RemoveContainer" containerID="4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab"
Oct 8 20:04:19.312654 containerd[1680]: time="2024-10-08T20:04:19.312536908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55d694f95d-nvzp5,Uid:5dbc900f-ad54-414a-b5a4-08183d13295e,Namespace:calico-system,Attempt:0,}"
Oct 8 20:04:19.316773 containerd[1680]: time="2024-10-08T20:04:19.316542145Z" level=info msg="RemoveContainer for \"4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab\""
Oct 8 20:04:19.351658 containerd[1680]: time="2024-10-08T20:04:19.351604976Z" level=info msg="RemoveContainer for \"4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab\" returns successfully"
Oct 8 20:04:19.351953 kubelet[3207]: I1008 20:04:19.351902 3207 scope.go:117] "RemoveContainer" containerID="4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab"
Oct 8 20:04:19.352242 containerd[1680]: time="2024-10-08T20:04:19.352202181Z" level=error msg="ContainerStatus for \"4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab\": not found"
Oct 8 20:04:19.352414 kubelet[3207]: E1008 20:04:19.352385 3207 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab\": not found" containerID="4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab"
Oct 8 20:04:19.352511 kubelet[3207]: I1008 20:04:19.352426 3207 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab"} err="failed to get container status \"4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"4726e78541e827ca3a08c5e1e730c34acb86436992e25ad893525cb9cecfe6ab\": not found"
Oct 8 20:04:19.395853 containerd[1680]: time="2024-10-08T20:04:19.395672591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:04:19.395853 containerd[1680]: time="2024-10-08T20:04:19.395716291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:04:19.395853 containerd[1680]: time="2024-10-08T20:04:19.395730592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:04:19.396179 containerd[1680]: time="2024-10-08T20:04:19.395860193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:04:19.417101 systemd[1]: Started cri-containerd-0329a9fbc9c105f1dcbd7167861959452208e8e22ecc3fdd7dc270735d3f0ce7.scope - libcontainer container 0329a9fbc9c105f1dcbd7167861959452208e8e22ecc3fdd7dc270735d3f0ce7.
Oct 8 20:04:19.458609 containerd[1680]: time="2024-10-08T20:04:19.458561384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55d694f95d-nvzp5,Uid:5dbc900f-ad54-414a-b5a4-08183d13295e,Namespace:calico-system,Attempt:0,} returns sandbox id \"0329a9fbc9c105f1dcbd7167861959452208e8e22ecc3fdd7dc270735d3f0ce7\""
Oct 8 20:04:19.467131 containerd[1680]: time="2024-10-08T20:04:19.467091264Z" level=info msg="CreateContainer within sandbox \"0329a9fbc9c105f1dcbd7167861959452208e8e22ecc3fdd7dc270735d3f0ce7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 8 20:04:19.511263 containerd[1680]: time="2024-10-08T20:04:19.511208980Z" level=info msg="CreateContainer within sandbox \"0329a9fbc9c105f1dcbd7167861959452208e8e22ecc3fdd7dc270735d3f0ce7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"76912f388b1cce738446984cfe0dd908f9c598c4dd18d5318bb2a92828a69246\""
Oct 8 20:04:19.511957 containerd[1680]: time="2024-10-08T20:04:19.511861986Z" level=info msg="StartContainer for \"76912f388b1cce738446984cfe0dd908f9c598c4dd18d5318bb2a92828a69246\""
Oct 8 20:04:19.546699 systemd[1]: var-lib-kubelet-pods-5de0ac78\x2dcd6f\x2d4fec\x2db192\x2df3d95df51dee-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
Oct 8 20:04:19.556120 systemd[1]: Started cri-containerd-76912f388b1cce738446984cfe0dd908f9c598c4dd18d5318bb2a92828a69246.scope - libcontainer container 76912f388b1cce738446984cfe0dd908f9c598c4dd18d5318bb2a92828a69246.
Oct 8 20:04:19.606708 containerd[1680]: time="2024-10-08T20:04:19.606657179Z" level=info msg="StartContainer for \"76912f388b1cce738446984cfe0dd908f9c598c4dd18d5318bb2a92828a69246\" returns successfully"
Oct 8 20:04:20.202389 kubelet[3207]: E1008 20:04:20.202334 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2j8vr" podUID="0a5c8fac-8ac4-4f20-883d-6418322f8148"
Oct 8 20:04:21.207623 kubelet[3207]: I1008 20:04:21.207580 3207 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5de0ac78-cd6f-4fec-b192-f3d95df51dee" path="/var/lib/kubelet/pods/5de0ac78-cd6f-4fec-b192-f3d95df51dee/volumes"
Oct 8 20:04:22.203005 kubelet[3207]: E1008 20:04:22.202953 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2j8vr" podUID="0a5c8fac-8ac4-4f20-883d-6418322f8148"
Oct 8 20:04:23.733540 containerd[1680]: time="2024-10-08T20:04:23.733482302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:04:23.735830 containerd[1680]: time="2024-10-08T20:04:23.735766923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736"
Oct 8 20:04:23.739926 containerd[1680]: time="2024-10-08T20:04:23.739864360Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:04:23.744652 containerd[1680]: time="2024-10-08T20:04:23.744609904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:04:23.745548 containerd[1680]: time="2024-10-08T20:04:23.745408011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 5.43645156s"
Oct 8 20:04:23.745548 containerd[1680]: time="2024-10-08T20:04:23.745445511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\""
Oct 8 20:04:23.748784 containerd[1680]: time="2024-10-08T20:04:23.748748341Z" level=info msg="CreateContainer within sandbox \"4119896361fe0a9a2375bd173f80b5ce0f39d0c0b0ec9e620f7edc4c02309d93\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Oct 8 20:04:23.791593 containerd[1680]: time="2024-10-08T20:04:23.791550332Z" level=info msg="CreateContainer within sandbox \"4119896361fe0a9a2375bd173f80b5ce0f39d0c0b0ec9e620f7edc4c02309d93\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e3140bcc8c5bd9bd28c095ed7685b2a5e78873da1219745c1ed211fce355924d\""
Oct 8 20:04:23.792075 containerd[1680]: time="2024-10-08T20:04:23.791978435Z" level=info msg="StartContainer for \"e3140bcc8c5bd9bd28c095ed7685b2a5e78873da1219745c1ed211fce355924d\""
Oct 8 20:04:23.829083 systemd[1]: Started cri-containerd-e3140bcc8c5bd9bd28c095ed7685b2a5e78873da1219745c1ed211fce355924d.scope - libcontainer container e3140bcc8c5bd9bd28c095ed7685b2a5e78873da1219745c1ed211fce355924d.
Oct 8 20:04:23.857756 containerd[1680]: time="2024-10-08T20:04:23.857709935Z" level=info msg="StartContainer for \"e3140bcc8c5bd9bd28c095ed7685b2a5e78873da1219745c1ed211fce355924d\" returns successfully"
Oct 8 20:04:24.202715 kubelet[3207]: E1008 20:04:24.202277 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2j8vr" podUID="0a5c8fac-8ac4-4f20-883d-6418322f8148"
Oct 8 20:04:24.345488 kubelet[3207]: I1008 20:04:24.345238 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55d694f95d-nvzp5" podStartSLOduration=11.34521398 podStartE2EDuration="11.34521398s" podCreationTimestamp="2024-10-08 20:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:04:20.330459299 +0000 UTC m=+19.212995692" watchObservedRunningTime="2024-10-08 20:04:24.34521398 +0000 UTC m=+23.227750273"
Oct 8 20:04:25.223766 containerd[1680]: time="2024-10-08T20:04:25.223699290Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 8 20:04:25.225959 systemd[1]: cri-containerd-e3140bcc8c5bd9bd28c095ed7685b2a5e78873da1219745c1ed211fce355924d.scope: Deactivated successfully.
Oct 8 20:04:25.238004 kubelet[3207]: I1008 20:04:25.237482 3207 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Oct 8 20:04:25.255156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3140bcc8c5bd9bd28c095ed7685b2a5e78873da1219745c1ed211fce355924d-rootfs.mount: Deactivated successfully. Oct 8 20:04:25.291848 systemd[1]: Created slice kubepods-burstable-pod0a1778ab_0ba0_44e2_a8c0_fe7fdaa08be5.slice - libcontainer container kubepods-burstable-pod0a1778ab_0ba0_44e2_a8c0_fe7fdaa08be5.slice. Oct 8 20:04:25.308012 systemd[1]: Created slice kubepods-besteffort-pod115f7617_1709_468f_88b3_4136a07ce1cb.slice - libcontainer container kubepods-besteffort-pod115f7617_1709_468f_88b3_4136a07ce1cb.slice. Oct 8 20:04:25.803945 kubelet[3207]: I1008 20:04:25.387362 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm487\" (UniqueName: \"kubernetes.io/projected/0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5-kube-api-access-bm487\") pod \"coredns-6f6b679f8f-fkjkp\" (UID: \"0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5\") " pod="kube-system/coredns-6f6b679f8f-fkjkp" Oct 8 20:04:25.803945 kubelet[3207]: I1008 20:04:25.387438 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5-config-volume\") pod \"coredns-6f6b679f8f-fkjkp\" (UID: \"0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5\") " pod="kube-system/coredns-6f6b679f8f-fkjkp" Oct 8 20:04:25.803945 kubelet[3207]: I1008 20:04:25.488057 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/115f7617-1709-468f-88b3-4136a07ce1cb-tigera-ca-bundle\") pod \"calico-kube-controllers-7fb9bb5bf5-cl6qd\" (UID: \"115f7617-1709-468f-88b3-4136a07ce1cb\") " pod="calico-system/calico-kube-controllers-7fb9bb5bf5-cl6qd" Oct 8 
20:04:25.803945 kubelet[3207]: I1008 20:04:25.488109 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2l49\" (UniqueName: \"kubernetes.io/projected/115f7617-1709-468f-88b3-4136a07ce1cb-kube-api-access-b2l49\") pod \"calico-kube-controllers-7fb9bb5bf5-cl6qd\" (UID: \"115f7617-1709-468f-88b3-4136a07ce1cb\") " pod="calico-system/calico-kube-controllers-7fb9bb5bf5-cl6qd" Oct 8 20:04:25.803945 kubelet[3207]: I1008 20:04:25.488151 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c43b05fe-589a-477a-823a-198b47900c84-config-volume\") pod \"coredns-6f6b679f8f-dnl97\" (UID: \"c43b05fe-589a-477a-823a-198b47900c84\") " pod="kube-system/coredns-6f6b679f8f-dnl97" Oct 8 20:04:25.316816 systemd[1]: Created slice kubepods-burstable-podc43b05fe_589a_477a_823a_198b47900c84.slice - libcontainer container kubepods-burstable-podc43b05fe_589a_477a_823a_198b47900c84.slice. 
Oct 8 20:04:25.804384 kubelet[3207]: I1008 20:04:25.488175 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64twc\" (UniqueName: \"kubernetes.io/projected/c43b05fe-589a-477a-823a-198b47900c84-kube-api-access-64twc\") pod \"coredns-6f6b679f8f-dnl97\" (UID: \"c43b05fe-589a-477a-823a-198b47900c84\") " pod="kube-system/coredns-6f6b679f8f-dnl97" Oct 8 20:04:26.102627 containerd[1680]: time="2024-10-08T20:04:26.102494104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fb9bb5bf5-cl6qd,Uid:115f7617-1709-468f-88b3-4136a07ce1cb,Namespace:calico-system,Attempt:0,}" Oct 8 20:04:26.106233 containerd[1680]: time="2024-10-08T20:04:26.106194037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fkjkp,Uid:0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5,Namespace:kube-system,Attempt:0,}" Oct 8 20:04:26.107950 containerd[1680]: time="2024-10-08T20:04:26.107695651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dnl97,Uid:c43b05fe-589a-477a-823a-198b47900c84,Namespace:kube-system,Attempt:0,}" Oct 8 20:04:26.209172 systemd[1]: Created slice kubepods-besteffort-pod0a5c8fac_8ac4_4f20_883d_6418322f8148.slice - libcontainer container kubepods-besteffort-pod0a5c8fac_8ac4_4f20_883d_6418322f8148.slice. 
Oct 8 20:04:26.212347 containerd[1680]: time="2024-10-08T20:04:26.212281205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2j8vr,Uid:0a5c8fac-8ac4-4f20-883d-6418322f8148,Namespace:calico-system,Attempt:0,}" Oct 8 20:04:26.854281 containerd[1680]: time="2024-10-08T20:04:26.854192558Z" level=info msg="shim disconnected" id=e3140bcc8c5bd9bd28c095ed7685b2a5e78873da1219745c1ed211fce355924d namespace=k8s.io Oct 8 20:04:26.854281 containerd[1680]: time="2024-10-08T20:04:26.854268458Z" level=warning msg="cleaning up after shim disconnected" id=e3140bcc8c5bd9bd28c095ed7685b2a5e78873da1219745c1ed211fce355924d namespace=k8s.io Oct 8 20:04:26.854281 containerd[1680]: time="2024-10-08T20:04:26.854280559Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:04:27.111579 containerd[1680]: time="2024-10-08T20:04:27.110369594Z" level=error msg="Failed to destroy network for sandbox \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.111579 containerd[1680]: time="2024-10-08T20:04:27.110799498Z" level=error msg="encountered an error cleaning up failed sandbox \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.111579 containerd[1680]: time="2024-10-08T20:04:27.110869698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fb9bb5bf5-cl6qd,Uid:115f7617-1709-468f-88b3-4136a07ce1cb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.112352 kubelet[3207]: E1008 20:04:27.111146 3207 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.112352 kubelet[3207]: E1008 20:04:27.111253 3207 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fb9bb5bf5-cl6qd" Oct 8 20:04:27.112352 kubelet[3207]: E1008 20:04:27.111279 3207 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fb9bb5bf5-cl6qd" Oct 8 20:04:27.112793 kubelet[3207]: E1008 20:04:27.111336 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fb9bb5bf5-cl6qd_calico-system(115f7617-1709-468f-88b3-4136a07ce1cb)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-7fb9bb5bf5-cl6qd_calico-system(115f7617-1709-468f-88b3-4136a07ce1cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fb9bb5bf5-cl6qd" podUID="115f7617-1709-468f-88b3-4136a07ce1cb" Oct 8 20:04:27.124957 containerd[1680]: time="2024-10-08T20:04:27.124415922Z" level=error msg="Failed to destroy network for sandbox \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.124957 containerd[1680]: time="2024-10-08T20:04:27.124877626Z" level=error msg="encountered an error cleaning up failed sandbox \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.125995 containerd[1680]: time="2024-10-08T20:04:27.124970927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dnl97,Uid:c43b05fe-589a-477a-823a-198b47900c84,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.126110 kubelet[3207]: E1008 20:04:27.125245 3207 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.126110 kubelet[3207]: E1008 20:04:27.125317 3207 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-dnl97" Oct 8 20:04:27.126110 kubelet[3207]: E1008 20:04:27.125342 3207 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-dnl97" Oct 8 20:04:27.126300 kubelet[3207]: E1008 20:04:27.125407 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-dnl97_kube-system(c43b05fe-589a-477a-823a-198b47900c84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-dnl97_kube-system(c43b05fe-589a-477a-823a-198b47900c84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-dnl97" podUID="c43b05fe-589a-477a-823a-198b47900c84" Oct 8 20:04:27.132929 containerd[1680]: time="2024-10-08T20:04:27.132856099Z" level=error msg="Failed to destroy network for sandbox \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.133476 containerd[1680]: time="2024-10-08T20:04:27.133428804Z" level=error msg="encountered an error cleaning up failed sandbox \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.133692 containerd[1680]: time="2024-10-08T20:04:27.133642006Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2j8vr,Uid:0a5c8fac-8ac4-4f20-883d-6418322f8148,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.134537 kubelet[3207]: E1008 20:04:27.134000 3207 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.134537 
kubelet[3207]: E1008 20:04:27.134071 3207 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2j8vr" Oct 8 20:04:27.134537 kubelet[3207]: E1008 20:04:27.134097 3207 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2j8vr" Oct 8 20:04:27.134718 kubelet[3207]: E1008 20:04:27.134141 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2j8vr_calico-system(0a5c8fac-8ac4-4f20-883d-6418322f8148)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2j8vr_calico-system(0a5c8fac-8ac4-4f20-883d-6418322f8148)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2j8vr" podUID="0a5c8fac-8ac4-4f20-883d-6418322f8148" Oct 8 20:04:27.135811 containerd[1680]: time="2024-10-08T20:04:27.135778525Z" level=error msg="Failed to destroy network for sandbox \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.136134 containerd[1680]: time="2024-10-08T20:04:27.136097228Z" level=error msg="encountered an error cleaning up failed sandbox \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.136210 containerd[1680]: time="2024-10-08T20:04:27.136159929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fkjkp,Uid:0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.136396 kubelet[3207]: E1008 20:04:27.136368 3207 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.136488 kubelet[3207]: E1008 20:04:27.136417 3207 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fkjkp" Oct 8 20:04:27.136488 kubelet[3207]: E1008 20:04:27.136442 3207 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fkjkp" Oct 8 20:04:27.136578 kubelet[3207]: E1008 20:04:27.136498 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-fkjkp_kube-system(0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-fkjkp_kube-system(0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fkjkp" podUID="0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5" Oct 8 20:04:27.334988 kubelet[3207]: I1008 20:04:27.334940 3207 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Oct 8 20:04:27.336290 containerd[1680]: time="2024-10-08T20:04:27.336078252Z" level=info msg="StopPodSandbox for \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\"" Oct 8 20:04:27.336743 containerd[1680]: time="2024-10-08T20:04:27.336664357Z" level=info msg="Ensure that sandbox 60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc in task-service has been cleanup successfully" Oct 8 20:04:27.343828 
containerd[1680]: time="2024-10-08T20:04:27.342454610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 20:04:27.346485 kubelet[3207]: I1008 20:04:27.346455 3207 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Oct 8 20:04:27.347204 containerd[1680]: time="2024-10-08T20:04:27.347031552Z" level=info msg="StopPodSandbox for \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\"" Oct 8 20:04:27.347289 containerd[1680]: time="2024-10-08T20:04:27.347223453Z" level=info msg="Ensure that sandbox 72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f in task-service has been cleanup successfully" Oct 8 20:04:27.355953 kubelet[3207]: I1008 20:04:27.355187 3207 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Oct 8 20:04:27.357109 containerd[1680]: time="2024-10-08T20:04:27.357083543Z" level=info msg="StopPodSandbox for \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\"" Oct 8 20:04:27.357736 containerd[1680]: time="2024-10-08T20:04:27.357711549Z" level=info msg="Ensure that sandbox a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959 in task-service has been cleanup successfully" Oct 8 20:04:27.365586 kubelet[3207]: I1008 20:04:27.365478 3207 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Oct 8 20:04:27.367946 containerd[1680]: time="2024-10-08T20:04:27.367574339Z" level=info msg="StopPodSandbox for \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\"" Oct 8 20:04:27.367946 containerd[1680]: time="2024-10-08T20:04:27.367814641Z" level=info msg="Ensure that sandbox ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724 in task-service has been cleanup successfully" Oct 8 
20:04:27.412256 containerd[1680]: time="2024-10-08T20:04:27.412199746Z" level=error msg="StopPodSandbox for \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\" failed" error="failed to destroy network for sandbox \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.412924 kubelet[3207]: E1008 20:04:27.412741 3207 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Oct 8 20:04:27.412924 kubelet[3207]: E1008 20:04:27.412797 3207 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc"} Oct 8 20:04:27.412924 kubelet[3207]: E1008 20:04:27.412840 3207 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a5c8fac-8ac4-4f20-883d-6418322f8148\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:04:27.412924 kubelet[3207]: E1008 20:04:27.412869 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a5c8fac-8ac4-4f20-883d-6418322f8148\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2j8vr" podUID="0a5c8fac-8ac4-4f20-883d-6418322f8148" Oct 8 20:04:27.438955 containerd[1680]: time="2024-10-08T20:04:27.438457285Z" level=error msg="StopPodSandbox for \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\" failed" error="failed to destroy network for sandbox \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.439116 kubelet[3207]: E1008 20:04:27.438780 3207 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Oct 8 20:04:27.439116 kubelet[3207]: E1008 20:04:27.438843 3207 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f"} Oct 8 20:04:27.439116 kubelet[3207]: E1008 20:04:27.438888 3207 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c43b05fe-589a-477a-823a-198b47900c84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:04:27.440053 kubelet[3207]: E1008 20:04:27.438934 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c43b05fe-589a-477a-823a-198b47900c84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-dnl97" podUID="c43b05fe-589a-477a-823a-198b47900c84" Oct 8 20:04:27.443048 containerd[1680]: time="2024-10-08T20:04:27.442992227Z" level=error msg="StopPodSandbox for \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\" failed" error="failed to destroy network for sandbox \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.443513 kubelet[3207]: E1008 20:04:27.443244 3207 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Oct 8 20:04:27.443513 kubelet[3207]: E1008 20:04:27.443287 3207 
kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959"} Oct 8 20:04:27.443513 kubelet[3207]: E1008 20:04:27.443328 3207 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:04:27.443513 kubelet[3207]: E1008 20:04:27.443355 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fkjkp" podUID="0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5" Oct 8 20:04:27.446345 containerd[1680]: time="2024-10-08T20:04:27.446309057Z" level=error msg="StopPodSandbox for \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\" failed" error="failed to destroy network for sandbox \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:27.446530 kubelet[3207]: E1008 20:04:27.446497 3207 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Oct 8 20:04:27.446615 kubelet[3207]: E1008 20:04:27.446540 3207 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724"} Oct 8 20:04:27.446615 kubelet[3207]: E1008 20:04:27.446585 3207 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"115f7617-1709-468f-88b3-4136a07ce1cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:04:27.446716 kubelet[3207]: E1008 20:04:27.446614 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"115f7617-1709-468f-88b3-4136a07ce1cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fb9bb5bf5-cl6qd" podUID="115f7617-1709-468f-88b3-4136a07ce1cb" Oct 8 20:04:27.946937 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc-shm.mount: 
Deactivated successfully. Oct 8 20:04:27.947067 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f-shm.mount: Deactivated successfully. Oct 8 20:04:27.947153 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959-shm.mount: Deactivated successfully. Oct 8 20:04:27.947228 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724-shm.mount: Deactivated successfully. Oct 8 20:04:32.189901 kubelet[3207]: I1008 20:04:32.189851 3207 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:04:34.326196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1155844871.mount: Deactivated successfully. Oct 8 20:04:34.377778 containerd[1680]: time="2024-10-08T20:04:34.377716375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:34.380175 containerd[1680]: time="2024-10-08T20:04:34.380108595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 8 20:04:34.385955 containerd[1680]: time="2024-10-08T20:04:34.385906844Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:34.391266 containerd[1680]: time="2024-10-08T20:04:34.390566284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:34.391266 containerd[1680]: time="2024-10-08T20:04:34.391133088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id 
\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 7.048506177s" Oct 8 20:04:34.391266 containerd[1680]: time="2024-10-08T20:04:34.391167289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 8 20:04:34.408095 containerd[1680]: time="2024-10-08T20:04:34.408038232Z" level=info msg="CreateContainer within sandbox \"4119896361fe0a9a2375bd173f80b5ce0f39d0c0b0ec9e620f7edc4c02309d93\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 20:04:34.457609 containerd[1680]: time="2024-10-08T20:04:34.457565352Z" level=info msg="CreateContainer within sandbox \"4119896361fe0a9a2375bd173f80b5ce0f39d0c0b0ec9e620f7edc4c02309d93\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8d3513ae716741f5f62f574b2776027eb94235b5a6325d93331027c2aac23214\"" Oct 8 20:04:34.459833 containerd[1680]: time="2024-10-08T20:04:34.458121857Z" level=info msg="StartContainer for \"8d3513ae716741f5f62f574b2776027eb94235b5a6325d93331027c2aac23214\"" Oct 8 20:04:34.488051 systemd[1]: Started cri-containerd-8d3513ae716741f5f62f574b2776027eb94235b5a6325d93331027c2aac23214.scope - libcontainer container 8d3513ae716741f5f62f574b2776027eb94235b5a6325d93331027c2aac23214. Oct 8 20:04:34.520768 containerd[1680]: time="2024-10-08T20:04:34.520628387Z" level=info msg="StartContainer for \"8d3513ae716741f5f62f574b2776027eb94235b5a6325d93331027c2aac23214\" returns successfully" Oct 8 20:04:34.839882 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 20:04:34.840180 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 8 20:04:35.423721 kubelet[3207]: I1008 20:04:35.423462 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z9cjg" podStartSLOduration=3.339944195 podStartE2EDuration="19.423389043s" podCreationTimestamp="2024-10-08 20:04:16 +0000 UTC" firstStartedPulling="2024-10-08 20:04:18.308638349 +0000 UTC m=+17.191174742" lastFinishedPulling="2024-10-08 20:04:34.392083297 +0000 UTC m=+33.274619590" observedRunningTime="2024-10-08 20:04:35.418453301 +0000 UTC m=+34.300989594" watchObservedRunningTime="2024-10-08 20:04:35.423389043 +0000 UTC m=+34.305925436" Oct 8 20:04:36.436437 systemd[1]: run-containerd-runc-k8s.io-8d3513ae716741f5f62f574b2776027eb94235b5a6325d93331027c2aac23214-runc.oFJfQx.mount: Deactivated successfully. Oct 8 20:04:36.492022 kernel: bpftool[4679]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 20:04:36.856275 systemd-networkd[1320]: vxlan.calico: Link UP Oct 8 20:04:36.856285 systemd-networkd[1320]: vxlan.calico: Gained carrier Oct 8 20:04:38.203596 containerd[1680]: time="2024-10-08T20:04:38.203334585Z" level=info msg="StopPodSandbox for \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\"" Oct 8 20:04:38.277452 containerd[1680]: 2024-10-08 20:04:38.245 [INFO][4771] k8s.go 608: Cleaning up netns ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Oct 8 20:04:38.277452 containerd[1680]: 2024-10-08 20:04:38.247 [INFO][4771] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" iface="eth0" netns="/var/run/netns/cni-485df1e3-8149-5046-4b0a-d8a7940b2e7f" Oct 8 20:04:38.277452 containerd[1680]: 2024-10-08 20:04:38.247 [INFO][4771] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" iface="eth0" netns="/var/run/netns/cni-485df1e3-8149-5046-4b0a-d8a7940b2e7f" Oct 8 20:04:38.277452 containerd[1680]: 2024-10-08 20:04:38.247 [INFO][4771] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" iface="eth0" netns="/var/run/netns/cni-485df1e3-8149-5046-4b0a-d8a7940b2e7f" Oct 8 20:04:38.277452 containerd[1680]: 2024-10-08 20:04:38.247 [INFO][4771] k8s.go 615: Releasing IP address(es) ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Oct 8 20:04:38.277452 containerd[1680]: 2024-10-08 20:04:38.248 [INFO][4771] utils.go 188: Calico CNI releasing IP address ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Oct 8 20:04:38.277452 containerd[1680]: 2024-10-08 20:04:38.266 [INFO][4777] ipam_plugin.go 417: Releasing address using handleID ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" HandleID="k8s-pod-network.72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" Oct 8 20:04:38.277452 containerd[1680]: 2024-10-08 20:04:38.266 [INFO][4777] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:38.277452 containerd[1680]: 2024-10-08 20:04:38.267 [INFO][4777] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:04:38.277452 containerd[1680]: 2024-10-08 20:04:38.272 [WARNING][4777] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" HandleID="k8s-pod-network.72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" Oct 8 20:04:38.277452 containerd[1680]: 2024-10-08 20:04:38.272 [INFO][4777] ipam_plugin.go 445: Releasing address using workloadID ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" HandleID="k8s-pod-network.72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" Oct 8 20:04:38.277452 containerd[1680]: 2024-10-08 20:04:38.274 [INFO][4777] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:38.277452 containerd[1680]: 2024-10-08 20:04:38.276 [INFO][4771] k8s.go 621: Teardown processing complete. ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Oct 8 20:04:38.281182 containerd[1680]: time="2024-10-08T20:04:38.280085699Z" level=info msg="TearDown network for sandbox \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\" successfully" Oct 8 20:04:38.281182 containerd[1680]: time="2024-10-08T20:04:38.280134400Z" level=info msg="StopPodSandbox for \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\" returns successfully" Oct 8 20:04:38.281364 systemd[1]: run-netns-cni\x2d485df1e3\x2d8149\x2d5046\x2d4b0a\x2dd8a7940b2e7f.mount: Deactivated successfully. 
Oct 8 20:04:38.282824 containerd[1680]: time="2024-10-08T20:04:38.282736324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dnl97,Uid:c43b05fe-589a-477a-823a-198b47900c84,Namespace:kube-system,Attempt:1,}" Oct 8 20:04:38.423909 systemd-networkd[1320]: calid95116a9c13: Link UP Oct 8 20:04:38.424161 systemd-networkd[1320]: calid95116a9c13: Gained carrier Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.361 [INFO][4783] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0 coredns-6f6b679f8f- kube-system c43b05fe-589a-477a-823a-198b47900c84 783 0 2024-10-08 20:04:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.1.0-a-b9ef23c535 coredns-6f6b679f8f-dnl97 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid95116a9c13 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" Namespace="kube-system" Pod="coredns-6f6b679f8f-dnl97" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-" Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.362 [INFO][4783] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" Namespace="kube-system" Pod="coredns-6f6b679f8f-dnl97" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.386 [INFO][4794] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" HandleID="k8s-pod-network.c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" 
Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.394 [INFO][4794] ipam_plugin.go 270: Auto assigning IP ContainerID="c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" HandleID="k8s-pod-network.c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036e040), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.1.0-a-b9ef23c535", "pod":"coredns-6f6b679f8f-dnl97", "timestamp":"2024-10-08 20:04:38.386306988 +0000 UTC"}, Hostname:"ci-4081.1.0-a-b9ef23c535", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.394 [INFO][4794] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.394 [INFO][4794] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.394 [INFO][4794] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-a-b9ef23c535' Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.396 [INFO][4794] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.399 [INFO][4794] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.403 [INFO][4794] ipam.go 489: Trying affinity for 192.168.58.64/26 host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.404 [INFO][4794] ipam.go 155: Attempting to load block cidr=192.168.58.64/26 host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.406 [INFO][4794] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.406 [INFO][4794] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.407 [INFO][4794] ipam.go 1685: Creating new handle: k8s-pod-network.c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.411 [INFO][4794] ipam.go 1203: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.415 [INFO][4794] ipam.go 1216: Successfully claimed IPs: [192.168.58.65/26] block=192.168.58.64/26 
handle="k8s-pod-network.c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.416 [INFO][4794] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.58.65/26] handle="k8s-pod-network.c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.416 [INFO][4794] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:38.442051 containerd[1680]: 2024-10-08 20:04:38.416 [INFO][4794] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.58.65/26] IPv6=[] ContainerID="c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" HandleID="k8s-pod-network.c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" Oct 8 20:04:38.443016 containerd[1680]: 2024-10-08 20:04:38.418 [INFO][4783] k8s.go 386: Populated endpoint ContainerID="c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" Namespace="kube-system" Pod="coredns-6f6b679f8f-dnl97" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c43b05fe-589a-477a-823a-198b47900c84", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"", Pod:"coredns-6f6b679f8f-dnl97", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid95116a9c13", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:38.443016 containerd[1680]: 2024-10-08 20:04:38.418 [INFO][4783] k8s.go 387: Calico CNI using IPs: [192.168.58.65/32] ContainerID="c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" Namespace="kube-system" Pod="coredns-6f6b679f8f-dnl97" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" Oct 8 20:04:38.443016 containerd[1680]: 2024-10-08 20:04:38.418 [INFO][4783] dataplane_linux.go 68: Setting the host side veth name to calid95116a9c13 ContainerID="c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" Namespace="kube-system" Pod="coredns-6f6b679f8f-dnl97" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" Oct 8 20:04:38.443016 containerd[1680]: 2024-10-08 20:04:38.422 [INFO][4783] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" Namespace="kube-system" Pod="coredns-6f6b679f8f-dnl97" 
WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" Oct 8 20:04:38.443016 containerd[1680]: 2024-10-08 20:04:38.422 [INFO][4783] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" Namespace="kube-system" Pod="coredns-6f6b679f8f-dnl97" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c43b05fe-589a-477a-823a-198b47900c84", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc", Pod:"coredns-6f6b679f8f-dnl97", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid95116a9c13", MAC:"8e:38:b7:89:68:ac", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:38.443393 containerd[1680]: 2024-10-08 20:04:38.438 [INFO][4783] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc" Namespace="kube-system" Pod="coredns-6f6b679f8f-dnl97" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" Oct 8 20:04:38.448141 systemd-networkd[1320]: vxlan.calico: Gained IPv6LL Oct 8 20:04:38.471930 containerd[1680]: time="2024-10-08T20:04:38.471682683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:38.471930 containerd[1680]: time="2024-10-08T20:04:38.471789584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:38.472228 containerd[1680]: time="2024-10-08T20:04:38.472153987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:38.473256 containerd[1680]: time="2024-10-08T20:04:38.473038296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:38.505076 systemd[1]: Started cri-containerd-c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc.scope - libcontainer container c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc. 
Oct 8 20:04:38.543581 containerd[1680]: time="2024-10-08T20:04:38.543518052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dnl97,Uid:c43b05fe-589a-477a-823a-198b47900c84,Namespace:kube-system,Attempt:1,} returns sandbox id \"c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc\"" Oct 8 20:04:38.546623 containerd[1680]: time="2024-10-08T20:04:38.546542980Z" level=info msg="CreateContainer within sandbox \"c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:04:38.583205 containerd[1680]: time="2024-10-08T20:04:38.583154921Z" level=info msg="CreateContainer within sandbox \"c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"063312a08598f6e8e620f0e8492ff26ceecc040e14288897e82b7e7e9f084849\"" Oct 8 20:04:38.583980 containerd[1680]: time="2024-10-08T20:04:38.583682926Z" level=info msg="StartContainer for \"063312a08598f6e8e620f0e8492ff26ceecc040e14288897e82b7e7e9f084849\"" Oct 8 20:04:38.609090 systemd[1]: Started cri-containerd-063312a08598f6e8e620f0e8492ff26ceecc040e14288897e82b7e7e9f084849.scope - libcontainer container 063312a08598f6e8e620f0e8492ff26ceecc040e14288897e82b7e7e9f084849. Oct 8 20:04:38.635645 containerd[1680]: time="2024-10-08T20:04:38.635531409Z" level=info msg="StartContainer for \"063312a08598f6e8e620f0e8492ff26ceecc040e14288897e82b7e7e9f084849\" returns successfully" Oct 8 20:04:39.284260 systemd[1]: run-containerd-runc-k8s.io-c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc-runc.4kQsl7.mount: Deactivated successfully. 
Oct 8 20:04:39.417233 kubelet[3207]: I1008 20:04:39.417165 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-dnl97" podStartSLOduration=33.417143985 podStartE2EDuration="33.417143985s" podCreationTimestamp="2024-10-08 20:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:04:39.41651598 +0000 UTC m=+38.299052273" watchObservedRunningTime="2024-10-08 20:04:39.417143985 +0000 UTC m=+38.299680378" Oct 8 20:04:39.856118 systemd-networkd[1320]: calid95116a9c13: Gained IPv6LL Oct 8 20:04:40.204194 containerd[1680]: time="2024-10-08T20:04:40.203138103Z" level=info msg="StopPodSandbox for \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\"" Oct 8 20:04:40.274825 containerd[1680]: 2024-10-08 20:04:40.244 [INFO][4910] k8s.go 608: Cleaning up netns ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Oct 8 20:04:40.274825 containerd[1680]: 2024-10-08 20:04:40.247 [INFO][4910] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" iface="eth0" netns="/var/run/netns/cni-1899de3e-ee80-3a41-ca12-e62d2de881ab" Oct 8 20:04:40.274825 containerd[1680]: 2024-10-08 20:04:40.247 [INFO][4910] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" iface="eth0" netns="/var/run/netns/cni-1899de3e-ee80-3a41-ca12-e62d2de881ab" Oct 8 20:04:40.274825 containerd[1680]: 2024-10-08 20:04:40.247 [INFO][4910] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" iface="eth0" netns="/var/run/netns/cni-1899de3e-ee80-3a41-ca12-e62d2de881ab" Oct 8 20:04:40.274825 containerd[1680]: 2024-10-08 20:04:40.247 [INFO][4910] k8s.go 615: Releasing IP address(es) ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Oct 8 20:04:40.274825 containerd[1680]: 2024-10-08 20:04:40.247 [INFO][4910] utils.go 188: Calico CNI releasing IP address ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Oct 8 20:04:40.274825 containerd[1680]: 2024-10-08 20:04:40.266 [INFO][4917] ipam_plugin.go 417: Releasing address using handleID ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" HandleID="k8s-pod-network.ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Workload="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0" Oct 8 20:04:40.274825 containerd[1680]: 2024-10-08 20:04:40.266 [INFO][4917] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:40.274825 containerd[1680]: 2024-10-08 20:04:40.266 [INFO][4917] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:04:40.274825 containerd[1680]: 2024-10-08 20:04:40.271 [WARNING][4917] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" HandleID="k8s-pod-network.ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Workload="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0" Oct 8 20:04:40.274825 containerd[1680]: 2024-10-08 20:04:40.271 [INFO][4917] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" HandleID="k8s-pod-network.ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Workload="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0" Oct 8 20:04:40.274825 containerd[1680]: 2024-10-08 20:04:40.272 [INFO][4917] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:40.274825 containerd[1680]: 2024-10-08 20:04:40.273 [INFO][4910] k8s.go 621: Teardown processing complete. ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Oct 8 20:04:40.275733 containerd[1680]: time="2024-10-08T20:04:40.275575077Z" level=info msg="TearDown network for sandbox \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\" successfully" Oct 8 20:04:40.275733 containerd[1680]: time="2024-10-08T20:04:40.275616778Z" level=info msg="StopPodSandbox for \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\" returns successfully" Oct 8 20:04:40.278976 containerd[1680]: time="2024-10-08T20:04:40.278835008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fb9bb5bf5-cl6qd,Uid:115f7617-1709-468f-88b3-4136a07ce1cb,Namespace:calico-system,Attempt:1,}" Oct 8 20:04:40.279663 systemd[1]: run-netns-cni\x2d1899de3e\x2dee80\x2d3a41\x2dca12\x2de62d2de881ab.mount: Deactivated successfully. 
Oct 8 20:04:40.424304 systemd-networkd[1320]: califa4412f4bd1: Link UP Oct 8 20:04:40.425386 systemd-networkd[1320]: califa4412f4bd1: Gained carrier Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.362 [INFO][4923] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0 calico-kube-controllers-7fb9bb5bf5- calico-system 115f7617-1709-468f-88b3-4136a07ce1cb 804 0 2024-10-08 20:04:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fb9bb5bf5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.1.0-a-b9ef23c535 calico-kube-controllers-7fb9bb5bf5-cl6qd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califa4412f4bd1 [] []}} ContainerID="299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" Namespace="calico-system" Pod="calico-kube-controllers-7fb9bb5bf5-cl6qd" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-" Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.362 [INFO][4923] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" Namespace="calico-system" Pod="calico-kube-controllers-7fb9bb5bf5-cl6qd" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0" Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.386 [INFO][4935] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" HandleID="k8s-pod-network.299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" 
Workload="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0" Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.394 [INFO][4935] ipam_plugin.go 270: Auto assigning IP ContainerID="299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" HandleID="k8s-pod-network.299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" Workload="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000293390), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.1.0-a-b9ef23c535", "pod":"calico-kube-controllers-7fb9bb5bf5-cl6qd", "timestamp":"2024-10-08 20:04:40.386763713 +0000 UTC"}, Hostname:"ci-4081.1.0-a-b9ef23c535", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.394 [INFO][4935] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.394 [INFO][4935] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.394 [INFO][4935] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-a-b9ef23c535'
Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.396 [INFO][4935] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" host="ci-4081.1.0-a-b9ef23c535"
Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.399 [INFO][4935] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-a-b9ef23c535"
Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.403 [INFO][4935] ipam.go 489: Trying affinity for 192.168.58.64/26 host="ci-4081.1.0-a-b9ef23c535"
Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.405 [INFO][4935] ipam.go 155: Attempting to load block cidr=192.168.58.64/26 host="ci-4081.1.0-a-b9ef23c535"
Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.407 [INFO][4935] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081.1.0-a-b9ef23c535"
Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.407 [INFO][4935] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" host="ci-4081.1.0-a-b9ef23c535"
Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.408 [INFO][4935] ipam.go 1685: Creating new handle: k8s-pod-network.299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c
Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.412 [INFO][4935] ipam.go 1203: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" host="ci-4081.1.0-a-b9ef23c535"
Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.419 [INFO][4935] ipam.go 1216: Successfully claimed IPs: [192.168.58.66/26] block=192.168.58.64/26 handle="k8s-pod-network.299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" host="ci-4081.1.0-a-b9ef23c535"
Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.420 [INFO][4935] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.58.66/26] handle="k8s-pod-network.299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" host="ci-4081.1.0-a-b9ef23c535"
Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.420 [INFO][4935] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 20:04:40.447484 containerd[1680]: 2024-10-08 20:04:40.420 [INFO][4935] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.58.66/26] IPv6=[] ContainerID="299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" HandleID="k8s-pod-network.299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" Workload="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0"
Oct 8 20:04:40.450078 containerd[1680]: 2024-10-08 20:04:40.421 [INFO][4923] k8s.go 386: Populated endpoint ContainerID="299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" Namespace="calico-system" Pod="calico-kube-controllers-7fb9bb5bf5-cl6qd" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0", GenerateName:"calico-kube-controllers-7fb9bb5bf5-", Namespace:"calico-system", SelfLink:"", UID:"115f7617-1709-468f-88b3-4136a07ce1cb", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fb9bb5bf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"", Pod:"calico-kube-controllers-7fb9bb5bf5-cl6qd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califa4412f4bd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:04:40.450078 containerd[1680]: 2024-10-08 20:04:40.421 [INFO][4923] k8s.go 387: Calico CNI using IPs: [192.168.58.66/32] ContainerID="299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" Namespace="calico-system" Pod="calico-kube-controllers-7fb9bb5bf5-cl6qd" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0"
Oct 8 20:04:40.450078 containerd[1680]: 2024-10-08 20:04:40.421 [INFO][4923] dataplane_linux.go 68: Setting the host side veth name to califa4412f4bd1 ContainerID="299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" Namespace="calico-system" Pod="calico-kube-controllers-7fb9bb5bf5-cl6qd" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0"
Oct 8 20:04:40.450078 containerd[1680]: 2024-10-08 20:04:40.424 [INFO][4923] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" Namespace="calico-system" Pod="calico-kube-controllers-7fb9bb5bf5-cl6qd" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0"
Oct 8 20:04:40.450078 containerd[1680]: 2024-10-08 20:04:40.425 [INFO][4923] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" Namespace="calico-system" Pod="calico-kube-controllers-7fb9bb5bf5-cl6qd" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0", GenerateName:"calico-kube-controllers-7fb9bb5bf5-", Namespace:"calico-system", SelfLink:"", UID:"115f7617-1709-468f-88b3-4136a07ce1cb", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fb9bb5bf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c", Pod:"calico-kube-controllers-7fb9bb5bf5-cl6qd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califa4412f4bd1", MAC:"62:4e:b1:15:27:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:04:40.450078 containerd[1680]: 2024-10-08 20:04:40.443 [INFO][4923] k8s.go 500: Wrote updated endpoint to datastore ContainerID="299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c" Namespace="calico-system" Pod="calico-kube-controllers-7fb9bb5bf5-cl6qd" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0"
Oct 8 20:04:40.483294 containerd[1680]: time="2024-10-08T20:04:40.476601749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:04:40.483294 containerd[1680]: time="2024-10-08T20:04:40.476734450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:04:40.483294 containerd[1680]: time="2024-10-08T20:04:40.476756650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:04:40.483294 containerd[1680]: time="2024-10-08T20:04:40.476879852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:04:40.506239 systemd[1]: run-containerd-runc-k8s.io-299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c-runc.W3ZPhG.mount: Deactivated successfully.
Oct 8 20:04:40.518074 systemd[1]: Started cri-containerd-299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c.scope - libcontainer container 299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c.
Oct 8 20:04:40.558577 containerd[1680]: time="2024-10-08T20:04:40.558526312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fb9bb5bf5-cl6qd,Uid:115f7617-1709-468f-88b3-4136a07ce1cb,Namespace:calico-system,Attempt:1,} returns sandbox id \"299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c\""
Oct 8 20:04:40.560379 containerd[1680]: time="2024-10-08T20:04:40.560346229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\""
Oct 8 20:04:41.204514 containerd[1680]: time="2024-10-08T20:04:41.204457725Z" level=info msg="StopPodSandbox for \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\""
Oct 8 20:04:41.280808 containerd[1680]: 2024-10-08 20:04:41.248 [INFO][5008] k8s.go 608: Cleaning up netns ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959"
Oct 8 20:04:41.280808 containerd[1680]: 2024-10-08 20:04:41.248 [INFO][5008] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" iface="eth0" netns="/var/run/netns/cni-3a5ceffe-47b4-58db-dee5-fa7ce27f043b"
Oct 8 20:04:41.280808 containerd[1680]: 2024-10-08 20:04:41.249 [INFO][5008] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" iface="eth0" netns="/var/run/netns/cni-3a5ceffe-47b4-58db-dee5-fa7ce27f043b"
Oct 8 20:04:41.280808 containerd[1680]: 2024-10-08 20:04:41.249 [INFO][5008] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" iface="eth0" netns="/var/run/netns/cni-3a5ceffe-47b4-58db-dee5-fa7ce27f043b"
Oct 8 20:04:41.280808 containerd[1680]: 2024-10-08 20:04:41.249 [INFO][5008] k8s.go 615: Releasing IP address(es) ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959"
Oct 8 20:04:41.280808 containerd[1680]: 2024-10-08 20:04:41.249 [INFO][5008] utils.go 188: Calico CNI releasing IP address ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959"
Oct 8 20:04:41.280808 containerd[1680]: 2024-10-08 20:04:41.271 [INFO][5015] ipam_plugin.go 417: Releasing address using handleID ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" HandleID="k8s-pod-network.a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0"
Oct 8 20:04:41.280808 containerd[1680]: 2024-10-08 20:04:41.271 [INFO][5015] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 20:04:41.280808 containerd[1680]: 2024-10-08 20:04:41.271 [INFO][5015] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 20:04:41.280808 containerd[1680]: 2024-10-08 20:04:41.277 [WARNING][5015] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" HandleID="k8s-pod-network.a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0"
Oct 8 20:04:41.280808 containerd[1680]: 2024-10-08 20:04:41.277 [INFO][5015] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" HandleID="k8s-pod-network.a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0"
Oct 8 20:04:41.280808 containerd[1680]: 2024-10-08 20:04:41.278 [INFO][5015] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 20:04:41.280808 containerd[1680]: 2024-10-08 20:04:41.279 [INFO][5008] k8s.go 621: Teardown processing complete. ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959"
Oct 8 20:04:41.281638 containerd[1680]: time="2024-10-08T20:04:41.281048538Z" level=info msg="TearDown network for sandbox \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\" successfully"
Oct 8 20:04:41.281638 containerd[1680]: time="2024-10-08T20:04:41.281093939Z" level=info msg="StopPodSandbox for \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\" returns successfully"
Oct 8 20:04:41.282291 containerd[1680]: time="2024-10-08T20:04:41.282255750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fkjkp,Uid:0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5,Namespace:kube-system,Attempt:1,}"
Oct 8 20:04:41.324297 systemd[1]: run-netns-cni\x2d3a5ceffe\x2d47b4\x2d58db\x2ddee5\x2dfa7ce27f043b.mount: Deactivated successfully.
Oct 8 20:04:41.451398 systemd-networkd[1320]: cali5baf0cf16a6: Link UP
Oct 8 20:04:41.453444 systemd-networkd[1320]: cali5baf0cf16a6: Gained carrier
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.387 [INFO][5023] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0 coredns-6f6b679f8f- kube-system 0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5 811 0 2024-10-08 20:04:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.1.0-a-b9ef23c535 coredns-6f6b679f8f-fkjkp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5baf0cf16a6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" Namespace="kube-system" Pod="coredns-6f6b679f8f-fkjkp" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-"
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.387 [INFO][5023] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" Namespace="kube-system" Pod="coredns-6f6b679f8f-fkjkp" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0"
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.414 [INFO][5034] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" HandleID="k8s-pod-network.b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0"
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.422 [INFO][5034] ipam_plugin.go 270: Auto assigning IP ContainerID="b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" HandleID="k8s-pod-network.b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003107e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.1.0-a-b9ef23c535", "pod":"coredns-6f6b679f8f-fkjkp", "timestamp":"2024-10-08 20:04:41.414324679 +0000 UTC"}, Hostname:"ci-4081.1.0-a-b9ef23c535", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.422 [INFO][5034] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.422 [INFO][5034] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.422 [INFO][5034] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-a-b9ef23c535'
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.424 [INFO][5034] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" host="ci-4081.1.0-a-b9ef23c535"
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.427 [INFO][5034] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-a-b9ef23c535"
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.430 [INFO][5034] ipam.go 489: Trying affinity for 192.168.58.64/26 host="ci-4081.1.0-a-b9ef23c535"
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.432 [INFO][5034] ipam.go 155: Attempting to load block cidr=192.168.58.64/26 host="ci-4081.1.0-a-b9ef23c535"
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.433 [INFO][5034] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081.1.0-a-b9ef23c535"
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.433 [INFO][5034] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" host="ci-4081.1.0-a-b9ef23c535"
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.434 [INFO][5034] ipam.go 1685: Creating new handle: k8s-pod-network.b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.438 [INFO][5034] ipam.go 1203: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" host="ci-4081.1.0-a-b9ef23c535"
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.446 [INFO][5034] ipam.go 1216: Successfully claimed IPs: [192.168.58.67/26] block=192.168.58.64/26 handle="k8s-pod-network.b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" host="ci-4081.1.0-a-b9ef23c535"
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.446 [INFO][5034] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.58.67/26] handle="k8s-pod-network.b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" host="ci-4081.1.0-a-b9ef23c535"
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.446 [INFO][5034] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 20:04:41.470010 containerd[1680]: 2024-10-08 20:04:41.446 [INFO][5034] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.58.67/26] IPv6=[] ContainerID="b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" HandleID="k8s-pod-network.b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0"
Oct 8 20:04:41.470819 containerd[1680]: 2024-10-08 20:04:41.448 [INFO][5023] k8s.go 386: Populated endpoint ContainerID="b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" Namespace="kube-system" Pod="coredns-6f6b679f8f-fkjkp" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"", Pod:"coredns-6f6b679f8f-fkjkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5baf0cf16a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:04:41.470819 containerd[1680]: 2024-10-08 20:04:41.448 [INFO][5023] k8s.go 387: Calico CNI using IPs: [192.168.58.67/32] ContainerID="b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" Namespace="kube-system" Pod="coredns-6f6b679f8f-fkjkp" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0"
Oct 8 20:04:41.470819 containerd[1680]: 2024-10-08 20:04:41.448 [INFO][5023] dataplane_linux.go 68: Setting the host side veth name to cali5baf0cf16a6 ContainerID="b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" Namespace="kube-system" Pod="coredns-6f6b679f8f-fkjkp" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0"
Oct 8 20:04:41.470819 containerd[1680]: 2024-10-08 20:04:41.452 [INFO][5023] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" Namespace="kube-system" Pod="coredns-6f6b679f8f-fkjkp" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0"
Oct 8 20:04:41.470819 containerd[1680]: 2024-10-08 20:04:41.452 [INFO][5023] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" Namespace="kube-system" Pod="coredns-6f6b679f8f-fkjkp" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244", Pod:"coredns-6f6b679f8f-fkjkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5baf0cf16a6", MAC:"f2:69:78:79:d5:53", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:04:41.475126 containerd[1680]: 2024-10-08 20:04:41.466 [INFO][5023] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244" Namespace="kube-system" Pod="coredns-6f6b679f8f-fkjkp" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0"
Oct 8 20:04:41.523039 containerd[1680]: time="2024-10-08T20:04:41.522703088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:04:41.523039 containerd[1680]: time="2024-10-08T20:04:41.522762389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:04:41.523039 containerd[1680]: time="2024-10-08T20:04:41.522790989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:04:41.523039 containerd[1680]: time="2024-10-08T20:04:41.522882590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:04:41.575279 systemd[1]: Started cri-containerd-b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244.scope - libcontainer container b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244.
Oct 8 20:04:41.645069 containerd[1680]: time="2024-10-08T20:04:41.644910526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fkjkp,Uid:0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5,Namespace:kube-system,Attempt:1,} returns sandbox id \"b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244\""
Oct 8 20:04:41.648200 containerd[1680]: time="2024-10-08T20:04:41.648163656Z" level=info msg="CreateContainer within sandbox \"b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 8 20:04:41.683789 containerd[1680]: time="2024-10-08T20:04:41.683737787Z" level=info msg="CreateContainer within sandbox \"b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"395f76c2e31758fc4e0fbe14f3241d4b75672be8cd5cd8f036b3f51c8dead1e3\""
Oct 8 20:04:41.684612 containerd[1680]: time="2024-10-08T20:04:41.684572495Z" level=info msg="StartContainer for \"395f76c2e31758fc4e0fbe14f3241d4b75672be8cd5cd8f036b3f51c8dead1e3\""
Oct 8 20:04:41.713121 systemd[1]: Started cri-containerd-395f76c2e31758fc4e0fbe14f3241d4b75672be8cd5cd8f036b3f51c8dead1e3.scope - libcontainer container 395f76c2e31758fc4e0fbe14f3241d4b75672be8cd5cd8f036b3f51c8dead1e3.
Oct 8 20:04:41.739809 containerd[1680]: time="2024-10-08T20:04:41.739665208Z" level=info msg="StartContainer for \"395f76c2e31758fc4e0fbe14f3241d4b75672be8cd5cd8f036b3f51c8dead1e3\" returns successfully"
Oct 8 20:04:42.096243 systemd-networkd[1320]: califa4412f4bd1: Gained IPv6LL
Oct 8 20:04:42.219108 containerd[1680]: time="2024-10-08T20:04:42.218895970Z" level=info msg="StopPodSandbox for \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\""
Oct 8 20:04:42.310132 containerd[1680]: 2024-10-08 20:04:42.273 [INFO][5151] k8s.go 608: Cleaning up netns ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc"
Oct 8 20:04:42.310132 containerd[1680]: 2024-10-08 20:04:42.273 [INFO][5151] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" iface="eth0" netns="/var/run/netns/cni-a242b4ac-6b6f-8539-55c1-aa19f8640138"
Oct 8 20:04:42.310132 containerd[1680]: 2024-10-08 20:04:42.274 [INFO][5151] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" iface="eth0" netns="/var/run/netns/cni-a242b4ac-6b6f-8539-55c1-aa19f8640138"
Oct 8 20:04:42.310132 containerd[1680]: 2024-10-08 20:04:42.275 [INFO][5151] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" iface="eth0" netns="/var/run/netns/cni-a242b4ac-6b6f-8539-55c1-aa19f8640138"
Oct 8 20:04:42.310132 containerd[1680]: 2024-10-08 20:04:42.275 [INFO][5151] k8s.go 615: Releasing IP address(es) ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc"
Oct 8 20:04:42.310132 containerd[1680]: 2024-10-08 20:04:42.275 [INFO][5151] utils.go 188: Calico CNI releasing IP address ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc"
Oct 8 20:04:42.310132 containerd[1680]: 2024-10-08 20:04:42.301 [INFO][5157] ipam_plugin.go 417: Releasing address using handleID ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" HandleID="k8s-pod-network.60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Workload="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0"
Oct 8 20:04:42.310132 containerd[1680]: 2024-10-08 20:04:42.301 [INFO][5157] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 20:04:42.310132 containerd[1680]: 2024-10-08 20:04:42.301 [INFO][5157] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 20:04:42.310132 containerd[1680]: 2024-10-08 20:04:42.306 [WARNING][5157] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" HandleID="k8s-pod-network.60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Workload="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0"
Oct 8 20:04:42.310132 containerd[1680]: 2024-10-08 20:04:42.306 [INFO][5157] ipam_plugin.go 445: Releasing address using workloadID ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" HandleID="k8s-pod-network.60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Workload="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0"
Oct 8 20:04:42.310132 containerd[1680]: 2024-10-08 20:04:42.308 [INFO][5157] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 20:04:42.310132 containerd[1680]: 2024-10-08 20:04:42.309 [INFO][5151] k8s.go 621: Teardown processing complete. ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc"
Oct 8 20:04:42.310817 containerd[1680]: time="2024-10-08T20:04:42.310242720Z" level=info msg="TearDown network for sandbox \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\" successfully"
Oct 8 20:04:42.310817 containerd[1680]: time="2024-10-08T20:04:42.310272920Z" level=info msg="StopPodSandbox for \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\" returns successfully"
Oct 8 20:04:42.311026 containerd[1680]: time="2024-10-08T20:04:42.310992927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2j8vr,Uid:0a5c8fac-8ac4-4f20-883d-6418322f8148,Namespace:calico-system,Attempt:1,}"
Oct 8 20:04:42.325487 systemd[1]: run-netns-cni\x2da242b4ac\x2d6b6f\x2d8539\x2d55c1\x2daa19f8640138.mount: Deactivated successfully.
Oct 8 20:04:42.453104 kubelet[3207]: I1008 20:04:42.453034 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-fkjkp" podStartSLOduration=36.453009049 podStartE2EDuration="36.453009049s" podCreationTimestamp="2024-10-08 20:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:04:42.431274447 +0000 UTC m=+41.313810840" watchObservedRunningTime="2024-10-08 20:04:42.453009049 +0000 UTC m=+41.335545342"
Oct 8 20:04:42.605255 systemd-networkd[1320]: calidc7f4a7cdaa: Link UP
Oct 8 20:04:42.606013 systemd-networkd[1320]: calidc7f4a7cdaa: Gained carrier
Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.401 [INFO][5164] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0 csi-node-driver- calico-system 0a5c8fac-8ac4-4f20-883d-6418322f8148 822 0 2024-10-08 20:04:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:779867c8f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4081.1.0-a-b9ef23c535 csi-node-driver-2j8vr eth0 default [] [] [kns.calico-system ksa.calico-system.default] calidc7f4a7cdaa [] []}} ContainerID="603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" Namespace="calico-system" Pod="csi-node-driver-2j8vr" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-"
Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.402 [INFO][5164] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" Namespace="calico-system" Pod="csi-node-driver-2j8vr" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0"
Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.447 [INFO][5175] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" HandleID="k8s-pod-network.603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" Workload="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0"
Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.473 [INFO][5175] ipam_plugin.go 270: Auto assigning IP ContainerID="603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" HandleID="k8s-pod-network.603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" Workload="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000379c50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.1.0-a-b9ef23c535", "pod":"csi-node-driver-2j8vr", "timestamp":"2024-10-08 20:04:42.4476878 +0000 UTC"}, Hostname:"ci-4081.1.0-a-b9ef23c535", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.473 [INFO][5175] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.473 [INFO][5175] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.473 [INFO][5175] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-a-b9ef23c535' Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.477 [INFO][5175] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.576 [INFO][5175] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.581 [INFO][5175] ipam.go 489: Trying affinity for 192.168.58.64/26 host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.582 [INFO][5175] ipam.go 155: Attempting to load block cidr=192.168.58.64/26 host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.584 [INFO][5175] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.584 [INFO][5175] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.585 [INFO][5175] ipam.go 1685: Creating new handle: k8s-pod-network.603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369 Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.591 [INFO][5175] ipam.go 1203: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.600 [INFO][5175] ipam.go 1216: Successfully claimed IPs: [192.168.58.68/26] block=192.168.58.64/26 
handle="k8s-pod-network.603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.600 [INFO][5175] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.58.68/26] handle="k8s-pod-network.603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.600 [INFO][5175] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:42.626398 containerd[1680]: 2024-10-08 20:04:42.600 [INFO][5175] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.58.68/26] IPv6=[] ContainerID="603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" HandleID="k8s-pod-network.603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" Workload="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0" Oct 8 20:04:42.627853 containerd[1680]: 2024-10-08 20:04:42.602 [INFO][5164] k8s.go 386: Populated endpoint ContainerID="603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" Namespace="calico-system" Pod="csi-node-driver-2j8vr" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a5c8fac-8ac4-4f20-883d-6418322f8148", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"", Pod:"csi-node-driver-2j8vr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.58.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calidc7f4a7cdaa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:42.627853 containerd[1680]: 2024-10-08 20:04:42.602 [INFO][5164] k8s.go 387: Calico CNI using IPs: [192.168.58.68/32] ContainerID="603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" Namespace="calico-system" Pod="csi-node-driver-2j8vr" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0" Oct 8 20:04:42.627853 containerd[1680]: 2024-10-08 20:04:42.602 [INFO][5164] dataplane_linux.go 68: Setting the host side veth name to calidc7f4a7cdaa ContainerID="603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" Namespace="calico-system" Pod="csi-node-driver-2j8vr" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0" Oct 8 20:04:42.627853 containerd[1680]: 2024-10-08 20:04:42.604 [INFO][5164] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" Namespace="calico-system" Pod="csi-node-driver-2j8vr" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0" Oct 8 20:04:42.627853 containerd[1680]: 2024-10-08 20:04:42.604 [INFO][5164] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" 
Namespace="calico-system" Pod="csi-node-driver-2j8vr" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a5c8fac-8ac4-4f20-883d-6418322f8148", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369", Pod:"csi-node-driver-2j8vr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.58.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calidc7f4a7cdaa", MAC:"22:01:47:15:c5:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:42.627853 containerd[1680]: 2024-10-08 20:04:42.622 [INFO][5164] k8s.go 500: Wrote updated endpoint to datastore ContainerID="603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369" Namespace="calico-system" Pod="csi-node-driver-2j8vr" 
WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0" Oct 8 20:04:42.678498 containerd[1680]: time="2024-10-08T20:04:42.678405048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:42.678936 containerd[1680]: time="2024-10-08T20:04:42.678797651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:42.678936 containerd[1680]: time="2024-10-08T20:04:42.678842552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:42.680316 containerd[1680]: time="2024-10-08T20:04:42.680016063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:42.724207 systemd[1]: Started cri-containerd-603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369.scope - libcontainer container 603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369. 
Oct 8 20:04:42.771290 containerd[1680]: time="2024-10-08T20:04:42.771247812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2j8vr,Uid:0a5c8fac-8ac4-4f20-883d-6418322f8148,Namespace:calico-system,Attempt:1,} returns sandbox id \"603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369\"" Oct 8 20:04:42.864135 systemd-networkd[1320]: cali5baf0cf16a6: Gained IPv6LL Oct 8 20:04:43.377306 containerd[1680]: time="2024-10-08T20:04:43.377245654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:43.380943 containerd[1680]: time="2024-10-08T20:04:43.380458984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 8 20:04:43.385267 containerd[1680]: time="2024-10-08T20:04:43.385013326Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:43.391577 containerd[1680]: time="2024-10-08T20:04:43.391532887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:43.392805 containerd[1680]: time="2024-10-08T20:04:43.392774299Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.832212768s" Oct 8 20:04:43.393104 containerd[1680]: time="2024-10-08T20:04:43.393028201Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 8 20:04:43.399276 containerd[1680]: time="2024-10-08T20:04:43.399246259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 20:04:43.432939 containerd[1680]: time="2024-10-08T20:04:43.431775662Z" level=info msg="CreateContainer within sandbox \"299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 20:04:43.481119 containerd[1680]: time="2024-10-08T20:04:43.481066321Z" level=info msg="CreateContainer within sandbox \"299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"80814047ea54c07653549732638bae7216ac7044edaa6a31853196b9ccef9361\"" Oct 8 20:04:43.482275 containerd[1680]: time="2024-10-08T20:04:43.482240932Z" level=info msg="StartContainer for \"80814047ea54c07653549732638bae7216ac7044edaa6a31853196b9ccef9361\"" Oct 8 20:04:43.529121 systemd[1]: Started cri-containerd-80814047ea54c07653549732638bae7216ac7044edaa6a31853196b9ccef9361.scope - libcontainer container 80814047ea54c07653549732638bae7216ac7044edaa6a31853196b9ccef9361. 
Oct 8 20:04:43.589234 containerd[1680]: time="2024-10-08T20:04:43.589186927Z" level=info msg="StartContainer for \"80814047ea54c07653549732638bae7216ac7044edaa6a31853196b9ccef9361\" returns successfully" Oct 8 20:04:44.462940 kubelet[3207]: I1008 20:04:44.461721 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7fb9bb5bf5-cl6qd" podStartSLOduration=28.626730056 podStartE2EDuration="31.46169825s" podCreationTimestamp="2024-10-08 20:04:13 +0000 UTC" firstStartedPulling="2024-10-08 20:04:40.559996225 +0000 UTC m=+39.442532518" lastFinishedPulling="2024-10-08 20:04:43.394964319 +0000 UTC m=+42.277500712" observedRunningTime="2024-10-08 20:04:44.459763132 +0000 UTC m=+43.342299525" watchObservedRunningTime="2024-10-08 20:04:44.46169825 +0000 UTC m=+43.344234643" Oct 8 20:04:44.592376 systemd-networkd[1320]: calidc7f4a7cdaa: Gained IPv6LL Oct 8 20:04:45.440494 kubelet[3207]: I1008 20:04:45.440451 3207 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:04:50.673637 containerd[1680]: time="2024-10-08T20:04:50.673571222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:50.676455 containerd[1680]: time="2024-10-08T20:04:50.676391848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 8 20:04:50.680850 containerd[1680]: time="2024-10-08T20:04:50.680725287Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:50.687964 containerd[1680]: time="2024-10-08T20:04:50.687874452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 
20:04:50.689049 containerd[1680]: time="2024-10-08T20:04:50.688510357Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 7.289109597s" Oct 8 20:04:50.689049 containerd[1680]: time="2024-10-08T20:04:50.688552558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 8 20:04:50.691076 containerd[1680]: time="2024-10-08T20:04:50.691045880Z" level=info msg="CreateContainer within sandbox \"603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 20:04:50.747976 containerd[1680]: time="2024-10-08T20:04:50.747896594Z" level=info msg="CreateContainer within sandbox \"603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"78bb0ab602d8c749e0a5fd63911db239a6b91bdc539e7b728dee5445850270ce\"" Oct 8 20:04:50.748978 containerd[1680]: time="2024-10-08T20:04:50.748687701Z" level=info msg="StartContainer for \"78bb0ab602d8c749e0a5fd63911db239a6b91bdc539e7b728dee5445850270ce\"" Oct 8 20:04:50.793096 systemd[1]: Started cri-containerd-78bb0ab602d8c749e0a5fd63911db239a6b91bdc539e7b728dee5445850270ce.scope - libcontainer container 78bb0ab602d8c749e0a5fd63911db239a6b91bdc539e7b728dee5445850270ce. 
Oct 8 20:04:50.824219 containerd[1680]: time="2024-10-08T20:04:50.824175284Z" level=info msg="StartContainer for \"78bb0ab602d8c749e0a5fd63911db239a6b91bdc539e7b728dee5445850270ce\" returns successfully" Oct 8 20:04:50.826193 containerd[1680]: time="2024-10-08T20:04:50.825880899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 20:04:52.378149 systemd[1]: Created slice kubepods-besteffort-pod4c1c7a4a_32cd_4388_9566_741848cbc5dd.slice - libcontainer container kubepods-besteffort-pod4c1c7a4a_32cd_4388_9566_741848cbc5dd.slice. Oct 8 20:04:52.400326 systemd[1]: Created slice kubepods-besteffort-pod8b239943_e886_4f96_b792_a45bff18b054.slice - libcontainer container kubepods-besteffort-pod8b239943_e886_4f96_b792_a45bff18b054.slice. Oct 8 20:04:52.460392 kubelet[3207]: I1008 20:04:52.459900 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z785\" (UniqueName: \"kubernetes.io/projected/8b239943-e886-4f96-b792-a45bff18b054-kube-api-access-2z785\") pod \"calico-apiserver-5658b7bb57-qc8lz\" (UID: \"8b239943-e886-4f96-b792-a45bff18b054\") " pod="calico-apiserver/calico-apiserver-5658b7bb57-qc8lz" Oct 8 20:04:52.462063 kubelet[3207]: I1008 20:04:52.461631 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8b239943-e886-4f96-b792-a45bff18b054-calico-apiserver-certs\") pod \"calico-apiserver-5658b7bb57-qc8lz\" (UID: \"8b239943-e886-4f96-b792-a45bff18b054\") " pod="calico-apiserver/calico-apiserver-5658b7bb57-qc8lz" Oct 8 20:04:52.462063 kubelet[3207]: I1008 20:04:52.461782 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4c1c7a4a-32cd-4388-9566-741848cbc5dd-calico-apiserver-certs\") pod \"calico-apiserver-5658b7bb57-h9lb4\" (UID: 
\"4c1c7a4a-32cd-4388-9566-741848cbc5dd\") " pod="calico-apiserver/calico-apiserver-5658b7bb57-h9lb4" Oct 8 20:04:52.462063 kubelet[3207]: I1008 20:04:52.461811 3207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vwwd\" (UniqueName: \"kubernetes.io/projected/4c1c7a4a-32cd-4388-9566-741848cbc5dd-kube-api-access-9vwwd\") pod \"calico-apiserver-5658b7bb57-h9lb4\" (UID: \"4c1c7a4a-32cd-4388-9566-741848cbc5dd\") " pod="calico-apiserver/calico-apiserver-5658b7bb57-h9lb4" Oct 8 20:04:52.565339 kubelet[3207]: E1008 20:04:52.563423 3207 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 20:04:52.565339 kubelet[3207]: E1008 20:04:52.564320 3207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c1c7a4a-32cd-4388-9566-741848cbc5dd-calico-apiserver-certs podName:4c1c7a4a-32cd-4388-9566-741848cbc5dd nodeName:}" failed. No retries permitted until 2024-10-08 20:04:53.064291811 +0000 UTC m=+51.946828104 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/4c1c7a4a-32cd-4388-9566-741848cbc5dd-calico-apiserver-certs") pod "calico-apiserver-5658b7bb57-h9lb4" (UID: "4c1c7a4a-32cd-4388-9566-741848cbc5dd") : secret "calico-apiserver-certs" not found Oct 8 20:04:52.565339 kubelet[3207]: E1008 20:04:52.564596 3207 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 20:04:52.565339 kubelet[3207]: E1008 20:04:52.564640 3207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b239943-e886-4f96-b792-a45bff18b054-calico-apiserver-certs podName:8b239943-e886-4f96-b792-a45bff18b054 nodeName:}" failed. No retries permitted until 2024-10-08 20:04:53.064626914 +0000 UTC m=+51.947163207 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8b239943-e886-4f96-b792-a45bff18b054-calico-apiserver-certs") pod "calico-apiserver-5658b7bb57-qc8lz" (UID: "8b239943-e886-4f96-b792-a45bff18b054") : secret "calico-apiserver-certs" not found Oct 8 20:04:52.807129 containerd[1680]: time="2024-10-08T20:04:52.807017805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:52.809946 containerd[1680]: time="2024-10-08T20:04:52.809866930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 8 20:04:52.816012 containerd[1680]: time="2024-10-08T20:04:52.815969186Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:52.821440 containerd[1680]: time="2024-10-08T20:04:52.821357434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:52.822245 containerd[1680]: time="2024-10-08T20:04:52.822074941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.996154242s" Oct 8 20:04:52.822245 containerd[1680]: time="2024-10-08T20:04:52.822117841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference 
\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 8 20:04:52.824854 containerd[1680]: time="2024-10-08T20:04:52.824673564Z" level=info msg="CreateContainer within sandbox \"603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 20:04:52.869306 containerd[1680]: time="2024-10-08T20:04:52.869257167Z" level=info msg="CreateContainer within sandbox \"603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7e30d4d4df42d8e8a2a82262867a0731aa5119a339af500d3cd03a19335cbb7d\"" Oct 8 20:04:52.869896 containerd[1680]: time="2024-10-08T20:04:52.869866873Z" level=info msg="StartContainer for \"7e30d4d4df42d8e8a2a82262867a0731aa5119a339af500d3cd03a19335cbb7d\"" Oct 8 20:04:52.907084 systemd[1]: Started cri-containerd-7e30d4d4df42d8e8a2a82262867a0731aa5119a339af500d3cd03a19335cbb7d.scope - libcontainer container 7e30d4d4df42d8e8a2a82262867a0731aa5119a339af500d3cd03a19335cbb7d. 
Oct 8 20:04:52.938939 containerd[1680]: time="2024-10-08T20:04:52.937645985Z" level=info msg="StartContainer for \"7e30d4d4df42d8e8a2a82262867a0731aa5119a339af500d3cd03a19335cbb7d\" returns successfully" Oct 8 20:04:53.300444 kubelet[3207]: I1008 20:04:53.300404 3207 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 20:04:53.300444 kubelet[3207]: I1008 20:04:53.300443 3207 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 20:04:53.302239 containerd[1680]: time="2024-10-08T20:04:53.301800542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5658b7bb57-h9lb4,Uid:4c1c7a4a-32cd-4388-9566-741848cbc5dd,Namespace:calico-apiserver,Attempt:0,}" Oct 8 20:04:53.306840 containerd[1680]: time="2024-10-08T20:04:53.306803386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5658b7bb57-qc8lz,Uid:8b239943-e886-4f96-b792-a45bff18b054,Namespace:calico-apiserver,Attempt:0,}" Oct 8 20:04:53.600814 systemd[1]: run-containerd-runc-k8s.io-7e30d4d4df42d8e8a2a82262867a0731aa5119a339af500d3cd03a19335cbb7d-runc.VacCEZ.mount: Deactivated successfully. 
Oct 8 20:04:53.639871 systemd-networkd[1320]: cali804a65f3e27: Link UP Oct 8 20:04:53.640716 systemd-networkd[1320]: cali804a65f3e27: Gained carrier Oct 8 20:04:53.656630 kubelet[3207]: I1008 20:04:53.656552 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2j8vr" podStartSLOduration=31.608040406 podStartE2EDuration="41.656526813s" podCreationTimestamp="2024-10-08 20:04:12 +0000 UTC" firstStartedPulling="2024-10-08 20:04:42.774672644 +0000 UTC m=+41.657208937" lastFinishedPulling="2024-10-08 20:04:52.823159051 +0000 UTC m=+51.705695344" observedRunningTime="2024-10-08 20:04:53.496706984 +0000 UTC m=+52.379243277" watchObservedRunningTime="2024-10-08 20:04:53.656526813 +0000 UTC m=+52.539063106" Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.441 [INFO][5455] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--qc8lz-eth0 calico-apiserver-5658b7bb57- calico-apiserver 8b239943-e886-4f96-b792-a45bff18b054 922 0 2024-10-08 20:04:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5658b7bb57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.1.0-a-b9ef23c535 calico-apiserver-5658b7bb57-qc8lz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali804a65f3e27 [] []}} ContainerID="ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" Namespace="calico-apiserver" Pod="calico-apiserver-5658b7bb57-qc8lz" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--qc8lz-" Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.441 [INFO][5455] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" Namespace="calico-apiserver" Pod="calico-apiserver-5658b7bb57-qc8lz" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--qc8lz-eth0" Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.492 [INFO][5466] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" HandleID="k8s-pod-network.ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" Workload="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--qc8lz-eth0" Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.505 [INFO][5466] ipam_plugin.go 270: Auto assigning IP ContainerID="ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" HandleID="k8s-pod-network.ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" Workload="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--qc8lz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003428b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.1.0-a-b9ef23c535", "pod":"calico-apiserver-5658b7bb57-qc8lz", "timestamp":"2024-10-08 20:04:53.492144743 +0000 UTC"}, Hostname:"ci-4081.1.0-a-b9ef23c535", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.506 [INFO][5466] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.506 [INFO][5466] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.506 [INFO][5466] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-a-b9ef23c535' Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.511 [INFO][5466] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.605 [INFO][5466] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.610 [INFO][5466] ipam.go 489: Trying affinity for 192.168.58.64/26 host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.612 [INFO][5466] ipam.go 155: Attempting to load block cidr=192.168.58.64/26 host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.614 [INFO][5466] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.614 [INFO][5466] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.615 [INFO][5466] ipam.go 1685: Creating new handle: k8s-pod-network.ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112 Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.620 [INFO][5466] ipam.go 1203: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.629 [INFO][5466] ipam.go 1216: Successfully claimed IPs: [192.168.58.69/26] block=192.168.58.64/26 
handle="k8s-pod-network.ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.629 [INFO][5466] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.58.69/26] handle="k8s-pod-network.ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.629 [INFO][5466] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:53.664268 containerd[1680]: 2024-10-08 20:04:53.629 [INFO][5466] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.58.69/26] IPv6=[] ContainerID="ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" HandleID="k8s-pod-network.ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" Workload="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--qc8lz-eth0" Oct 8 20:04:53.668610 containerd[1680]: 2024-10-08 20:04:53.633 [INFO][5455] k8s.go 386: Populated endpoint ContainerID="ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" Namespace="calico-apiserver" Pod="calico-apiserver-5658b7bb57-qc8lz" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--qc8lz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--qc8lz-eth0", GenerateName:"calico-apiserver-5658b7bb57-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b239943-e886-4f96-b792-a45bff18b054", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5658b7bb57", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"", Pod:"calico-apiserver-5658b7bb57-qc8lz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali804a65f3e27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:53.668610 containerd[1680]: 2024-10-08 20:04:53.633 [INFO][5455] k8s.go 387: Calico CNI using IPs: [192.168.58.69/32] ContainerID="ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" Namespace="calico-apiserver" Pod="calico-apiserver-5658b7bb57-qc8lz" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--qc8lz-eth0" Oct 8 20:04:53.668610 containerd[1680]: 2024-10-08 20:04:53.633 [INFO][5455] dataplane_linux.go 68: Setting the host side veth name to cali804a65f3e27 ContainerID="ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" Namespace="calico-apiserver" Pod="calico-apiserver-5658b7bb57-qc8lz" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--qc8lz-eth0" Oct 8 20:04:53.668610 containerd[1680]: 2024-10-08 20:04:53.640 [INFO][5455] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" Namespace="calico-apiserver" Pod="calico-apiserver-5658b7bb57-qc8lz" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--qc8lz-eth0" Oct 8 20:04:53.668610 containerd[1680]: 2024-10-08 
20:04:53.640 [INFO][5455] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" Namespace="calico-apiserver" Pod="calico-apiserver-5658b7bb57-qc8lz" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--qc8lz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--qc8lz-eth0", GenerateName:"calico-apiserver-5658b7bb57-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b239943-e886-4f96-b792-a45bff18b054", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5658b7bb57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112", Pod:"calico-apiserver-5658b7bb57-qc8lz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali804a65f3e27", MAC:"86:3b:42:33:b6:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:53.668610 containerd[1680]: 2024-10-08 20:04:53.660 [INFO][5455] k8s.go 500: 
Wrote updated endpoint to datastore ContainerID="ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112" Namespace="calico-apiserver" Pod="calico-apiserver-5658b7bb57-qc8lz" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--qc8lz-eth0" Oct 8 20:04:53.697663 containerd[1680]: time="2024-10-08T20:04:53.697506579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:53.698595 containerd[1680]: time="2024-10-08T20:04:53.698416487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:53.698595 containerd[1680]: time="2024-10-08T20:04:53.698439588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:53.698901 containerd[1680]: time="2024-10-08T20:04:53.698757891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:53.740608 systemd[1]: Started cri-containerd-ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112.scope - libcontainer container ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112. 
Oct 8 20:04:53.765198 systemd-networkd[1320]: cali3fc3cee7f1a: Link UP Oct 8 20:04:53.766567 systemd-networkd[1320]: cali3fc3cee7f1a: Gained carrier Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 20:04:53.439 [INFO][5440] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--h9lb4-eth0 calico-apiserver-5658b7bb57- calico-apiserver 4c1c7a4a-32cd-4388-9566-741848cbc5dd 920 0 2024-10-08 20:04:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5658b7bb57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.1.0-a-b9ef23c535 calico-apiserver-5658b7bb57-h9lb4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3fc3cee7f1a [] []}} ContainerID="7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" Namespace="calico-apiserver" Pod="calico-apiserver-5658b7bb57-h9lb4" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--h9lb4-" Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 20:04:53.440 [INFO][5440] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" Namespace="calico-apiserver" Pod="calico-apiserver-5658b7bb57-h9lb4" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--h9lb4-eth0" Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 20:04:53.489 [INFO][5465] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" HandleID="k8s-pod-network.7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" Workload="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--h9lb4-eth0" Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 
20:04:53.604 [INFO][5465] ipam_plugin.go 270: Auto assigning IP ContainerID="7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" HandleID="k8s-pod-network.7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" Workload="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--h9lb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290210), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.1.0-a-b9ef23c535", "pod":"calico-apiserver-5658b7bb57-h9lb4", "timestamp":"2024-10-08 20:04:53.489893023 +0000 UTC"}, Hostname:"ci-4081.1.0-a-b9ef23c535", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 20:04:53.605 [INFO][5465] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 20:04:53.631 [INFO][5465] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 20:04:53.631 [INFO][5465] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-a-b9ef23c535' Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 20:04:53.634 [INFO][5465] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 20:04:53.706 [INFO][5465] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 20:04:53.716 [INFO][5465] ipam.go 489: Trying affinity for 192.168.58.64/26 host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 20:04:53.718 [INFO][5465] ipam.go 155: Attempting to load block cidr=192.168.58.64/26 host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 20:04:53.724 [INFO][5465] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.58.64/26 host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 20:04:53.724 [INFO][5465] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.58.64/26 handle="k8s-pod-network.7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 20:04:53.728 [INFO][5465] ipam.go 1685: Creating new handle: k8s-pod-network.7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 20:04:53.736 [INFO][5465] ipam.go 1203: Writing block in order to claim IPs block=192.168.58.64/26 handle="k8s-pod-network.7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 20:04:53.755 [INFO][5465] ipam.go 1216: Successfully claimed IPs: [192.168.58.70/26] block=192.168.58.64/26 
handle="k8s-pod-network.7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 20:04:53.759 [INFO][5465] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.58.70/26] handle="k8s-pod-network.7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" host="ci-4081.1.0-a-b9ef23c535" Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 20:04:53.759 [INFO][5465] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:53.796684 containerd[1680]: 2024-10-08 20:04:53.759 [INFO][5465] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.58.70/26] IPv6=[] ContainerID="7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" HandleID="k8s-pod-network.7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" Workload="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--h9lb4-eth0" Oct 8 20:04:53.798363 containerd[1680]: 2024-10-08 20:04:53.761 [INFO][5440] k8s.go 386: Populated endpoint ContainerID="7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" Namespace="calico-apiserver" Pod="calico-apiserver-5658b7bb57-h9lb4" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--h9lb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--h9lb4-eth0", GenerateName:"calico-apiserver-5658b7bb57-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c1c7a4a-32cd-4388-9566-741848cbc5dd", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5658b7bb57", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"", Pod:"calico-apiserver-5658b7bb57-h9lb4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3fc3cee7f1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:53.798363 containerd[1680]: 2024-10-08 20:04:53.761 [INFO][5440] k8s.go 387: Calico CNI using IPs: [192.168.58.70/32] ContainerID="7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" Namespace="calico-apiserver" Pod="calico-apiserver-5658b7bb57-h9lb4" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--h9lb4-eth0" Oct 8 20:04:53.798363 containerd[1680]: 2024-10-08 20:04:53.761 [INFO][5440] dataplane_linux.go 68: Setting the host side veth name to cali3fc3cee7f1a ContainerID="7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" Namespace="calico-apiserver" Pod="calico-apiserver-5658b7bb57-h9lb4" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--h9lb4-eth0" Oct 8 20:04:53.798363 containerd[1680]: 2024-10-08 20:04:53.766 [INFO][5440] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" Namespace="calico-apiserver" Pod="calico-apiserver-5658b7bb57-h9lb4" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--h9lb4-eth0" Oct 8 20:04:53.798363 containerd[1680]: 2024-10-08 
20:04:53.767 [INFO][5440] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" Namespace="calico-apiserver" Pod="calico-apiserver-5658b7bb57-h9lb4" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--h9lb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--h9lb4-eth0", GenerateName:"calico-apiserver-5658b7bb57-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c1c7a4a-32cd-4388-9566-741848cbc5dd", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5658b7bb57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c", Pod:"calico-apiserver-5658b7bb57-h9lb4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3fc3cee7f1a", MAC:"c6:fb:3c:28:c8:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:53.798363 containerd[1680]: 2024-10-08 20:04:53.791 [INFO][5440] k8s.go 500: 
Wrote updated endpoint to datastore ContainerID="7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c" Namespace="calico-apiserver" Pod="calico-apiserver-5658b7bb57-h9lb4" WorkloadEndpoint="ci--4081.1.0--a--b9ef23c535-k8s-calico--apiserver--5658b7bb57--h9lb4-eth0" Oct 8 20:04:53.834308 containerd[1680]: time="2024-10-08T20:04:53.833882499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:53.834308 containerd[1680]: time="2024-10-08T20:04:53.833947599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:53.834308 containerd[1680]: time="2024-10-08T20:04:53.833981299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:53.834308 containerd[1680]: time="2024-10-08T20:04:53.834093600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:53.871396 containerd[1680]: time="2024-10-08T20:04:53.871191232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5658b7bb57-qc8lz,Uid:8b239943-e886-4f96-b792-a45bff18b054,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112\"" Oct 8 20:04:53.877310 containerd[1680]: time="2024-10-08T20:04:53.877197486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 20:04:53.880276 systemd[1]: Started cri-containerd-7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c.scope - libcontainer container 7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c. 
Oct 8 20:04:53.926895 containerd[1680]: time="2024-10-08T20:04:53.926851630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5658b7bb57-h9lb4,Uid:4c1c7a4a-32cd-4388-9566-741848cbc5dd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c\"" Oct 8 20:04:55.088184 systemd-networkd[1320]: cali804a65f3e27: Gained IPv6LL Oct 8 20:04:55.216208 systemd-networkd[1320]: cali3fc3cee7f1a: Gained IPv6LL Oct 8 20:04:56.930335 containerd[1680]: time="2024-10-08T20:04:56.929314472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:56.935087 containerd[1680]: time="2024-10-08T20:04:56.935008723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Oct 8 20:04:56.944039 containerd[1680]: time="2024-10-08T20:04:56.943973103Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:56.951212 containerd[1680]: time="2024-10-08T20:04:56.951155467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:56.952075 containerd[1680]: time="2024-10-08T20:04:56.951861873Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 3.074621587s" Oct 8 20:04:56.952075 containerd[1680]: time="2024-10-08T20:04:56.951906974Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 8 20:04:56.953210 containerd[1680]: time="2024-10-08T20:04:56.953183985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 20:04:56.954948 containerd[1680]: time="2024-10-08T20:04:56.954744099Z" level=info msg="CreateContainer within sandbox \"ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 20:04:57.015809 containerd[1680]: time="2024-10-08T20:04:57.015754945Z" level=info msg="CreateContainer within sandbox \"ff30b0edaf347d08846163891bf48ce554a2de50962fb81f939ff3224a10e112\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0496ce3d030837a704d18b2b0a9cbba391b3a6fdfd41b26517bbe8da5b960fa2\"" Oct 8 20:04:57.016549 containerd[1680]: time="2024-10-08T20:04:57.016376050Z" level=info msg="StartContainer for \"0496ce3d030837a704d18b2b0a9cbba391b3a6fdfd41b26517bbe8da5b960fa2\"" Oct 8 20:04:57.062075 systemd[1]: Started cri-containerd-0496ce3d030837a704d18b2b0a9cbba391b3a6fdfd41b26517bbe8da5b960fa2.scope - libcontainer container 0496ce3d030837a704d18b2b0a9cbba391b3a6fdfd41b26517bbe8da5b960fa2. 
Oct 8 20:04:57.109103 containerd[1680]: time="2024-10-08T20:04:57.108991878Z" level=info msg="StartContainer for \"0496ce3d030837a704d18b2b0a9cbba391b3a6fdfd41b26517bbe8da5b960fa2\" returns successfully" Oct 8 20:04:57.283103 containerd[1680]: time="2024-10-08T20:04:57.282069725Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:57.287933 containerd[1680]: time="2024-10-08T20:04:57.287380373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Oct 8 20:04:57.291793 containerd[1680]: time="2024-10-08T20:04:57.291748112Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 338.525927ms" Oct 8 20:04:57.291942 containerd[1680]: time="2024-10-08T20:04:57.291910013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 8 20:04:57.295891 containerd[1680]: time="2024-10-08T20:04:57.294946441Z" level=info msg="CreateContainer within sandbox \"7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 20:04:57.339477 containerd[1680]: time="2024-10-08T20:04:57.339425838Z" level=info msg="CreateContainer within sandbox \"7daaebd488d7c40dc05c5f384f7c2dd027f1a9a5edbc178d885df6f9f0d84a3c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9370116ae091a4ff6f79c954d5249fcbf1bbe79998e788d706925ff2fbd3d703\"" Oct 8 20:04:57.341783 containerd[1680]: 
time="2024-10-08T20:04:57.340278046Z" level=info msg="StartContainer for \"9370116ae091a4ff6f79c954d5249fcbf1bbe79998e788d706925ff2fbd3d703\"" Oct 8 20:04:57.374148 systemd[1]: Started cri-containerd-9370116ae091a4ff6f79c954d5249fcbf1bbe79998e788d706925ff2fbd3d703.scope - libcontainer container 9370116ae091a4ff6f79c954d5249fcbf1bbe79998e788d706925ff2fbd3d703. Oct 8 20:04:57.432626 containerd[1680]: time="2024-10-08T20:04:57.432491770Z" level=info msg="StartContainer for \"9370116ae091a4ff6f79c954d5249fcbf1bbe79998e788d706925ff2fbd3d703\" returns successfully" Oct 8 20:04:57.503010 kubelet[3207]: I1008 20:04:57.502947 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5658b7bb57-qc8lz" podStartSLOduration=2.42579969 podStartE2EDuration="5.502923s" podCreationTimestamp="2024-10-08 20:04:52 +0000 UTC" firstStartedPulling="2024-10-08 20:04:53.875817573 +0000 UTC m=+52.758353866" lastFinishedPulling="2024-10-08 20:04:56.952940883 +0000 UTC m=+55.835477176" observedRunningTime="2024-10-08 20:04:57.500623379 +0000 UTC m=+56.383159772" watchObservedRunningTime="2024-10-08 20:04:57.502923 +0000 UTC m=+56.385459393" Oct 8 20:04:58.487872 kubelet[3207]: I1008 20:04:58.487826 3207 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:04:58.488361 kubelet[3207]: I1008 20:04:58.487826 3207 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:05:01.225663 containerd[1680]: time="2024-10-08T20:05:01.225619356Z" level=info msg="StopPodSandbox for \"89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07\"" Oct 8 20:05:01.226222 containerd[1680]: time="2024-10-08T20:05:01.225722357Z" level=info msg="TearDown network for sandbox \"89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07\" successfully" Oct 8 20:05:01.226222 containerd[1680]: time="2024-10-08T20:05:01.225738257Z" level=info msg="StopPodSandbox for 
\"89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07\" returns successfully" Oct 8 20:05:01.226222 containerd[1680]: time="2024-10-08T20:05:01.226199861Z" level=info msg="RemovePodSandbox for \"89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07\"" Oct 8 20:05:01.226359 containerd[1680]: time="2024-10-08T20:05:01.226231561Z" level=info msg="Forcibly stopping sandbox \"89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07\"" Oct 8 20:05:01.226359 containerd[1680]: time="2024-10-08T20:05:01.226307362Z" level=info msg="TearDown network for sandbox \"89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07\" successfully" Oct 8 20:05:01.240802 containerd[1680]: time="2024-10-08T20:05:01.240720778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 20:05:01.240988 containerd[1680]: time="2024-10-08T20:05:01.240820379Z" level=info msg="RemovePodSandbox \"89546276c96b7d69da18609f5a6c19d681eae4c42860adc216d7703c98699f07\" returns successfully" Oct 8 20:05:01.241350 containerd[1680]: time="2024-10-08T20:05:01.241314883Z" level=info msg="StopPodSandbox for \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\"" Oct 8 20:05:01.309225 containerd[1680]: 2024-10-08 20:05:01.278 [WARNING][5702] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c43b05fe-589a-477a-823a-198b47900c84", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc", Pod:"coredns-6f6b679f8f-dnl97", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid95116a9c13", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:05:01.309225 containerd[1680]: 2024-10-08 20:05:01.278 [INFO][5702] k8s.go 608: 
Cleaning up netns ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Oct 8 20:05:01.309225 containerd[1680]: 2024-10-08 20:05:01.278 [INFO][5702] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" iface="eth0" netns="" Oct 8 20:05:01.309225 containerd[1680]: 2024-10-08 20:05:01.278 [INFO][5702] k8s.go 615: Releasing IP address(es) ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Oct 8 20:05:01.309225 containerd[1680]: 2024-10-08 20:05:01.278 [INFO][5702] utils.go 188: Calico CNI releasing IP address ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Oct 8 20:05:01.309225 containerd[1680]: 2024-10-08 20:05:01.299 [INFO][5708] ipam_plugin.go 417: Releasing address using handleID ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" HandleID="k8s-pod-network.72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" Oct 8 20:05:01.309225 containerd[1680]: 2024-10-08 20:05:01.299 [INFO][5708] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:05:01.309225 containerd[1680]: 2024-10-08 20:05:01.300 [INFO][5708] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:05:01.309225 containerd[1680]: 2024-10-08 20:05:01.305 [WARNING][5708] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" HandleID="k8s-pod-network.72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" Oct 8 20:05:01.309225 containerd[1680]: 2024-10-08 20:05:01.305 [INFO][5708] ipam_plugin.go 445: Releasing address using workloadID ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" HandleID="k8s-pod-network.72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" Oct 8 20:05:01.309225 containerd[1680]: 2024-10-08 20:05:01.307 [INFO][5708] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:05:01.309225 containerd[1680]: 2024-10-08 20:05:01.308 [INFO][5702] k8s.go 621: Teardown processing complete. ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Oct 8 20:05:01.309863 containerd[1680]: time="2024-10-08T20:05:01.309264134Z" level=info msg="TearDown network for sandbox \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\" successfully" Oct 8 20:05:01.309863 containerd[1680]: time="2024-10-08T20:05:01.309297234Z" level=info msg="StopPodSandbox for \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\" returns successfully" Oct 8 20:05:01.310024 containerd[1680]: time="2024-10-08T20:05:01.309995740Z" level=info msg="RemovePodSandbox for \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\"" Oct 8 20:05:01.310080 containerd[1680]: time="2024-10-08T20:05:01.310054240Z" level=info msg="Forcibly stopping sandbox \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\"" Oct 8 20:05:01.367995 containerd[1680]: 2024-10-08 20:05:01.341 [WARNING][5726] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c43b05fe-589a-477a-823a-198b47900c84", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"c214543d7156630e131a1be688408989a00c794cbe783812f3c1ddc39ce408bc", Pod:"coredns-6f6b679f8f-dnl97", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid95116a9c13", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:05:01.367995 containerd[1680]: 2024-10-08 20:05:01.341 [INFO][5726] k8s.go 608: 
Cleaning up netns ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Oct 8 20:05:01.367995 containerd[1680]: 2024-10-08 20:05:01.341 [INFO][5726] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" iface="eth0" netns="" Oct 8 20:05:01.367995 containerd[1680]: 2024-10-08 20:05:01.341 [INFO][5726] k8s.go 615: Releasing IP address(es) ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Oct 8 20:05:01.367995 containerd[1680]: 2024-10-08 20:05:01.341 [INFO][5726] utils.go 188: Calico CNI releasing IP address ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Oct 8 20:05:01.367995 containerd[1680]: 2024-10-08 20:05:01.359 [INFO][5732] ipam_plugin.go 417: Releasing address using handleID ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" HandleID="k8s-pod-network.72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" Oct 8 20:05:01.367995 containerd[1680]: 2024-10-08 20:05:01.359 [INFO][5732] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:05:01.367995 containerd[1680]: 2024-10-08 20:05:01.359 [INFO][5732] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:05:01.367995 containerd[1680]: 2024-10-08 20:05:01.364 [WARNING][5732] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" HandleID="k8s-pod-network.72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" Oct 8 20:05:01.367995 containerd[1680]: 2024-10-08 20:05:01.364 [INFO][5732] ipam_plugin.go 445: Releasing address using workloadID ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" HandleID="k8s-pod-network.72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--dnl97-eth0" Oct 8 20:05:01.367995 containerd[1680]: 2024-10-08 20:05:01.366 [INFO][5732] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:05:01.367995 containerd[1680]: 2024-10-08 20:05:01.367 [INFO][5726] k8s.go 621: Teardown processing complete. ContainerID="72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f" Oct 8 20:05:01.368761 containerd[1680]: time="2024-10-08T20:05:01.368044911Z" level=info msg="TearDown network for sandbox \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\" successfully" Oct 8 20:05:01.378128 containerd[1680]: time="2024-10-08T20:05:01.378086092Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:05:01.378804 containerd[1680]: time="2024-10-08T20:05:01.378254093Z" level=info msg="RemovePodSandbox \"72b13551b41a8209e6c060d3441ffbc5ae5ea761a8c510e5d0f43df8df28a11f\" returns successfully" Oct 8 20:05:01.379062 containerd[1680]: time="2024-10-08T20:05:01.379039000Z" level=info msg="StopPodSandbox for \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\"" Oct 8 20:05:01.453502 containerd[1680]: 2024-10-08 20:05:01.425 [WARNING][5754] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244", Pod:"coredns-6f6b679f8f-fkjkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5baf0cf16a6", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:05:01.453502 containerd[1680]: 2024-10-08 20:05:01.425 [INFO][5754] k8s.go 608: Cleaning up netns ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Oct 8 20:05:01.453502 containerd[1680]: 2024-10-08 20:05:01.425 [INFO][5754] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" iface="eth0" netns="" Oct 8 20:05:01.453502 containerd[1680]: 2024-10-08 20:05:01.425 [INFO][5754] k8s.go 615: Releasing IP address(es) ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Oct 8 20:05:01.453502 containerd[1680]: 2024-10-08 20:05:01.425 [INFO][5754] utils.go 188: Calico CNI releasing IP address ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Oct 8 20:05:01.453502 containerd[1680]: 2024-10-08 20:05:01.443 [INFO][5761] ipam_plugin.go 417: Releasing address using handleID ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" HandleID="k8s-pod-network.a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0" Oct 8 20:05:01.453502 containerd[1680]: 2024-10-08 20:05:01.443 [INFO][5761] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:05:01.453502 containerd[1680]: 2024-10-08 20:05:01.443 [INFO][5761] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:05:01.453502 containerd[1680]: 2024-10-08 20:05:01.448 [WARNING][5761] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" HandleID="k8s-pod-network.a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0" Oct 8 20:05:01.453502 containerd[1680]: 2024-10-08 20:05:01.448 [INFO][5761] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" HandleID="k8s-pod-network.a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0" Oct 8 20:05:01.453502 containerd[1680]: 2024-10-08 20:05:01.451 [INFO][5761] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:05:01.453502 containerd[1680]: 2024-10-08 20:05:01.452 [INFO][5754] k8s.go 621: Teardown processing complete. 
ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Oct 8 20:05:01.454383 containerd[1680]: time="2024-10-08T20:05:01.453555104Z" level=info msg="TearDown network for sandbox \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\" successfully" Oct 8 20:05:01.454383 containerd[1680]: time="2024-10-08T20:05:01.453586404Z" level=info msg="StopPodSandbox for \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\" returns successfully" Oct 8 20:05:01.454383 containerd[1680]: time="2024-10-08T20:05:01.454313410Z" level=info msg="RemovePodSandbox for \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\"" Oct 8 20:05:01.454383 containerd[1680]: time="2024-10-08T20:05:01.454346810Z" level=info msg="Forcibly stopping sandbox \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\"" Oct 8 20:05:01.516265 containerd[1680]: 2024-10-08 20:05:01.485 [WARNING][5779] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"0a1778ab-0ba0-44e2-a8c0-fe7fdaa08be5", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"b50f575e2c49dcd5c1c9f83873b5dc0cf966ae9223611ceaeb4a610b4dd51244", Pod:"coredns-6f6b679f8f-fkjkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5baf0cf16a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:05:01.516265 containerd[1680]: 2024-10-08 20:05:01.485 [INFO][5779] k8s.go 608: 
Cleaning up netns ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Oct 8 20:05:01.516265 containerd[1680]: 2024-10-08 20:05:01.485 [INFO][5779] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" iface="eth0" netns="" Oct 8 20:05:01.516265 containerd[1680]: 2024-10-08 20:05:01.485 [INFO][5779] k8s.go 615: Releasing IP address(es) ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Oct 8 20:05:01.516265 containerd[1680]: 2024-10-08 20:05:01.485 [INFO][5779] utils.go 188: Calico CNI releasing IP address ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Oct 8 20:05:01.516265 containerd[1680]: 2024-10-08 20:05:01.507 [INFO][5785] ipam_plugin.go 417: Releasing address using handleID ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" HandleID="k8s-pod-network.a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0" Oct 8 20:05:01.516265 containerd[1680]: 2024-10-08 20:05:01.507 [INFO][5785] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:05:01.516265 containerd[1680]: 2024-10-08 20:05:01.507 [INFO][5785] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:05:01.516265 containerd[1680]: 2024-10-08 20:05:01.512 [WARNING][5785] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" HandleID="k8s-pod-network.a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0" Oct 8 20:05:01.516265 containerd[1680]: 2024-10-08 20:05:01.512 [INFO][5785] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" HandleID="k8s-pod-network.a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Workload="ci--4081.1.0--a--b9ef23c535-k8s-coredns--6f6b679f8f--fkjkp-eth0" Oct 8 20:05:01.516265 containerd[1680]: 2024-10-08 20:05:01.514 [INFO][5785] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:05:01.516265 containerd[1680]: 2024-10-08 20:05:01.515 [INFO][5779] k8s.go 621: Teardown processing complete. ContainerID="a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959" Oct 8 20:05:01.516265 containerd[1680]: time="2024-10-08T20:05:01.516216312Z" level=info msg="TearDown network for sandbox \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\" successfully" Oct 8 20:05:01.528033 containerd[1680]: time="2024-10-08T20:05:01.527988407Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:05:01.528163 containerd[1680]: time="2024-10-08T20:05:01.528062908Z" level=info msg="RemovePodSandbox \"a87e53211819594214e02738e8dd0ae6a3b7145d92adc9b16201d69a67ac3959\" returns successfully" Oct 8 20:05:01.528646 containerd[1680]: time="2024-10-08T20:05:01.528564612Z" level=info msg="StopPodSandbox for \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\"" Oct 8 20:05:01.592735 containerd[1680]: 2024-10-08 20:05:01.563 [WARNING][5803] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a5c8fac-8ac4-4f20-883d-6418322f8148", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369", Pod:"csi-node-driver-2j8vr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.58.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calidc7f4a7cdaa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:05:01.592735 containerd[1680]: 2024-10-08 20:05:01.563 [INFO][5803] k8s.go 608: Cleaning up netns ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Oct 8 20:05:01.592735 containerd[1680]: 2024-10-08 20:05:01.563 [INFO][5803] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" iface="eth0" netns="" Oct 8 20:05:01.592735 containerd[1680]: 2024-10-08 20:05:01.563 [INFO][5803] k8s.go 615: Releasing IP address(es) ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Oct 8 20:05:01.592735 containerd[1680]: 2024-10-08 20:05:01.563 [INFO][5803] utils.go 188: Calico CNI releasing IP address ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Oct 8 20:05:01.592735 containerd[1680]: 2024-10-08 20:05:01.583 [INFO][5809] ipam_plugin.go 417: Releasing address using handleID ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" HandleID="k8s-pod-network.60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Workload="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0" Oct 8 20:05:01.592735 containerd[1680]: 2024-10-08 20:05:01.583 [INFO][5809] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:05:01.592735 containerd[1680]: 2024-10-08 20:05:01.583 [INFO][5809] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:05:01.592735 containerd[1680]: 2024-10-08 20:05:01.589 [WARNING][5809] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" HandleID="k8s-pod-network.60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Workload="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0" Oct 8 20:05:01.592735 containerd[1680]: 2024-10-08 20:05:01.589 [INFO][5809] ipam_plugin.go 445: Releasing address using workloadID ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" HandleID="k8s-pod-network.60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Workload="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0" Oct 8 20:05:01.592735 containerd[1680]: 2024-10-08 20:05:01.590 [INFO][5809] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:05:01.592735 containerd[1680]: 2024-10-08 20:05:01.591 [INFO][5803] k8s.go 621: Teardown processing complete. ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Oct 8 20:05:01.593691 containerd[1680]: time="2024-10-08T20:05:01.592773032Z" level=info msg="TearDown network for sandbox \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\" successfully" Oct 8 20:05:01.593691 containerd[1680]: time="2024-10-08T20:05:01.592803432Z" level=info msg="StopPodSandbox for \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\" returns successfully" Oct 8 20:05:01.593871 containerd[1680]: time="2024-10-08T20:05:01.593843441Z" level=info msg="RemovePodSandbox for \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\"" Oct 8 20:05:01.593982 containerd[1680]: time="2024-10-08T20:05:01.593877241Z" level=info msg="Forcibly stopping sandbox \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\"" Oct 8 20:05:01.652792 containerd[1680]: 2024-10-08 20:05:01.625 [WARNING][5827] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a5c8fac-8ac4-4f20-883d-6418322f8148", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"603cda6e0990e6d789a7097b45984f7f56525a7babf95b972cdc57b587fe5369", Pod:"csi-node-driver-2j8vr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.58.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calidc7f4a7cdaa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:05:01.652792 containerd[1680]: 2024-10-08 20:05:01.625 [INFO][5827] k8s.go 608: Cleaning up netns ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Oct 8 20:05:01.652792 containerd[1680]: 2024-10-08 20:05:01.625 [INFO][5827] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" iface="eth0" netns="" Oct 8 20:05:01.652792 containerd[1680]: 2024-10-08 20:05:01.625 [INFO][5827] k8s.go 615: Releasing IP address(es) ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Oct 8 20:05:01.652792 containerd[1680]: 2024-10-08 20:05:01.625 [INFO][5827] utils.go 188: Calico CNI releasing IP address ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Oct 8 20:05:01.652792 containerd[1680]: 2024-10-08 20:05:01.643 [INFO][5833] ipam_plugin.go 417: Releasing address using handleID ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" HandleID="k8s-pod-network.60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Workload="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0" Oct 8 20:05:01.652792 containerd[1680]: 2024-10-08 20:05:01.643 [INFO][5833] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:05:01.652792 containerd[1680]: 2024-10-08 20:05:01.643 [INFO][5833] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:05:01.652792 containerd[1680]: 2024-10-08 20:05:01.649 [WARNING][5833] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" HandleID="k8s-pod-network.60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Workload="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0" Oct 8 20:05:01.652792 containerd[1680]: 2024-10-08 20:05:01.649 [INFO][5833] ipam_plugin.go 445: Releasing address using workloadID ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" HandleID="k8s-pod-network.60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Workload="ci--4081.1.0--a--b9ef23c535-k8s-csi--node--driver--2j8vr-eth0" Oct 8 20:05:01.652792 containerd[1680]: 2024-10-08 20:05:01.650 [INFO][5833] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:05:01.652792 containerd[1680]: 2024-10-08 20:05:01.651 [INFO][5827] k8s.go 621: Teardown processing complete. ContainerID="60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc" Oct 8 20:05:01.653616 containerd[1680]: time="2024-10-08T20:05:01.652831219Z" level=info msg="TearDown network for sandbox \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\" successfully" Oct 8 20:05:01.662385 containerd[1680]: time="2024-10-08T20:05:01.662340196Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:05:01.662531 containerd[1680]: time="2024-10-08T20:05:01.662426997Z" level=info msg="RemovePodSandbox \"60223960baf2536862e75c6293089dd464aacce4ea598ea63ccac2fb7e6b3bdc\" returns successfully" Oct 8 20:05:01.663061 containerd[1680]: time="2024-10-08T20:05:01.663032602Z" level=info msg="StopPodSandbox for \"0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204\"" Oct 8 20:05:01.663155 containerd[1680]: time="2024-10-08T20:05:01.663123702Z" level=info msg="TearDown network for sandbox \"0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204\" successfully" Oct 8 20:05:01.663155 containerd[1680]: time="2024-10-08T20:05:01.663138503Z" level=info msg="StopPodSandbox for \"0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204\" returns successfully" Oct 8 20:05:01.663536 containerd[1680]: time="2024-10-08T20:05:01.663507406Z" level=info msg="RemovePodSandbox for \"0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204\"" Oct 8 20:05:01.663639 containerd[1680]: time="2024-10-08T20:05:01.663539206Z" level=info msg="Forcibly stopping sandbox \"0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204\"" Oct 8 20:05:01.663639 containerd[1680]: time="2024-10-08T20:05:01.663600906Z" level=info msg="TearDown network for sandbox \"0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204\" successfully" Oct 8 20:05:01.673634 containerd[1680]: time="2024-10-08T20:05:01.673588587Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:05:01.673764 containerd[1680]: time="2024-10-08T20:05:01.673669788Z" level=info msg="RemovePodSandbox \"0d1d5cf0eee7ff97025052c4166b0bee697a6e09e5cffe83aa5d29b1daa5b204\" returns successfully" Oct 8 20:05:01.674281 containerd[1680]: time="2024-10-08T20:05:01.674208292Z" level=info msg="StopPodSandbox for \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\"" Oct 8 20:05:01.734714 containerd[1680]: 2024-10-08 20:05:01.706 [WARNING][5851] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0", GenerateName:"calico-kube-controllers-7fb9bb5bf5-", Namespace:"calico-system", SelfLink:"", UID:"115f7617-1709-468f-88b3-4136a07ce1cb", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fb9bb5bf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c", Pod:"calico-kube-controllers-7fb9bb5bf5-cl6qd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califa4412f4bd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:05:01.734714 containerd[1680]: 2024-10-08 20:05:01.706 [INFO][5851] k8s.go 608: Cleaning up netns ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Oct 8 20:05:01.734714 containerd[1680]: 2024-10-08 20:05:01.706 [INFO][5851] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" iface="eth0" netns="" Oct 8 20:05:01.734714 containerd[1680]: 2024-10-08 20:05:01.706 [INFO][5851] k8s.go 615: Releasing IP address(es) ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Oct 8 20:05:01.734714 containerd[1680]: 2024-10-08 20:05:01.706 [INFO][5851] utils.go 188: Calico CNI releasing IP address ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Oct 8 20:05:01.734714 containerd[1680]: 2024-10-08 20:05:01.725 [INFO][5858] ipam_plugin.go 417: Releasing address using handleID ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" HandleID="k8s-pod-network.ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Workload="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0" Oct 8 20:05:01.734714 containerd[1680]: 2024-10-08 20:05:01.725 [INFO][5858] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:05:01.734714 containerd[1680]: 2024-10-08 20:05:01.725 [INFO][5858] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:05:01.734714 containerd[1680]: 2024-10-08 20:05:01.731 [WARNING][5858] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" HandleID="k8s-pod-network.ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Workload="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0" Oct 8 20:05:01.734714 containerd[1680]: 2024-10-08 20:05:01.731 [INFO][5858] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" HandleID="k8s-pod-network.ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Workload="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0" Oct 8 20:05:01.734714 containerd[1680]: 2024-10-08 20:05:01.732 [INFO][5858] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:05:01.734714 containerd[1680]: 2024-10-08 20:05:01.733 [INFO][5851] k8s.go 621: Teardown processing complete. ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Oct 8 20:05:01.734714 containerd[1680]: time="2024-10-08T20:05:01.734706483Z" level=info msg="TearDown network for sandbox \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\" successfully" Oct 8 20:05:01.735629 containerd[1680]: time="2024-10-08T20:05:01.734737383Z" level=info msg="StopPodSandbox for \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\" returns successfully" Oct 8 20:05:01.735629 containerd[1680]: time="2024-10-08T20:05:01.735508889Z" level=info msg="RemovePodSandbox for \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\"" Oct 8 20:05:01.735629 containerd[1680]: time="2024-10-08T20:05:01.735543989Z" level=info msg="Forcibly stopping sandbox \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\"" Oct 8 20:05:01.796557 containerd[1680]: 2024-10-08 20:05:01.766 [WARNING][5877] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0", GenerateName:"calico-kube-controllers-7fb9bb5bf5-", Namespace:"calico-system", SelfLink:"", UID:"115f7617-1709-468f-88b3-4136a07ce1cb", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fb9bb5bf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-a-b9ef23c535", ContainerID:"299ee57de537d281f4cd0f5eb32dbe456e0ac3be21ff0089923bf3cac71d7e2c", Pod:"calico-kube-controllers-7fb9bb5bf5-cl6qd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califa4412f4bd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:05:01.796557 containerd[1680]: 2024-10-08 20:05:01.767 [INFO][5877] k8s.go 608: Cleaning up netns ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Oct 8 20:05:01.796557 containerd[1680]: 2024-10-08 20:05:01.767 [INFO][5877] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" iface="eth0" netns="" Oct 8 20:05:01.796557 containerd[1680]: 2024-10-08 20:05:01.767 [INFO][5877] k8s.go 615: Releasing IP address(es) ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Oct 8 20:05:01.796557 containerd[1680]: 2024-10-08 20:05:01.767 [INFO][5877] utils.go 188: Calico CNI releasing IP address ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Oct 8 20:05:01.796557 containerd[1680]: 2024-10-08 20:05:01.786 [INFO][5883] ipam_plugin.go 417: Releasing address using handleID ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" HandleID="k8s-pod-network.ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Workload="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0" Oct 8 20:05:01.796557 containerd[1680]: 2024-10-08 20:05:01.786 [INFO][5883] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:05:01.796557 containerd[1680]: 2024-10-08 20:05:01.786 [INFO][5883] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:05:01.796557 containerd[1680]: 2024-10-08 20:05:01.791 [WARNING][5883] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" HandleID="k8s-pod-network.ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Workload="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0" Oct 8 20:05:01.796557 containerd[1680]: 2024-10-08 20:05:01.791 [INFO][5883] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" HandleID="k8s-pod-network.ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Workload="ci--4081.1.0--a--b9ef23c535-k8s-calico--kube--controllers--7fb9bb5bf5--cl6qd-eth0" Oct 8 20:05:01.796557 containerd[1680]: 2024-10-08 20:05:01.792 [INFO][5883] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:05:01.796557 containerd[1680]: 2024-10-08 20:05:01.793 [INFO][5877] k8s.go 621: Teardown processing complete. ContainerID="ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724" Oct 8 20:05:01.796557 containerd[1680]: time="2024-10-08T20:05:01.794436167Z" level=info msg="TearDown network for sandbox \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\" successfully" Oct 8 20:05:01.808171 containerd[1680]: time="2024-10-08T20:05:01.808126278Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:05:01.808306 containerd[1680]: time="2024-10-08T20:05:01.808215079Z" level=info msg="RemovePodSandbox \"ba35c60d38929cfd3ddf111ca464d068384bf00795e4821b25b7ca58cd6de724\" returns successfully" Oct 8 20:05:15.293182 kubelet[3207]: I1008 20:05:15.292494 3207 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:05:15.356767 kubelet[3207]: I1008 20:05:15.356393 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5658b7bb57-h9lb4" podStartSLOduration=19.991823534 podStartE2EDuration="23.356370614s" podCreationTimestamp="2024-10-08 20:04:52 +0000 UTC" firstStartedPulling="2024-10-08 20:04:53.928384743 +0000 UTC m=+52.810921036" lastFinishedPulling="2024-10-08 20:04:57.292931823 +0000 UTC m=+56.175468116" observedRunningTime="2024-10-08 20:04:57.517033026 +0000 UTC m=+56.399569419" watchObservedRunningTime="2024-10-08 20:05:15.356370614 +0000 UTC m=+74.238906907" Oct 8 20:05:47.147170 kubelet[3207]: I1008 20:05:47.146613 3207 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:06:01.040265 systemd[1]: Started sshd@7-10.200.8.13:22-10.200.16.10:33942.service - OpenSSH per-connection server daemon (10.200.16.10:33942). Oct 8 20:06:01.696346 sshd[6049]: Accepted publickey for core from 10.200.16.10 port 33942 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:06:01.697845 sshd[6049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:01.701661 systemd-logind[1658]: New session 10 of user core. Oct 8 20:06:01.707084 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 20:06:02.231977 sshd[6049]: pam_unix(sshd:session): session closed for user core Oct 8 20:06:02.235987 systemd[1]: sshd@7-10.200.8.13:22-10.200.16.10:33942.service: Deactivated successfully. Oct 8 20:06:02.238201 systemd[1]: session-10.scope: Deactivated successfully. 
Oct 8 20:06:02.239068 systemd-logind[1658]: Session 10 logged out. Waiting for processes to exit. Oct 8 20:06:02.240175 systemd-logind[1658]: Removed session 10. Oct 8 20:06:07.343723 systemd[1]: Started sshd@8-10.200.8.13:22-10.200.16.10:52574.service - OpenSSH per-connection server daemon (10.200.16.10:52574). Oct 8 20:06:07.986445 sshd[6079]: Accepted publickey for core from 10.200.16.10 port 52574 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:06:07.988235 sshd[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:07.992227 systemd-logind[1658]: New session 11 of user core. Oct 8 20:06:07.996313 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 20:06:08.507103 sshd[6079]: pam_unix(sshd:session): session closed for user core Oct 8 20:06:08.510885 systemd[1]: sshd@8-10.200.8.13:22-10.200.16.10:52574.service: Deactivated successfully. Oct 8 20:06:08.513021 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 20:06:08.513806 systemd-logind[1658]: Session 11 logged out. Waiting for processes to exit. Oct 8 20:06:08.514871 systemd-logind[1658]: Removed session 11. Oct 8 20:06:13.624228 systemd[1]: Started sshd@9-10.200.8.13:22-10.200.16.10:52582.service - OpenSSH per-connection server daemon (10.200.16.10:52582). Oct 8 20:06:14.253439 sshd[6097]: Accepted publickey for core from 10.200.16.10 port 52582 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:06:14.255434 sshd[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:14.259553 systemd-logind[1658]: New session 12 of user core. Oct 8 20:06:14.266087 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 20:06:14.757968 sshd[6097]: pam_unix(sshd:session): session closed for user core Oct 8 20:06:14.762542 systemd[1]: sshd@9-10.200.8.13:22-10.200.16.10:52582.service: Deactivated successfully. 
Oct 8 20:06:14.765319 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 20:06:14.766501 systemd-logind[1658]: Session 12 logged out. Waiting for processes to exit. Oct 8 20:06:14.767634 systemd-logind[1658]: Removed session 12. Oct 8 20:06:14.874299 systemd[1]: Started sshd@10-10.200.8.13:22-10.200.16.10:48330.service - OpenSSH per-connection server daemon (10.200.16.10:48330). Oct 8 20:06:15.509762 sshd[6112]: Accepted publickey for core from 10.200.16.10 port 48330 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:06:15.511549 sshd[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:15.515989 systemd-logind[1658]: New session 13 of user core. Oct 8 20:06:15.524067 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 20:06:16.060475 sshd[6112]: pam_unix(sshd:session): session closed for user core Oct 8 20:06:16.063801 systemd[1]: sshd@10-10.200.8.13:22-10.200.16.10:48330.service: Deactivated successfully. Oct 8 20:06:16.066114 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 20:06:16.067796 systemd-logind[1658]: Session 13 logged out. Waiting for processes to exit. Oct 8 20:06:16.069028 systemd-logind[1658]: Removed session 13. Oct 8 20:06:16.183584 systemd[1]: Started sshd@11-10.200.8.13:22-10.200.16.10:48332.service - OpenSSH per-connection server daemon (10.200.16.10:48332). Oct 8 20:06:16.811756 sshd[6142]: Accepted publickey for core from 10.200.16.10 port 48332 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:06:16.813375 sshd[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:16.818190 systemd-logind[1658]: New session 14 of user core. Oct 8 20:06:16.823098 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 20:06:17.329748 sshd[6142]: pam_unix(sshd:session): session closed for user core Oct 8 20:06:17.333756 systemd-logind[1658]: Session 14 logged out. 
Waiting for processes to exit. Oct 8 20:06:17.334565 systemd[1]: sshd@11-10.200.8.13:22-10.200.16.10:48332.service: Deactivated successfully. Oct 8 20:06:17.339132 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 20:06:17.342629 systemd-logind[1658]: Removed session 14. Oct 8 20:06:22.447251 systemd[1]: Started sshd@12-10.200.8.13:22-10.200.16.10:48334.service - OpenSSH per-connection server daemon (10.200.16.10:48334). Oct 8 20:06:23.084578 sshd[6204]: Accepted publickey for core from 10.200.16.10 port 48334 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:06:23.086481 sshd[6204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:23.091016 systemd-logind[1658]: New session 15 of user core. Oct 8 20:06:23.096085 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 20:06:23.601788 sshd[6204]: pam_unix(sshd:session): session closed for user core Oct 8 20:06:23.606372 systemd[1]: sshd@12-10.200.8.13:22-10.200.16.10:48334.service: Deactivated successfully. Oct 8 20:06:23.609151 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 20:06:23.610115 systemd-logind[1658]: Session 15 logged out. Waiting for processes to exit. Oct 8 20:06:23.611205 systemd-logind[1658]: Removed session 15. Oct 8 20:06:28.714506 systemd[1]: Started sshd@13-10.200.8.13:22-10.200.16.10:40078.service - OpenSSH per-connection server daemon (10.200.16.10:40078). Oct 8 20:06:29.351693 sshd[6216]: Accepted publickey for core from 10.200.16.10 port 40078 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:06:29.353325 sshd[6216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:29.358094 systemd-logind[1658]: New session 16 of user core. Oct 8 20:06:29.367146 systemd[1]: Started session-16.scope - Session 16 of User core. 
Oct 8 20:06:29.865227 sshd[6216]: pam_unix(sshd:session): session closed for user core Oct 8 20:06:29.869716 systemd[1]: sshd@13-10.200.8.13:22-10.200.16.10:40078.service: Deactivated successfully. Oct 8 20:06:29.871790 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 20:06:29.872616 systemd-logind[1658]: Session 16 logged out. Waiting for processes to exit. Oct 8 20:06:29.873774 systemd-logind[1658]: Removed session 16. Oct 8 20:06:34.986124 systemd[1]: Started sshd@14-10.200.8.13:22-10.200.16.10:58684.service - OpenSSH per-connection server daemon (10.200.16.10:58684). Oct 8 20:06:35.632201 sshd[6234]: Accepted publickey for core from 10.200.16.10 port 58684 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:06:35.633709 sshd[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:35.638175 systemd-logind[1658]: New session 17 of user core. Oct 8 20:06:35.641078 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 20:06:36.146902 sshd[6234]: pam_unix(sshd:session): session closed for user core Oct 8 20:06:36.150376 systemd[1]: sshd@14-10.200.8.13:22-10.200.16.10:58684.service: Deactivated successfully. Oct 8 20:06:36.152380 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 20:06:36.153868 systemd-logind[1658]: Session 17 logged out. Waiting for processes to exit. Oct 8 20:06:36.155048 systemd-logind[1658]: Removed session 17. Oct 8 20:06:36.261204 systemd[1]: Started sshd@15-10.200.8.13:22-10.200.16.10:58696.service - OpenSSH per-connection server daemon (10.200.16.10:58696). Oct 8 20:06:36.892306 sshd[6246]: Accepted publickey for core from 10.200.16.10 port 58696 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:06:36.895430 sshd[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:36.904283 systemd-logind[1658]: New session 18 of user core. 
Oct 8 20:06:36.914170 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 20:06:37.475778 sshd[6246]: pam_unix(sshd:session): session closed for user core Oct 8 20:06:37.478805 systemd[1]: sshd@15-10.200.8.13:22-10.200.16.10:58696.service: Deactivated successfully. Oct 8 20:06:37.480877 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 20:06:37.482618 systemd-logind[1658]: Session 18 logged out. Waiting for processes to exit. Oct 8 20:06:37.483898 systemd-logind[1658]: Removed session 18. Oct 8 20:06:37.589290 systemd[1]: Started sshd@16-10.200.8.13:22-10.200.16.10:58702.service - OpenSSH per-connection server daemon (10.200.16.10:58702). Oct 8 20:06:38.236219 sshd[6259]: Accepted publickey for core from 10.200.16.10 port 58702 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:06:38.237679 sshd[6259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:38.242241 systemd-logind[1658]: New session 19 of user core. Oct 8 20:06:38.249126 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 20:06:40.486246 sshd[6259]: pam_unix(sshd:session): session closed for user core Oct 8 20:06:40.489204 systemd[1]: sshd@16-10.200.8.13:22-10.200.16.10:58702.service: Deactivated successfully. Oct 8 20:06:40.491264 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 20:06:40.492800 systemd-logind[1658]: Session 19 logged out. Waiting for processes to exit. Oct 8 20:06:40.493987 systemd-logind[1658]: Removed session 19. Oct 8 20:06:40.606137 systemd[1]: Started sshd@17-10.200.8.13:22-10.200.16.10:58714.service - OpenSSH per-connection server daemon (10.200.16.10:58714). 
Oct 8 20:06:41.277390 sshd[6277]: Accepted publickey for core from 10.200.16.10 port 58714 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:06:41.279032 sshd[6277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:41.283846 systemd-logind[1658]: New session 20 of user core. Oct 8 20:06:41.290204 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 8 20:06:41.917257 sshd[6277]: pam_unix(sshd:session): session closed for user core Oct 8 20:06:41.922266 systemd[1]: sshd@17-10.200.8.13:22-10.200.16.10:58714.service: Deactivated successfully. Oct 8 20:06:41.924341 systemd[1]: session-20.scope: Deactivated successfully. Oct 8 20:06:41.925328 systemd-logind[1658]: Session 20 logged out. Waiting for processes to exit. Oct 8 20:06:41.926439 systemd-logind[1658]: Removed session 20. Oct 8 20:06:42.022644 systemd[1]: Started sshd@18-10.200.8.13:22-10.200.16.10:58730.service - OpenSSH per-connection server daemon (10.200.16.10:58730). Oct 8 20:06:42.661411 sshd[6288]: Accepted publickey for core from 10.200.16.10 port 58730 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:06:42.662979 sshd[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:42.666971 systemd-logind[1658]: New session 21 of user core. Oct 8 20:06:42.674079 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 8 20:06:43.168409 sshd[6288]: pam_unix(sshd:session): session closed for user core Oct 8 20:06:43.172343 systemd[1]: sshd@18-10.200.8.13:22-10.200.16.10:58730.service: Deactivated successfully. Oct 8 20:06:43.174324 systemd[1]: session-21.scope: Deactivated successfully. Oct 8 20:06:43.175142 systemd-logind[1658]: Session 21 logged out. Waiting for processes to exit. Oct 8 20:06:43.176140 systemd-logind[1658]: Removed session 21. 
Oct 8 20:06:48.284156 systemd[1]: Started sshd@19-10.200.8.13:22-10.200.16.10:35514.service - OpenSSH per-connection server daemon (10.200.16.10:35514). Oct 8 20:06:48.926161 sshd[6347]: Accepted publickey for core from 10.200.16.10 port 35514 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:06:48.927636 sshd[6347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:48.931532 systemd-logind[1658]: New session 22 of user core. Oct 8 20:06:48.938091 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 8 20:06:49.439303 sshd[6347]: pam_unix(sshd:session): session closed for user core Oct 8 20:06:49.442293 systemd[1]: sshd@19-10.200.8.13:22-10.200.16.10:35514.service: Deactivated successfully. Oct 8 20:06:49.444345 systemd[1]: session-22.scope: Deactivated successfully. Oct 8 20:06:49.445886 systemd-logind[1658]: Session 22 logged out. Waiting for processes to exit. Oct 8 20:06:49.447258 systemd-logind[1658]: Removed session 22. Oct 8 20:06:54.550114 systemd[1]: Started sshd@20-10.200.8.13:22-10.200.16.10:46198.service - OpenSSH per-connection server daemon (10.200.16.10:46198). Oct 8 20:06:55.187337 sshd[6367]: Accepted publickey for core from 10.200.16.10 port 46198 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:06:55.188947 sshd[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:55.192848 systemd-logind[1658]: New session 23 of user core. Oct 8 20:06:55.203133 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 8 20:06:55.692501 sshd[6367]: pam_unix(sshd:session): session closed for user core Oct 8 20:06:55.696237 systemd[1]: sshd@20-10.200.8.13:22-10.200.16.10:46198.service: Deactivated successfully. Oct 8 20:06:55.698305 systemd[1]: session-23.scope: Deactivated successfully. Oct 8 20:06:55.699165 systemd-logind[1658]: Session 23 logged out. Waiting for processes to exit. 
Oct 8 20:06:55.700200 systemd-logind[1658]: Removed session 23. Oct 8 20:07:00.811157 systemd[1]: Started sshd@21-10.200.8.13:22-10.200.16.10:46208.service - OpenSSH per-connection server daemon (10.200.16.10:46208). Oct 8 20:07:01.455732 sshd[6379]: Accepted publickey for core from 10.200.16.10 port 46208 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:07:01.457415 sshd[6379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:07:01.462379 systemd-logind[1658]: New session 24 of user core. Oct 8 20:07:01.465086 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 8 20:07:01.971985 sshd[6379]: pam_unix(sshd:session): session closed for user core Oct 8 20:07:01.976266 systemd[1]: sshd@21-10.200.8.13:22-10.200.16.10:46208.service: Deactivated successfully. Oct 8 20:07:01.978255 systemd[1]: session-24.scope: Deactivated successfully. Oct 8 20:07:01.979210 systemd-logind[1658]: Session 24 logged out. Waiting for processes to exit. Oct 8 20:07:01.980315 systemd-logind[1658]: Removed session 24. Oct 8 20:07:07.086238 systemd[1]: Started sshd@22-10.200.8.13:22-10.200.16.10:38292.service - OpenSSH per-connection server daemon (10.200.16.10:38292). Oct 8 20:07:07.715879 sshd[6401]: Accepted publickey for core from 10.200.16.10 port 38292 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:07:07.717633 sshd[6401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:07:07.722251 systemd-logind[1658]: New session 25 of user core. Oct 8 20:07:07.731091 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 8 20:07:08.228464 sshd[6401]: pam_unix(sshd:session): session closed for user core Oct 8 20:07:08.231620 systemd[1]: sshd@22-10.200.8.13:22-10.200.16.10:38292.service: Deactivated successfully. Oct 8 20:07:08.233674 systemd[1]: session-25.scope: Deactivated successfully. Oct 8 20:07:08.235265 systemd-logind[1658]: Session 25 logged out. 
Waiting for processes to exit. Oct 8 20:07:08.236289 systemd-logind[1658]: Removed session 25. Oct 8 20:07:13.486402 systemd[1]: Started sshd@23-10.200.8.13:22-10.200.16.10:38300.service - OpenSSH per-connection server daemon (10.200.16.10:38300). Oct 8 20:07:14.121149 sshd[6414]: Accepted publickey for core from 10.200.16.10 port 38300 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:07:14.122791 sshd[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:07:14.128823 systemd-logind[1658]: New session 26 of user core. Oct 8 20:07:14.131162 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 8 20:07:14.639049 sshd[6414]: pam_unix(sshd:session): session closed for user core Oct 8 20:07:14.643703 systemd[1]: sshd@23-10.200.8.13:22-10.200.16.10:38300.service: Deactivated successfully. Oct 8 20:07:14.646233 systemd[1]: session-26.scope: Deactivated successfully. Oct 8 20:07:14.647464 systemd-logind[1658]: Session 26 logged out. Waiting for processes to exit. Oct 8 20:07:14.648912 systemd-logind[1658]: Removed session 26. Oct 8 20:07:19.752867 systemd[1]: Started sshd@24-10.200.8.13:22-10.200.16.10:54852.service - OpenSSH per-connection server daemon (10.200.16.10:54852). Oct 8 20:07:20.393695 sshd[6479]: Accepted publickey for core from 10.200.16.10 port 54852 ssh2: RSA SHA256:9U3oUBAdXYwgJqp6v+f9jEdEmxxRHlTxYCPOmLL0ALI Oct 8 20:07:20.395240 sshd[6479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:07:20.399315 systemd-logind[1658]: New session 27 of user core. Oct 8 20:07:20.405075 systemd[1]: Started session-27.scope - Session 27 of User core. Oct 8 20:07:20.899074 sshd[6479]: pam_unix(sshd:session): session closed for user core Oct 8 20:07:20.903291 systemd[1]: sshd@24-10.200.8.13:22-10.200.16.10:54852.service: Deactivated successfully. Oct 8 20:07:20.905314 systemd[1]: session-27.scope: Deactivated successfully. 
Oct 8 20:07:20.906267 systemd-logind[1658]: Session 27 logged out. Waiting for processes to exit. Oct 8 20:07:20.907412 systemd-logind[1658]: Removed session 27.