Jun 25 18:42:44.019713 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024
Jun 25 18:42:44.019751 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 25 18:42:44.019766 kernel: BIOS-provided physical RAM map:
Jun 25 18:42:44.019777 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jun 25 18:42:44.019788 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jun 25 18:42:44.019799 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jun 25 18:42:44.019812 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jun 25 18:42:44.019826 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jun 25 18:42:44.019838 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jun 25 18:42:44.019849 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jun 25 18:42:44.019861 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jun 25 18:42:44.019872 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jun 25 18:42:44.019883 kernel: printk: bootconsole [earlyser0] enabled
Jun 25 18:42:44.019895 kernel: NX (Execute Disable) protection: active
Jun 25 18:42:44.019912 kernel: APIC: Static calls initialized
Jun 25 18:42:44.019925 kernel: efi: EFI v2.7 by Microsoft
Jun 25 18:42:44.019938 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Jun 25 18:42:44.019950 kernel: SMBIOS 3.1.0 present.
Jun 25 18:42:44.019963 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jun 25 18:42:44.019976 kernel: Hypervisor detected: Microsoft Hyper-V
Jun 25 18:42:44.019989 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jun 25 18:42:44.020001 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jun 25 18:42:44.020014 kernel: Hyper-V: Nested features: 0x1e0101
Jun 25 18:42:44.020037 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jun 25 18:42:44.020051 kernel: Hyper-V: Using hypercall for remote TLB flush
Jun 25 18:42:44.020061 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jun 25 18:42:44.020073 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jun 25 18:42:44.020086 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jun 25 18:42:44.020100 kernel: tsc: Detected 2593.907 MHz processor
Jun 25 18:42:44.020112 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 25 18:42:44.020126 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 25 18:42:44.020139 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jun 25 18:42:44.020152 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jun 25 18:42:44.020168 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 25 18:42:44.020181 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jun 25 18:42:44.020193 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jun 25 18:42:44.020206 kernel: Using GB pages for direct mapping
Jun 25 18:42:44.020219 kernel: Secure boot disabled
Jun 25 18:42:44.020232 kernel: ACPI: Early table checksum verification disabled
Jun 25 18:42:44.020245 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jun 25 18:42:44.020264 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 25 18:42:44.020280 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 25 18:42:44.020294 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jun 25 18:42:44.020307 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jun 25 18:42:44.020321 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 25 18:42:44.020335 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 25 18:42:44.020349 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 25 18:42:44.020365 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 25 18:42:44.020379 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 25 18:42:44.020393 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 25 18:42:44.020411 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 25 18:42:44.020425 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jun 25 18:42:44.020438 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jun 25 18:42:44.020452 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jun 25 18:42:44.020466 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jun 25 18:42:44.020486 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jun 25 18:42:44.020500 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jun 25 18:42:44.020514 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jun 25 18:42:44.020527 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jun 25 18:42:44.020540 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jun 25 18:42:44.020554 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jun 25 18:42:44.020567 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jun 25 18:42:44.020581 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jun 25 18:42:44.020595 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jun 25 18:42:44.020611 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jun 25 18:42:44.020625 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jun 25 18:42:44.020639 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jun 25 18:42:44.020654 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jun 25 18:42:44.020668 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jun 25 18:42:44.020681 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jun 25 18:42:44.020696 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jun 25 18:42:44.020710 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jun 25 18:42:44.020724 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jun 25 18:42:44.020741 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jun 25 18:42:44.020754 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jun 25 18:42:44.020768 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jun 25 18:42:44.020782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jun 25 18:42:44.020796 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jun 25 18:42:44.020810 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jun 25 18:42:44.020824 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jun 25 18:42:44.020837 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jun 25 18:42:44.020851 kernel: Zone ranges:
Jun 25 18:42:44.020868 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 25 18:42:44.020881 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jun 25 18:42:44.020894 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jun 25 18:42:44.020907 kernel: Movable zone start for each node
Jun 25 18:42:44.020921 kernel: Early memory node ranges
Jun 25 18:42:44.020935 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jun 25 18:42:44.020949 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jun 25 18:42:44.020962 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jun 25 18:42:44.020976 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jun 25 18:42:44.020993 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jun 25 18:42:44.021007 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 25 18:42:44.021020 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jun 25 18:42:44.024444 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jun 25 18:42:44.024455 kernel: ACPI: PM-Timer IO Port: 0x408
Jun 25 18:42:44.024466 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jun 25 18:42:44.024474 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jun 25 18:42:44.024485 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 25 18:42:44.024493 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 25 18:42:44.024507 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jun 25 18:42:44.024515 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jun 25 18:42:44.024525 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jun 25 18:42:44.024532 kernel: Booting paravirtualized kernel on Hyper-V
Jun 25 18:42:44.024543 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 25 18:42:44.024551 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 25 18:42:44.024561 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Jun 25 18:42:44.024569 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Jun 25 18:42:44.024578 kernel: pcpu-alloc: [0] 0 1
Jun 25 18:42:44.024588 kernel: Hyper-V: PV spinlocks enabled
Jun 25 18:42:44.024599 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 25 18:42:44.024608 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 25 18:42:44.024619 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 25 18:42:44.024626 kernel: random: crng init done
Jun 25 18:42:44.024636 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jun 25 18:42:44.024644 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 25 18:42:44.024655 kernel: Fallback order for Node 0: 0
Jun 25 18:42:44.024665 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jun 25 18:42:44.024682 kernel: Policy zone: Normal
Jun 25 18:42:44.024693 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 25 18:42:44.024704 kernel: software IO TLB: area num 2.
Jun 25 18:42:44.024715 kernel: Memory: 8070932K/8387460K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 316268K reserved, 0K cma-reserved)
Jun 25 18:42:44.024726 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 25 18:42:44.024734 kernel: ftrace: allocating 37650 entries in 148 pages
Jun 25 18:42:44.024745 kernel: ftrace: allocated 148 pages with 3 groups
Jun 25 18:42:44.024753 kernel: Dynamic Preempt: voluntary
Jun 25 18:42:44.024764 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 25 18:42:44.024773 kernel: rcu: RCU event tracing is enabled.
Jun 25 18:42:44.024786 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 25 18:42:44.024795 kernel: Trampoline variant of Tasks RCU enabled.
Jun 25 18:42:44.024805 kernel: Rude variant of Tasks RCU enabled.
Jun 25 18:42:44.024813 kernel: Tracing variant of Tasks RCU enabled.
Jun 25 18:42:44.024824 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 25 18:42:44.024836 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 25 18:42:44.024845 kernel: Using NULL legacy PIC
Jun 25 18:42:44.024855 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jun 25 18:42:44.024864 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 25 18:42:44.024875 kernel: Console: colour dummy device 80x25
Jun 25 18:42:44.024883 kernel: printk: console [tty1] enabled
Jun 25 18:42:44.024894 kernel: printk: console [ttyS0] enabled
Jun 25 18:42:44.024902 kernel: printk: bootconsole [earlyser0] disabled
Jun 25 18:42:44.024913 kernel: ACPI: Core revision 20230628
Jun 25 18:42:44.024921 kernel: Failed to register legacy timer interrupt
Jun 25 18:42:44.024934 kernel: APIC: Switch to symmetric I/O mode setup
Jun 25 18:42:44.024943 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jun 25 18:42:44.024953 kernel: Hyper-V: Using IPI hypercalls
Jun 25 18:42:44.024963 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jun 25 18:42:44.024973 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jun 25 18:42:44.024982 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jun 25 18:42:44.024992 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jun 25 18:42:44.025001 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jun 25 18:42:44.025011 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jun 25 18:42:44.025022 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Jun 25 18:42:44.025044 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jun 25 18:42:44.025052 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jun 25 18:42:44.025063 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 25 18:42:44.025070 kernel: Spectre V2 : Mitigation: Retpolines
Jun 25 18:42:44.025081 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jun 25 18:42:44.025089 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jun 25 18:42:44.025100 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jun 25 18:42:44.025108 kernel: RETBleed: Vulnerable
Jun 25 18:42:44.025121 kernel: Speculative Store Bypass: Vulnerable
Jun 25 18:42:44.025129 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jun 25 18:42:44.025140 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jun 25 18:42:44.025148 kernel: GDS: Unknown: Dependent on hypervisor status
Jun 25 18:42:44.025159 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 25 18:42:44.025167 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 25 18:42:44.025177 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 25 18:42:44.025185 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jun 25 18:42:44.025196 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jun 25 18:42:44.025204 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jun 25 18:42:44.025214 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 25 18:42:44.025226 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jun 25 18:42:44.025235 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jun 25 18:42:44.025245 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jun 25 18:42:44.025253 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jun 25 18:42:44.025264 kernel: Freeing SMP alternatives memory: 32K
Jun 25 18:42:44.025272 kernel: pid_max: default: 32768 minimum: 301
Jun 25 18:42:44.025283 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jun 25 18:42:44.025291 kernel: SELinux: Initializing.
Jun 25 18:42:44.025301 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 25 18:42:44.025310 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 25 18:42:44.025320 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jun 25 18:42:44.025328 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:42:44.025341 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:42:44.025352 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:42:44.025360 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jun 25 18:42:44.025371 kernel: signal: max sigframe size: 3632
Jun 25 18:42:44.025379 kernel: rcu: Hierarchical SRCU implementation.
Jun 25 18:42:44.025390 kernel: rcu: Max phase no-delay instances is 400.
Jun 25 18:42:44.025398 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jun 25 18:42:44.025409 kernel: smp: Bringing up secondary CPUs ...
Jun 25 18:42:44.025417 kernel: smpboot: x86: Booting SMP configuration:
Jun 25 18:42:44.025430 kernel: .... node #0, CPUs: #1
Jun 25 18:42:44.025439 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jun 25 18:42:44.025450 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jun 25 18:42:44.025460 kernel: smp: Brought up 1 node, 2 CPUs
Jun 25 18:42:44.025473 kernel: smpboot: Max logical packages: 1
Jun 25 18:42:44.025494 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jun 25 18:42:44.025502 kernel: devtmpfs: initialized
Jun 25 18:42:44.025510 kernel: x86/mm: Memory block size: 128MB
Jun 25 18:42:44.025521 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jun 25 18:42:44.025529 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 25 18:42:44.025544 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 25 18:42:44.025558 kernel: pinctrl core: initialized pinctrl subsystem
Jun 25 18:42:44.025566 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 25 18:42:44.025574 kernel: audit: initializing netlink subsys (disabled)
Jun 25 18:42:44.025592 kernel: audit: type=2000 audit(1719340962.027:1): state=initialized audit_enabled=0 res=1
Jun 25 18:42:44.025610 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 25 18:42:44.025619 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 25 18:42:44.025631 kernel: cpuidle: using governor menu
Jun 25 18:42:44.025650 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 25 18:42:44.025665 kernel: dca service started, version 1.12.1
Jun 25 18:42:44.025674 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jun 25 18:42:44.025682 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 25 18:42:44.025702 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 25 18:42:44.025719 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 25 18:42:44.025734 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 25 18:42:44.025746 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 25 18:42:44.025757 kernel: ACPI: Added _OSI(Module Device)
Jun 25 18:42:44.025771 kernel: ACPI: Added _OSI(Processor Device)
Jun 25 18:42:44.025787 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jun 25 18:42:44.025803 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 25 18:42:44.025812 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 25 18:42:44.025820 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jun 25 18:42:44.025837 kernel: ACPI: Interpreter enabled
Jun 25 18:42:44.025848 kernel: ACPI: PM: (supports S0 S5)
Jun 25 18:42:44.025856 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 25 18:42:44.025878 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 25 18:42:44.025893 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jun 25 18:42:44.025901 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jun 25 18:42:44.025912 kernel: iommu: Default domain type: Translated
Jun 25 18:42:44.025935 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 25 18:42:44.025951 kernel: efivars: Registered efivars operations
Jun 25 18:42:44.025959 kernel: PCI: Using ACPI for IRQ routing
Jun 25 18:42:44.025967 kernel: PCI: System does not support PCI
Jun 25 18:42:44.025984 kernel: vgaarb: loaded
Jun 25 18:42:44.026006 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jun 25 18:42:44.026018 kernel: VFS: Disk quotas dquot_6.6.0
Jun 25 18:42:44.026047 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 25 18:42:44.026062 kernel: pnp: PnP ACPI init
Jun 25 18:42:44.026073 kernel: pnp: PnP ACPI: found 3 devices
Jun 25 18:42:44.026081 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 25 18:42:44.026097 kernel: NET: Registered PF_INET protocol family
Jun 25 18:42:44.026116 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jun 25 18:42:44.026126 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jun 25 18:42:44.026138 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 25 18:42:44.026159 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 25 18:42:44.026176 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jun 25 18:42:44.026187 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jun 25 18:42:44.026195 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 25 18:42:44.026210 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 25 18:42:44.026227 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 25 18:42:44.026238 kernel: NET: Registered PF_XDP protocol family
Jun 25 18:42:44.026246 kernel: PCI: CLS 0 bytes, default 64
Jun 25 18:42:44.026266 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jun 25 18:42:44.026279 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jun 25 18:42:44.026287 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jun 25 18:42:44.026300 kernel: Initialise system trusted keyrings
Jun 25 18:42:44.026317 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jun 25 18:42:44.026325 kernel: Key type asymmetric registered
Jun 25 18:42:44.026337 kernel: Asymmetric key parser 'x509' registered
Jun 25 18:42:44.026354 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jun 25 18:42:44.026366 kernel: io scheduler mq-deadline registered
Jun 25 18:42:44.026377 kernel: io scheduler kyber registered
Jun 25 18:42:44.026396 kernel: io scheduler bfq registered
Jun 25 18:42:44.026410 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 25 18:42:44.026419 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 25 18:42:44.026427 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 25 18:42:44.026445 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jun 25 18:42:44.026460 kernel: i8042: PNP: No PS/2 controller found.
Jun 25 18:42:44.026640 kernel: rtc_cmos 00:02: registered as rtc0
Jun 25 18:42:44.026771 kernel: rtc_cmos 00:02: setting system clock to 2024-06-25T18:42:43 UTC (1719340963)
Jun 25 18:42:44.026885 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jun 25 18:42:44.026905 kernel: intel_pstate: CPU model not supported
Jun 25 18:42:44.026920 kernel: efifb: probing for efifb
Jun 25 18:42:44.026935 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jun 25 18:42:44.026950 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jun 25 18:42:44.026965 kernel: efifb: scrolling: redraw
Jun 25 18:42:44.026980 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jun 25 18:42:44.026999 kernel: Console: switching to colour frame buffer device 128x48
Jun 25 18:42:44.027014 kernel: fb0: EFI VGA frame buffer device
Jun 25 18:42:44.027041 kernel: pstore: Using crash dump compression: deflate
Jun 25 18:42:44.027056 kernel: pstore: Registered efi_pstore as persistent store backend
Jun 25 18:42:44.027071 kernel: NET: Registered PF_INET6 protocol family
Jun 25 18:42:44.027086 kernel: Segment Routing with IPv6
Jun 25 18:42:44.027101 kernel: In-situ OAM (IOAM) with IPv6
Jun 25 18:42:44.027116 kernel: NET: Registered PF_PACKET protocol family
Jun 25 18:42:44.027131 kernel: Key type dns_resolver registered
Jun 25 18:42:44.027145 kernel: IPI shorthand broadcast: enabled
Jun 25 18:42:44.027164 kernel: sched_clock: Marking stable (742003000, 36792500)->(935155700, -156360200)
Jun 25 18:42:44.027179 kernel: registered taskstats version 1
Jun 25 18:42:44.027194 kernel: Loading compiled-in X.509 certificates
Jun 25 18:42:44.027209 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90'
Jun 25 18:42:44.027224 kernel: Key type .fscrypt registered
Jun 25 18:42:44.027238 kernel: Key type fscrypt-provisioning registered
Jun 25 18:42:44.027253 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 25 18:42:44.027268 kernel: ima: Allocated hash algorithm: sha1
Jun 25 18:42:44.027286 kernel: ima: No architecture policies found
Jun 25 18:42:44.027301 kernel: clk: Disabling unused clocks
Jun 25 18:42:44.027316 kernel: Freeing unused kernel image (initmem) memory: 49384K
Jun 25 18:42:44.027331 kernel: Write protecting the kernel read-only data: 36864k
Jun 25 18:42:44.027346 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K
Jun 25 18:42:44.027361 kernel: Run /init as init process
Jun 25 18:42:44.027375 kernel: with arguments:
Jun 25 18:42:44.027389 kernel: /init
Jun 25 18:42:44.027404 kernel: with environment:
Jun 25 18:42:44.027418 kernel: HOME=/
Jun 25 18:42:44.027436 kernel: TERM=linux
Jun 25 18:42:44.027451 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 25 18:42:44.027468 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 25 18:42:44.027486 systemd[1]: Detected virtualization microsoft.
Jun 25 18:42:44.027502 systemd[1]: Detected architecture x86-64.
Jun 25 18:42:44.027517 systemd[1]: Running in initrd.
Jun 25 18:42:44.027533 systemd[1]: No hostname configured, using default hostname.
Jun 25 18:42:44.027551 systemd[1]: Hostname set to .
Jun 25 18:42:44.027567 systemd[1]: Initializing machine ID from random generator.
Jun 25 18:42:44.027583 systemd[1]: Queued start job for default target initrd.target.
Jun 25 18:42:44.027600 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:42:44.027615 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:42:44.027632 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 25 18:42:44.027648 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 25 18:42:44.027663 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 25 18:42:44.027683 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 25 18:42:44.027701 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 25 18:42:44.027717 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 25 18:42:44.027733 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:42:44.027749 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:42:44.027764 systemd[1]: Reached target paths.target - Path Units.
Jun 25 18:42:44.027780 systemd[1]: Reached target slices.target - Slice Units.
Jun 25 18:42:44.027799 systemd[1]: Reached target swap.target - Swaps.
Jun 25 18:42:44.027815 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 18:42:44.027831 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 25 18:42:44.027847 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 25 18:42:44.027863 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 25 18:42:44.027879 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jun 25 18:42:44.027895 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 18:42:44.027911 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 25 18:42:44.027930 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 18:42:44.027946 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 18:42:44.027962 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 25 18:42:44.027980 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 25 18:42:44.027996 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 25 18:42:44.028012 systemd[1]: Starting systemd-fsck-usr.service...
Jun 25 18:42:44.028055 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 25 18:42:44.028072 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 25 18:42:44.028088 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:42:44.028132 systemd-journald[176]: Collecting audit messages is disabled.
Jun 25 18:42:44.028168 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 25 18:42:44.028184 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 18:42:44.028200 systemd-journald[176]: Journal started
Jun 25 18:42:44.028241 systemd-journald[176]: Runtime Journal (/run/log/journal/14cc18b6c7e64dc4aa4ec7ef0b456cd0) is 8.0M, max 158.8M, 150.8M free.
Jun 25 18:42:44.035069 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 25 18:42:44.041091 systemd[1]: Finished systemd-fsck-usr.service.
Jun 25 18:42:44.044310 systemd-modules-load[177]: Inserted module 'overlay'
Jun 25 18:42:44.046520 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:42:44.058257 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 18:42:44.070172 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 25 18:42:44.080209 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 25 18:42:44.102169 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 25 18:42:44.109295 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 25 18:42:44.109258 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:42:44.115699 kernel: Bridge firewalling registered
Jun 25 18:42:44.114139 systemd-modules-load[177]: Inserted module 'br_netfilter'
Jun 25 18:42:44.121951 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 25 18:42:44.130206 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 25 18:42:44.138200 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 25 18:42:44.143521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 25 18:42:44.148749 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 18:42:44.155122 dracut-cmdline[202]: dracut-dracut-053 Jun 25 18:42:44.155122 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:42:44.183128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:42:44.187170 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:42:44.202257 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:42:44.247819 systemd-resolved[248]: Positive Trust Anchors: Jun 25 18:42:44.247843 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:42:44.247891 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:42:44.252927 systemd-resolved[248]: Defaulting to hostname 'linux'. Jun 25 18:42:44.276100 kernel: SCSI subsystem initialized Jun 25 18:42:44.255371 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:42:44.259072 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jun 25 18:42:44.288045 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:42:44.301051 kernel: iscsi: registered transport (tcp) Jun 25 18:42:44.325473 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:42:44.325576 kernel: QLogic iSCSI HBA Driver Jun 25 18:42:44.361529 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:42:44.369199 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:42:44.397558 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:42:44.397663 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:42:44.400300 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:42:44.445065 kernel: raid6: avx512x4 gen() 18256 MB/s Jun 25 18:42:44.463046 kernel: raid6: avx512x2 gen() 18257 MB/s Jun 25 18:42:44.481061 kernel: raid6: avx512x1 gen() 18257 MB/s Jun 25 18:42:44.500043 kernel: raid6: avx2x4 gen() 18238 MB/s Jun 25 18:42:44.518040 kernel: raid6: avx2x2 gen() 18203 MB/s Jun 25 18:42:44.537078 kernel: raid6: avx2x1 gen() 13949 MB/s Jun 25 18:42:44.537139 kernel: raid6: using algorithm avx512x2 gen() 18257 MB/s Jun 25 18:42:44.557621 kernel: raid6: .... xor() 29522 MB/s, rmw enabled Jun 25 18:42:44.557670 kernel: raid6: using avx512x2 recovery algorithm Jun 25 18:42:44.583053 kernel: xor: automatically using best checksumming function avx Jun 25 18:42:44.753056 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:42:44.762569 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:42:44.770335 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:42:44.787081 systemd-udevd[395]: Using default interface naming scheme 'v255'. Jun 25 18:42:44.792981 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jun 25 18:42:44.802176 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:42:44.818429 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Jun 25 18:42:44.846072 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:42:44.854255 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:42:44.894408 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:42:44.907388 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 18:42:44.939797 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:42:44.945810 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:42:44.949185 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:42:44.954097 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:42:44.968182 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:42:44.983041 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 18:42:44.992537 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:42:45.003396 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:42:45.005789 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:42:45.010893 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:42:45.015845 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:42:45.016110 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:45.021420 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:45.031891 kernel: AVX2 version of gcm_enc/dec engaged. 
Jun 25 18:42:45.031932 kernel: AES CTR mode by8 optimization enabled Jun 25 18:42:45.035581 kernel: hv_vmbus: Vmbus version:5.2 Jun 25 18:42:45.038386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:45.041933 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:42:45.042557 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:45.064451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:45.082086 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 25 18:42:45.090840 kernel: hv_vmbus: registering driver hv_storvsc Jun 25 18:42:45.098833 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 18:42:45.098891 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 18:42:45.098911 kernel: scsi host0: storvsc_host_t Jun 25 18:42:45.109980 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 25 18:42:45.110067 kernel: PTP clock support registered Jun 25 18:42:45.110087 kernel: scsi host1: storvsc_host_t Jun 25 18:42:45.110804 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:45.124361 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 25 18:42:45.124415 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 18:42:45.133017 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jun 25 18:42:45.132267 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jun 25 18:42:45.142982 kernel: hv_utils: Registering HyperV Utility Driver Jun 25 18:42:45.143034 kernel: hv_vmbus: registering driver hv_netvsc Jun 25 18:42:45.143050 kernel: hv_vmbus: registering driver hv_utils Jun 25 18:42:45.150433 kernel: hv_utils: Heartbeat IC version 3.0 Jun 25 18:42:45.150498 kernel: hv_utils: Shutdown IC version 3.2 Jun 25 18:42:45.153039 kernel: hv_utils: TimeSync IC version 4.0 Jun 25 18:42:45.871355 systemd-resolved[248]: Clock change detected. Flushing caches. Jun 25 18:42:45.880582 kernel: hv_vmbus: registering driver hid_hyperv Jun 25 18:42:45.884588 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 25 18:42:45.888752 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 25 18:42:45.903873 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:42:45.923612 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 25 18:42:45.925880 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 18:42:45.925915 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 25 18:42:45.933474 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 25 18:42:45.946688 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 25 18:42:45.946905 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 18:42:45.947068 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 25 18:42:45.947252 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 25 18:42:45.947415 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:45.947435 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 18:42:46.116238 kernel: hv_netvsc 0022489f-834d-0022-489f-834d0022489f eth0: VF slot 1 added Jun 25 18:42:46.125424 kernel: hv_vmbus: registering driver hv_pci Jun 25 18:42:46.125484 kernel: hv_pci a10a8ba1-a95e-4464-a951-95c0d8607eb3: PCI VMBus probing: Using version 0x10004 Jun 
25 18:42:46.165688 kernel: hv_pci a10a8ba1-a95e-4464-a951-95c0d8607eb3: PCI host bridge to bus a95e:00 Jun 25 18:42:46.165888 kernel: pci_bus a95e:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jun 25 18:42:46.166067 kernel: pci_bus a95e:00: No busn resource found for root bus, will use [bus 00-ff] Jun 25 18:42:46.166214 kernel: pci a95e:00:02.0: [15b3:1016] type 00 class 0x020000 Jun 25 18:42:46.166418 kernel: pci a95e:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jun 25 18:42:46.166604 kernel: pci a95e:00:02.0: enabling Extended Tags Jun 25 18:42:46.166783 kernel: pci a95e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a95e:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jun 25 18:42:46.166954 kernel: pci_bus a95e:00: busn_res: [bus 00-ff] end is updated to 00 Jun 25 18:42:46.167103 kernel: pci a95e:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jun 25 18:42:46.356812 kernel: mlx5_core a95e:00:02.0: enabling device (0000 -> 0002) Jun 25 18:42:46.588675 kernel: mlx5_core a95e:00:02.0: firmware version: 14.30.1284 Jun 25 18:42:46.588901 kernel: hv_netvsc 0022489f-834d-0022-489f-834d0022489f eth0: VF registering: eth1 Jun 25 18:42:46.589066 kernel: mlx5_core a95e:00:02.0 eth1: joined to eth0 Jun 25 18:42:46.589254 kernel: mlx5_core a95e:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jun 25 18:42:46.511161 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 25 18:42:46.600589 kernel: mlx5_core a95e:00:02.0 enP43358s1: renamed from eth1 Jun 25 18:42:46.619672 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (450) Jun 25 18:42:46.622481 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 25 18:42:46.641559 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Jun 25 18:42:46.682603 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (446) Jun 25 18:42:46.697327 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 25 18:42:46.699887 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jun 25 18:42:46.715794 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:42:46.728590 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:46.740582 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:46.785730 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:47.752640 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:47.752716 disk-uuid[602]: The operation has completed successfully. Jun 25 18:42:47.826613 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:42:47.826737 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:42:47.854731 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:42:47.871416 sh[715]: Success Jun 25 18:42:47.901598 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 18:42:48.094833 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:42:48.109761 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:42:48.112743 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jun 25 18:42:48.137585 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 25 18:42:48.137641 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:48.141546 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:42:48.143843 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:42:48.145858 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:42:48.486369 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:42:48.490895 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:42:48.499731 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:42:48.506805 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:42:48.525429 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:48.525505 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:48.528431 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:42:48.547592 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:42:48.557687 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:42:48.562021 kernel: BTRFS info (device sda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:48.569352 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:42:48.581998 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:42:48.601708 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:42:48.607877 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jun 25 18:42:48.629828 systemd-networkd[899]: lo: Link UP Jun 25 18:42:48.629838 systemd-networkd[899]: lo: Gained carrier Jun 25 18:42:48.634635 systemd-networkd[899]: Enumeration completed Jun 25 18:42:48.634972 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:42:48.637071 systemd[1]: Reached target network.target - Network. Jun 25 18:42:48.639368 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:42:48.639372 systemd-networkd[899]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:42:48.701594 kernel: mlx5_core a95e:00:02.0 enP43358s1: Link up Jun 25 18:42:48.732600 kernel: hv_netvsc 0022489f-834d-0022-489f-834d0022489f eth0: Data path switched to VF: enP43358s1 Jun 25 18:42:48.733631 systemd-networkd[899]: enP43358s1: Link UP Jun 25 18:42:48.733812 systemd-networkd[899]: eth0: Link UP Jun 25 18:42:48.734070 systemd-networkd[899]: eth0: Gained carrier Jun 25 18:42:48.734085 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:42:48.743446 systemd-networkd[899]: enP43358s1: Gained carrier Jun 25 18:42:48.783624 systemd-networkd[899]: eth0: DHCPv4 address 10.200.8.42/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 18:42:49.460682 ignition[868]: Ignition 2.19.0 Jun 25 18:42:49.460694 ignition[868]: Stage: fetch-offline Jun 25 18:42:49.460745 ignition[868]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:49.460756 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:49.465169 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jun 25 18:42:49.460882 ignition[868]: parsed url from cmdline: "" Jun 25 18:42:49.460887 ignition[868]: no config URL provided Jun 25 18:42:49.460895 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:42:49.460906 ignition[868]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:42:49.460913 ignition[868]: failed to fetch config: resource requires networking Jun 25 18:42:49.463490 ignition[868]: Ignition finished successfully Jun 25 18:42:49.481861 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 18:42:49.495077 ignition[908]: Ignition 2.19.0 Jun 25 18:42:49.495088 ignition[908]: Stage: fetch Jun 25 18:42:49.495305 ignition[908]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:49.495316 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:49.495410 ignition[908]: parsed url from cmdline: "" Jun 25 18:42:49.495413 ignition[908]: no config URL provided Jun 25 18:42:49.495418 ignition[908]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:42:49.495426 ignition[908]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:42:49.495447 ignition[908]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 25 18:42:49.606870 ignition[908]: GET result: OK Jun 25 18:42:49.607034 ignition[908]: config has been read from IMDS userdata Jun 25 18:42:49.607079 ignition[908]: parsing config with SHA512: 3d738fb7a83a6746bd56d92f796f8d9ee5af1ae79662a9dba623759b5c7ec4ed6286ae2aa4084cce4360b5ca9215772d78dfe5c74064d5a8f2d628b13050dfd7 Jun 25 18:42:49.615733 unknown[908]: fetched base config from "system" Jun 25 18:42:49.615756 unknown[908]: fetched base config from "system" Jun 25 18:42:49.615775 unknown[908]: fetched user config from "azure" Jun 25 18:42:49.618025 ignition[908]: fetch: fetch complete Jun 25 18:42:49.619716 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Jun 25 18:42:49.618032 ignition[908]: fetch: fetch passed Jun 25 18:42:49.618090 ignition[908]: Ignition finished successfully Jun 25 18:42:49.635743 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 18:42:49.651787 ignition[915]: Ignition 2.19.0 Jun 25 18:42:49.651797 ignition[915]: Stage: kargs Jun 25 18:42:49.652005 ignition[915]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:49.652018 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:49.652980 ignition[915]: kargs: kargs passed Jun 25 18:42:49.653027 ignition[915]: Ignition finished successfully Jun 25 18:42:49.661022 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:42:49.669722 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:42:49.683103 ignition[922]: Ignition 2.19.0 Jun 25 18:42:49.683114 ignition[922]: Stage: disks Jun 25 18:42:49.683340 ignition[922]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:49.685203 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:42:49.683353 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:49.688164 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:42:49.684311 ignition[922]: disks: disks passed Jun 25 18:42:49.691390 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:42:49.684355 ignition[922]: Ignition finished successfully Jun 25 18:42:49.695791 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:42:49.699674 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:42:49.701671 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:42:49.724782 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jun 25 18:42:49.781254 systemd-fsck[931]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jun 25 18:42:49.785802 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:42:49.799697 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:42:49.849717 systemd-networkd[899]: enP43358s1: Gained IPv6LL Jun 25 18:42:49.901584 kernel: EXT4-fs (sda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 25 18:42:49.901938 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:42:49.904247 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:42:49.938671 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:42:49.942333 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:42:49.950761 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 25 18:42:49.955763 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:42:49.965630 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (942) Jun 25 18:42:49.955796 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:42:49.970594 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:49.970625 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:49.970709 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 18:42:49.977962 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:42:49.982959 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:42:49.984160 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:42:49.988028 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 18:42:50.297738 systemd-networkd[899]: eth0: Gained IPv6LL Jun 25 18:42:50.562603 coreos-metadata[944]: Jun 25 18:42:50.562 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 18:42:50.565872 coreos-metadata[944]: Jun 25 18:42:50.565 INFO Fetch successful Jun 25 18:42:50.565872 coreos-metadata[944]: Jun 25 18:42:50.565 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 25 18:42:50.580052 coreos-metadata[944]: Jun 25 18:42:50.580 INFO Fetch successful Jun 25 18:42:50.594628 coreos-metadata[944]: Jun 25 18:42:50.594 INFO wrote hostname ci-4012.0.0-a-bcd7e269e6 to /sysroot/etc/hostname Jun 25 18:42:50.596499 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:42:50.692945 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:42:50.712665 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:42:50.731357 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:42:50.737272 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:42:51.367386 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:42:51.375693 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:42:51.380738 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:42:51.391345 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:42:51.396626 kernel: BTRFS info (device sda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:51.416756 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jun 25 18:42:51.424066 ignition[1065]: INFO : Ignition 2.19.0 Jun 25 18:42:51.424066 ignition[1065]: INFO : Stage: mount Jun 25 18:42:51.427330 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:51.427330 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:51.427330 ignition[1065]: INFO : mount: mount passed Jun 25 18:42:51.427330 ignition[1065]: INFO : Ignition finished successfully Jun 25 18:42:51.426160 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:42:51.437518 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:42:51.443858 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:42:51.459584 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1077) Jun 25 18:42:51.459628 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:51.463587 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:51.466871 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:42:51.472594 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:42:51.474133 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 18:42:51.496392 ignition[1093]: INFO : Ignition 2.19.0 Jun 25 18:42:51.496392 ignition[1093]: INFO : Stage: files Jun 25 18:42:51.499973 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:51.499973 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:51.499973 ignition[1093]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:42:51.521973 ignition[1093]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:42:51.521973 ignition[1093]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:42:51.593698 ignition[1093]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:42:51.596972 ignition[1093]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:42:51.596972 ignition[1093]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:42:51.596972 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 18:42:51.596972 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 18:42:51.596972 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:42:51.596972 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 18:42:51.594277 unknown[1093]: wrote ssh authorized keys file for user: core Jun 25 18:42:51.669754 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 25 18:42:51.762370 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" 
Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:42:51.807909 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 
18:42:51.807909 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:42:51.807909 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jun 25 18:42:52.408983 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 25 18:42:52.720667 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:42:52.720667 ignition[1093]: INFO : files: op(c): [started] processing unit "containerd.service" Jun 25 18:42:52.736973 ignition[1093]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 18:42:52.741881 ignition[1093]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 18:42:52.741881 ignition[1093]: INFO : files: op(c): [finished] processing unit "containerd.service" Jun 25 18:42:52.741881 ignition[1093]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jun 25 18:42:52.751624 ignition[1093]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:42:52.755372 ignition[1093]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:42:52.755372 ignition[1093]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jun 25 18:42:52.755372 ignition[1093]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jun 25 
18:42:52.764196 ignition[1093]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:42:52.767058 ignition[1093]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:42:52.770483 ignition[1093]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:42:52.773900 ignition[1093]: INFO : files: files passed Jun 25 18:42:52.773900 ignition[1093]: INFO : Ignition finished successfully Jun 25 18:42:52.775750 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:42:52.784928 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:42:52.789826 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:42:52.792313 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:42:52.794627 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:42:52.810637 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:42:52.810637 initrd-setup-root-after-ignition[1123]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:42:52.817717 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:42:52.814458 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:42:52.825096 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:42:52.832740 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:42:52.856958 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:42:52.857084 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jun 25 18:42:52.862059 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 25 18:42:52.866343 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 25 18:42:52.870107 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 25 18:42:52.881827 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 25 18:42:52.893720 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 25 18:42:52.900744 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 25 18:42:52.912356 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 25 18:42:52.916932 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 18:42:52.921652 systemd[1]: Stopped target timers.target - Timer Units.
Jun 25 18:42:52.923597 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 25 18:42:52.923729 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 25 18:42:52.929727 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 25 18:42:52.933435 systemd[1]: Stopped target basic.target - Basic System.
Jun 25 18:42:52.937006 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 25 18:42:52.940779 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 25 18:42:52.945056 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 25 18:42:52.949287 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 25 18:42:52.953237 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 25 18:42:52.957615 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 25 18:42:52.961843 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 25 18:42:52.965678 systemd[1]: Stopped target swap.target - Swaps.
Jun 25 18:42:52.969013 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 25 18:42:52.969177 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 25 18:42:52.973028 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:42:52.976449 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:42:52.984756 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 25 18:42:52.986610 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:42:52.991764 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 25 18:42:52.991921 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 25 18:42:52.996147 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 25 18:42:52.996302 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 25 18:42:53.004703 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 25 18:42:53.004860 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 25 18:42:53.008505 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jun 25 18:42:53.008665 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 25 18:42:53.023849 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 25 18:42:53.025647 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 25 18:42:53.027453 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 18:42:53.033738 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 25 18:42:53.036048 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 25 18:42:53.040048 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 18:42:53.044776 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 25 18:42:53.044928 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 25 18:42:53.055369 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 25 18:42:53.055481 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 25 18:42:53.063548 ignition[1147]: INFO : Ignition 2.19.0
Jun 25 18:42:53.063548 ignition[1147]: INFO : Stage: umount
Jun 25 18:42:53.063548 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 18:42:53.063548 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 25 18:42:53.063548 ignition[1147]: INFO : umount: umount passed
Jun 25 18:42:53.063548 ignition[1147]: INFO : Ignition finished successfully
Jun 25 18:42:53.064010 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 25 18:42:53.064107 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 25 18:42:53.071467 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 25 18:42:53.071547 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 25 18:42:53.078725 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 25 18:42:53.078795 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 25 18:42:53.086162 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 25 18:42:53.086235 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 25 18:42:53.091650 systemd[1]: Stopped target network.target - Network.
Jun 25 18:42:53.095143 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 25 18:42:53.095219 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 25 18:42:53.101049 systemd[1]: Stopped target paths.target - Path Units.
Jun 25 18:42:53.104383 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 25 18:42:53.108592 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:42:53.117203 systemd[1]: Stopped target slices.target - Slice Units.
Jun 25 18:42:53.119033 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 25 18:42:53.122384 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 25 18:42:53.122443 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 25 18:42:53.125951 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 25 18:42:53.126001 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 25 18:42:53.129425 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 25 18:42:53.129492 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 25 18:42:53.133171 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 25 18:42:53.133231 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 25 18:42:53.139445 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 25 18:42:53.139657 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 25 18:42:53.141297 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 25 18:42:53.141810 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 25 18:42:53.141889 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 25 18:42:53.151373 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 25 18:42:53.151467 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 25 18:42:53.164621 systemd-networkd[899]: eth0: DHCPv6 lease lost
Jun 25 18:42:53.166798 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 25 18:42:53.166914 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 25 18:42:53.171267 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 25 18:42:53.171347 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 18:42:53.184652 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 25 18:42:53.187956 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 25 18:42:53.188015 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 25 18:42:53.192562 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 18:42:53.195191 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 25 18:42:53.195312 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 25 18:42:53.209142 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 25 18:42:53.210465 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 25 18:42:53.210915 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 25 18:42:53.210951 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 25 18:42:53.211186 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 25 18:42:53.211218 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 18:42:53.223334 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 25 18:42:53.223500 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 18:42:53.230887 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 25 18:42:53.230962 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 25 18:42:53.234131 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 25 18:42:53.234172 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 18:42:53.237433 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 25 18:42:53.237481 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 25 18:42:53.251803 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 25 18:42:53.251868 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 25 18:42:53.257643 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 25 18:42:53.259487 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:42:53.268726 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 25 18:42:53.270789 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 25 18:42:53.270854 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 18:42:53.275322 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 18:42:53.275380 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:42:53.283030 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 25 18:42:53.283236 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 25 18:42:53.297579 kernel: hv_netvsc 0022489f-834d-0022-489f-834d0022489f eth0: Data path switched from VF: enP43358s1
Jun 25 18:42:53.310082 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 25 18:42:53.310202 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 25 18:42:53.314049 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 25 18:42:53.326728 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 25 18:42:53.415407 systemd[1]: Switching root.
Jun 25 18:42:53.441721 systemd-journald[176]: Journal stopped
Jun 25 18:42:44.019713 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024
Jun 25 18:42:44.019751 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 25 18:42:44.019766 kernel: BIOS-provided physical RAM map:
Jun 25 18:42:44.019777 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jun 25 18:42:44.019788 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jun 25 18:42:44.019799 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jun 25 18:42:44.019812 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jun 25 18:42:44.019826 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jun 25 18:42:44.019838 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jun 25 18:42:44.019849 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jun 25 18:42:44.019861 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jun 25 18:42:44.019872 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jun 25 18:42:44.019883 kernel: printk: bootconsole [earlyser0] enabled
Jun 25 18:42:44.019895 kernel: NX (Execute Disable) protection: active
Jun 25 18:42:44.019912 kernel: APIC: Static calls initialized
Jun 25 18:42:44.019925 kernel: efi: EFI v2.7 by Microsoft
Jun 25 18:42:44.019938 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Jun 25 18:42:44.019950 kernel: SMBIOS 3.1.0 present.
Jun 25 18:42:44.019963 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jun 25 18:42:44.019976 kernel: Hypervisor detected: Microsoft Hyper-V
Jun 25 18:42:44.019989 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jun 25 18:42:44.020001 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jun 25 18:42:44.020014 kernel: Hyper-V: Nested features: 0x1e0101
Jun 25 18:42:44.020037 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jun 25 18:42:44.020051 kernel: Hyper-V: Using hypercall for remote TLB flush
Jun 25 18:42:44.020061 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jun 25 18:42:44.020073 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jun 25 18:42:44.020086 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jun 25 18:42:44.020100 kernel: tsc: Detected 2593.907 MHz processor
Jun 25 18:42:44.020112 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 25 18:42:44.020126 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 25 18:42:44.020139 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jun 25 18:42:44.020152 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jun 25 18:42:44.020168 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 25 18:42:44.020181 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jun 25 18:42:44.020193 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jun 25 18:42:44.020206 kernel: Using GB pages for direct mapping
Jun 25 18:42:44.020219 kernel: Secure boot disabled
Jun 25 18:42:44.020232 kernel: ACPI: Early table checksum verification disabled
Jun 25 18:42:44.020245 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jun 25 18:42:44.020264 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 25 18:42:44.020280 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 25 18:42:44.020294 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jun 25 18:42:44.020307 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jun 25 18:42:44.020321 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 25 18:42:44.020335 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 25 18:42:44.020349 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 25 18:42:44.020365 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 25 18:42:44.020379 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 25 18:42:44.020393 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 25 18:42:44.020411 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 25 18:42:44.020425 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jun 25 18:42:44.020438 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jun 25 18:42:44.020452 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jun 25 18:42:44.020466 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jun 25 18:42:44.020486 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jun 25 18:42:44.020500 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jun 25 18:42:44.020514 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jun 25 18:42:44.020527 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jun 25 18:42:44.020540 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jun 25 18:42:44.020554 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jun 25 18:42:44.020567 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jun 25 18:42:44.020581 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jun 25 18:42:44.020595 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jun 25 18:42:44.020611 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jun 25 18:42:44.020625 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jun 25 18:42:44.020639 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jun 25 18:42:44.020654 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jun 25 18:42:44.020668 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jun 25 18:42:44.020681 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jun 25 18:42:44.020696 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jun 25 18:42:44.020710 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jun 25 18:42:44.020724 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jun 25 18:42:44.020741 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jun 25 18:42:44.020754 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jun 25 18:42:44.020768 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jun 25 18:42:44.020782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jun 25 18:42:44.020796 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jun 25 18:42:44.020810 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jun 25 18:42:44.020824 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jun 25 18:42:44.020837 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jun 25 18:42:44.020851 kernel: Zone ranges:
Jun 25 18:42:44.020868 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 25 18:42:44.020881 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jun 25 18:42:44.020894 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jun 25 18:42:44.020907 kernel: Movable zone start for each node
Jun 25 18:42:44.020921 kernel: Early memory node ranges
Jun 25 18:42:44.020935 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jun 25 18:42:44.020949 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jun 25 18:42:44.020962 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jun 25 18:42:44.020976 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jun 25 18:42:44.020993 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jun 25 18:42:44.021007 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 25 18:42:44.021020 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jun 25 18:42:44.024444 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jun 25 18:42:44.024455 kernel: ACPI: PM-Timer IO Port: 0x408
Jun 25 18:42:44.024466 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jun 25 18:42:44.024474 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jun 25 18:42:44.024485 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 25 18:42:44.024493 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 25 18:42:44.024507 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jun 25 18:42:44.024515 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jun 25 18:42:44.024525 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jun 25 18:42:44.024532 kernel: Booting paravirtualized kernel on Hyper-V
Jun 25 18:42:44.024543 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 25 18:42:44.024551 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 25 18:42:44.024561 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Jun 25 18:42:44.024569 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Jun 25 18:42:44.024578 kernel: pcpu-alloc: [0] 0 1
Jun 25 18:42:44.024588 kernel: Hyper-V: PV spinlocks enabled
Jun 25 18:42:44.024599 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 25 18:42:44.024608 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 25 18:42:44.024619 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 25 18:42:44.024626 kernel: random: crng init done
Jun 25 18:42:44.024636 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jun 25 18:42:44.024644 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 25 18:42:44.024655 kernel: Fallback order for Node 0: 0
Jun 25 18:42:44.024665 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jun 25 18:42:44.024682 kernel: Policy zone: Normal
Jun 25 18:42:44.024693 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 25 18:42:44.024704 kernel: software IO TLB: area num 2.
Jun 25 18:42:44.024715 kernel: Memory: 8070932K/8387460K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 316268K reserved, 0K cma-reserved)
Jun 25 18:42:44.024726 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 25 18:42:44.024734 kernel: ftrace: allocating 37650 entries in 148 pages
Jun 25 18:42:44.024745 kernel: ftrace: allocated 148 pages with 3 groups
Jun 25 18:42:44.024753 kernel: Dynamic Preempt: voluntary
Jun 25 18:42:44.024764 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 25 18:42:44.024773 kernel: rcu: RCU event tracing is enabled.
Jun 25 18:42:44.024786 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 25 18:42:44.024795 kernel: Trampoline variant of Tasks RCU enabled.
Jun 25 18:42:44.024805 kernel: Rude variant of Tasks RCU enabled.
Jun 25 18:42:44.024813 kernel: Tracing variant of Tasks RCU enabled.
Jun 25 18:42:44.024824 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 25 18:42:44.024836 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 25 18:42:44.024845 kernel: Using NULL legacy PIC
Jun 25 18:42:44.024855 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jun 25 18:42:44.024864 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 25 18:42:44.024875 kernel: Console: colour dummy device 80x25
Jun 25 18:42:44.024883 kernel: printk: console [tty1] enabled
Jun 25 18:42:44.024894 kernel: printk: console [ttyS0] enabled
Jun 25 18:42:44.024902 kernel: printk: bootconsole [earlyser0] disabled
Jun 25 18:42:44.024913 kernel: ACPI: Core revision 20230628
Jun 25 18:42:44.024921 kernel: Failed to register legacy timer interrupt
Jun 25 18:42:44.024934 kernel: APIC: Switch to symmetric I/O mode setup
Jun 25 18:42:44.024943 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jun 25 18:42:44.024953 kernel: Hyper-V: Using IPI hypercalls
Jun 25 18:42:44.024963 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jun 25 18:42:44.024973 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jun 25 18:42:44.024982 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jun 25 18:42:44.024992 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jun 25 18:42:44.025001 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jun 25 18:42:44.025011 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jun 25 18:42:44.025022 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Jun 25 18:42:44.025044 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jun 25 18:42:44.025052 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jun 25 18:42:44.025063 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 25 18:42:44.025070 kernel: Spectre V2 : Mitigation: Retpolines
Jun 25 18:42:44.025081 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jun 25 18:42:44.025089 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jun 25 18:42:44.025100 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jun 25 18:42:44.025108 kernel: RETBleed: Vulnerable
Jun 25 18:42:44.025121 kernel: Speculative Store Bypass: Vulnerable
Jun 25 18:42:44.025129 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jun 25 18:42:44.025140 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jun 25 18:42:44.025148 kernel: GDS: Unknown: Dependent on hypervisor status
Jun 25 18:42:44.025159 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 25 18:42:44.025167 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 25 18:42:44.025177 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 25 18:42:44.025185 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jun 25 18:42:44.025196 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jun 25 18:42:44.025204 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jun 25 18:42:44.025214 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 25 18:42:44.025226 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jun 25 18:42:44.025235 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jun 25 18:42:44.025245 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jun 25 18:42:44.025253 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jun 25 18:42:44.025264 kernel: Freeing SMP alternatives memory: 32K
Jun 25 18:42:44.025272 kernel: pid_max: default: 32768 minimum: 301
Jun 25 18:42:44.025283 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jun 25 18:42:44.025291 kernel: SELinux: Initializing.
Jun 25 18:42:44.025301 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 25 18:42:44.025310 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 25 18:42:44.025320 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jun 25 18:42:44.025328 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:42:44.025341 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:42:44.025352 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:42:44.025360 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jun 25 18:42:44.025371 kernel: signal: max sigframe size: 3632
Jun 25 18:42:44.025379 kernel: rcu: Hierarchical SRCU implementation.
Jun 25 18:42:44.025390 kernel: rcu: Max phase no-delay instances is 400.
Jun 25 18:42:44.025398 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jun 25 18:42:44.025409 kernel: smp: Bringing up secondary CPUs ...
Jun 25 18:42:44.025417 kernel: smpboot: x86: Booting SMP configuration:
Jun 25 18:42:44.025430 kernel: .... node #0, CPUs: #1
Jun 25 18:42:44.025439 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jun 25 18:42:44.025450 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jun 25 18:42:44.025460 kernel: smp: Brought up 1 node, 2 CPUs
Jun 25 18:42:44.025473 kernel: smpboot: Max logical packages: 1
Jun 25 18:42:44.025494 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jun 25 18:42:44.025502 kernel: devtmpfs: initialized
Jun 25 18:42:44.025510 kernel: x86/mm: Memory block size: 128MB
Jun 25 18:42:44.025521 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jun 25 18:42:44.025529 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 25 18:42:44.025544 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 25 18:42:44.025558 kernel: pinctrl core: initialized pinctrl subsystem
Jun 25 18:42:44.025566 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 25 18:42:44.025574 kernel: audit: initializing netlink subsys (disabled)
Jun 25 18:42:44.025592 kernel: audit: type=2000 audit(1719340962.027:1): state=initialized audit_enabled=0 res=1
Jun 25 18:42:44.025610 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 25 18:42:44.025619 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 25 18:42:44.025631 kernel: cpuidle: using governor menu
Jun 25 18:42:44.025650 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 25 18:42:44.025665 kernel: dca service started, version 1.12.1
Jun 25 18:42:44.025674 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jun 25 18:42:44.025682 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 25 18:42:44.025702 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 25 18:42:44.025719 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 25 18:42:44.025734 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 25 18:42:44.025746 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 25 18:42:44.025757 kernel: ACPI: Added _OSI(Module Device)
Jun 25 18:42:44.025771 kernel: ACPI: Added _OSI(Processor Device)
Jun 25 18:42:44.025787 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jun 25 18:42:44.025803 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 25 18:42:44.025812 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 25 18:42:44.025820 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jun 25 18:42:44.025837 kernel: ACPI: Interpreter enabled
Jun 25 18:42:44.025848 kernel: ACPI: PM: (supports S0 S5)
Jun 25 18:42:44.025856 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 25 18:42:44.025878 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 25 18:42:44.025893 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jun 25 18:42:44.025901 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jun 25 18:42:44.025912 kernel: iommu: Default domain type: Translated
Jun 25 18:42:44.025935 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 25 18:42:44.025951 kernel: efivars: Registered efivars operations
Jun 25 18:42:44.025959 kernel: PCI: Using ACPI for IRQ routing
Jun 25 18:42:44.025967 kernel: PCI: System does not support PCI
Jun 25 18:42:44.025984 kernel: vgaarb: loaded
Jun 25 18:42:44.026006 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jun 25 18:42:44.026018 kernel: VFS: Disk quotas dquot_6.6.0
Jun 25 18:42:44.026047 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 25 18:42:44.026062 kernel: pnp: PnP ACPI init
Jun 25 18:42:44.026073 kernel: pnp: PnP ACPI: found 3 devices
Jun 25 18:42:44.026081 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 25 18:42:44.026097 kernel: NET: Registered PF_INET protocol family
Jun 25 18:42:44.026116 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jun 25 18:42:44.026126 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jun 25 18:42:44.026138 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 25 18:42:44.026159 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 25 18:42:44.026176 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jun 25 18:42:44.026187 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jun 25 18:42:44.026195 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 25 18:42:44.026210 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 25 18:42:44.026227 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 25 18:42:44.026238 kernel: NET: Registered PF_XDP protocol family
Jun 25 18:42:44.026246 kernel: PCI: CLS 0 bytes, default 64
Jun 25 18:42:44.026266 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jun 25 18:42:44.026279 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jun 25 18:42:44.026287 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jun 25 18:42:44.026300 kernel: Initialise system trusted keyrings
Jun 25 18:42:44.026317 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jun 25 18:42:44.026325 kernel: Key type asymmetric registered
Jun 25 18:42:44.026337 kernel: Asymmetric key parser 'x509' registered
Jun 25 18:42:44.026354 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jun 25 18:42:44.026366 kernel: io scheduler mq-deadline registered
Jun 25 18:42:44.026377 kernel: io scheduler kyber registered
Jun 25 18:42:44.026396 kernel: io scheduler bfq registered
Jun 25 18:42:44.026410 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 25 18:42:44.026419 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 25 18:42:44.026427 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 25 18:42:44.026445 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jun 25 18:42:44.026460 kernel: i8042: PNP: No PS/2 controller found.
Jun 25 18:42:44.026640 kernel: rtc_cmos 00:02: registered as rtc0
Jun 25 18:42:44.026771 kernel: rtc_cmos 00:02: setting system clock to 2024-06-25T18:42:43 UTC (1719340963)
Jun 25 18:42:44.026885 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jun 25 18:42:44.026905 kernel: intel_pstate: CPU model not supported
Jun 25 18:42:44.026920 kernel: efifb: probing for efifb
Jun 25 18:42:44.026935 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jun 25 18:42:44.026950 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jun 25 18:42:44.026965 kernel: efifb: scrolling: redraw
Jun 25 18:42:44.026980 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jun 25 18:42:44.026999 kernel: Console: switching to colour frame buffer device 128x48
Jun 25 18:42:44.027014 kernel: fb0: EFI VGA frame buffer device
Jun 25 18:42:44.027041 kernel: pstore: Using crash dump compression: deflate
Jun 25 18:42:44.027056 kernel: pstore: Registered efi_pstore as persistent store backend
Jun 25 18:42:44.027071 kernel: NET: Registered PF_INET6 protocol family
Jun 25 18:42:44.027086 kernel: Segment Routing with IPv6
Jun 25 18:42:44.027101 kernel: In-situ OAM (IOAM) with IPv6
Jun 25 18:42:44.027116 kernel: NET: Registered PF_PACKET protocol family
Jun 25 18:42:44.027131 kernel: Key type dns_resolver registered
Jun 25 18:42:44.027145 kernel: IPI shorthand broadcast: enabled
Jun 25 18:42:44.027164 kernel: sched_clock: Marking stable (742003000, 36792500)->(935155700, -156360200)
Jun 25 18:42:44.027179 kernel: registered taskstats version 1
Jun 25 18:42:44.027194 kernel: Loading compiled-in X.509 certificates
Jun 25 18:42:44.027209 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90'
Jun 25 18:42:44.027224 kernel: Key type .fscrypt registered
Jun 25 18:42:44.027238 kernel: Key type fscrypt-provisioning registered
Jun 25 18:42:44.027253 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 25 18:42:44.027268 kernel: ima: Allocated hash algorithm: sha1
Jun 25 18:42:44.027286 kernel: ima: No architecture policies found
Jun 25 18:42:44.027301 kernel: clk: Disabling unused clocks
Jun 25 18:42:44.027316 kernel: Freeing unused kernel image (initmem) memory: 49384K
Jun 25 18:42:44.027331 kernel: Write protecting the kernel read-only data: 36864k
Jun 25 18:42:44.027346 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K
Jun 25 18:42:44.027361 kernel: Run /init as init process
Jun 25 18:42:44.027375 kernel: with arguments:
Jun 25 18:42:44.027389 kernel: /init
Jun 25 18:42:44.027404 kernel: with environment:
Jun 25 18:42:44.027418 kernel: HOME=/
Jun 25 18:42:44.027436 kernel: TERM=linux
Jun 25 18:42:44.027451 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 25 18:42:44.027468 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 25 18:42:44.027486 systemd[1]: Detected virtualization microsoft.
Jun 25 18:42:44.027502 systemd[1]: Detected architecture x86-64.
Jun 25 18:42:44.027517 systemd[1]: Running in initrd.
Jun 25 18:42:44.027533 systemd[1]: No hostname configured, using default hostname.
Jun 25 18:42:44.027551 systemd[1]: Hostname set to .
Jun 25 18:42:44.027567 systemd[1]: Initializing machine ID from random generator.
Jun 25 18:42:44.027583 systemd[1]: Queued start job for default target initrd.target.
Jun 25 18:42:44.027600 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:42:44.027615 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:42:44.027632 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 25 18:42:44.027648 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 25 18:42:44.027663 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 25 18:42:44.027683 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 25 18:42:44.027701 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 25 18:42:44.027717 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 25 18:42:44.027733 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:42:44.027749 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:42:44.027764 systemd[1]: Reached target paths.target - Path Units.
Jun 25 18:42:44.027780 systemd[1]: Reached target slices.target - Slice Units.
Jun 25 18:42:44.027799 systemd[1]: Reached target swap.target - Swaps.
Jun 25 18:42:44.027815 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 18:42:44.027831 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 25 18:42:44.027847 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 25 18:42:44.027863 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 25 18:42:44.027879 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jun 25 18:42:44.027895 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 18:42:44.027911 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 25 18:42:44.027930 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 18:42:44.027946 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 18:42:44.027962 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 25 18:42:44.027980 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 25 18:42:44.027996 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 25 18:42:44.028012 systemd[1]: Starting systemd-fsck-usr.service...
Jun 25 18:42:44.028055 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 25 18:42:44.028072 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 25 18:42:44.028088 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:42:44.028132 systemd-journald[176]: Collecting audit messages is disabled.
Jun 25 18:42:44.028168 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 25 18:42:44.028184 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 18:42:44.028200 systemd-journald[176]: Journal started
Jun 25 18:42:44.028241 systemd-journald[176]: Runtime Journal (/run/log/journal/14cc18b6c7e64dc4aa4ec7ef0b456cd0) is 8.0M, max 158.8M, 150.8M free.
Jun 25 18:42:44.035069 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 25 18:42:44.041091 systemd[1]: Finished systemd-fsck-usr.service.
Jun 25 18:42:44.044310 systemd-modules-load[177]: Inserted module 'overlay'
Jun 25 18:42:44.046520 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:42:44.058257 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 18:42:44.070172 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 25 18:42:44.080209 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 25 18:42:44.102169 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 25 18:42:44.109295 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 25 18:42:44.109258 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:42:44.115699 kernel: Bridge firewalling registered
Jun 25 18:42:44.114139 systemd-modules-load[177]: Inserted module 'br_netfilter'
Jun 25 18:42:44.121951 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 25 18:42:44.130206 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 25 18:42:44.138200 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 25 18:42:44.143521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 25 18:42:44.148749 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 18:42:44.155122 dracut-cmdline[202]: dracut-dracut-053
Jun 25 18:42:44.155122 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 25 18:42:44.183128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 18:42:44.187170 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 25 18:42:44.202257 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 25 18:42:44.247819 systemd-resolved[248]: Positive Trust Anchors:
Jun 25 18:42:44.247843 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 25 18:42:44.247891 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 25 18:42:44.252927 systemd-resolved[248]: Defaulting to hostname 'linux'.
Jun 25 18:42:44.276100 kernel: SCSI subsystem initialized
Jun 25 18:42:44.255371 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 25 18:42:44.259072 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 25 18:42:44.288045 kernel: Loading iSCSI transport class v2.0-870.
Jun 25 18:42:44.301051 kernel: iscsi: registered transport (tcp)
Jun 25 18:42:44.325473 kernel: iscsi: registered transport (qla4xxx)
Jun 25 18:42:44.325576 kernel: QLogic iSCSI HBA Driver
Jun 25 18:42:44.361529 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 25 18:42:44.369199 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 25 18:42:44.397558 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 25 18:42:44.397663 kernel: device-mapper: uevent: version 1.0.3
Jun 25 18:42:44.400300 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jun 25 18:42:44.445065 kernel: raid6: avx512x4 gen() 18256 MB/s
Jun 25 18:42:44.463046 kernel: raid6: avx512x2 gen() 18257 MB/s
Jun 25 18:42:44.481061 kernel: raid6: avx512x1 gen() 18257 MB/s
Jun 25 18:42:44.500043 kernel: raid6: avx2x4 gen() 18238 MB/s
Jun 25 18:42:44.518040 kernel: raid6: avx2x2 gen() 18203 MB/s
Jun 25 18:42:44.537078 kernel: raid6: avx2x1 gen() 13949 MB/s
Jun 25 18:42:44.537139 kernel: raid6: using algorithm avx512x2 gen() 18257 MB/s
Jun 25 18:42:44.557621 kernel: raid6: .... xor() 29522 MB/s, rmw enabled
Jun 25 18:42:44.557670 kernel: raid6: using avx512x2 recovery algorithm
Jun 25 18:42:44.583053 kernel: xor: automatically using best checksumming function avx
Jun 25 18:42:44.753056 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 25 18:42:44.762569 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 25 18:42:44.770335 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 18:42:44.787081 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Jun 25 18:42:44.792981 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 18:42:44.802176 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 25 18:42:44.818429 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Jun 25 18:42:44.846072 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 25 18:42:44.854255 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 25 18:42:44.894408 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 18:42:44.907388 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 25 18:42:44.939797 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 25 18:42:44.945810 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 25 18:42:44.949185 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 18:42:44.954097 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 25 18:42:44.968182 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 25 18:42:44.983041 kernel: cryptd: max_cpu_qlen set to 1000
Jun 25 18:42:44.992537 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 25 18:42:45.003396 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 25 18:42:45.005789 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:42:45.010893 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 18:42:45.015845 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 18:42:45.016110 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:42:45.021420 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:42:45.031891 kernel: AVX2 version of gcm_enc/dec engaged.
Jun 25 18:42:45.031932 kernel: AES CTR mode by8 optimization enabled
Jun 25 18:42:45.035581 kernel: hv_vmbus: Vmbus version:5.2
Jun 25 18:42:45.038386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:42:45.041933 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 18:42:45.042557 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:42:45.064451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:42:45.082086 kernel: hv_vmbus: registering driver hyperv_keyboard
Jun 25 18:42:45.090840 kernel: hv_vmbus: registering driver hv_storvsc
Jun 25 18:42:45.098833 kernel: pps_core: LinuxPPS API ver. 1 registered
Jun 25 18:42:45.098891 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jun 25 18:42:45.098911 kernel: scsi host0: storvsc_host_t
Jun 25 18:42:45.109980 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jun 25 18:42:45.110067 kernel: PTP clock support registered
Jun 25 18:42:45.110087 kernel: scsi host1: storvsc_host_t
Jun 25 18:42:45.110804 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:42:45.124361 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jun 25 18:42:45.124415 kernel: hid: raw HID events driver (C) Jiri Kosina
Jun 25 18:42:45.133017 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jun 25 18:42:45.132267 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 18:42:45.142982 kernel: hv_utils: Registering HyperV Utility Driver
Jun 25 18:42:45.143034 kernel: hv_vmbus: registering driver hv_netvsc
Jun 25 18:42:45.143050 kernel: hv_vmbus: registering driver hv_utils
Jun 25 18:42:45.150433 kernel: hv_utils: Heartbeat IC version 3.0
Jun 25 18:42:45.150498 kernel: hv_utils: Shutdown IC version 3.2
Jun 25 18:42:45.153039 kernel: hv_utils: TimeSync IC version 4.0
Jun 25 18:42:45.871355 systemd-resolved[248]: Clock change detected. Flushing caches.
Jun 25 18:42:45.880582 kernel: hv_vmbus: registering driver hid_hyperv
Jun 25 18:42:45.884588 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jun 25 18:42:45.888752 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jun 25 18:42:45.903873 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:42:45.923612 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jun 25 18:42:45.925880 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jun 25 18:42:45.925915 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jun 25 18:42:45.933474 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jun 25 18:42:45.946688 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jun 25 18:42:45.946905 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jun 25 18:42:45.947068 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jun 25 18:42:45.947252 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jun 25 18:42:45.947415 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 25 18:42:45.947435 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jun 25 18:42:46.116238 kernel: hv_netvsc 0022489f-834d-0022-489f-834d0022489f eth0: VF slot 1 added
Jun 25 18:42:46.125424 kernel: hv_vmbus: registering driver hv_pci
Jun 25 18:42:46.125484 kernel: hv_pci a10a8ba1-a95e-4464-a951-95c0d8607eb3: PCI VMBus probing: Using version 0x10004
Jun 25 18:42:46.165688 kernel: hv_pci a10a8ba1-a95e-4464-a951-95c0d8607eb3: PCI host bridge to bus a95e:00
Jun 25 18:42:46.165888 kernel: pci_bus a95e:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jun 25 18:42:46.166067 kernel: pci_bus a95e:00: No busn resource found for root bus, will use [bus 00-ff]
Jun 25 18:42:46.166214 kernel: pci a95e:00:02.0: [15b3:1016] type 00 class 0x020000
Jun 25 18:42:46.166418 kernel: pci a95e:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jun 25 18:42:46.166604 kernel: pci a95e:00:02.0: enabling Extended Tags
Jun 25 18:42:46.166783 kernel: pci a95e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a95e:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jun 25 18:42:46.166954 kernel: pci_bus a95e:00: busn_res: [bus 00-ff] end is updated to 00
Jun 25 18:42:46.167103 kernel: pci a95e:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jun 25 18:42:46.356812 kernel: mlx5_core a95e:00:02.0: enabling device (0000 -> 0002)
Jun 25 18:42:46.588675 kernel: mlx5_core a95e:00:02.0: firmware version: 14.30.1284
Jun 25 18:42:46.588901 kernel: hv_netvsc 0022489f-834d-0022-489f-834d0022489f eth0: VF registering: eth1
Jun 25 18:42:46.589066 kernel: mlx5_core a95e:00:02.0 eth1: joined to eth0
Jun 25 18:42:46.589254 kernel: mlx5_core a95e:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jun 25 18:42:46.511161 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jun 25 18:42:46.600589 kernel: mlx5_core a95e:00:02.0 enP43358s1: renamed from eth1
Jun 25 18:42:46.619672 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (450)
Jun 25 18:42:46.622481 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jun 25 18:42:46.641559 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jun 25 18:42:46.682603 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (446)
Jun 25 18:42:46.697327 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jun 25 18:42:46.699887 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jun 25 18:42:46.715794 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 25 18:42:46.728590 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 25 18:42:46.740582 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 25 18:42:46.785730 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 25 18:42:47.752640 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 25 18:42:47.752716 disk-uuid[602]: The operation has completed successfully.
Jun 25 18:42:47.826613 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 25 18:42:47.826737 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 25 18:42:47.854731 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 25 18:42:47.871416 sh[715]: Success
Jun 25 18:42:47.901598 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jun 25 18:42:48.094833 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 25 18:42:48.109761 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 25 18:42:48.112743 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 25 18:42:48.137585 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0
Jun 25 18:42:48.137641 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 25 18:42:48.141546 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jun 25 18:42:48.143843 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jun 25 18:42:48.145858 kernel: BTRFS info (device dm-0): using free space tree
Jun 25 18:42:48.486369 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 25 18:42:48.490895 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 25 18:42:48.499731 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 25 18:42:48.506805 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 25 18:42:48.525429 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 18:42:48.525505 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 25 18:42:48.528431 kernel: BTRFS info (device sda6): using free space tree
Jun 25 18:42:48.547592 kernel: BTRFS info (device sda6): auto enabling async discard
Jun 25 18:42:48.557687 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jun 25 18:42:48.562021 kernel: BTRFS info (device sda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 18:42:48.569352 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 25 18:42:48.581998 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 25 18:42:48.601708 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 25 18:42:48.607877 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 25 18:42:48.629828 systemd-networkd[899]: lo: Link UP
Jun 25 18:42:48.629838 systemd-networkd[899]: lo: Gained carrier
Jun 25 18:42:48.634635 systemd-networkd[899]: Enumeration completed
Jun 25 18:42:48.634972 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 25 18:42:48.637071 systemd[1]: Reached target network.target - Network.
Jun 25 18:42:48.639368 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:42:48.639372 systemd-networkd[899]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 25 18:42:48.701594 kernel: mlx5_core a95e:00:02.0 enP43358s1: Link up
Jun 25 18:42:48.732600 kernel: hv_netvsc 0022489f-834d-0022-489f-834d0022489f eth0: Data path switched to VF: enP43358s1
Jun 25 18:42:48.733631 systemd-networkd[899]: enP43358s1: Link UP
Jun 25 18:42:48.733812 systemd-networkd[899]: eth0: Link UP
Jun 25 18:42:48.734070 systemd-networkd[899]: eth0: Gained carrier
Jun 25 18:42:48.734085 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:42:48.743446 systemd-networkd[899]: enP43358s1: Gained carrier
Jun 25 18:42:48.783624 systemd-networkd[899]: eth0: DHCPv4 address 10.200.8.42/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jun 25 18:42:49.460682 ignition[868]: Ignition 2.19.0
Jun 25 18:42:49.460694 ignition[868]: Stage: fetch-offline
Jun 25 18:42:49.460745 ignition[868]: no configs at "/usr/lib/ignition/base.d"
Jun 25 18:42:49.460756 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 25 18:42:49.465169 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 25 18:42:49.460882 ignition[868]: parsed url from cmdline: ""
Jun 25 18:42:49.460887 ignition[868]: no config URL provided
Jun 25 18:42:49.460895 ignition[868]: reading system config file "/usr/lib/ignition/user.ign"
Jun 25 18:42:49.460906 ignition[868]: no config at "/usr/lib/ignition/user.ign"
Jun 25 18:42:49.460913 ignition[868]: failed to fetch config: resource requires networking
Jun 25 18:42:49.463490 ignition[868]: Ignition finished successfully
Jun 25 18:42:49.481861 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jun 25 18:42:49.495077 ignition[908]: Ignition 2.19.0
Jun 25 18:42:49.495088 ignition[908]: Stage: fetch
Jun 25 18:42:49.495305 ignition[908]: no configs at "/usr/lib/ignition/base.d"
Jun 25 18:42:49.495316 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 25 18:42:49.495410 ignition[908]: parsed url from cmdline: ""
Jun 25 18:42:49.495413 ignition[908]: no config URL provided
Jun 25 18:42:49.495418 ignition[908]: reading system config file "/usr/lib/ignition/user.ign"
Jun 25 18:42:49.495426 ignition[908]: no config at "/usr/lib/ignition/user.ign"
Jun 25 18:42:49.495447 ignition[908]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jun 25 18:42:49.606870 ignition[908]: GET result: OK
Jun 25 18:42:49.607034 ignition[908]: config has been read from IMDS userdata
Jun 25 18:42:49.607079 ignition[908]: parsing config with SHA512: 3d738fb7a83a6746bd56d92f796f8d9ee5af1ae79662a9dba623759b5c7ec4ed6286ae2aa4084cce4360b5ca9215772d78dfe5c74064d5a8f2d628b13050dfd7
Jun 25 18:42:49.615733 unknown[908]: fetched base config from "system"
Jun 25 18:42:49.615756 unknown[908]: fetched base config from "system"
Jun 25 18:42:49.615775 unknown[908]: fetched user config from "azure"
Jun 25 18:42:49.618025 ignition[908]: fetch: fetch complete
Jun 25 18:42:49.619716 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jun 25 18:42:49.618032 ignition[908]: fetch: fetch passed
Jun 25 18:42:49.618090 ignition[908]: Ignition finished successfully
Jun 25 18:42:49.635743 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 25 18:42:49.651787 ignition[915]: Ignition 2.19.0
Jun 25 18:42:49.651797 ignition[915]: Stage: kargs
Jun 25 18:42:49.652005 ignition[915]: no configs at "/usr/lib/ignition/base.d"
Jun 25 18:42:49.652018 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 25 18:42:49.652980 ignition[915]: kargs: kargs passed
Jun 25 18:42:49.653027 ignition[915]: Ignition finished successfully
Jun 25 18:42:49.661022 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 25 18:42:49.669722 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 25 18:42:49.683103 ignition[922]: Ignition 2.19.0
Jun 25 18:42:49.683114 ignition[922]: Stage: disks
Jun 25 18:42:49.683340 ignition[922]: no configs at "/usr/lib/ignition/base.d"
Jun 25 18:42:49.685203 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 25 18:42:49.683353 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 25 18:42:49.688164 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 25 18:42:49.684311 ignition[922]: disks: disks passed
Jun 25 18:42:49.691390 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 25 18:42:49.684355 ignition[922]: Ignition finished successfully
Jun 25 18:42:49.695791 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 25 18:42:49.699674 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 25 18:42:49.701671 systemd[1]: Reached target basic.target - Basic System.
Jun 25 18:42:49.724782 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 25 18:42:49.781254 systemd-fsck[931]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jun 25 18:42:49.785802 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 25 18:42:49.799697 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 25 18:42:49.849717 systemd-networkd[899]: enP43358s1: Gained IPv6LL
Jun 25 18:42:49.901584 kernel: EXT4-fs (sda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none.
Jun 25 18:42:49.901938 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 25 18:42:49.904247 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 25 18:42:49.938671 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 25 18:42:49.942333 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 25 18:42:49.950761 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jun 25 18:42:49.955763 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 25 18:42:49.965630 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (942)
Jun 25 18:42:49.955796 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 25 18:42:49.970594 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 18:42:49.970625 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 25 18:42:49.970709 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 25 18:42:49.977962 kernel: BTRFS info (device sda6): using free space tree
Jun 25 18:42:49.982959 kernel: BTRFS info (device sda6): auto enabling async discard
Jun 25 18:42:49.984160 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 25 18:42:49.988028 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 25 18:42:50.297738 systemd-networkd[899]: eth0: Gained IPv6LL
Jun 25 18:42:50.562603 coreos-metadata[944]: Jun 25 18:42:50.562 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jun 25 18:42:50.565872 coreos-metadata[944]: Jun 25 18:42:50.565 INFO Fetch successful
Jun 25 18:42:50.565872 coreos-metadata[944]: Jun 25 18:42:50.565 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jun 25 18:42:50.580052 coreos-metadata[944]: Jun 25 18:42:50.580 INFO Fetch successful
Jun 25 18:42:50.594628 coreos-metadata[944]: Jun 25 18:42:50.594 INFO wrote hostname ci-4012.0.0-a-bcd7e269e6 to /sysroot/etc/hostname
Jun 25 18:42:50.596499 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 25 18:42:50.692945 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory
Jun 25 18:42:50.712665 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory
Jun 25 18:42:50.731357 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory
Jun 25 18:42:50.737272 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 25 18:42:51.367386 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 25 18:42:51.375693 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 25 18:42:51.380738 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 25 18:42:51.391345 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 25 18:42:51.396626 kernel: BTRFS info (device sda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 18:42:51.416756 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 25 18:42:51.424066 ignition[1065]: INFO : Ignition 2.19.0
Jun 25 18:42:51.424066 ignition[1065]: INFO : Stage: mount
Jun 25 18:42:51.427330 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 18:42:51.427330 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 25 18:42:51.427330 ignition[1065]: INFO : mount: mount passed
Jun 25 18:42:51.427330 ignition[1065]: INFO : Ignition finished successfully
Jun 25 18:42:51.426160 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 25 18:42:51.437518 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 25 18:42:51.443858 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 25 18:42:51.459584 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1077)
Jun 25 18:42:51.459628 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 18:42:51.463587 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 25 18:42:51.466871 kernel: BTRFS info (device sda6): using free space tree
Jun 25 18:42:51.472594 kernel: BTRFS info (device sda6): auto enabling async discard
Jun 25 18:42:51.474133 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 25 18:42:51.496392 ignition[1093]: INFO : Ignition 2.19.0
Jun 25 18:42:51.496392 ignition[1093]: INFO : Stage: files
Jun 25 18:42:51.499973 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 18:42:51.499973 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 25 18:42:51.499973 ignition[1093]: DEBUG : files: compiled without relabeling support, skipping
Jun 25 18:42:51.521973 ignition[1093]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 25 18:42:51.521973 ignition[1093]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 25 18:42:51.593698 ignition[1093]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 25 18:42:51.596972 ignition[1093]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 25 18:42:51.596972 ignition[1093]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 25 18:42:51.596972 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jun 25 18:42:51.596972 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jun 25 18:42:51.596972 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jun 25 18:42:51.596972 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jun 25 18:42:51.594277 unknown[1093]: wrote ssh authorized keys file for user: core
Jun 25 18:42:51.669754 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jun 25 18:42:51.762370 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 25 18:42:51.766758 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 25 18:42:51.807909 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 25 18:42:51.807909 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 25 18:42:51.807909 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Jun 25 18:42:52.408983 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jun 25 18:42:52.720667 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 25 18:42:52.720667 ignition[1093]: INFO : files: op(c): [started] processing unit "containerd.service"
Jun 25 18:42:52.736973 ignition[1093]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jun 25 18:42:52.741881 ignition[1093]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jun 25 18:42:52.741881 ignition[1093]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jun 25 18:42:52.741881 ignition[1093]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jun 25 18:42:52.751624 ignition[1093]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 25 18:42:52.755372 ignition[1093]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 25 18:42:52.755372 ignition[1093]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jun 25 18:42:52.755372 ignition[1093]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jun 25 18:42:52.764196 ignition[1093]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jun 25 18:42:52.767058 ignition[1093]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 25 18:42:52.770483 ignition[1093]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 25 18:42:52.773900 ignition[1093]: INFO : files: files passed
Jun 25 18:42:52.773900 ignition[1093]: INFO : Ignition finished successfully
Jun 25 18:42:52.775750 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 25 18:42:52.784928 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 25 18:42:52.789826 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 25 18:42:52.792313 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 25 18:42:52.794627 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 25 18:42:52.810637 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 18:42:52.810637 initrd-setup-root-after-ignition[1123]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 18:42:52.817717 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 18:42:52.814458 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 25 18:42:52.825096 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 25 18:42:52.832740 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 25 18:42:52.856958 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 25 18:42:52.857084 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 25 18:42:52.862059 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 25 18:42:52.866343 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 25 18:42:52.870107 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 25 18:42:52.881827 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 25 18:42:52.893720 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 25 18:42:52.900744 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 25 18:42:52.912356 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 25 18:42:52.916932 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 18:42:52.921652 systemd[1]: Stopped target timers.target - Timer Units.
Jun 25 18:42:52.923597 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 25 18:42:52.923729 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 25 18:42:52.929727 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 25 18:42:52.933435 systemd[1]: Stopped target basic.target - Basic System.
Jun 25 18:42:52.937006 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 25 18:42:52.940779 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 25 18:42:52.945056 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 25 18:42:52.949287 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 25 18:42:52.953237 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 25 18:42:52.957615 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 25 18:42:52.961843 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 25 18:42:52.965678 systemd[1]: Stopped target swap.target - Swaps.
Jun 25 18:42:52.969013 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 25 18:42:52.969177 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 25 18:42:52.973028 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:42:52.976449 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:42:52.984756 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 25 18:42:52.986610 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:42:52.991764 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 25 18:42:52.991921 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 25 18:42:52.996147 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 25 18:42:52.996302 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 25 18:42:53.004703 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 25 18:42:53.004860 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 25 18:42:53.008505 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jun 25 18:42:53.008665 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 25 18:42:53.023849 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 25 18:42:53.025647 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 25 18:42:53.027453 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 18:42:53.033738 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 25 18:42:53.036048 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 25 18:42:53.040048 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 18:42:53.044776 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 25 18:42:53.044928 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 25 18:42:53.055369 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 25 18:42:53.055481 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 25 18:42:53.063548 ignition[1147]: INFO : Ignition 2.19.0
Jun 25 18:42:53.063548 ignition[1147]: INFO : Stage: umount
Jun 25 18:42:53.063548 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 18:42:53.063548 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 25 18:42:53.063548 ignition[1147]: INFO : umount: umount passed
Jun 25 18:42:53.063548 ignition[1147]: INFO : Ignition finished successfully
Jun 25 18:42:53.064010 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 25 18:42:53.064107 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 25 18:42:53.071467 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 25 18:42:53.071547 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 25 18:42:53.078725 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 25 18:42:53.078795 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 25 18:42:53.086162 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 25 18:42:53.086235 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 25 18:42:53.091650 systemd[1]: Stopped target network.target - Network.
Jun 25 18:42:53.095143 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 25 18:42:53.095219 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 25 18:42:53.101049 systemd[1]: Stopped target paths.target - Path Units.
Jun 25 18:42:53.104383 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 25 18:42:53.108592 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:42:53.117203 systemd[1]: Stopped target slices.target - Slice Units.
Jun 25 18:42:53.119033 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 25 18:42:53.122384 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 25 18:42:53.122443 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 25 18:42:53.125951 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 25 18:42:53.126001 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 25 18:42:53.129425 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 25 18:42:53.129492 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 25 18:42:53.133171 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 25 18:42:53.133231 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 25 18:42:53.139445 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 25 18:42:53.139657 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 25 18:42:53.141297 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 25 18:42:53.141810 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 25 18:42:53.141889 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 25 18:42:53.151373 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 25 18:42:53.151467 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 25 18:42:53.164621 systemd-networkd[899]: eth0: DHCPv6 lease lost
Jun 25 18:42:53.166798 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 25 18:42:53.166914 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 25 18:42:53.171267 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 25 18:42:53.171347 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 18:42:53.184652 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 25 18:42:53.187956 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 25 18:42:53.188015 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 25 18:42:53.192562 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 18:42:53.195191 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 25 18:42:53.195312 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 25 18:42:53.209142 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 25 18:42:53.210465 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 25 18:42:53.210915 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 25 18:42:53.210951 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 25 18:42:53.211186 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 25 18:42:53.211218 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 18:42:53.223334 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 25 18:42:53.223500 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 18:42:53.230887 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 25 18:42:53.230962 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 25 18:42:53.234131 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 25 18:42:53.234172 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 18:42:53.237433 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 25 18:42:53.237481 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 25 18:42:53.251803 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 25 18:42:53.251868 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 25 18:42:53.257643 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 25 18:42:53.259487 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:42:53.268726 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 25 18:42:53.270789 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 25 18:42:53.270854 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 18:42:53.275322 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 18:42:53.275380 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:42:53.283030 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 25 18:42:53.283236 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 25 18:42:53.297579 kernel: hv_netvsc 0022489f-834d-0022-489f-834d0022489f eth0: Data path switched from VF: enP43358s1
Jun 25 18:42:53.310082 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 25 18:42:53.310202 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 25 18:42:53.314049 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 25 18:42:53.326728 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 25 18:42:53.415407 systemd[1]: Switching root.
Jun 25 18:42:53.441721 systemd-journald[176]: Journal stopped
Jun 25 18:43:01.154891 systemd-journald[176]: Received SIGTERM from PID 1 (systemd).
Jun 25 18:43:01.154949 kernel: SELinux: policy capability network_peer_controls=1
Jun 25 18:43:01.154967 kernel: SELinux: policy capability open_perms=1
Jun 25 18:43:01.154980 kernel: SELinux: policy capability extended_socket_class=1
Jun 25 18:43:01.154993 kernel: SELinux: policy capability always_check_network=0
Jun 25 18:43:01.155006 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 25 18:43:01.155020 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 25 18:43:01.155036 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 25 18:43:01.155050 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 25 18:43:01.155063 kernel: audit: type=1403 audit(1719340977.291:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 25 18:43:01.155078 systemd[1]: Successfully loaded SELinux policy in 155.765ms.
Jun 25 18:43:01.155095 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.688ms.
Jun 25 18:43:01.155111 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 25 18:43:01.155126 systemd[1]: Detected virtualization microsoft.
Jun 25 18:43:01.155144 systemd[1]: Detected architecture x86-64.
Jun 25 18:43:01.158340 systemd[1]: Detected first boot.
Jun 25 18:43:01.158365 systemd[1]: Hostname set to .
Jun 25 18:43:01.158384 systemd[1]: Initializing machine ID from random generator.
Jun 25 18:43:01.158401 zram_generator::config[1207]: No configuration found.
Jun 25 18:43:01.158425 systemd[1]: Populated /etc with preset unit settings.
Jun 25 18:43:01.158440 systemd[1]: Queued start job for default target multi-user.target.
Jun 25 18:43:01.158457 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jun 25 18:43:01.158474 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 25 18:43:01.158489 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 25 18:43:01.158505 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 25 18:43:01.158522 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 25 18:43:01.158542 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 25 18:43:01.158559 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 25 18:43:01.158601 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 25 18:43:01.158619 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 25 18:43:01.158635 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:43:01.158651 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:43:01.158667 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 25 18:43:01.158686 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 25 18:43:01.158702 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 25 18:43:01.158718 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 25 18:43:01.158733 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 25 18:43:01.158749 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:43:01.158765 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 25 18:43:01.158781 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 18:43:01.158802 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 25 18:43:01.158819 systemd[1]: Reached target slices.target - Slice Units.
Jun 25 18:43:01.158838 systemd[1]: Reached target swap.target - Swaps.
Jun 25 18:43:01.158855 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 25 18:43:01.158872 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 25 18:43:01.158890 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 25 18:43:01.158907 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jun 25 18:43:01.158924 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 18:43:01.158941 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 25 18:43:01.158960 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 18:43:01.158976 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 25 18:43:01.158994 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 25 18:43:01.159008 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 25 18:43:01.159021 systemd[1]: Mounting media.mount - External Media Directory...
Jun 25 18:43:01.159038 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:43:01.159055 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 25 18:43:01.159071 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 25 18:43:01.159088 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 25 18:43:01.159104 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 25 18:43:01.159121 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:43:01.159138 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 25 18:43:01.159155 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 25 18:43:01.159173 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 18:43:01.159191 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 25 18:43:01.159207 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 18:43:01.159223 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 25 18:43:01.159241 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 18:43:01.159258 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 25 18:43:01.159275 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jun 25 18:43:01.159292 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jun 25 18:43:01.159311 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 25 18:43:01.159328 kernel: loop: module loaded
Jun 25 18:43:01.159343 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 25 18:43:01.159360 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 25 18:43:01.159409 systemd-journald[1319]: Collecting audit messages is disabled.
Jun 25 18:43:01.159446 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 25 18:43:01.159463 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 25 18:43:01.159480 systemd-journald[1319]: Journal started
Jun 25 18:43:01.159514 systemd-journald[1319]: Runtime Journal (/run/log/journal/592b0d9311634d54937aa5a977bc50be) is 8.0M, max 158.8M, 150.8M free.
Jun 25 18:43:01.173626 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:43:01.195303 kernel: ACPI: bus type drm_connector registered
Jun 25 18:43:01.195394 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 25 18:43:01.195430 kernel: fuse: init (API version 7.39)
Jun 25 18:43:01.204012 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 25 18:43:01.207228 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 25 18:43:01.210246 systemd[1]: Mounted media.mount - External Media Directory.
Jun 25 18:43:01.212730 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 25 18:43:01.215288 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 25 18:43:01.217965 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 25 18:43:01.220553 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 25 18:43:01.225519 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 18:43:01.229678 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 25 18:43:01.229975 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 25 18:43:01.233501 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 18:43:01.233847 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 18:43:01.236989 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 25 18:43:01.237320 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 25 18:43:01.240287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 18:43:01.240519 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 18:43:01.244925 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 25 18:43:01.245233 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 25 18:43:01.248399 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 18:43:01.250791 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 18:43:01.255299 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 25 18:43:01.259375 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 25 18:43:01.263910 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 25 18:43:01.284379 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 25 18:43:01.294727 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 25 18:43:01.313680 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 25 18:43:01.316138 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 25 18:43:01.321771 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 25 18:43:01.333805 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 25 18:43:01.336545 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 25 18:43:01.339725 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 25 18:43:01.341849 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 25 18:43:01.350721 systemd-journald[1319]: Time spent on flushing to /var/log/journal/592b0d9311634d54937aa5a977bc50be is 61.816ms for 943 entries.
Jun 25 18:43:01.350721 systemd-journald[1319]: System Journal (/var/log/journal/592b0d9311634d54937aa5a977bc50be) is 8.0M, max 2.6G, 2.6G free.
Jun 25 18:43:01.434847 systemd-journald[1319]: Received client request to flush runtime journal.
Jun 25 18:43:01.353848 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 25 18:43:01.358770 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 25 18:43:01.365660 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 18:43:01.368322 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 25 18:43:01.373795 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 25 18:43:01.386754 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jun 25 18:43:01.409145 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 25 18:43:01.414650 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 25 18:43:01.417637 udevadm[1371]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jun 25 18:43:01.440687 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 25 18:43:01.476650 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 25 18:43:01.512726 systemd-tmpfiles[1366]: ACLs are not supported, ignoring.
Jun 25 18:43:01.512752 systemd-tmpfiles[1366]: ACLs are not supported, ignoring.
Jun 25 18:43:01.520296 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 25 18:43:01.529800 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 25 18:43:01.687056 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 25 18:43:01.695839 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 25 18:43:01.714166 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Jun 25 18:43:01.714192 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Jun 25 18:43:01.721173 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 18:43:02.795412 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 25 18:43:02.803751 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 18:43:02.838244 systemd-udevd[1393]: Using default interface naming scheme 'v255'.
Jun 25 18:43:03.245255 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 18:43:03.257725 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 25 18:43:03.301371 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jun 25 18:43:03.319600 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1409)
Jun 25 18:43:03.418030 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 25 18:43:03.424610 kernel: mousedev: PS/2 mouse device common for all mice
Jun 25 18:43:03.440952 kernel: hv_vmbus: registering driver hv_balloon
Jun 25 18:43:03.441034 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jun 25 18:43:03.480205 kernel: hv_vmbus: registering driver hyperv_fb
Jun 25 18:43:03.506602 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jun 25 18:43:03.512209 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jun 25 18:43:03.515555 kernel: Console: switching to colour dummy device 80x25
Jun 25 18:43:03.525595 kernel: Console: switching to colour frame buffer device 128x48
Jun 25 18:43:03.518190 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 25 18:43:03.693990 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:43:03.724898 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 18:43:03.725294 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:43:03.744978 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:43:03.842369 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1406)
Jun 25 18:43:03.839802 systemd-networkd[1397]: lo: Link UP
Jun 25 18:43:03.839819 systemd-networkd[1397]: lo: Gained carrier
Jun 25 18:43:03.845391 systemd-networkd[1397]: Enumeration completed
Jun 25 18:43:03.846196 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 25 18:43:03.855272 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:43:03.855280 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 25 18:43:03.858622 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jun 25 18:43:03.864074 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 25 18:43:03.926090 kernel: mlx5_core a95e:00:02.0 enP43358s1: Link up
Jun 25 18:43:03.947611 kernel: hv_netvsc 0022489f-834d-0022-489f-834d0022489f eth0: Data path switched to VF: enP43358s1
Jun 25 18:43:03.950087 systemd-networkd[1397]: enP43358s1: Link UP
Jun 25 18:43:03.950287 systemd-networkd[1397]: eth0: Link UP
Jun 25 18:43:03.950297 systemd-networkd[1397]: eth0: Gained carrier
Jun 25 18:43:03.950322 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:43:03.956921 systemd-networkd[1397]: enP43358s1: Gained carrier
Jun 25 18:43:03.971074 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jun 25 18:43:03.983910 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jun 25 18:43:03.989866 systemd-networkd[1397]: eth0: DHCPv4 address 10.200.8.42/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jun 25 18:43:03.992843 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jun 25 18:43:04.141392 lvm[1482]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 25 18:43:04.167050 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:43:04.172371 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jun 25 18:43:04.175394 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:43:04.183754 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jun 25 18:43:04.189169 lvm[1493]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 25 18:43:04.215872 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jun 25 18:43:04.218803 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 25 18:43:04.221496 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 25 18:43:04.221652 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 25 18:43:04.224009 systemd[1]: Reached target machines.target - Containers.
Jun 25 18:43:04.227643 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jun 25 18:43:04.235730 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 25 18:43:04.239433 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 25 18:43:04.241673 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:43:04.244712 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 25 18:43:04.248787 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jun 25 18:43:04.262775 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 25 18:43:04.266863 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 25 18:43:04.358118 kernel: loop0: detected capacity change from 0 to 209816
Jun 25 18:43:04.358271 kernel: block loop0: the capability attribute has been deprecated.
Jun 25 18:43:04.371535 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 25 18:43:04.372562 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jun 25 18:43:04.384082 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 25 18:43:04.389586 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 25 18:43:04.434595 kernel: loop1: detected capacity change from 0 to 62456
Jun 25 18:43:04.863602 kernel: loop2: detected capacity change from 0 to 139760
Jun 25 18:43:05.081930 systemd-networkd[1397]: eth0: Gained IPv6LL
Jun 25 18:43:05.088704 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 25 18:43:05.371587 kernel: loop3: detected capacity change from 0 to 80568
Jun 25 18:43:05.762589 kernel: loop4: detected capacity change from 0 to 209816
Jun 25 18:43:05.771590 kernel: loop5: detected capacity change from 0 to 62456
Jun 25 18:43:05.777591 kernel: loop6: detected capacity change from 0 to 139760
Jun 25 18:43:05.788590 kernel: loop7: detected capacity change from 0 to 80568
Jun 25 18:43:05.801712 (sd-merge)[1516]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jun 25 18:43:05.802280 (sd-merge)[1516]: Merged extensions into '/usr'.
Jun 25 18:43:05.806229 systemd[1]: Reloading requested from client PID 1501 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 25 18:43:05.806244 systemd[1]: Reloading...
Jun 25 18:43:05.849759 systemd-networkd[1397]: enP43358s1: Gained IPv6LL
Jun 25 18:43:05.862727 zram_generator::config[1538]: No configuration found.
Jun 25 18:43:06.022959 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 18:43:06.095179 systemd[1]: Reloading finished in 288 ms.
Jun 25 18:43:06.113359 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 25 18:43:06.125762 systemd[1]: Starting ensure-sysext.service...
Jun 25 18:43:06.129746 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 25 18:43:06.135321 systemd[1]: Reloading requested from client PID 1605 ('systemctl') (unit ensure-sysext.service)...
Jun 25 18:43:06.135339 systemd[1]: Reloading...
Jun 25 18:43:06.178356 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 25 18:43:06.178995 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 25 18:43:06.181221 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 25 18:43:06.182680 systemd-tmpfiles[1606]: ACLs are not supported, ignoring.
Jun 25 18:43:06.182773 systemd-tmpfiles[1606]: ACLs are not supported, ignoring.
Jun 25 18:43:06.189531 systemd-tmpfiles[1606]: Detected autofs mount point /boot during canonicalization of boot.
Jun 25 18:43:06.189550 systemd-tmpfiles[1606]: Skipping /boot
Jun 25 18:43:06.209925 systemd-tmpfiles[1606]: Detected autofs mount point /boot during canonicalization of boot.
Jun 25 18:43:06.209944 systemd-tmpfiles[1606]: Skipping /boot
Jun 25 18:43:06.217594 zram_generator::config[1633]: No configuration found.
Jun 25 18:43:06.361141 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 18:43:06.434327 systemd[1]: Reloading finished in 298 ms.
Jun 25 18:43:06.457250 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 18:43:06.466336 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:43:06.474818 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jun 25 18:43:06.480826 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 25 18:43:06.483281 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:43:06.486831 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 18:43:06.492083 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 18:43:06.498017 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 18:43:06.500943 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:43:06.506847 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 25 18:43:06.522506 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 25 18:43:06.533312 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 25 18:43:06.540240 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:43:06.545724 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 18:43:06.545950 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 18:43:06.549853 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 18:43:06.550078 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 18:43:06.554036 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 18:43:06.555232 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 18:43:06.568287 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:43:06.568992 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:43:06.575135 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 18:43:06.584925 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 18:43:06.593950 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 18:43:06.597852 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:43:06.598130 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:43:06.604092 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 18:43:06.604314 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 18:43:06.610655 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 18:43:06.610865 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 18:43:06.614426 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 18:43:06.614649 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 18:43:06.619539 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 25 18:43:06.655812 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:43:06.657032 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:43:06.667181 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 18:43:06.679853 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 25 18:43:06.688470 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 18:43:06.702838 augenrules[1741]: No rules
Jun 25 18:43:06.704974 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 18:43:06.707371 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:43:06.708719 systemd[1]: Reached target time-set.target - System Time Set.
Jun 25 18:43:06.716324 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:43:06.720485 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jun 25 18:43:06.723824 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 25 18:43:06.727098 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 18:43:06.727306 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 18:43:06.730552 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 25 18:43:06.731134 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 25 18:43:06.732667 systemd-resolved[1714]: Positive Trust Anchors:
Jun 25 18:43:06.732965 systemd-resolved[1714]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 25 18:43:06.733053 systemd-resolved[1714]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 25 18:43:06.735411 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 18:43:06.735676 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 18:43:06.739142 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 18:43:06.739391 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 18:43:06.745865 systemd[1]: Finished ensure-sysext.service.
Jun 25 18:43:06.754819 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 25 18:43:06.754889 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 25 18:43:06.756472 systemd-resolved[1714]: Using system hostname 'ci-4012.0.0-a-bcd7e269e6'.
Jun 25 18:43:06.758709 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 25 18:43:06.761116 systemd[1]: Reached target network.target - Network.
Jun 25 18:43:06.763077 systemd[1]: Reached target network-online.target - Network is Online.
Jun 25 18:43:06.765102 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 25 18:43:06.977468 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 25 18:43:06.981614 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 25 18:43:11.190340 ldconfig[1497]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 25 18:43:11.201363 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 25 18:43:11.208964 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 25 18:43:11.224077 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 25 18:43:11.226701 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 25 18:43:11.228971 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 25 18:43:11.231462 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 25 18:43:11.234360 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 25 18:43:11.236768 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 25 18:43:11.239274 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 25 18:43:11.241967 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 25 18:43:11.242036 systemd[1]: Reached target paths.target - Path Units.
Jun 25 18:43:11.243784 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 18:43:11.261816 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 25 18:43:11.265807 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 25 18:43:11.282535 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 25 18:43:11.285372 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 25 18:43:11.287548 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 18:43:11.289444 systemd[1]: Reached target basic.target - Basic System.
Jun 25 18:43:11.291605 systemd[1]: System is tainted: cgroupsv1
Jun 25 18:43:11.291667 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 25 18:43:11.291710 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 25 18:43:11.297652 systemd[1]: Starting chronyd.service - NTP client/server...
Jun 25 18:43:11.303690 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 25 18:43:11.308076 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jun 25 18:43:11.314288 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 25 18:43:11.327262 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 25 18:43:11.334341 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 25 18:43:11.338838 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 25 18:43:11.345873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:43:11.357665 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 25 18:43:11.367871 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 25 18:43:11.373817 jq[1782]: false
Jun 25 18:43:11.388978 (chronyd)[1778]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jun 25 18:43:11.389743 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 25 18:43:11.396499 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 25 18:43:11.408785 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 25 18:43:11.422184 chronyd[1802]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jun 25 18:43:11.428534 chronyd[1802]: Timezone right/UTC failed leap second check, ignoring
Jun 25 18:43:11.428852 chronyd[1802]: Loaded seccomp filter (level 2)
Jun 25 18:43:11.428759 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 25 18:43:11.435094 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 25 18:43:11.436792 extend-filesystems[1784]: Found loop4
Jun 25 18:43:11.436792 extend-filesystems[1784]: Found loop5
Jun 25 18:43:11.436792 extend-filesystems[1784]: Found loop6
Jun 25 18:43:11.436792 extend-filesystems[1784]: Found loop7
Jun 25 18:43:11.436792 extend-filesystems[1784]: Found sda
Jun 25 18:43:11.436792 extend-filesystems[1784]: Found sda1
Jun 25 18:43:11.436792 extend-filesystems[1784]: Found sda2
Jun 25 18:43:11.436792 extend-filesystems[1784]: Found sda3
Jun 25 18:43:11.436792 extend-filesystems[1784]: Found usr
Jun 25 18:43:11.453291 systemd[1]: Starting update-engine.service - Update Engine...
Jun 25 18:43:11.457838 extend-filesystems[1784]: Found sda4
Jun 25 18:43:11.457838 extend-filesystems[1784]: Found sda6
Jun 25 18:43:11.457838 extend-filesystems[1784]: Found sda7
Jun 25 18:43:11.457838 extend-filesystems[1784]: Found sda9
Jun 25 18:43:11.457838 extend-filesystems[1784]: Checking size of /dev/sda9
Jun 25 18:43:11.483050 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 25 18:43:11.489021 systemd[1]: Started chronyd.service - NTP client/server.
Jun 25 18:43:11.497020 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 25 18:43:11.497322 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 25 18:43:11.505043 systemd[1]: motdgen.service: Deactivated successfully.
Jun 25 18:43:11.505346 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 25 18:43:11.508599 jq[1818]: true
Jun 25 18:43:11.510204 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 25 18:43:11.523992 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 25 18:43:11.524289 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 25 18:43:11.583011 extend-filesystems[1784]: Old size kept for /dev/sda9
Jun 25 18:43:11.583011 extend-filesystems[1784]: Found sr0
Jun 25 18:43:11.593497 update_engine[1811]: I0625 18:43:11.589512  1811 main.cc:92] Flatcar Update Engine starting
Jun 25 18:43:11.585294 (ntainerd)[1829]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 25 18:43:11.585897 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 25 18:43:11.586209 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 25 18:43:11.602290 jq[1828]: true
Jun 25 18:43:11.615953 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 25 18:43:11.615394 dbus-daemon[1781]: [system] SELinux support is enabled
Jun 25 18:43:11.625871 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 25 18:43:11.625914 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 25 18:43:11.630845 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 25 18:43:11.630871 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 25 18:43:11.636674 update_engine[1811]: I0625 18:43:11.636448  1811 update_check_scheduler.cc:74] Next update check in 8m57s
Jun 25 18:43:11.639917 systemd[1]: Started update-engine.service - Update Engine.
Jun 25 18:43:11.644363 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 25 18:43:11.646906 tar[1826]: linux-amd64/helm
Jun 25 18:43:11.652761 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 25 18:43:11.713412 systemd-logind[1805]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 25 18:43:11.720155 systemd-logind[1805]: New seat seat0.
Jun 25 18:43:11.725099 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 25 18:43:11.789574 bash[1869]: Updated "/home/core/.ssh/authorized_keys"
Jun 25 18:43:11.796940 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 25 18:43:11.810758 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jun 25 18:43:11.811697 coreos-metadata[1780]: Jun 25 18:43:11.811 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jun 25 18:43:11.816591 coreos-metadata[1780]: Jun 25 18:43:11.816 INFO Fetch successful
Jun 25 18:43:11.816591 coreos-metadata[1780]: Jun 25 18:43:11.816 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jun 25 18:43:11.825583 coreos-metadata[1780]: Jun 25 18:43:11.823 INFO Fetch successful
Jun 25 18:43:11.825583 coreos-metadata[1780]: Jun 25 18:43:11.824 INFO Fetching http://168.63.129.16/machine/2a9c3234-db15-48b3-9774-16a1fca9a3db/bde36ebe%2Db392%2D4a4e%2Db11a%2D6d3e152dd28c.%5Fci%2D4012.0.0%2Da%2Dbcd7e269e6?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jun 25 18:43:11.828532 coreos-metadata[1780]: Jun 25 18:43:11.826 INFO Fetch successful
Jun 25 18:43:11.830117 coreos-metadata[1780]: Jun 25 18:43:11.830 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jun 25 18:43:11.845104 coreos-metadata[1780]: Jun 25 18:43:11.845 INFO Fetch successful
Jun 25 18:43:11.922653 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1875)
Jun 25 18:43:11.909092 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jun 25 18:43:11.916223 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 25 18:43:12.135710 locksmithd[1852]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 25 18:43:12.375871 sshd_keygen[1820]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 25 18:43:12.431214 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 25 18:43:12.451941 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 25 18:43:12.473805 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jun 25 18:43:12.487526 systemd[1]: issuegen.service: Deactivated successfully.
Jun 25 18:43:12.487862 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 25 18:43:12.502149 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 25 18:43:12.560337 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jun 25 18:43:12.595113 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 25 18:43:12.615488 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 25 18:43:12.629919 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 25 18:43:12.632440 systemd[1]: Reached target getty.target - Login Prompts.
Jun 25 18:43:12.758752 containerd[1829]: time="2024-06-25T18:43:12.756991100Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18
Jun 25 18:43:12.762100 tar[1826]: linux-amd64/LICENSE
Jun 25 18:43:12.762506 tar[1826]: linux-amd64/README.md
Jun 25 18:43:12.792149 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 25 18:43:12.816653 containerd[1829]: time="2024-06-25T18:43:12.815668000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 25 18:43:12.816653 containerd[1829]: time="2024-06-25T18:43:12.815739300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:43:12.819651 containerd[1829]: time="2024-06-25T18:43:12.819591300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:43:12.819651 containerd[1829]: time="2024-06-25T18:43:12.819649000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:43:12.820615 containerd[1829]: time="2024-06-25T18:43:12.820024000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:43:12.820615 containerd[1829]: time="2024-06-25T18:43:12.820068700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 25 18:43:12.820615 containerd[1829]: time="2024-06-25T18:43:12.820181200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 25 18:43:12.820615 containerd[1829]: time="2024-06-25T18:43:12.820249300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:43:12.820615 containerd[1829]: time="2024-06-25T18:43:12.820272800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 25 18:43:12.820615 containerd[1829]: time="2024-06-25T18:43:12.820353400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:43:12.820615 containerd[1829]: time="2024-06-25T18:43:12.820609900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 25 18:43:12.820897 containerd[1829]: time="2024-06-25T18:43:12.820643700Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jun 25 18:43:12.820897 containerd[1829]: time="2024-06-25T18:43:12.820663000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:43:12.820977 containerd[1829]: time="2024-06-25T18:43:12.820890900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:43:12.820977 containerd[1829]: time="2024-06-25T18:43:12.820911700Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 25 18:43:12.821050 containerd[1829]: time="2024-06-25T18:43:12.820991100Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jun 25 18:43:12.821050 containerd[1829]: time="2024-06-25T18:43:12.821014200Z" level=info msg="metadata content store policy set" policy=shared
Jun 25 18:43:12.831836 containerd[1829]: time="2024-06-25T18:43:12.831626700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 25 18:43:12.831836 containerd[1829]: time="2024-06-25T18:43:12.831681000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 25 18:43:12.831836 containerd[1829]: time="2024-06-25T18:43:12.831701000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 25 18:43:12.831836 containerd[1829]: time="2024-06-25T18:43:12.831750200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 25 18:43:12.831836 containerd[1829]: time="2024-06-25T18:43:12.831771200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 25 18:43:12.831836 containerd[1829]: time="2024-06-25T18:43:12.831786400Z" level=info msg="NRI interface is disabled by configuration."
Jun 25 18:43:12.831836 containerd[1829]: time="2024-06-25T18:43:12.831803100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 25 18:43:12.832159 containerd[1829]: time="2024-06-25T18:43:12.831962300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 25 18:43:12.832159 containerd[1829]: time="2024-06-25T18:43:12.831985400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 25 18:43:12.832159 containerd[1829]: time="2024-06-25T18:43:12.832003700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 25 18:43:12.832159 containerd[1829]: time="2024-06-25T18:43:12.832024000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 25 18:43:12.832159 containerd[1829]: time="2024-06-25T18:43:12.832043600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 25 18:43:12.832159 containerd[1829]: time="2024-06-25T18:43:12.832067800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 25 18:43:12.832159 containerd[1829]: time="2024-06-25T18:43:12.832087100Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 25 18:43:12.832159 containerd[1829]: time="2024-06-25T18:43:12.832104500Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 25 18:43:12.832159 containerd[1829]: time="2024-06-25T18:43:12.832123300Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 25 18:43:12.832159 containerd[1829]: time="2024-06-25T18:43:12.832142500Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 25 18:43:12.832465 containerd[1829]: time="2024-06-25T18:43:12.832176100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 25 18:43:12.832465 containerd[1829]: time="2024-06-25T18:43:12.832195100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 25 18:43:12.832465 containerd[1829]: time="2024-06-25T18:43:12.832322600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 25 18:43:12.832993 containerd[1829]: time="2024-06-25T18:43:12.832774000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 25 18:43:12.832993 containerd[1829]: time="2024-06-25T18:43:12.832812600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 25 18:43:12.832993 containerd[1829]: time="2024-06-25T18:43:12.832832000Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 25 18:43:12.832993 containerd[1829]: time="2024-06-25T18:43:12.832861500Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jun 25 18:43:12.832993 containerd[1829]: time="2024-06-25T18:43:12.832916900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 25 18:43:12.832993 containerd[1829]: time="2024-06-25T18:43:12.832934400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jun 25 18:43:12.832993 containerd[1829]: time="2024-06-25T18:43:12.832950100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jun 25 18:43:12.832993 containerd[1829]: time="2024-06-25T18:43:12.832965700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jun 25 18:43:12.832993 containerd[1829]: time="2024-06-25T18:43:12.832982000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jun 25 18:43:12.832993 containerd[1829]: time="2024-06-25T18:43:12.832998000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jun 25 18:43:12.833482 containerd[1829]: time="2024-06-25T18:43:12.833013300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jun 25 18:43:12.833482 containerd[1829]: time="2024-06-25T18:43:12.833028300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jun 25 18:43:12.833482 containerd[1829]: time="2024-06-25T18:43:12.833047000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 25 18:43:12.833482 containerd[1829]: time="2024-06-25T18:43:12.833200400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jun 25 18:43:12.833482 containerd[1829]: time="2024-06-25T18:43:12.833224600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jun 25 18:43:12.833482 containerd[1829]: time="2024-06-25T18:43:12.833241900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jun 25 18:43:12.833482 containerd[1829]: time="2024-06-25T18:43:12.833268500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jun 25 18:43:12.833482 containerd[1829]: time="2024-06-25T18:43:12.833289200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jun 25 18:43:12.833482 containerd[1829]: time="2024-06-25T18:43:12.833331300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jun 25 18:43:12.833482 containerd[1829]: time="2024-06-25T18:43:12.833349600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jun 25 18:43:12.833482 containerd[1829]: time="2024-06-25T18:43:12.833365200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jun 25 18:43:12.834451 containerd[1829]: time="2024-06-25T18:43:12.833789400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jun 25 18:43:12.834451 containerd[1829]: time="2024-06-25T18:43:12.833895000Z" level=info msg="Connect containerd service"
Jun 25 18:43:12.834451 containerd[1829]: time="2024-06-25T18:43:12.833951800Z" level=info msg="using legacy CRI server"
Jun 25 18:43:12.834451 containerd[1829]: time="2024-06-25T18:43:12.833962700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 25 18:43:12.834451 containerd[1829]: time="2024-06-25T18:43:12.834079300Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jun 25 18:43:12.834919 containerd[1829]: time="2024-06-25T18:43:12.834883800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 25 18:43:12.834978 containerd[1829]: time="2024-06-25T18:43:12.834929700Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jun 25 18:43:12.834978 containerd[1829]: time="2024-06-25T18:43:12.834953500Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jun 25 18:43:12.834978 containerd[1829]: time="2024-06-25T18:43:12.834968800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jun 25 18:43:12.835083 containerd[1829]: time="2024-06-25T18:43:12.834988100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jun 25 18:43:12.835723 containerd[1829]: time="2024-06-25T18:43:12.835285700Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 25 18:43:12.835723 containerd[1829]: time="2024-06-25T18:43:12.835340200Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jun 25 18:43:12.835723 containerd[1829]: time="2024-06-25T18:43:12.835433800Z" level=info msg="Start subscribing containerd event"
Jun 25 18:43:12.835723 containerd[1829]: time="2024-06-25T18:43:12.835473400Z" level=info msg="Start recovering state"
Jun 25 18:43:12.835723 containerd[1829]: time="2024-06-25T18:43:12.835540900Z" level=info msg="Start event monitor"
Jun 25 18:43:12.835723 containerd[1829]: time="2024-06-25T18:43:12.835557500Z" level=info msg="Start snapshots syncer"
Jun 25 18:43:12.835723 containerd[1829]: time="2024-06-25T18:43:12.835591400Z" level=info msg="Start cni network conf syncer for default"
Jun 25 18:43:12.835723 containerd[1829]: time="2024-06-25T18:43:12.835602400Z" level=info msg="Start streaming server"
Jun 25 18:43:12.835723 containerd[1829]: time="2024-06-25T18:43:12.835672800Z" level=info msg="containerd successfully booted in 0.085398s"
Jun 25 18:43:12.836232 systemd[1]: Started containerd.service - containerd container runtime.
Jun 25 18:43:13.146834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:43:13.150103 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 25 18:43:13.153885 systemd[1]: Startup finished in 917ms (firmware) + 29.366s (loader) + 13.609s (kernel) + 16.017s (userspace) = 59.910s.
Jun 25 18:43:13.160767 (kubelet)[1965]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 25 18:43:13.454131 login[1946]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jun 25 18:43:13.457130 login[1947]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jun 25 18:43:13.468232 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 25 18:43:13.477297 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 25 18:43:13.480059 systemd-logind[1805]: New session 2 of user core.
Jun 25 18:43:13.482978 systemd-logind[1805]: New session 1 of user core.
Jun 25 18:43:13.503350 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jun 25 18:43:13.515949 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 25 18:43:13.520230 (systemd)[1978]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:43:13.894331 systemd[1978]: Queued start job for default target default.target.
Jun 25 18:43:13.894822 systemd[1978]: Created slice app.slice - User Application Slice.
Jun 25 18:43:13.894848 systemd[1978]: Reached target paths.target - Paths.
Jun 25 18:43:13.894866 systemd[1978]: Reached target timers.target - Timers.
Jun 25 18:43:13.902781 systemd[1978]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 25 18:43:13.912288 systemd[1978]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 25 18:43:13.912361 systemd[1978]: Reached target sockets.target - Sockets.
Jun 25 18:43:13.912380 systemd[1978]: Reached target basic.target - Basic System.
Jun 25 18:43:13.912426 systemd[1978]: Reached target default.target - Main User Target.
Jun 25 18:43:13.912459 systemd[1978]: Startup finished in 383ms.
Jun 25 18:43:13.912971 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 25 18:43:13.923948 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 25 18:43:13.927870 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 25 18:43:14.047555 kubelet[1965]: E0625 18:43:14.047479 1965 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 18:43:14.051164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 18:43:14.053863 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 18:43:14.445402 waagent[1943]: 2024-06-25T18:43:14.445293Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jun 25 18:43:14.472289 waagent[1943]: 2024-06-25T18:43:14.446408Z INFO Daemon Daemon OS: flatcar 4012.0.0
Jun 25 18:43:14.472289 waagent[1943]: 2024-06-25T18:43:14.447116Z INFO Daemon Daemon Python: 3.11.9
Jun 25 18:43:14.472289 waagent[1943]: 2024-06-25T18:43:14.447982Z INFO Daemon Daemon Run daemon
Jun 25 18:43:14.472289 waagent[1943]: 2024-06-25T18:43:14.448539Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4012.0.0'
Jun 25 18:43:14.472289 waagent[1943]: 2024-06-25T18:43:14.449115Z INFO Daemon Daemon Using waagent for provisioning
Jun 25 18:43:14.472289 waagent[1943]: 2024-06-25T18:43:14.449957Z INFO Daemon Daemon Activate resource disk
Jun 25 18:43:14.472289 waagent[1943]: 2024-06-25T18:43:14.450472Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jun 25 18:43:14.472289 waagent[1943]: 2024-06-25T18:43:14.454777Z INFO Daemon Daemon Found device: None
Jun 25 18:43:14.472289 waagent[1943]: 2024-06-25T18:43:14.455502Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jun 25 18:43:14.472289 waagent[1943]: 2024-06-25T18:43:14.456156Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jun 25 18:43:14.472289 waagent[1943]: 2024-06-25T18:43:14.458420Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jun 25 18:43:14.472289 waagent[1943]: 2024-06-25T18:43:14.458996Z INFO Daemon Daemon Running default provisioning handler
Jun 25 18:43:14.475435 waagent[1943]: 2024-06-25T18:43:14.475352Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jun 25 18:43:14.480970 waagent[1943]: 2024-06-25T18:43:14.480899Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jun 25 18:43:14.484653 waagent[1943]: 2024-06-25T18:43:14.484592Z INFO Daemon Daemon cloud-init is enabled: False
Jun 25 18:43:14.488009 waagent[1943]: 2024-06-25T18:43:14.485388Z INFO Daemon Daemon Copying ovf-env.xml
Jun 25 18:43:14.691196 waagent[1943]: 2024-06-25T18:43:14.688925Z INFO Daemon Daemon Successfully mounted dvd
Jun 25 18:43:14.705714 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jun 25 18:43:14.713247 waagent[1943]: 2024-06-25T18:43:14.706786Z INFO Daemon Daemon Detect protocol endpoint
Jun 25 18:43:14.713247 waagent[1943]: 2024-06-25T18:43:14.707731Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jun 25 18:43:14.713247 waagent[1943]: 2024-06-25T18:43:14.708527Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jun 25 18:43:14.713247 waagent[1943]: 2024-06-25T18:43:14.709154Z INFO Daemon Daemon Test for route to 168.63.129.16
Jun 25 18:43:14.713247 waagent[1943]: 2024-06-25T18:43:14.709977Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jun 25 18:43:14.713247 waagent[1943]: 2024-06-25T18:43:14.710506Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jun 25 18:43:14.759108 waagent[1943]: 2024-06-25T18:43:14.759045Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jun 25 18:43:14.765089 waagent[1943]: 2024-06-25T18:43:14.760214Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jun 25 18:43:14.765089 waagent[1943]: 2024-06-25T18:43:14.760635Z INFO Daemon Daemon Server preferred version:2015-04-05
Jun 25 18:43:14.822679 waagent[1943]: 2024-06-25T18:43:14.822550Z INFO Daemon Daemon Initializing goal state during protocol detection
Jun 25 18:43:14.825312 waagent[1943]: 2024-06-25T18:43:14.825233Z INFO Daemon Daemon Forcing an update of the goal state.
Jun 25 18:43:14.831265 waagent[1943]: 2024-06-25T18:43:14.831207Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jun 25 18:43:14.847017 waagent[1943]: 2024-06-25T18:43:14.846933Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151
Jun 25 18:43:14.858964 waagent[1943]: 2024-06-25T18:43:14.848688Z INFO Daemon
Jun 25 18:43:14.858964 waagent[1943]: 2024-06-25T18:43:14.849647Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 5263a0ff-b501-454b-856b-4439e3ced5f4 eTag: 6680907127046234947 source: Fabric]
Jun 25 18:43:14.858964 waagent[1943]: 2024-06-25T18:43:14.850837Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jun 25 18:43:14.858964 waagent[1943]: 2024-06-25T18:43:14.852057Z INFO Daemon
Jun 25 18:43:14.858964 waagent[1943]: 2024-06-25T18:43:14.852579Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jun 25 18:43:14.862925 waagent[1943]: 2024-06-25T18:43:14.862879Z INFO Daemon Daemon Downloading artifacts profile blob
Jun 25 18:43:14.942176 waagent[1943]: 2024-06-25T18:43:14.942080Z INFO Daemon Downloaded certificate {'thumbprint': '76303BA7F18694F11951F99892AE53A613A7AD4D', 'hasPrivateKey': True}
Jun 25 18:43:14.947481 waagent[1943]: 2024-06-25T18:43:14.947409Z INFO Daemon Downloaded certificate {'thumbprint': '206FAC9998C8FA12F97D7D64BBA5F2D99E34555A', 'hasPrivateKey': False}
Jun 25 18:43:14.951450 waagent[1943]: 2024-06-25T18:43:14.951387Z INFO Daemon Fetch goal state completed
Jun 25 18:43:14.961263 waagent[1943]: 2024-06-25T18:43:14.961200Z INFO Daemon Daemon Starting provisioning
Jun 25 18:43:14.963637 waagent[1943]: 2024-06-25T18:43:14.963506Z INFO Daemon Daemon Handle ovf-env.xml.
Jun 25 18:43:14.967676 waagent[1943]: 2024-06-25T18:43:14.964417Z INFO Daemon Daemon Set hostname [ci-4012.0.0-a-bcd7e269e6]
Jun 25 18:43:15.065740 waagent[1943]: 2024-06-25T18:43:15.065635Z INFO Daemon Daemon Publish hostname [ci-4012.0.0-a-bcd7e269e6]
Jun 25 18:43:15.072185 waagent[1943]: 2024-06-25T18:43:15.067263Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jun 25 18:43:15.072185 waagent[1943]: 2024-06-25T18:43:15.069088Z INFO Daemon Daemon Primary interface is [eth0]
Jun 25 18:43:15.116689 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:43:15.116699 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 25 18:43:15.116757 systemd-networkd[1397]: eth0: DHCP lease lost
Jun 25 18:43:15.118153 waagent[1943]: 2024-06-25T18:43:15.118063Z INFO Daemon Daemon Create user account if not exists
Jun 25 18:43:15.130787 waagent[1943]: 2024-06-25T18:43:15.119143Z INFO Daemon Daemon User core already exists, skip useradd
Jun 25 18:43:15.130787 waagent[1943]: 2024-06-25T18:43:15.119747Z INFO Daemon Daemon Configure sudoer
Jun 25 18:43:15.130787 waagent[1943]: 2024-06-25T18:43:15.120762Z INFO Daemon Daemon Configure sshd
Jun 25 18:43:15.130787 waagent[1943]: 2024-06-25T18:43:15.121714Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jun 25 18:43:15.130787 waagent[1943]: 2024-06-25T18:43:15.122180Z INFO Daemon Daemon Deploy ssh public key.
Jun 25 18:43:15.133671 systemd-networkd[1397]: eth0: DHCPv6 lease lost
Jun 25 18:43:15.172674 systemd-networkd[1397]: eth0: DHCPv4 address 10.200.8.42/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jun 25 18:43:16.468765 waagent[1943]: 2024-06-25T18:43:16.468664Z INFO Daemon Daemon Provisioning complete
Jun 25 18:43:16.487623 waagent[1943]: 2024-06-25T18:43:16.487520Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jun 25 18:43:16.490298 waagent[1943]: 2024-06-25T18:43:16.490216Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jun 25 18:43:16.493942 waagent[1943]: 2024-06-25T18:43:16.493880Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Jun 25 18:43:16.620740 waagent[2033]: 2024-06-25T18:43:16.620650Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Jun 25 18:43:16.621216 waagent[2033]: 2024-06-25T18:43:16.620820Z INFO ExtHandler ExtHandler OS: flatcar 4012.0.0
Jun 25 18:43:16.621216 waagent[2033]: 2024-06-25T18:43:16.620900Z INFO ExtHandler ExtHandler Python: 3.11.9
Jun 25 18:43:16.657906 waagent[2033]: 2024-06-25T18:43:16.657790Z INFO ExtHandler ExtHandler Distro: flatcar-4012.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jun 25 18:43:16.658176 waagent[2033]: 2024-06-25T18:43:16.658118Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jun 25 18:43:16.658291 waagent[2033]: 2024-06-25T18:43:16.658240Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jun 25 18:43:16.667535 waagent[2033]: 2024-06-25T18:43:16.667455Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jun 25 18:43:16.673974 waagent[2033]: 2024-06-25T18:43:16.673919Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151
Jun 25 18:43:16.674478 waagent[2033]: 2024-06-25T18:43:16.674424Z INFO ExtHandler
Jun 25 18:43:16.674592 waagent[2033]: 2024-06-25T18:43:16.674515Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6451818d-28ff-47dc-928a-20760eb3df3d eTag: 6680907127046234947 source: Fabric]
Jun 25 18:43:16.674897 waagent[2033]: 2024-06-25T18:43:16.674849Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jun 25 18:43:16.675456 waagent[2033]: 2024-06-25T18:43:16.675405Z INFO ExtHandler Jun 25 18:43:16.675532 waagent[2033]: 2024-06-25T18:43:16.675487Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 25 18:43:16.680148 waagent[2033]: 2024-06-25T18:43:16.680105Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 25 18:43:16.764845 waagent[2033]: 2024-06-25T18:43:16.764704Z INFO ExtHandler Downloaded certificate {'thumbprint': '76303BA7F18694F11951F99892AE53A613A7AD4D', 'hasPrivateKey': True} Jun 25 18:43:16.765228 waagent[2033]: 2024-06-25T18:43:16.765175Z INFO ExtHandler Downloaded certificate {'thumbprint': '206FAC9998C8FA12F97D7D64BBA5F2D99E34555A', 'hasPrivateKey': False} Jun 25 18:43:16.765704 waagent[2033]: 2024-06-25T18:43:16.765653Z INFO ExtHandler Fetch goal state completed Jun 25 18:43:16.785106 waagent[2033]: 2024-06-25T18:43:16.785035Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2033 Jun 25 18:43:16.785259 waagent[2033]: 2024-06-25T18:43:16.785221Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 25 18:43:16.786864 waagent[2033]: 2024-06-25T18:43:16.786807Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4012.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 25 18:43:16.787239 waagent[2033]: 2024-06-25T18:43:16.787188Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 25 18:43:16.806112 waagent[2033]: 2024-06-25T18:43:16.806058Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 25 18:43:16.806353 waagent[2033]: 2024-06-25T18:43:16.806299Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 25 18:43:16.813264 waagent[2033]: 2024-06-25T18:43:16.813220Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Jun 25 18:43:16.820311 systemd[1]: Reloading requested from client PID 2048 ('systemctl') (unit waagent.service)... Jun 25 18:43:16.820329 systemd[1]: Reloading... Jun 25 18:43:16.906599 zram_generator::config[2082]: No configuration found. Jun 25 18:43:17.027682 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:43:17.104276 systemd[1]: Reloading finished in 283 ms. Jun 25 18:43:17.129159 waagent[2033]: 2024-06-25T18:43:17.129007Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jun 25 18:43:17.136794 systemd[1]: Reloading requested from client PID 2141 ('systemctl') (unit waagent.service)... Jun 25 18:43:17.136811 systemd[1]: Reloading... Jun 25 18:43:17.215596 zram_generator::config[2172]: No configuration found. Jun 25 18:43:17.340508 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:43:17.416714 systemd[1]: Reloading finished in 279 ms. Jun 25 18:43:17.441598 waagent[2033]: 2024-06-25T18:43:17.440985Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 25 18:43:17.441598 waagent[2033]: 2024-06-25T18:43:17.441176Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 25 18:43:17.922396 waagent[2033]: 2024-06-25T18:43:17.922298Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 25 18:43:17.925357 waagent[2033]: 2024-06-25T18:43:17.925289Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jun 25 18:43:17.926162 waagent[2033]: 2024-06-25T18:43:17.926103Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 25 18:43:17.926334 waagent[2033]: 2024-06-25T18:43:17.926251Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 18:43:17.926773 waagent[2033]: 2024-06-25T18:43:17.926719Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 25 18:43:17.926865 waagent[2033]: 2024-06-25T18:43:17.926788Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 18:43:17.926959 waagent[2033]: 2024-06-25T18:43:17.926920Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 18:43:17.927064 waagent[2033]: 2024-06-25T18:43:17.927019Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 18:43:17.927361 waagent[2033]: 2024-06-25T18:43:17.927315Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 25 18:43:17.927528 waagent[2033]: 2024-06-25T18:43:17.927481Z INFO EnvHandler ExtHandler Configure routes Jun 25 18:43:17.927650 waagent[2033]: 2024-06-25T18:43:17.927608Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 25 18:43:17.928144 waagent[2033]: 2024-06-25T18:43:17.928087Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 25 18:43:17.928301 waagent[2033]: 2024-06-25T18:43:17.928264Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 25 18:43:17.928664 waagent[2033]: 2024-06-25T18:43:17.928616Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 25 18:43:17.928739 waagent[2033]: 2024-06-25T18:43:17.928673Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jun 25 18:43:17.928848 waagent[2033]: 2024-06-25T18:43:17.928783Z INFO EnvHandler ExtHandler Gateway:None
Jun 25 18:43:17.930159 waagent[2033]: 2024-06-25T18:43:17.930112Z INFO EnvHandler ExtHandler Routes:None
Jun 25 18:43:17.930237 waagent[2033]: 2024-06-25T18:43:17.930157Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jun 25 18:43:17.930237 waagent[2033]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jun 25 18:43:17.930237 waagent[2033]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Jun 25 18:43:17.930237 waagent[2033]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jun 25 18:43:17.930237 waagent[2033]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jun 25 18:43:17.930237 waagent[2033]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jun 25 18:43:17.930237 waagent[2033]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jun 25 18:43:17.937232 waagent[2033]: 2024-06-25T18:43:17.937187Z INFO ExtHandler ExtHandler
Jun 25 18:43:17.937344 waagent[2033]: 2024-06-25T18:43:17.937303Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: faa4b62b-4ac7-42a9-a32a-18f2331b2ff3 correlation 2999e2b9-afe1-4cec-9926-3133d80ea0dd created: 2024-06-25T18:41:57.624374Z]
Jun 25 18:43:17.937721 waagent[2033]: 2024-06-25T18:43:17.937676Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
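An editorial aside on reading the routing-table dump: /proc/net/route stores each IPv4 field as little-endian hexadecimal, which is why the gateway appears as `0108C80A` rather than a dotted quad. A minimal Python sketch to decode those fields (the helper name is ours, not part of the agent):

```python
import socket
import struct

def decode_route_hex(field: str) -> str:
    """Decode one little-endian hex IPv4 field from /proc/net/route."""
    return socket.inet_ntoa(struct.pack("<I", int(field, 16)))

# Fields taken verbatim from the route entries above:
print(decode_route_hex("00000000"))  # 0.0.0.0 (default destination)
print(decode_route_hex("0108C80A"))  # 10.200.8.1 (the DHCP gateway seen earlier)
print(decode_route_hex("10813FA8"))  # 168.63.129.16 (WireServer host route)
print(decode_route_hex("FEA9FEA9"))  # 169.254.169.254 (link-local metadata route)
```

Decoding all five rows shows a default route via 10.200.8.1, the local 10.200.8.0/24 subnet, and host routes to the Azure WireServer and metadata endpoints.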
Jun 25 18:43:17.938258 waagent[2033]: 2024-06-25T18:43:17.938212Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Jun 25 18:43:17.972693 waagent[2033]: 2024-06-25T18:43:17.972515Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F0649D2D-544C-4B24-AF98-685017F2DBC9;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Jun 25 18:43:17.988865 waagent[2033]: 2024-06-25T18:43:17.988796Z INFO MonitorHandler ExtHandler Network interfaces:
Jun 25 18:43:17.988865 waagent[2033]: Executing ['ip', '-a', '-o', 'link']:
Jun 25 18:43:17.988865 waagent[2033]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jun 25 18:43:17.988865 waagent[2033]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9f:83:4d brd ff:ff:ff:ff:ff:ff
Jun 25 18:43:17.988865 waagent[2033]: 3: enP43358s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9f:83:4d brd ff:ff:ff:ff:ff:ff\ altname enP43358p0s2
Jun 25 18:43:17.988865 waagent[2033]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jun 25 18:43:17.988865 waagent[2033]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jun 25 18:43:17.988865 waagent[2033]: 2: eth0 inet 10.200.8.42/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Jun 25 18:43:17.988865 waagent[2033]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jun 25 18:43:17.988865 waagent[2033]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jun 25 18:43:17.988865 waagent[2033]: 2: eth0 inet6 fe80::222:48ff:fe9f:834d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jun 25 18:43:17.988865 waagent[2033]: 3: enP43358s1 inet6 fe80::222:48ff:fe9f:834d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jun 25 18:43:18.026927 waagent[2033]: 2024-06-25T18:43:18.026859Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Jun 25 18:43:18.026927 waagent[2033]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jun 25 18:43:18.026927 waagent[2033]: pkts bytes target prot opt in out source destination
Jun 25 18:43:18.026927 waagent[2033]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jun 25 18:43:18.026927 waagent[2033]: pkts bytes target prot opt in out source destination
Jun 25 18:43:18.026927 waagent[2033]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jun 25 18:43:18.026927 waagent[2033]: pkts bytes target prot opt in out source destination
Jun 25 18:43:18.026927 waagent[2033]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jun 25 18:43:18.026927 waagent[2033]: 4 415 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jun 25 18:43:18.026927 waagent[2033]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jun 25 18:43:18.031233 waagent[2033]: 2024-06-25T18:43:18.031173Z INFO EnvHandler ExtHandler Current Firewall rules:
Jun 25 18:43:18.031233 waagent[2033]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jun 25 18:43:18.031233 waagent[2033]: pkts bytes target prot opt in out source destination
Jun 25 18:43:18.031233 waagent[2033]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jun 25 18:43:18.031233 waagent[2033]: pkts bytes target prot opt in out source destination
Jun 25 18:43:18.031233 waagent[2033]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jun 25 18:43:18.031233 waagent[2033]: pkts bytes target prot opt in out source destination
Jun 25 18:43:18.031233 waagent[2033]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jun 25 18:43:18.031233 waagent[2033]: 11 926 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jun 25 18:43:18.031233 waagent[2033]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
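The three OUTPUT rules the agent installs amount to a small first-match policy for traffic to the WireServer (168.63.129.16): allow DNS on tcp/53, allow anything from root (UID 0, the agent itself), and drop new connections from everyone else. A Python sketch of that evaluation order (an illustration of the policy, not iptables itself):

```python
def wireserver_verdict(dst: str, dport: int, uid: int, ctstate: str) -> str:
    """First-match walk of the three OUTPUT rules shown in the log above."""
    if dst != "168.63.129.16":
        return "ACCEPT"  # no rule matches; chain policy is ACCEPT
    if dport == 53:
        return "ACCEPT"  # tcp dpt:53
    if uid == 0:
        return "ACCEPT"  # owner UID match 0 (the agent runs as root)
    if ctstate in ("INVALID", "NEW"):
        return "DROP"    # non-root processes cannot open new WireServer connections
    return "ACCEPT"

print(wireserver_verdict("168.63.129.16", 80, 500, "NEW"))  # DROP
print(wireserver_verdict("168.63.129.16", 80, 0, "NEW"))    # ACCEPT
```

The two dumps differ only in their ACCEPT counters (4 packets/415 bytes vs. 11/926), i.e. the agent's own root-owned traffic continued to flow between the two listings.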
Jun 25 18:43:18.031674 waagent[2033]: 2024-06-25T18:43:18.031468Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jun 25 18:43:24.160878 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 18:43:24.173805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:24.322760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:24.326508 (kubelet)[2280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:24.768914 kubelet[2280]: E0625 18:43:24.768865 2280 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:24.773266 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:24.773602 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:43:34.910999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 18:43:34.923799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:35.220661 chronyd[1802]: Selected source PHC0 Jun 25 18:43:36.419762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 18:43:36.424843 (kubelet)[2301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:36.474735 kubelet[2301]: E0625 18:43:36.474657 2301 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:36.477493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:36.477837 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:43:46.660996 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 18:43:46.669789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:47.008768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:47.020047 (kubelet)[2325]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:47.064829 kubelet[2325]: E0625 18:43:47.064768 2325 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:47.068160 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:47.068428 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:43:51.534500 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jun 25 18:43:56.537833 update_engine[1811]: I0625 18:43:56.537732 1811 update_attempter.cc:509] Updating boot flags... 
Jun 25 18:43:56.603599 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2346) Jun 25 18:43:56.712674 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2345) Jun 25 18:43:56.816657 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2345) Jun 25 18:43:57.160919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 25 18:43:57.173793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:57.297750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:57.301415 (kubelet)[2439]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:57.759289 kubelet[2439]: E0625 18:43:57.759225 2439 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:57.761926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:57.762243 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:44:04.695330 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 18:44:04.701160 systemd[1]: Started sshd@0-10.200.8.42:22-10.200.16.10:34976.service - OpenSSH per-connection server daemon (10.200.16.10:34976). Jun 25 18:44:05.395793 sshd[2447]: Accepted publickey for core from 10.200.16.10 port 34976 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:44:05.397608 sshd[2447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:05.402752 systemd-logind[1805]: New session 3 of user core. 
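The `SHA256:6GCB…` string in the sshd entries above is an OpenSSH-style key fingerprint: the base64-encoded SHA-256 digest of the raw public-key blob, with trailing `=` padding stripped. A minimal sketch of that format (the sample blob below is made up, not the key from this log):

```python
import base64
import hashlib

def ssh_sha256_fingerprint(pubkey_blob: bytes) -> str:
    """Format a public-key blob the way sshd logs it: SHA256:<unpadded base64>."""
    digest = hashlib.sha256(pubkey_blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode("ascii").rstrip("=")

fp = ssh_sha256_fingerprint(b"not-a-real-key-blob")
print(fp)  # SHA256: followed by 43 base64 characters
```

Because the digest is always 32 bytes, the encoded part is always 43 characters once the single padding `=` is removed, matching the fixed-width fingerprints in the log.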
Jun 25 18:44:05.411827 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 18:44:05.982896 systemd[1]: Started sshd@1-10.200.8.42:22-10.200.16.10:34984.service - OpenSSH per-connection server daemon (10.200.16.10:34984). Jun 25 18:44:06.625002 sshd[2452]: Accepted publickey for core from 10.200.16.10 port 34984 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:44:06.626786 sshd[2452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:06.632088 systemd-logind[1805]: New session 4 of user core. Jun 25 18:44:06.638813 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 18:44:07.087598 sshd[2452]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:07.092151 systemd[1]: sshd@1-10.200.8.42:22-10.200.16.10:34984.service: Deactivated successfully. Jun 25 18:44:07.096069 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 18:44:07.097002 systemd-logind[1805]: Session 4 logged out. Waiting for processes to exit. Jun 25 18:44:07.097904 systemd-logind[1805]: Removed session 4. Jun 25 18:44:07.202887 systemd[1]: Started sshd@2-10.200.8.42:22-10.200.16.10:34992.service - OpenSSH per-connection server daemon (10.200.16.10:34992). Jun 25 18:44:07.841686 sshd[2460]: Accepted publickey for core from 10.200.16.10 port 34992 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:44:07.843431 sshd[2460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:07.844549 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 25 18:44:07.850796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:07.854613 systemd-logind[1805]: New session 5 of user core. Jun 25 18:44:07.867923 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 18:44:07.962764 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 18:44:07.967587 (kubelet)[2476]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:44:08.293401 sshd[2460]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:08.297890 systemd[1]: sshd@2-10.200.8.42:22-10.200.16.10:34992.service: Deactivated successfully. Jun 25 18:44:08.302349 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 18:44:08.303144 systemd-logind[1805]: Session 5 logged out. Waiting for processes to exit. Jun 25 18:44:08.304293 systemd-logind[1805]: Removed session 5. Jun 25 18:44:08.403925 systemd[1]: Started sshd@3-10.200.8.42:22-10.200.16.10:35000.service - OpenSSH per-connection server daemon (10.200.16.10:35000). Jun 25 18:44:08.461205 kubelet[2476]: E0625 18:44:08.461134 2476 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:44:08.463861 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:44:08.464177 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:44:09.053198 sshd[2487]: Accepted publickey for core from 10.200.16.10 port 35000 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:44:09.396978 sshd[2487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:09.402219 systemd-logind[1805]: New session 6 of user core. Jun 25 18:44:09.412806 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 18:44:09.775951 sshd[2487]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:09.779522 systemd[1]: sshd@3-10.200.8.42:22-10.200.16.10:35000.service: Deactivated successfully. 
Jun 25 18:44:09.784407 systemd-logind[1805]: Session 6 logged out. Waiting for processes to exit. Jun 25 18:44:09.785831 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 18:44:09.787756 systemd-logind[1805]: Removed session 6. Jun 25 18:44:09.887932 systemd[1]: Started sshd@4-10.200.8.42:22-10.200.16.10:35002.service - OpenSSH per-connection server daemon (10.200.16.10:35002). Jun 25 18:44:10.530247 sshd[2498]: Accepted publickey for core from 10.200.16.10 port 35002 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:44:10.532017 sshd[2498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:10.537555 systemd-logind[1805]: New session 7 of user core. Jun 25 18:44:10.546830 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 18:44:11.030111 sudo[2502]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 18:44:11.030462 sudo[2502]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:44:11.041940 sudo[2502]: pam_unix(sudo:session): session closed for user root Jun 25 18:44:11.146241 sshd[2498]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:11.150090 systemd[1]: sshd@4-10.200.8.42:22-10.200.16.10:35002.service: Deactivated successfully. Jun 25 18:44:11.154362 systemd-logind[1805]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:44:11.155806 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 18:44:11.157538 systemd-logind[1805]: Removed session 7. Jun 25 18:44:11.257150 systemd[1]: Started sshd@5-10.200.8.42:22-10.200.16.10:35018.service - OpenSSH per-connection server daemon (10.200.16.10:35018). 
Jun 25 18:44:11.906800 sshd[2507]: Accepted publickey for core from 10.200.16.10 port 35018 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:44:11.908585 sshd[2507]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:11.914211 systemd-logind[1805]: New session 8 of user core. Jun 25 18:44:11.924853 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 18:44:12.262797 sudo[2512]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 18:44:12.263124 sudo[2512]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:44:12.266465 sudo[2512]: pam_unix(sudo:session): session closed for user root Jun 25 18:44:12.271527 sudo[2511]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 18:44:12.271861 sudo[2511]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:44:12.286898 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 18:44:12.288623 auditctl[2515]: No rules Jun 25 18:44:12.289198 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 18:44:12.289518 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 18:44:12.294701 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:44:12.320358 augenrules[2534]: No rules Jun 25 18:44:12.322252 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:44:12.324789 sudo[2511]: pam_unix(sudo:session): session closed for user root Jun 25 18:44:12.429947 sshd[2507]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:12.434735 systemd[1]: sshd@5-10.200.8.42:22-10.200.16.10:35018.service: Deactivated successfully. Jun 25 18:44:12.438845 systemd-logind[1805]: Session 8 logged out. Waiting for processes to exit. 
Jun 25 18:44:12.438986 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:44:12.440224 systemd-logind[1805]: Removed session 8. Jun 25 18:44:12.541159 systemd[1]: Started sshd@6-10.200.8.42:22-10.200.16.10:35030.service - OpenSSH per-connection server daemon (10.200.16.10:35030). Jun 25 18:44:13.185017 sshd[2543]: Accepted publickey for core from 10.200.16.10 port 35030 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:44:13.186781 sshd[2543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:13.191978 systemd-logind[1805]: New session 9 of user core. Jun 25 18:44:13.201823 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:44:13.541099 sudo[2547]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 18:44:13.541429 sudo[2547]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:44:14.083875 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 18:44:14.086445 (dockerd)[2556]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 18:44:15.314937 dockerd[2556]: time="2024-06-25T18:44:15.314874649Z" level=info msg="Starting up" Jun 25 18:44:15.438756 dockerd[2556]: time="2024-06-25T18:44:15.438710750Z" level=info msg="Loading containers: start." Jun 25 18:44:15.721812 kernel: Initializing XFRM netlink socket Jun 25 18:44:15.917371 systemd-networkd[1397]: docker0: Link UP Jun 25 18:44:15.944271 dockerd[2556]: time="2024-06-25T18:44:15.944224495Z" level=info msg="Loading containers: done." 
Jun 25 18:44:16.328223 dockerd[2556]: time="2024-06-25T18:44:16.328169549Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 18:44:16.328745 dockerd[2556]: time="2024-06-25T18:44:16.328415650Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 18:44:16.328745 dockerd[2556]: time="2024-06-25T18:44:16.328557951Z" level=info msg="Daemon has completed initialization" Jun 25 18:44:16.374310 dockerd[2556]: time="2024-06-25T18:44:16.373781834Z" level=info msg="API listen on /run/docker.sock" Jun 25 18:44:16.374448 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 18:44:18.071263 containerd[1829]: time="2024-06-25T18:44:18.071219203Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 18:44:18.660773 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jun 25 18:44:18.669195 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:19.111751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:19.111998 (kubelet)[2695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:44:19.470000 kubelet[2695]: E0625 18:44:19.469868 2695 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:44:19.472676 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:44:19.472992 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 25 18:44:20.537303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3751251434.mount: Deactivated successfully.
Jun 25 18:44:22.801902 containerd[1829]: time="2024-06-25T18:44:22.801845446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:22.805057 containerd[1829]: time="2024-06-25T18:44:22.804993559Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605186"
Jun 25 18:44:22.807819 containerd[1829]: time="2024-06-25T18:44:22.807765570Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:22.811557 containerd[1829]: time="2024-06-25T18:44:22.811505285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:22.812546 containerd[1829]: time="2024-06-25T18:44:22.812507489Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 4.741243086s"
Jun 25 18:44:22.813065 containerd[1829]: time="2024-06-25T18:44:22.812551690Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\""
Jun 25 18:44:22.833612 containerd[1829]: time="2024-06-25T18:44:22.833573675Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\""
Jun 25 18:44:25.015399 containerd[1829]: time="2024-06-25T18:44:25.015338385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:25.018770 containerd[1829]: time="2024-06-25T18:44:25.018699492Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719499"
Jun 25 18:44:25.021966 containerd[1829]: time="2024-06-25T18:44:25.021905899Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:25.027865 containerd[1829]: time="2024-06-25T18:44:25.027805612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:25.028972 containerd[1829]: time="2024-06-25T18:44:25.028801915Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 2.19509714s"
Jun 25 18:44:25.028972 containerd[1829]: time="2024-06-25T18:44:25.028845515Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\""
Jun 25 18:44:25.051071 containerd[1829]: time="2024-06-25T18:44:25.051021764Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\""
Jun 25 18:44:26.403004 containerd[1829]: time="2024-06-25T18:44:26.402954781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:26.406471 containerd[1829]: time="2024-06-25T18:44:26.406417988Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925513"
Jun 25 18:44:26.411315 containerd[1829]: time="2024-06-25T18:44:26.411256399Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:26.416019 containerd[1829]: time="2024-06-25T18:44:26.415965410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:26.417132 containerd[1829]: time="2024-06-25T18:44:26.416971512Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.365895048s"
Jun 25 18:44:26.417132 containerd[1829]: time="2024-06-25T18:44:26.417011612Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\""
Jun 25 18:44:26.437742 containerd[1829]: time="2024-06-25T18:44:26.437695058Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\""
Jun 25 18:44:27.854645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2889757941.mount: Deactivated successfully.
Jun 25 18:44:28.315117 containerd[1829]: time="2024-06-25T18:44:28.315054146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:28.317095 containerd[1829]: time="2024-06-25T18:44:28.317030751Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118427"
Jun 25 18:44:28.320157 containerd[1829]: time="2024-06-25T18:44:28.320112058Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:28.323368 containerd[1829]: time="2024-06-25T18:44:28.323315765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:28.324096 containerd[1829]: time="2024-06-25T18:44:28.323901566Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 1.886155608s"
Jun 25 18:44:28.324096 containerd[1829]: time="2024-06-25T18:44:28.323938966Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\""
Jun 25 18:44:28.344589 containerd[1829]: time="2024-06-25T18:44:28.344543212Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jun 25 18:44:28.811933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3386406615.mount: Deactivated successfully.
Jun 25 18:44:28.830323 containerd[1829]: time="2024-06-25T18:44:28.830269796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:28.832695 containerd[1829]: time="2024-06-25T18:44:28.832641201Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Jun 25 18:44:28.837213 containerd[1829]: time="2024-06-25T18:44:28.837163011Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:28.841420 containerd[1829]: time="2024-06-25T18:44:28.841367621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:28.842491 containerd[1829]: time="2024-06-25T18:44:28.842100622Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 497.49991ms"
Jun 25 18:44:28.842491 containerd[1829]: time="2024-06-25T18:44:28.842137622Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jun 25 18:44:28.863492 containerd[1829]: time="2024-06-25T18:44:28.863445270Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jun 25 18:44:29.592933 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jun 25 18:44:29.598856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:44:29.622175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3920279006.mount: Deactivated successfully.
Jun 25 18:44:29.822752 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:44:29.826374 (kubelet)[2818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 25 18:44:30.312696 kubelet[2818]: E0625 18:44:30.312582 2818 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 18:44:30.315159 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 18:44:30.315465 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 18:44:34.204911 containerd[1829]: time="2024-06-25T18:44:34.204774493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:34.207785 containerd[1829]: time="2024-06-25T18:44:34.207729504Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633"
Jun 25 18:44:34.210688 containerd[1829]: time="2024-06-25T18:44:34.210624115Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:34.214989 containerd[1829]: time="2024-06-25T18:44:34.214924231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:34.216134 containerd[1829]: time="2024-06-25T18:44:34.216000835Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 5.352508165s"
Jun 25 18:44:34.216134 containerd[1829]: time="2024-06-25T18:44:34.216040035Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jun 25 18:44:34.238202 containerd[1829]: time="2024-06-25T18:44:34.238160919Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jun 25 18:44:34.825504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount544471459.mount: Deactivated successfully.
Jun 25 18:44:35.548790 containerd[1829]: time="2024-06-25T18:44:35.548728982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:35.550927 containerd[1829]: time="2024-06-25T18:44:35.550863390Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191757"
Jun 25 18:44:35.560535 containerd[1829]: time="2024-06-25T18:44:35.560480926Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:35.564944 containerd[1829]: time="2024-06-25T18:44:35.564910743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:35.565754 containerd[1829]: time="2024-06-25T18:44:35.565594345Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.327390926s"
Jun 25 18:44:35.565754 containerd[1829]: time="2024-06-25T18:44:35.565633046Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Jun 25 18:44:38.569515 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:44:38.575861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:44:38.607289 systemd[1]: Reloading requested from client PID 2945 ('systemctl') (unit session-9.scope)...
Jun 25 18:44:38.607307 systemd[1]: Reloading...
Jun 25 18:44:38.722675 zram_generator::config[2982]: No configuration found.
Jun 25 18:44:38.858834 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 18:44:38.933794 systemd[1]: Reloading finished in 325 ms.
Jun 25 18:44:38.989964 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:44:38.993450 systemd[1]: kubelet.service: Deactivated successfully.
Jun 25 18:44:38.993849 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:44:39.003138 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:44:40.129759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:44:40.133354 (kubelet)[3067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 25 18:44:40.180896 kubelet[3067]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 25 18:44:40.180896 kubelet[3067]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jun 25 18:44:40.180896 kubelet[3067]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 25 18:44:40.181402 kubelet[3067]: I0625 18:44:40.180952 3067 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 25 18:44:40.804536 kubelet[3067]: I0625 18:44:40.804471 3067 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jun 25 18:44:40.804536 kubelet[3067]: I0625 18:44:40.804528 3067 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 25 18:44:41.746929 kubelet[3067]: I0625 18:44:40.804830 3067 server.go:895] "Client rotation is on, will bootstrap in background"
Jun 25 18:44:41.746929 kubelet[3067]: E0625 18:44:40.820025 3067 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.42:6443: connect: connection refused
Jun 25 18:44:41.746929 kubelet[3067]: I0625 18:44:40.820240 3067 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 25 18:44:41.746929 kubelet[3067]: I0625 18:44:40.832862 3067 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 25 18:44:41.746929 kubelet[3067]: I0625 18:44:40.833238 3067 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 25 18:44:41.747216 kubelet[3067]: I0625 18:44:40.833377 3067 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jun 25 18:44:41.747216 kubelet[3067]: I0625 18:44:40.833844 3067 topology_manager.go:138] "Creating topology manager with none policy"
Jun 25 18:44:41.747216 kubelet[3067]: I0625 18:44:40.833862 3067 container_manager_linux.go:301] "Creating device plugin manager"
Jun 25 18:44:41.748861 kubelet[3067]: I0625 18:44:41.748417 3067 state_mem.go:36] "Initialized new in-memory state store"
Jun 25 18:44:41.750924 kubelet[3067]: I0625 18:44:41.750661 3067 kubelet.go:393] "Attempting to sync node with API server"
Jun 25 18:44:41.750924 kubelet[3067]: I0625 18:44:41.750696 3067 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 25 18:44:41.750924 kubelet[3067]: I0625 18:44:41.750729 3067 kubelet.go:309] "Adding apiserver pod source"
Jun 25 18:44:41.750924 kubelet[3067]: I0625 18:44:41.750748 3067 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 25 18:44:41.753188 kubelet[3067]: W0625 18:44:41.752908 3067 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused
Jun 25 18:44:41.753188 kubelet[3067]: E0625 18:44:41.752967 3067 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused
Jun 25 18:44:41.753188 kubelet[3067]: W0625 18:44:41.753035 3067 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-bcd7e269e6&limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused
Jun 25 18:44:41.753188 kubelet[3067]: E0625 18:44:41.753072 3067 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-bcd7e269e6&limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused
Jun 25 18:44:41.754237 kubelet[3067]: I0625 18:44:41.753892 3067 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Jun 25 18:44:41.757016 kubelet[3067]: W0625 18:44:41.756995 3067 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 25 18:44:41.757877 kubelet[3067]: I0625 18:44:41.757647 3067 server.go:1232] "Started kubelet"
Jun 25 18:44:41.759008 kubelet[3067]: I0625 18:44:41.758930 3067 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 25 18:44:41.761973 kubelet[3067]: E0625 18:44:41.760794 3067 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-4012.0.0-a-bcd7e269e6.17dc539ab9e616d7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-4012.0.0-a-bcd7e269e6", UID:"ci-4012.0.0-a-bcd7e269e6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-4012.0.0-a-bcd7e269e6"}, FirstTimestamp:time.Date(2024, time.June, 25, 18, 44, 41, 757619927, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 18, 44, 41, 757619927, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-4012.0.0-a-bcd7e269e6"}': 'Post "https://10.200.8.42:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.42:6443: connect: connection refused'(may retry after sleeping)
Jun 25 18:44:41.761973 kubelet[3067]: I0625 18:44:41.760934 3067 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jun 25 18:44:41.761973 kubelet[3067]: I0625 18:44:41.761946 3067 server.go:462] "Adding debug handlers to kubelet server"
Jun 25 18:44:41.763369 kubelet[3067]: I0625 18:44:41.763342 3067 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jun 25 18:44:41.763589 kubelet[3067]: I0625 18:44:41.763556 3067 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 25 18:44:41.766344 kubelet[3067]: E0625 18:44:41.765734 3067 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jun 25 18:44:41.766344 kubelet[3067]: E0625 18:44:41.765761 3067 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 25 18:44:41.766344 kubelet[3067]: I0625 18:44:41.765967 3067 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jun 25 18:44:41.766344 kubelet[3067]: I0625 18:44:41.766049 3067 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jun 25 18:44:41.766344 kubelet[3067]: I0625 18:44:41.766114 3067 reconciler_new.go:29] "Reconciler: start to sync state"
Jun 25 18:44:41.766562 kubelet[3067]: W0625 18:44:41.766437 3067 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused
Jun 25 18:44:41.766562 kubelet[3067]: E0625 18:44:41.766487 3067 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused
Jun 25 18:44:41.767310 kubelet[3067]: E0625 18:44:41.767121 3067 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-bcd7e269e6?timeout=10s\": dial tcp 10.200.8.42:6443: connect: connection refused" interval="200ms"
Jun 25 18:44:41.821386 kubelet[3067]: I0625 18:44:41.821357 3067 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jun 25 18:44:41.821386 kubelet[3067]: I0625 18:44:41.821383 3067 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jun 25 18:44:41.821600 kubelet[3067]: I0625 18:44:41.821403 3067 state_mem.go:36] "Initialized new in-memory state store"
Jun 25 18:44:41.826663 kubelet[3067]: I0625 18:44:41.826636 3067 policy_none.go:49] "None policy: Start"
Jun 25 18:44:41.827268 kubelet[3067]: I0625 18:44:41.827221 3067 memory_manager.go:169] "Starting memorymanager" policy="None"
Jun 25 18:44:41.827380 kubelet[3067]: I0625 18:44:41.827301 3067 state_mem.go:35] "Initializing new in-memory state store"
Jun 25 18:44:41.837602 kubelet[3067]: I0625 18:44:41.836653 3067 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 25 18:44:41.837602 kubelet[3067]: I0625 18:44:41.836935 3067 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 25 18:44:41.841380 kubelet[3067]: I0625 18:44:41.841362 3067 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 25 18:44:41.842914 kubelet[3067]: I0625 18:44:41.842897 3067 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 25 18:44:41.843012 kubelet[3067]: I0625 18:44:41.843000 3067 status_manager.go:217] "Starting to sync pod status with apiserver"
Jun 25 18:44:41.843103 kubelet[3067]: I0625 18:44:41.843093 3067 kubelet.go:2303] "Starting kubelet main sync loop"
Jun 25 18:44:41.843210 kubelet[3067]: E0625 18:44:41.843201 3067 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jun 25 18:44:41.843494 kubelet[3067]: E0625 18:44:41.843478 3067 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012.0.0-a-bcd7e269e6\" not found"
Jun 25 18:44:41.846258 kubelet[3067]: W0625 18:44:41.846227 3067 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused
Jun 25 18:44:41.846346 kubelet[3067]: E0625 18:44:41.846271 3067 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused
Jun 25 18:44:41.867951 kubelet[3067]: I0625 18:44:41.867913 3067 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-bcd7e269e6"
Jun 25 18:44:41.868282 kubelet[3067]: E0625 18:44:41.868258 3067 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.42:6443/api/v1/nodes\": dial tcp 10.200.8.42:6443: connect: connection refused" node="ci-4012.0.0-a-bcd7e269e6"
Jun 25 18:44:41.907096 kubelet[3067]: E0625 18:44:41.906973 3067 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-4012.0.0-a-bcd7e269e6.17dc539ab9e616d7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-4012.0.0-a-bcd7e269e6", UID:"ci-4012.0.0-a-bcd7e269e6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-4012.0.0-a-bcd7e269e6"}, FirstTimestamp:time.Date(2024, time.June, 25, 18, 44, 41, 757619927, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 18, 44, 41, 757619927, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-4012.0.0-a-bcd7e269e6"}': 'Post "https://10.200.8.42:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.42:6443: connect: connection refused'(may retry after sleeping)
Jun 25 18:44:41.944318 kubelet[3067]: I0625 18:44:41.944267 3067 topology_manager.go:215] "Topology Admit Handler" podUID="3af617eab2a141a9d19096e6443cc1bf" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-a-bcd7e269e6"
Jun 25 18:44:41.946373 kubelet[3067]: I0625 18:44:41.946346 3067 topology_manager.go:215] "Topology Admit Handler" podUID="52158b1cac27fcbb07c7ef803b924efe" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-a-bcd7e269e6"
Jun 25 18:44:41.948284 kubelet[3067]: I0625 18:44:41.948080 3067 topology_manager.go:215] "Topology Admit Handler" podUID="300d190c005cf66f17df8bd846f09089" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-a-bcd7e269e6"
Jun 25 18:44:41.968006 kubelet[3067]: E0625 18:44:41.967975 3067 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-bcd7e269e6?timeout=10s\": dial tcp 10.200.8.42:6443: connect: connection refused" interval="400ms"
Jun 25 18:44:42.067689 kubelet[3067]: I0625 18:44:42.067396 3067 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52158b1cac27fcbb07c7ef803b924efe-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-a-bcd7e269e6\" (UID: \"52158b1cac27fcbb07c7ef803b924efe\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-bcd7e269e6"
Jun 25 18:44:42.067689 kubelet[3067]: I0625 18:44:42.067463 3067 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/300d190c005cf66f17df8bd846f09089-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-a-bcd7e269e6\" (UID: \"300d190c005cf66f17df8bd846f09089\") " pod="kube-system/kube-scheduler-ci-4012.0.0-a-bcd7e269e6"
Jun 25 18:44:42.067689 kubelet[3067]: I0625 18:44:42.067500 3067 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3af617eab2a141a9d19096e6443cc1bf-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-a-bcd7e269e6\" (UID: \"3af617eab2a141a9d19096e6443cc1bf\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-bcd7e269e6"
Jun 25 18:44:42.067689 kubelet[3067]: I0625 18:44:42.067538 3067 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3af617eab2a141a9d19096e6443cc1bf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-a-bcd7e269e6\" (UID: \"3af617eab2a141a9d19096e6443cc1bf\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-bcd7e269e6"
Jun 25 18:44:42.067689 kubelet[3067]: I0625 18:44:42.067596 3067 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52158b1cac27fcbb07c7ef803b924efe-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-bcd7e269e6\" (UID: \"52158b1cac27fcbb07c7ef803b924efe\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-bcd7e269e6"
Jun 25 18:44:42.068073 kubelet[3067]: I0625 18:44:42.067632 3067 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52158b1cac27fcbb07c7ef803b924efe-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-a-bcd7e269e6\" (UID: \"52158b1cac27fcbb07c7ef803b924efe\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-bcd7e269e6"
Jun 25 18:44:42.068073 kubelet[3067]: I0625 18:44:42.067660 3067 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3af617eab2a141a9d19096e6443cc1bf-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-a-bcd7e269e6\" (UID: \"3af617eab2a141a9d19096e6443cc1bf\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-bcd7e269e6"
Jun 25 18:44:42.068073 kubelet[3067]: I0625 18:44:42.067693 3067 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/52158b1cac27fcbb07c7ef803b924efe-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-a-bcd7e269e6\" (UID: \"52158b1cac27fcbb07c7ef803b924efe\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-bcd7e269e6"
Jun 25 18:44:42.068073 kubelet[3067]: I0625 18:44:42.067726 3067 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52158b1cac27fcbb07c7ef803b924efe-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-bcd7e269e6\" (UID: \"52158b1cac27fcbb07c7ef803b924efe\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-bcd7e269e6"
Jun 25 18:44:42.071127 kubelet[3067]: I0625 18:44:42.070663 3067 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-bcd7e269e6"
Jun 25 18:44:42.071127 kubelet[3067]: E0625 18:44:42.071066 3067 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.42:6443/api/v1/nodes\": dial tcp 10.200.8.42:6443: connect: connection refused" node="ci-4012.0.0-a-bcd7e269e6"
Jun 25 18:44:42.252858 containerd[1829]: time="2024-06-25T18:44:42.252800183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-a-bcd7e269e6,Uid:3af617eab2a141a9d19096e6443cc1bf,Namespace:kube-system,Attempt:0,}"
Jun 25 18:44:42.257683 containerd[1829]: time="2024-06-25T18:44:42.257631685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-a-bcd7e269e6,Uid:52158b1cac27fcbb07c7ef803b924efe,Namespace:kube-system,Attempt:0,}"
Jun 25 18:44:42.258095 containerd[1829]: time="2024-06-25T18:44:42.257632985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-a-bcd7e269e6,Uid:300d190c005cf66f17df8bd846f09089,Namespace:kube-system,Attempt:0,}"
Jun 25 18:44:42.369004 kubelet[3067]: E0625 18:44:42.368890 3067 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-bcd7e269e6?timeout=10s\": dial tcp 10.200.8.42:6443: connect: connection refused" interval="800ms"
Jun 25 18:44:42.473038 kubelet[3067]: I0625 18:44:42.473008 3067 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-bcd7e269e6"
Jun 25 18:44:42.473356 kubelet[3067]: E0625 18:44:42.473335 3067 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.42:6443/api/v1/nodes\": dial tcp 10.200.8.42:6443: connect: connection refused" node="ci-4012.0.0-a-bcd7e269e6"
Jun 25 18:44:42.622501 kubelet[3067]: W0625 18:44:42.622335 3067 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-bcd7e269e6&limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused
Jun 25 18:44:42.622501 kubelet[3067]: E0625 18:44:42.622412 3067 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-bcd7e269e6&limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused
Jun 25 18:44:42.818179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2954153716.mount: Deactivated successfully.
Jun 25 18:44:42.832452 kubelet[3067]: W0625 18:44:42.832396 3067 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Jun 25 18:44:42.832815 kubelet[3067]: E0625 18:44:42.832462 3067 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Jun 25 18:44:42.845294 containerd[1829]: time="2024-06-25T18:44:42.845248471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:42.847511 containerd[1829]: time="2024-06-25T18:44:42.847441572Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jun 25 18:44:42.851364 containerd[1829]: time="2024-06-25T18:44:42.851332573Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:42.854426 containerd[1829]: time="2024-06-25T18:44:42.854394874Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:42.858269 containerd[1829]: time="2024-06-25T18:44:42.858218975Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:44:42.861632 containerd[1829]: time="2024-06-25T18:44:42.861600476Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:42.864178 containerd[1829]: time="2024-06-25T18:44:42.863901977Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:44:42.869371 containerd[1829]: time="2024-06-25T18:44:42.869340778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:42.870119 containerd[1829]: time="2024-06-25T18:44:42.870085479Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 611.955094ms" Jun 25 18:44:42.871637 containerd[1829]: time="2024-06-25T18:44:42.871604379Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 618.688996ms" Jun 25 18:44:42.874188 containerd[1829]: time="2024-06-25T18:44:42.874096680Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 616.372995ms" Jun 25 18:44:42.958698 kubelet[3067]: W0625 18:44:42.958637 3067 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: 
failed to list *v1.CSIDriver: Get "https://10.200.8.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Jun 25 18:44:42.958698 kubelet[3067]: E0625 18:44:42.958703 3067 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Jun 25 18:44:42.984584 kubelet[3067]: W0625 18:44:42.984542 3067 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Jun 25 18:44:42.984584 kubelet[3067]: E0625 18:44:42.984597 3067 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Jun 25 18:44:42.986129 kubelet[3067]: E0625 18:44:42.986102 3067 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.42:6443: connect: connection refused Jun 25 18:44:43.170266 kubelet[3067]: E0625 18:44:43.170231 3067 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-bcd7e269e6?timeout=10s\": dial tcp 10.200.8.42:6443: connect: connection refused" interval="1.6s" Jun 25 18:44:43.276604 kubelet[3067]: I0625 18:44:43.276543 3067 
kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:43.277145 kubelet[3067]: E0625 18:44:43.277116 3067 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.42:6443/api/v1/nodes\": dial tcp 10.200.8.42:6443: connect: connection refused" node="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:43.794253 containerd[1829]: time="2024-06-25T18:44:43.793941871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:43.794253 containerd[1829]: time="2024-06-25T18:44:43.794004171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:43.794253 containerd[1829]: time="2024-06-25T18:44:43.794030771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:43.794253 containerd[1829]: time="2024-06-25T18:44:43.794050771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:43.795407 containerd[1829]: time="2024-06-25T18:44:43.790899570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:43.795407 containerd[1829]: time="2024-06-25T18:44:43.794964871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:43.795407 containerd[1829]: time="2024-06-25T18:44:43.795013771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:43.796015 containerd[1829]: time="2024-06-25T18:44:43.795054671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:43.796015 containerd[1829]: time="2024-06-25T18:44:43.795099371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:43.796015 containerd[1829]: time="2024-06-25T18:44:43.795115671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:43.796015 containerd[1829]: time="2024-06-25T18:44:43.795033471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:43.796015 containerd[1829]: time="2024-06-25T18:44:43.795053171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:43.839058 systemd[1]: run-containerd-runc-k8s.io-1f3aa2dc49ca631d8d2044ce40dba33a13fb2fc4375c7f6de6bbfad0ea3d28f6-runc.QgDf50.mount: Deactivated successfully. 
Jun 25 18:44:43.923970 containerd[1829]: time="2024-06-25T18:44:43.923838112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-a-bcd7e269e6,Uid:3af617eab2a141a9d19096e6443cc1bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"499ae7130571b51c76d788ccaefc8c1484bfc3fc5aa54dc94c6132fdd83cc4d8\"" Jun 25 18:44:43.932185 containerd[1829]: time="2024-06-25T18:44:43.932027715Z" level=info msg="CreateContainer within sandbox \"499ae7130571b51c76d788ccaefc8c1484bfc3fc5aa54dc94c6132fdd83cc4d8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:44:43.934765 containerd[1829]: time="2024-06-25T18:44:43.934713016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-a-bcd7e269e6,Uid:300d190c005cf66f17df8bd846f09089,Namespace:kube-system,Attempt:0,} returns sandbox id \"608da6e0e796d3f8e3fb9ca514f8ad5b909d4cbe344eeff7e99dd17996a4b037\"" Jun 25 18:44:43.937951 containerd[1829]: time="2024-06-25T18:44:43.937896617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-a-bcd7e269e6,Uid:52158b1cac27fcbb07c7ef803b924efe,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f3aa2dc49ca631d8d2044ce40dba33a13fb2fc4375c7f6de6bbfad0ea3d28f6\"" Jun 25 18:44:43.939318 containerd[1829]: time="2024-06-25T18:44:43.939105417Z" level=info msg="CreateContainer within sandbox \"608da6e0e796d3f8e3fb9ca514f8ad5b909d4cbe344eeff7e99dd17996a4b037\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:44:43.946963 containerd[1829]: time="2024-06-25T18:44:43.946699619Z" level=info msg="CreateContainer within sandbox \"1f3aa2dc49ca631d8d2044ce40dba33a13fb2fc4375c7f6de6bbfad0ea3d28f6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:44:43.991544 containerd[1829]: time="2024-06-25T18:44:43.991479634Z" level=info msg="CreateContainer within sandbox 
\"499ae7130571b51c76d788ccaefc8c1484bfc3fc5aa54dc94c6132fdd83cc4d8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e46ccc50ef8a1adb5f36b3be9dbf3016c5d7580dc3d7dc3e646cd1ee4d5c67f7\"" Jun 25 18:44:43.992293 containerd[1829]: time="2024-06-25T18:44:43.992255034Z" level=info msg="StartContainer for \"e46ccc50ef8a1adb5f36b3be9dbf3016c5d7580dc3d7dc3e646cd1ee4d5c67f7\"" Jun 25 18:44:43.997422 containerd[1829]: time="2024-06-25T18:44:43.997054835Z" level=info msg="CreateContainer within sandbox \"608da6e0e796d3f8e3fb9ca514f8ad5b909d4cbe344eeff7e99dd17996a4b037\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e710dbaa369489a41a00230332fd79bc7f171d7039ed21509cf8f80ee622ab0a\"" Jun 25 18:44:43.999004 containerd[1829]: time="2024-06-25T18:44:43.997694335Z" level=info msg="StartContainer for \"e710dbaa369489a41a00230332fd79bc7f171d7039ed21509cf8f80ee622ab0a\"" Jun 25 18:44:44.016767 containerd[1829]: time="2024-06-25T18:44:44.016714141Z" level=info msg="CreateContainer within sandbox \"1f3aa2dc49ca631d8d2044ce40dba33a13fb2fc4375c7f6de6bbfad0ea3d28f6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2648e5ddb7c6be231c0ece8f17270145e817e507d564e6bd589dd3df85f44ca1\"" Jun 25 18:44:44.017541 containerd[1829]: time="2024-06-25T18:44:44.017459342Z" level=info msg="StartContainer for \"2648e5ddb7c6be231c0ece8f17270145e817e507d564e6bd589dd3df85f44ca1\"" Jun 25 18:44:44.122688 containerd[1829]: time="2024-06-25T18:44:44.122435075Z" level=info msg="StartContainer for \"e46ccc50ef8a1adb5f36b3be9dbf3016c5d7580dc3d7dc3e646cd1ee4d5c67f7\" returns successfully" Jun 25 18:44:44.154790 containerd[1829]: time="2024-06-25T18:44:44.154737385Z" level=info msg="StartContainer for \"e710dbaa369489a41a00230332fd79bc7f171d7039ed21509cf8f80ee622ab0a\" returns successfully" Jun 25 18:44:44.206726 containerd[1829]: time="2024-06-25T18:44:44.206671402Z" level=info msg="StartContainer for 
\"2648e5ddb7c6be231c0ece8f17270145e817e507d564e6bd589dd3df85f44ca1\" returns successfully" Jun 25 18:44:44.879456 kubelet[3067]: I0625 18:44:44.879183 3067 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:46.494664 kubelet[3067]: E0625 18:44:46.493936 3067 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4012.0.0-a-bcd7e269e6\" not found" node="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:46.555666 kubelet[3067]: I0625 18:44:46.554642 3067 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:46.754823 kubelet[3067]: I0625 18:44:46.754672 3067 apiserver.go:52] "Watching apiserver" Jun 25 18:44:46.766945 kubelet[3067]: I0625 18:44:46.766912 3067 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:44:46.891222 kubelet[3067]: E0625 18:44:46.891178 3067 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012.0.0-a-bcd7e269e6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:49.242313 systemd[1]: Reloading requested from client PID 3342 ('systemctl') (unit session-9.scope)... Jun 25 18:44:49.242328 systemd[1]: Reloading... Jun 25 18:44:49.329604 zram_generator::config[3382]: No configuration found. Jun 25 18:44:49.430992 kubelet[3067]: W0625 18:44:49.429989 3067 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:44:49.463300 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:44:49.546148 systemd[1]: Reloading finished in 303 ms. 
Jun 25 18:44:49.579525 kubelet[3067]: I0625 18:44:49.579305 3067 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:44:49.579338 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:49.588241 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:44:49.588868 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:49.596290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:49.700095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:49.711967 (kubelet)[3456]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:44:49.754440 kubelet[3456]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:44:49.754440 kubelet[3456]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:44:49.754440 kubelet[3456]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 18:44:49.755098 kubelet[3456]: I0625 18:44:49.754440 3456 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:44:49.759049 kubelet[3456]: I0625 18:44:49.759014 3456 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:44:49.759049 kubelet[3456]: I0625 18:44:49.759047 3456 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:44:49.759293 kubelet[3456]: I0625 18:44:49.759280 3456 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:44:49.761953 kubelet[3456]: I0625 18:44:49.760658 3456 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 18:44:49.761953 kubelet[3456]: I0625 18:44:49.761538 3456 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:44:49.770105 kubelet[3456]: I0625 18:44:49.770031 3456 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:44:49.770480 kubelet[3456]: I0625 18:44:49.770459 3456 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:44:49.770661 kubelet[3456]: I0625 18:44:49.770640 3456 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:44:49.770801 kubelet[3456]: I0625 18:44:49.770668 3456 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:44:49.770801 kubelet[3456]: I0625 18:44:49.770682 3456 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:44:49.770801 kubelet[3456]: 
I0625 18:44:49.770723 3456 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:49.771320 kubelet[3456]: I0625 18:44:49.770821 3456 kubelet.go:393] "Attempting to sync node with API server" Jun 25 18:44:49.771320 kubelet[3456]: I0625 18:44:49.770838 3456 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:44:49.771320 kubelet[3456]: I0625 18:44:49.770867 3456 kubelet.go:309] "Adding apiserver pod source" Jun 25 18:44:49.771320 kubelet[3456]: I0625 18:44:49.770885 3456 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:44:49.782599 kubelet[3456]: I0625 18:44:49.778179 3456 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:44:49.782599 kubelet[3456]: I0625 18:44:49.778745 3456 server.go:1232] "Started kubelet" Jun 25 18:44:49.782599 kubelet[3456]: I0625 18:44:49.780432 3456 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:44:49.786723 kubelet[3456]: I0625 18:44:49.786484 3456 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:44:49.787523 kubelet[3456]: I0625 18:44:49.787503 3456 server.go:462] "Adding debug handlers to kubelet server" Jun 25 18:44:49.789831 kubelet[3456]: E0625 18:44:49.789814 3456 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 18:44:49.789946 kubelet[3456]: E0625 18:44:49.789935 3456 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:44:49.790827 kubelet[3456]: I0625 18:44:49.790807 3456 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 18:44:49.791024 kubelet[3456]: I0625 18:44:49.791002 3456 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:44:49.792675 kubelet[3456]: I0625 18:44:49.792655 3456 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:44:49.793266 kubelet[3456]: I0625 18:44:49.793251 3456 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:44:49.793736 kubelet[3456]: I0625 18:44:49.793718 3456 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:44:49.803793 kubelet[3456]: I0625 18:44:49.801950 3456 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:44:49.803793 kubelet[3456]: I0625 18:44:49.803388 3456 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:44:49.803793 kubelet[3456]: I0625 18:44:49.803409 3456 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:44:49.803793 kubelet[3456]: I0625 18:44:49.803429 3456 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 18:44:49.803793 kubelet[3456]: E0625 18:44:49.803496 3456 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:44:49.896006 kubelet[3456]: I0625 18:44:49.895979 3456 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:49.903805 kubelet[3456]: E0625 18:44:49.903783 3456 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 18:44:49.908109 kubelet[3456]: I0625 18:44:49.908078 3456 kubelet_node_status.go:108] "Node was previously registered" node="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:49.908228 kubelet[3456]: I0625 18:44:49.908155 3456 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:49.933628 kubelet[3456]: I0625 18:44:49.933593 3456 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:44:49.933628 kubelet[3456]: I0625 18:44:49.933620 3456 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:44:49.933628 kubelet[3456]: I0625 18:44:49.933642 3456 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:49.933878 kubelet[3456]: I0625 18:44:49.933847 3456 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:44:49.933878 kubelet[3456]: I0625 18:44:49.933874 3456 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:44:49.933953 kubelet[3456]: I0625 18:44:49.933884 3456 policy_none.go:49] "None policy: Start" Jun 25 18:44:49.934645 kubelet[3456]: I0625 18:44:49.934622 3456 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 
18:44:49.934757 kubelet[3456]: I0625 18:44:49.934653 3456 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:44:49.934882 kubelet[3456]: I0625 18:44:49.934859 3456 state_mem.go:75] "Updated machine memory state" Jun 25 18:44:49.936908 kubelet[3456]: I0625 18:44:49.935908 3456 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:44:49.936908 kubelet[3456]: I0625 18:44:49.936821 3456 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:44:50.106101 kubelet[3456]: I0625 18:44:50.105036 3456 topology_manager.go:215] "Topology Admit Handler" podUID="3af617eab2a141a9d19096e6443cc1bf" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:50.106101 kubelet[3456]: I0625 18:44:50.105238 3456 topology_manager.go:215] "Topology Admit Handler" podUID="52158b1cac27fcbb07c7ef803b924efe" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:50.107235 kubelet[3456]: I0625 18:44:50.107194 3456 topology_manager.go:215] "Topology Admit Handler" podUID="300d190c005cf66f17df8bd846f09089" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:50.112714 kubelet[3456]: W0625 18:44:50.112691 3456 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:44:50.114668 kubelet[3456]: W0625 18:44:50.114524 3456 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:44:50.117466 kubelet[3456]: W0625 18:44:50.117445 3456 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:44:50.117543 kubelet[3456]: E0625 18:44:50.117513 3456 
kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4012.0.0-a-bcd7e269e6\" already exists" pod="kube-system/kube-controller-manager-ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:50.196738 kubelet[3456]: I0625 18:44:50.196599 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3af617eab2a141a9d19096e6443cc1bf-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-a-bcd7e269e6\" (UID: \"3af617eab2a141a9d19096e6443cc1bf\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:50.196738 kubelet[3456]: I0625 18:44:50.196671 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3af617eab2a141a9d19096e6443cc1bf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-a-bcd7e269e6\" (UID: \"3af617eab2a141a9d19096e6443cc1bf\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:50.196738 kubelet[3456]: I0625 18:44:50.196738 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52158b1cac27fcbb07c7ef803b924efe-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-bcd7e269e6\" (UID: \"52158b1cac27fcbb07c7ef803b924efe\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:50.197196 kubelet[3456]: I0625 18:44:50.196804 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/300d190c005cf66f17df8bd846f09089-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-a-bcd7e269e6\" (UID: \"300d190c005cf66f17df8bd846f09089\") " pod="kube-system/kube-scheduler-ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:50.197196 kubelet[3456]: I0625 18:44:50.196858 3456 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52158b1cac27fcbb07c7ef803b924efe-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-a-bcd7e269e6\" (UID: \"52158b1cac27fcbb07c7ef803b924efe\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:50.197196 kubelet[3456]: I0625 18:44:50.196909 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3af617eab2a141a9d19096e6443cc1bf-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-a-bcd7e269e6\" (UID: \"3af617eab2a141a9d19096e6443cc1bf\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:50.197196 kubelet[3456]: I0625 18:44:50.196972 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/52158b1cac27fcbb07c7ef803b924efe-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-a-bcd7e269e6\" (UID: \"52158b1cac27fcbb07c7ef803b924efe\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:50.197196 kubelet[3456]: I0625 18:44:50.197022 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52158b1cac27fcbb07c7ef803b924efe-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-bcd7e269e6\" (UID: \"52158b1cac27fcbb07c7ef803b924efe\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:50.197424 kubelet[3456]: I0625 18:44:50.197056 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52158b1cac27fcbb07c7ef803b924efe-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-a-bcd7e269e6\" (UID: 
\"52158b1cac27fcbb07c7ef803b924efe\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-bcd7e269e6" Jun 25 18:44:50.771730 kubelet[3456]: I0625 18:44:50.771696 3456 apiserver.go:52] "Watching apiserver" Jun 25 18:44:50.793769 kubelet[3456]: I0625 18:44:50.793731 3456 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:44:50.889139 kubelet[3456]: I0625 18:44:50.888971 3456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4012.0.0-a-bcd7e269e6" podStartSLOduration=0.888909361 podCreationTimestamp="2024-06-25 18:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:50.888794461 +0000 UTC m=+1.173024243" watchObservedRunningTime="2024-06-25 18:44:50.888909361 +0000 UTC m=+1.173139143" Jun 25 18:44:50.895181 kubelet[3456]: I0625 18:44:50.894991 3456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4012.0.0-a-bcd7e269e6" podStartSLOduration=0.894954068 podCreationTimestamp="2024-06-25 18:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:50.894516067 +0000 UTC m=+1.178745949" watchObservedRunningTime="2024-06-25 18:44:50.894954068 +0000 UTC m=+1.179183850" Jun 25 18:44:50.902037 kubelet[3456]: I0625 18:44:50.901867 3456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012.0.0-a-bcd7e269e6" podStartSLOduration=1.901828376 podCreationTimestamp="2024-06-25 18:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:50.901523475 +0000 UTC m=+1.185753357" watchObservedRunningTime="2024-06-25 18:44:50.901828376 +0000 UTC m=+1.186058158" Jun 
25 18:44:55.322715 sudo[2547]: pam_unix(sudo:session): session closed for user root Jun 25 18:44:55.426060 sshd[2543]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:55.429626 systemd[1]: sshd@6-10.200.8.42:22-10.200.16.10:35030.service: Deactivated successfully. Jun 25 18:44:55.433604 systemd-logind[1805]: Session 9 logged out. Waiting for processes to exit. Jun 25 18:44:55.434005 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 18:44:55.436213 systemd-logind[1805]: Removed session 9. Jun 25 18:45:02.925574 kubelet[3456]: I0625 18:45:02.925485 3456 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 18:45:02.929615 containerd[1829]: time="2024-06-25T18:45:02.928448814Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 18:45:02.934155 kubelet[3456]: I0625 18:45:02.933714 3456 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 18:45:03.677018 kubelet[3456]: I0625 18:45:03.676043 3456 topology_manager.go:215] "Topology Admit Handler" podUID="4ae7ecbf-1973-45c8-9109-13f8635ec1e8" podNamespace="kube-system" podName="kube-proxy-4hjf2" Jun 25 18:45:03.796078 kubelet[3456]: I0625 18:45:03.796039 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4ae7ecbf-1973-45c8-9109-13f8635ec1e8-kube-proxy\") pod \"kube-proxy-4hjf2\" (UID: \"4ae7ecbf-1973-45c8-9109-13f8635ec1e8\") " pod="kube-system/kube-proxy-4hjf2" Jun 25 18:45:03.796272 kubelet[3456]: I0625 18:45:03.796130 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wwq5\" (UniqueName: \"kubernetes.io/projected/4ae7ecbf-1973-45c8-9109-13f8635ec1e8-kube-api-access-4wwq5\") pod \"kube-proxy-4hjf2\" (UID: \"4ae7ecbf-1973-45c8-9109-13f8635ec1e8\") " 
pod="kube-system/kube-proxy-4hjf2" Jun 25 18:45:03.796272 kubelet[3456]: I0625 18:45:03.796169 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ae7ecbf-1973-45c8-9109-13f8635ec1e8-xtables-lock\") pod \"kube-proxy-4hjf2\" (UID: \"4ae7ecbf-1973-45c8-9109-13f8635ec1e8\") " pod="kube-system/kube-proxy-4hjf2" Jun 25 18:45:03.796272 kubelet[3456]: I0625 18:45:03.796192 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ae7ecbf-1973-45c8-9109-13f8635ec1e8-lib-modules\") pod \"kube-proxy-4hjf2\" (UID: \"4ae7ecbf-1973-45c8-9109-13f8635ec1e8\") " pod="kube-system/kube-proxy-4hjf2" Jun 25 18:45:03.949383 kubelet[3456]: I0625 18:45:03.949235 3456 topology_manager.go:215] "Topology Admit Handler" podUID="f5d21d28-9493-4888-842d-d6974c892614" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-kllfl" Jun 25 18:45:03.980276 containerd[1829]: time="2024-06-25T18:45:03.980225991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4hjf2,Uid:4ae7ecbf-1973-45c8-9109-13f8635ec1e8,Namespace:kube-system,Attempt:0,}" Jun 25 18:45:04.023988 containerd[1829]: time="2024-06-25T18:45:04.023742990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:04.023988 containerd[1829]: time="2024-06-25T18:45:04.023802990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:04.023988 containerd[1829]: time="2024-06-25T18:45:04.023828590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:04.023988 containerd[1829]: time="2024-06-25T18:45:04.023848190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:04.050944 systemd[1]: run-containerd-runc-k8s.io-aba6029e5a90cb7fc8180c0a14d5c1bd19e7c57b0ffd94b9d1f98c696a9704c7-runc.d5I6QK.mount: Deactivated successfully. Jun 25 18:45:04.070038 containerd[1829]: time="2024-06-25T18:45:04.069996996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4hjf2,Uid:4ae7ecbf-1973-45c8-9109-13f8635ec1e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"aba6029e5a90cb7fc8180c0a14d5c1bd19e7c57b0ffd94b9d1f98c696a9704c7\"" Jun 25 18:45:04.073263 containerd[1829]: time="2024-06-25T18:45:04.073222303Z" level=info msg="CreateContainer within sandbox \"aba6029e5a90cb7fc8180c0a14d5c1bd19e7c57b0ffd94b9d1f98c696a9704c7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 18:45:04.097616 kubelet[3456]: I0625 18:45:04.097549 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z9pp\" (UniqueName: \"kubernetes.io/projected/f5d21d28-9493-4888-842d-d6974c892614-kube-api-access-5z9pp\") pod \"tigera-operator-76c4974c85-kllfl\" (UID: \"f5d21d28-9493-4888-842d-d6974c892614\") " pod="tigera-operator/tigera-operator-76c4974c85-kllfl" Jun 25 18:45:04.097791 kubelet[3456]: I0625 18:45:04.097632 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f5d21d28-9493-4888-842d-d6974c892614-var-lib-calico\") pod \"tigera-operator-76c4974c85-kllfl\" (UID: \"f5d21d28-9493-4888-842d-d6974c892614\") " pod="tigera-operator/tigera-operator-76c4974c85-kllfl" Jun 25 18:45:04.107602 containerd[1829]: time="2024-06-25T18:45:04.107507081Z" level=info msg="CreateContainer within sandbox 
\"aba6029e5a90cb7fc8180c0a14d5c1bd19e7c57b0ffd94b9d1f98c696a9704c7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b61e56ebd61eda3f80ca30d82fb496ace5451c68324c3cbcdae08b2dcb733f2\"" Jun 25 18:45:04.108260 containerd[1829]: time="2024-06-25T18:45:04.108214983Z" level=info msg="StartContainer for \"0b61e56ebd61eda3f80ca30d82fb496ace5451c68324c3cbcdae08b2dcb733f2\"" Jun 25 18:45:04.161808 containerd[1829]: time="2024-06-25T18:45:04.161761205Z" level=info msg="StartContainer for \"0b61e56ebd61eda3f80ca30d82fb496ace5451c68324c3cbcdae08b2dcb733f2\" returns successfully" Jun 25 18:45:04.257964 containerd[1829]: time="2024-06-25T18:45:04.257695424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-kllfl,Uid:f5d21d28-9493-4888-842d-d6974c892614,Namespace:tigera-operator,Attempt:0,}" Jun 25 18:45:04.296176 containerd[1829]: time="2024-06-25T18:45:04.295810511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:04.296176 containerd[1829]: time="2024-06-25T18:45:04.295876311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:04.296176 containerd[1829]: time="2024-06-25T18:45:04.295974112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:04.296176 containerd[1829]: time="2024-06-25T18:45:04.296048812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:04.361157 containerd[1829]: time="2024-06-25T18:45:04.361092360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-kllfl,Uid:f5d21d28-9493-4888-842d-d6974c892614,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e5627da17dc02ef9a0a84a51e0c0139ed72e0247eeffcddcb16e52a6a2f750c5\"" Jun 25 18:45:04.363537 containerd[1829]: time="2024-06-25T18:45:04.363502466Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 18:45:06.024520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount906144515.mount: Deactivated successfully. Jun 25 18:45:07.048226 containerd[1829]: time="2024-06-25T18:45:07.048173698Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:07.051670 containerd[1829]: time="2024-06-25T18:45:07.051582105Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076076" Jun 25 18:45:07.055268 containerd[1829]: time="2024-06-25T18:45:07.055210914Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:07.060445 containerd[1829]: time="2024-06-25T18:45:07.060351525Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:07.061358 containerd[1829]: time="2024-06-25T18:45:07.061120927Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size 
\"22070263\" in 2.696413858s" Jun 25 18:45:07.061358 containerd[1829]: time="2024-06-25T18:45:07.061162727Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 18:45:07.063656 containerd[1829]: time="2024-06-25T18:45:07.063455332Z" level=info msg="CreateContainer within sandbox \"e5627da17dc02ef9a0a84a51e0c0139ed72e0247eeffcddcb16e52a6a2f750c5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 18:45:07.091394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3869650068.mount: Deactivated successfully. Jun 25 18:45:07.095465 containerd[1829]: time="2024-06-25T18:45:07.095426006Z" level=info msg="CreateContainer within sandbox \"e5627da17dc02ef9a0a84a51e0c0139ed72e0247eeffcddcb16e52a6a2f750c5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ae6a6bed9fbfd94b6320918f28c817fe6428557a1e162fbc7e4d79a48834c5b6\"" Jun 25 18:45:07.095989 containerd[1829]: time="2024-06-25T18:45:07.095962407Z" level=info msg="StartContainer for \"ae6a6bed9fbfd94b6320918f28c817fe6428557a1e162fbc7e4d79a48834c5b6\"" Jun 25 18:45:07.153082 containerd[1829]: time="2024-06-25T18:45:07.151602934Z" level=info msg="StartContainer for \"ae6a6bed9fbfd94b6320918f28c817fe6428557a1e162fbc7e4d79a48834c5b6\" returns successfully" Jun 25 18:45:07.920429 kubelet[3456]: I0625 18:45:07.920388 3456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4hjf2" podStartSLOduration=4.92034479 podCreationTimestamp="2024-06-25 18:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:45:04.917860932 +0000 UTC m=+15.202090814" watchObservedRunningTime="2024-06-25 18:45:07.92034479 +0000 UTC m=+18.204574672" Jun 25 18:45:09.816132 kubelet[3456]: I0625 18:45:09.815973 3456 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-kllfl" podStartSLOduration=4.116977954 podCreationTimestamp="2024-06-25 18:45:03 +0000 UTC" firstStartedPulling="2024-06-25 18:45:04.362845564 +0000 UTC m=+14.647075346" lastFinishedPulling="2024-06-25 18:45:07.061790529 +0000 UTC m=+17.346020311" observedRunningTime="2024-06-25 18:45:07.921262992 +0000 UTC m=+18.205492774" watchObservedRunningTime="2024-06-25 18:45:09.815922919 +0000 UTC m=+20.100152701" Jun 25 18:45:10.108030 kubelet[3456]: I0625 18:45:10.107894 3456 topology_manager.go:215] "Topology Admit Handler" podUID="e8a4125a-0f75-4bd3-bb8f-3c11db445f16" podNamespace="calico-system" podName="calico-typha-5d6564b7b-zshk4" Jun 25 18:45:10.136399 kubelet[3456]: I0625 18:45:10.136359 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e8a4125a-0f75-4bd3-bb8f-3c11db445f16-typha-certs\") pod \"calico-typha-5d6564b7b-zshk4\" (UID: \"e8a4125a-0f75-4bd3-bb8f-3c11db445f16\") " pod="calico-system/calico-typha-5d6564b7b-zshk4" Jun 25 18:45:10.136692 kubelet[3456]: I0625 18:45:10.136416 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8a4125a-0f75-4bd3-bb8f-3c11db445f16-tigera-ca-bundle\") pod \"calico-typha-5d6564b7b-zshk4\" (UID: \"e8a4125a-0f75-4bd3-bb8f-3c11db445f16\") " pod="calico-system/calico-typha-5d6564b7b-zshk4" Jun 25 18:45:10.136692 kubelet[3456]: I0625 18:45:10.136448 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w675l\" (UniqueName: \"kubernetes.io/projected/e8a4125a-0f75-4bd3-bb8f-3c11db445f16-kube-api-access-w675l\") pod \"calico-typha-5d6564b7b-zshk4\" (UID: \"e8a4125a-0f75-4bd3-bb8f-3c11db445f16\") " pod="calico-system/calico-typha-5d6564b7b-zshk4" Jun 25 
18:45:10.198217 kubelet[3456]: I0625 18:45:10.198178 3456 topology_manager.go:215] "Topology Admit Handler" podUID="3fde57fd-85e3-4930-a4d4-467bc7accec7" podNamespace="calico-system" podName="calico-node-5cc7z" Jun 25 18:45:10.322683 kubelet[3456]: I0625 18:45:10.322644 3456 topology_manager.go:215] "Topology Admit Handler" podUID="1b8cd264-e868-4cd7-89a2-2e1d11e52069" podNamespace="calico-system" podName="csi-node-driver-9x5m4" Jun 25 18:45:10.323028 kubelet[3456]: E0625 18:45:10.322988 3456 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9x5m4" podUID="1b8cd264-e868-4cd7-89a2-2e1d11e52069" Jun 25 18:45:10.337507 kubelet[3456]: I0625 18:45:10.337446 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-xtables-lock\") pod \"calico-node-5cc7z\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " pod="calico-system/calico-node-5cc7z" Jun 25 18:45:10.337878 kubelet[3456]: I0625 18:45:10.337627 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-lib-modules\") pod \"calico-node-5cc7z\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " pod="calico-system/calico-node-5cc7z" Jun 25 18:45:10.337878 kubelet[3456]: I0625 18:45:10.337761 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3fde57fd-85e3-4930-a4d4-467bc7accec7-node-certs\") pod \"calico-node-5cc7z\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " pod="calico-system/calico-node-5cc7z" Jun 25 18:45:10.337878 kubelet[3456]: I0625 
18:45:10.337797 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-cni-bin-dir\") pod \"calico-node-5cc7z\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " pod="calico-system/calico-node-5cc7z" Jun 25 18:45:10.338340 kubelet[3456]: I0625 18:45:10.337928 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-var-lib-calico\") pod \"calico-node-5cc7z\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " pod="calico-system/calico-node-5cc7z" Jun 25 18:45:10.338340 kubelet[3456]: I0625 18:45:10.337961 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmb72\" (UniqueName: \"kubernetes.io/projected/3fde57fd-85e3-4930-a4d4-467bc7accec7-kube-api-access-xmb72\") pod \"calico-node-5cc7z\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " pod="calico-system/calico-node-5cc7z" Jun 25 18:45:10.338340 kubelet[3456]: I0625 18:45:10.338091 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fde57fd-85e3-4930-a4d4-467bc7accec7-tigera-ca-bundle\") pod \"calico-node-5cc7z\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " pod="calico-system/calico-node-5cc7z" Jun 25 18:45:10.338340 kubelet[3456]: I0625 18:45:10.338125 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-cni-net-dir\") pod \"calico-node-5cc7z\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " pod="calico-system/calico-node-5cc7z" Jun 25 18:45:10.338340 kubelet[3456]: I0625 18:45:10.338264 3456 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-cni-log-dir\") pod \"calico-node-5cc7z\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " pod="calico-system/calico-node-5cc7z" Jun 25 18:45:10.339228 kubelet[3456]: I0625 18:45:10.338338 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-var-run-calico\") pod \"calico-node-5cc7z\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " pod="calico-system/calico-node-5cc7z" Jun 25 18:45:10.339228 kubelet[3456]: I0625 18:45:10.338528 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-policysync\") pod \"calico-node-5cc7z\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " pod="calico-system/calico-node-5cc7z" Jun 25 18:45:10.339228 kubelet[3456]: I0625 18:45:10.338579 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-flexvol-driver-host\") pod \"calico-node-5cc7z\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " pod="calico-system/calico-node-5cc7z" Jun 25 18:45:10.415459 containerd[1829]: time="2024-06-25T18:45:10.415418688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d6564b7b-zshk4,Uid:e8a4125a-0f75-4bd3-bb8f-3c11db445f16,Namespace:calico-system,Attempt:0,}" Jun 25 18:45:10.439701 kubelet[3456]: I0625 18:45:10.439668 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b8cd264-e868-4cd7-89a2-2e1d11e52069-kubelet-dir\") pod \"csi-node-driver-9x5m4\" 
(UID: \"1b8cd264-e868-4cd7-89a2-2e1d11e52069\") " pod="calico-system/csi-node-driver-9x5m4" Jun 25 18:45:10.439924 kubelet[3456]: I0625 18:45:10.439714 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98mqt\" (UniqueName: \"kubernetes.io/projected/1b8cd264-e868-4cd7-89a2-2e1d11e52069-kube-api-access-98mqt\") pod \"csi-node-driver-9x5m4\" (UID: \"1b8cd264-e868-4cd7-89a2-2e1d11e52069\") " pod="calico-system/csi-node-driver-9x5m4" Jun 25 18:45:10.439924 kubelet[3456]: I0625 18:45:10.439741 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1b8cd264-e868-4cd7-89a2-2e1d11e52069-varrun\") pod \"csi-node-driver-9x5m4\" (UID: \"1b8cd264-e868-4cd7-89a2-2e1d11e52069\") " pod="calico-system/csi-node-driver-9x5m4" Jun 25 18:45:10.439924 kubelet[3456]: I0625 18:45:10.439805 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1b8cd264-e868-4cd7-89a2-2e1d11e52069-registration-dir\") pod \"csi-node-driver-9x5m4\" (UID: \"1b8cd264-e868-4cd7-89a2-2e1d11e52069\") " pod="calico-system/csi-node-driver-9x5m4" Jun 25 18:45:10.439924 kubelet[3456]: I0625 18:45:10.439848 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1b8cd264-e868-4cd7-89a2-2e1d11e52069-socket-dir\") pod \"csi-node-driver-9x5m4\" (UID: \"1b8cd264-e868-4cd7-89a2-2e1d11e52069\") " pod="calico-system/csi-node-driver-9x5m4" Jun 25 18:45:10.444815 kubelet[3456]: E0625 18:45:10.444791 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.447657 kubelet[3456]: W0625 18:45:10.445012 3456 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.447657 kubelet[3456]: E0625 18:45:10.445056 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:10.448005 kubelet[3456]: E0625 18:45:10.447989 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.448087 kubelet[3456]: W0625 18:45:10.448076 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.448661 kubelet[3456]: E0625 18:45:10.448631 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:10.449795 kubelet[3456]: E0625 18:45:10.449781 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.449905 kubelet[3456]: W0625 18:45:10.449893 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.449990 kubelet[3456]: E0625 18:45:10.449981 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:10.450239 kubelet[3456]: E0625 18:45:10.450228 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.450607 kubelet[3456]: W0625 18:45:10.450593 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.450697 kubelet[3456]: E0625 18:45:10.450689 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:10.451525 kubelet[3456]: E0625 18:45:10.451492 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.451652 kubelet[3456]: W0625 18:45:10.451638 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.451743 kubelet[3456]: E0625 18:45:10.451732 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:10.454889 kubelet[3456]: E0625 18:45:10.454875 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.454985 kubelet[3456]: W0625 18:45:10.454975 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.455238 kubelet[3456]: E0625 18:45:10.455054 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:10.459763 kubelet[3456]: E0625 18:45:10.459750 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.459863 kubelet[3456]: W0625 18:45:10.459852 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.459938 kubelet[3456]: E0625 18:45:10.459931 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:10.508838 containerd[1829]: time="2024-06-25T18:45:10.508792502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5cc7z,Uid:3fde57fd-85e3-4930-a4d4-467bc7accec7,Namespace:calico-system,Attempt:0,}" Jun 25 18:45:10.540434 kubelet[3456]: E0625 18:45:10.540397 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.540806 kubelet[3456]: W0625 18:45:10.540512 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.540806 kubelet[3456]: E0625 18:45:10.540540 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:10.541227 kubelet[3456]: E0625 18:45:10.541054 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.541227 kubelet[3456]: W0625 18:45:10.541066 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.541227 kubelet[3456]: E0625 18:45:10.541085 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:10.541641 kubelet[3456]: E0625 18:45:10.541450 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.541641 kubelet[3456]: W0625 18:45:10.541461 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.542155 kubelet[3456]: E0625 18:45:10.541486 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:10.542155 kubelet[3456]: E0625 18:45:10.542021 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.542155 kubelet[3456]: W0625 18:45:10.542096 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.542155 kubelet[3456]: E0625 18:45:10.542112 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:10.542660 kubelet[3456]: E0625 18:45:10.542591 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.542660 kubelet[3456]: W0625 18:45:10.542604 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.542660 kubelet[3456]: E0625 18:45:10.542628 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:10.543284 kubelet[3456]: E0625 18:45:10.543073 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.543284 kubelet[3456]: W0625 18:45:10.543084 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.543284 kubelet[3456]: E0625 18:45:10.543101 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:10.543623 kubelet[3456]: E0625 18:45:10.543473 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.543623 kubelet[3456]: W0625 18:45:10.543484 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.543623 kubelet[3456]: E0625 18:45:10.543601 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:10.544098 kubelet[3456]: E0625 18:45:10.543991 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.544098 kubelet[3456]: W0625 18:45:10.544004 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.544098 kubelet[3456]: E0625 18:45:10.544022 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:10.544433 kubelet[3456]: E0625 18:45:10.544348 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.544433 kubelet[3456]: W0625 18:45:10.544358 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.544433 kubelet[3456]: E0625 18:45:10.544381 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:10.544922 kubelet[3456]: E0625 18:45:10.544770 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.544922 kubelet[3456]: W0625 18:45:10.544781 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.545219 kubelet[3456]: E0625 18:45:10.545061 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:10.545219 kubelet[3456]: E0625 18:45:10.545141 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.545219 kubelet[3456]: W0625 18:45:10.545149 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.545462 kubelet[3456]: E0625 18:45:10.545372 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:10.545671 kubelet[3456]: E0625 18:45:10.545553 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.545671 kubelet[3456]: W0625 18:45:10.545600 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.545868 kubelet[3456]: E0625 18:45:10.545783 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:10.546051 kubelet[3456]: E0625 18:45:10.545951 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.546051 kubelet[3456]: W0625 18:45:10.545960 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.546051 kubelet[3456]: E0625 18:45:10.545992 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:10.546377 kubelet[3456]: E0625 18:45:10.546302 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.546377 kubelet[3456]: W0625 18:45:10.546312 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.546591 kubelet[3456]: E0625 18:45:10.546489 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:10.546765 kubelet[3456]: E0625 18:45:10.546678 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.546765 kubelet[3456]: W0625 18:45:10.546689 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.546952 kubelet[3456]: E0625 18:45:10.546859 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:10.547086 kubelet[3456]: E0625 18:45:10.547033 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.547086 kubelet[3456]: W0625 18:45:10.547043 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.547086 kubelet[3456]: E0625 18:45:10.547121 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:10.547520 kubelet[3456]: E0625 18:45:10.547393 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.547520 kubelet[3456]: W0625 18:45:10.547403 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.547520 kubelet[3456]: E0625 18:45:10.547471 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:10.547959 kubelet[3456]: E0625 18:45:10.547817 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.547959 kubelet[3456]: W0625 18:45:10.547829 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.547959 kubelet[3456]: E0625 18:45:10.547850 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:10.548344 kubelet[3456]: E0625 18:45:10.548227 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.548344 kubelet[3456]: W0625 18:45:10.548237 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.548578 kubelet[3456]: E0625 18:45:10.548453 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:10.548710 kubelet[3456]: E0625 18:45:10.548686 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.548710 kubelet[3456]: W0625 18:45:10.548697 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.548977 kubelet[3456]: E0625 18:45:10.548875 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:10.549201 kubelet[3456]: E0625 18:45:10.549058 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.549201 kubelet[3456]: W0625 18:45:10.549069 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.549474 kubelet[3456]: E0625 18:45:10.549372 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:10.549691 kubelet[3456]: E0625 18:45:10.549609 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.549691 kubelet[3456]: W0625 18:45:10.549621 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.550888 kubelet[3456]: E0625 18:45:10.549795 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:10.551159 kubelet[3456]: E0625 18:45:10.551148 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.552004 kubelet[3456]: W0625 18:45:10.551251 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.552004 kubelet[3456]: E0625 18:45:10.551351 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:10.552473 kubelet[3456]: E0625 18:45:10.552388 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.552473 kubelet[3456]: W0625 18:45:10.552402 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.552473 kubelet[3456]: E0625 18:45:10.552426 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:10.553226 kubelet[3456]: E0625 18:45:10.553207 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.553226 kubelet[3456]: W0625 18:45:10.553225 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.555665 kubelet[3456]: E0625 18:45:10.553243 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:10.564774 kubelet[3456]: E0625 18:45:10.564669 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:10.564774 kubelet[3456]: W0625 18:45:10.564692 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:10.564774 kubelet[3456]: E0625 18:45:10.564718 3456 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:11.805954 kubelet[3456]: E0625 18:45:11.805440 3456 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9x5m4" podUID="1b8cd264-e868-4cd7-89a2-2e1d11e52069" Jun 25 18:45:12.081296 containerd[1829]: time="2024-06-25T18:45:12.080761188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:12.081875 containerd[1829]: time="2024-06-25T18:45:12.080840788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:12.081875 containerd[1829]: time="2024-06-25T18:45:12.080867288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:12.081875 containerd[1829]: time="2024-06-25T18:45:12.080885788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:12.115334 containerd[1829]: time="2024-06-25T18:45:12.114942747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:12.115334 containerd[1829]: time="2024-06-25T18:45:12.115085547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:12.115334 containerd[1829]: time="2024-06-25T18:45:12.115143847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:12.115334 containerd[1829]: time="2024-06-25T18:45:12.115183547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:12.177949 containerd[1829]: time="2024-06-25T18:45:12.177909856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5cc7z,Uid:3fde57fd-85e3-4930-a4d4-467bc7accec7,Namespace:calico-system,Attempt:0,} returns sandbox id \"5a60bb87ce600832c93a960d5dda635f8c2b3d27afde0f38afab76dfc0d453c8\"" Jun 25 18:45:12.181239 containerd[1829]: time="2024-06-25T18:45:12.180701861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 18:45:12.294640 containerd[1829]: time="2024-06-25T18:45:12.294460557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d6564b7b-zshk4,Uid:e8a4125a-0f75-4bd3-bb8f-3c11db445f16,Namespace:calico-system,Attempt:0,} returns sandbox id \"f9884837411d58fef3253ce016b5dd8115fab006addda9a45105db0ca30d2e60\"" Jun 25 18:45:13.805609 kubelet[3456]: E0625 18:45:13.805562 3456 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9x5m4" podUID="1b8cd264-e868-4cd7-89a2-2e1d11e52069" Jun 25 18:45:14.395839 containerd[1829]: time="2024-06-25T18:45:14.395778287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:14.398593 containerd[1829]: time="2024-06-25T18:45:14.398509691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 18:45:14.404770 containerd[1829]: time="2024-06-25T18:45:14.404005801Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:14.408060 containerd[1829]: 
time="2024-06-25T18:45:14.407969408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:14.410108 containerd[1829]: time="2024-06-25T18:45:14.409917011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 2.22916695s" Jun 25 18:45:14.410108 containerd[1829]: time="2024-06-25T18:45:14.409984311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 18:45:14.413317 containerd[1829]: time="2024-06-25T18:45:14.412371015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 18:45:14.414161 containerd[1829]: time="2024-06-25T18:45:14.414127918Z" level=info msg="CreateContainer within sandbox \"5a60bb87ce600832c93a960d5dda635f8c2b3d27afde0f38afab76dfc0d453c8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:45:14.463886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount517257892.mount: Deactivated successfully. 
Jun 25 18:45:14.474092 containerd[1829]: time="2024-06-25T18:45:14.473675721Z" level=info msg="CreateContainer within sandbox \"5a60bb87ce600832c93a960d5dda635f8c2b3d27afde0f38afab76dfc0d453c8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8\"" Jun 25 18:45:14.474556 containerd[1829]: time="2024-06-25T18:45:14.474486423Z" level=info msg="StartContainer for \"e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8\"" Jun 25 18:45:14.541625 containerd[1829]: time="2024-06-25T18:45:14.541292138Z" level=info msg="StartContainer for \"e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8\" returns successfully" Jun 25 18:45:14.581164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8-rootfs.mount: Deactivated successfully. Jun 25 18:45:16.346161 containerd[1829]: time="2024-06-25T18:45:14.940480828Z" level=info msg="StopContainer for \"e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8\" with timeout 5 (s)" Jun 25 18:45:16.348865 kubelet[3456]: E0625 18:45:15.803870 3456 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9x5m4" podUID="1b8cd264-e868-4cd7-89a2-2e1d11e52069" Jun 25 18:45:16.597493 containerd[1829]: time="2024-06-25T18:45:16.597234089Z" level=info msg="Stop container \"e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8\" with signal terminated" Jun 25 18:45:16.597977 containerd[1829]: time="2024-06-25T18:45:16.597733790Z" level=info msg="shim disconnected" id=e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8 namespace=k8s.io Jun 25 18:45:16.597977 containerd[1829]: time="2024-06-25T18:45:16.597849590Z" 
level=warning msg="cleaning up after shim disconnected" id=e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8 namespace=k8s.io Jun 25 18:45:16.597977 containerd[1829]: time="2024-06-25T18:45:16.597865790Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:45:16.615878 containerd[1829]: time="2024-06-25T18:45:16.615702521Z" level=info msg="StopContainer for \"e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8\" returns successfully" Jun 25 18:45:16.617489 containerd[1829]: time="2024-06-25T18:45:16.616623523Z" level=info msg="StopPodSandbox for \"5a60bb87ce600832c93a960d5dda635f8c2b3d27afde0f38afab76dfc0d453c8\"" Jun 25 18:45:16.617489 containerd[1829]: time="2024-06-25T18:45:16.616664123Z" level=info msg="Container to stop \"e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:45:16.620527 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a60bb87ce600832c93a960d5dda635f8c2b3d27afde0f38afab76dfc0d453c8-shm.mount: Deactivated successfully. Jun 25 18:45:16.649605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a60bb87ce600832c93a960d5dda635f8c2b3d27afde0f38afab76dfc0d453c8-rootfs.mount: Deactivated successfully. 
Jun 25 18:45:16.658339 containerd[1829]: time="2024-06-25T18:45:16.657724994Z" level=info msg="shim disconnected" id=5a60bb87ce600832c93a960d5dda635f8c2b3d27afde0f38afab76dfc0d453c8 namespace=k8s.io Jun 25 18:45:16.658339 containerd[1829]: time="2024-06-25T18:45:16.657800594Z" level=warning msg="cleaning up after shim disconnected" id=5a60bb87ce600832c93a960d5dda635f8c2b3d27afde0f38afab76dfc0d453c8 namespace=k8s.io Jun 25 18:45:16.658339 containerd[1829]: time="2024-06-25T18:45:16.657813894Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:45:16.671761 containerd[1829]: time="2024-06-25T18:45:16.671713518Z" level=info msg="TearDown network for sandbox \"5a60bb87ce600832c93a960d5dda635f8c2b3d27afde0f38afab76dfc0d453c8\" successfully" Jun 25 18:45:16.671761 containerd[1829]: time="2024-06-25T18:45:16.671749118Z" level=info msg="StopPodSandbox for \"5a60bb87ce600832c93a960d5dda635f8c2b3d27afde0f38afab76dfc0d453c8\" returns successfully" Jun 25 18:45:16.691972 kubelet[3456]: I0625 18:45:16.691360 3456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-cni-bin-dir\") pod \"3fde57fd-85e3-4930-a4d4-467bc7accec7\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " Jun 25 18:45:16.691972 kubelet[3456]: I0625 18:45:16.691418 3456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-cni-net-dir\") pod \"3fde57fd-85e3-4930-a4d4-467bc7accec7\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " Jun 25 18:45:16.691972 kubelet[3456]: I0625 18:45:16.691444 3456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-cni-log-dir\") pod \"3fde57fd-85e3-4930-a4d4-467bc7accec7\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") 
" Jun 25 18:45:16.691972 kubelet[3456]: I0625 18:45:16.691468 3456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-xtables-lock\") pod \"3fde57fd-85e3-4930-a4d4-467bc7accec7\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " Jun 25 18:45:16.691972 kubelet[3456]: I0625 18:45:16.691502 3456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3fde57fd-85e3-4930-a4d4-467bc7accec7-node-certs\") pod \"3fde57fd-85e3-4930-a4d4-467bc7accec7\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " Jun 25 18:45:16.691972 kubelet[3456]: I0625 18:45:16.691526 3456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-var-run-calico\") pod \"3fde57fd-85e3-4930-a4d4-467bc7accec7\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " Jun 25 18:45:16.692376 kubelet[3456]: I0625 18:45:16.691532 3456 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "3fde57fd-85e3-4930-a4d4-467bc7accec7" (UID: "3fde57fd-85e3-4930-a4d4-467bc7accec7"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:45:16.692376 kubelet[3456]: I0625 18:45:16.691591 3456 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3fde57fd-85e3-4930-a4d4-467bc7accec7" (UID: "3fde57fd-85e3-4930-a4d4-467bc7accec7"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:45:16.692376 kubelet[3456]: I0625 18:45:16.691553 3456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-lib-modules\") pod \"3fde57fd-85e3-4930-a4d4-467bc7accec7\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " Jun 25 18:45:16.692376 kubelet[3456]: I0625 18:45:16.691614 3456 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "3fde57fd-85e3-4930-a4d4-467bc7accec7" (UID: "3fde57fd-85e3-4930-a4d4-467bc7accec7"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:45:16.692376 kubelet[3456]: I0625 18:45:16.691635 3456 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3fde57fd-85e3-4930-a4d4-467bc7accec7" (UID: "3fde57fd-85e3-4930-a4d4-467bc7accec7"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:45:16.692592 kubelet[3456]: I0625 18:45:16.691651 3456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fde57fd-85e3-4930-a4d4-467bc7accec7-tigera-ca-bundle\") pod \"3fde57fd-85e3-4930-a4d4-467bc7accec7\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " Jun 25 18:45:16.692592 kubelet[3456]: I0625 18:45:16.691678 3456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-flexvol-driver-host\") pod \"3fde57fd-85e3-4930-a4d4-467bc7accec7\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " Jun 25 18:45:16.692592 kubelet[3456]: I0625 18:45:16.691704 3456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-var-lib-calico\") pod \"3fde57fd-85e3-4930-a4d4-467bc7accec7\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " Jun 25 18:45:16.692592 kubelet[3456]: I0625 18:45:16.691759 3456 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-cni-net-dir\") on node \"ci-4012.0.0-a-bcd7e269e6\" DevicePath \"\"" Jun 25 18:45:16.692592 kubelet[3456]: I0625 18:45:16.691775 3456 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-xtables-lock\") on node \"ci-4012.0.0-a-bcd7e269e6\" DevicePath \"\"" Jun 25 18:45:16.692592 kubelet[3456]: I0625 18:45:16.691788 3456 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-lib-modules\") on node \"ci-4012.0.0-a-bcd7e269e6\" DevicePath \"\"" Jun 25 18:45:16.692838 
kubelet[3456]: I0625 18:45:16.691809 3456 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "3fde57fd-85e3-4930-a4d4-467bc7accec7" (UID: "3fde57fd-85e3-4930-a4d4-467bc7accec7"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:45:16.692838 kubelet[3456]: I0625 18:45:16.691832 3456 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "3fde57fd-85e3-4930-a4d4-467bc7accec7" (UID: "3fde57fd-85e3-4930-a4d4-467bc7accec7"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:45:16.692838 kubelet[3456]: I0625 18:45:16.692191 3456 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fde57fd-85e3-4930-a4d4-467bc7accec7-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "3fde57fd-85e3-4930-a4d4-467bc7accec7" (UID: "3fde57fd-85e3-4930-a4d4-467bc7accec7"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:45:16.692838 kubelet[3456]: I0625 18:45:16.692237 3456 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "3fde57fd-85e3-4930-a4d4-467bc7accec7" (UID: "3fde57fd-85e3-4930-a4d4-467bc7accec7"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:45:16.692838 kubelet[3456]: I0625 18:45:16.692262 3456 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "3fde57fd-85e3-4930-a4d4-467bc7accec7" (UID: "3fde57fd-85e3-4930-a4d4-467bc7accec7"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:45:16.696787 kubelet[3456]: I0625 18:45:16.696720 3456 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fde57fd-85e3-4930-a4d4-467bc7accec7-node-certs" (OuterVolumeSpecName: "node-certs") pod "3fde57fd-85e3-4930-a4d4-467bc7accec7" (UID: "3fde57fd-85e3-4930-a4d4-467bc7accec7"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 18:45:16.697964 systemd[1]: var-lib-kubelet-pods-3fde57fd\x2d85e3\x2d4930\x2da4d4\x2d467bc7accec7-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Jun 25 18:45:16.792398 kubelet[3456]: I0625 18:45:16.792266 3456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-policysync\") pod \"3fde57fd-85e3-4930-a4d4-467bc7accec7\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " Jun 25 18:45:16.792398 kubelet[3456]: I0625 18:45:16.792330 3456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmb72\" (UniqueName: \"kubernetes.io/projected/3fde57fd-85e3-4930-a4d4-467bc7accec7-kube-api-access-xmb72\") pod \"3fde57fd-85e3-4930-a4d4-467bc7accec7\" (UID: \"3fde57fd-85e3-4930-a4d4-467bc7accec7\") " Jun 25 18:45:16.792398 kubelet[3456]: I0625 18:45:16.792357 3456 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-policysync" (OuterVolumeSpecName: "policysync") pod "3fde57fd-85e3-4930-a4d4-467bc7accec7" (UID: "3fde57fd-85e3-4930-a4d4-467bc7accec7"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:45:16.792398 kubelet[3456]: I0625 18:45:16.792387 3456 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fde57fd-85e3-4930-a4d4-467bc7accec7-tigera-ca-bundle\") on node \"ci-4012.0.0-a-bcd7e269e6\" DevicePath \"\"" Jun 25 18:45:16.792398 kubelet[3456]: I0625 18:45:16.792402 3456 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-flexvol-driver-host\") on node \"ci-4012.0.0-a-bcd7e269e6\" DevicePath \"\"" Jun 25 18:45:16.792398 kubelet[3456]: I0625 18:45:16.792415 3456 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-var-lib-calico\") on node \"ci-4012.0.0-a-bcd7e269e6\" DevicePath \"\"" Jun 25 18:45:16.792875 kubelet[3456]: I0625 18:45:16.792427 3456 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-cni-bin-dir\") on node \"ci-4012.0.0-a-bcd7e269e6\" DevicePath \"\"" Jun 25 18:45:16.792875 kubelet[3456]: I0625 18:45:16.792440 3456 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-cni-log-dir\") on node \"ci-4012.0.0-a-bcd7e269e6\" DevicePath \"\"" Jun 25 18:45:16.792875 kubelet[3456]: I0625 18:45:16.792453 3456 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3fde57fd-85e3-4930-a4d4-467bc7accec7-node-certs\") on node \"ci-4012.0.0-a-bcd7e269e6\" DevicePath \"\"" Jun 25 18:45:16.792875 kubelet[3456]: I0625 18:45:16.792464 3456 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-var-run-calico\") on 
node \"ci-4012.0.0-a-bcd7e269e6\" DevicePath \"\"" Jun 25 18:45:16.797830 kubelet[3456]: I0625 18:45:16.797759 3456 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fde57fd-85e3-4930-a4d4-467bc7accec7-kube-api-access-xmb72" (OuterVolumeSpecName: "kube-api-access-xmb72") pod "3fde57fd-85e3-4930-a4d4-467bc7accec7" (UID: "3fde57fd-85e3-4930-a4d4-467bc7accec7"). InnerVolumeSpecName "kube-api-access-xmb72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:45:16.798229 systemd[1]: var-lib-kubelet-pods-3fde57fd\x2d85e3\x2d4930\x2da4d4\x2d467bc7accec7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxmb72.mount: Deactivated successfully. Jun 25 18:45:16.892777 kubelet[3456]: I0625 18:45:16.892637 3456 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xmb72\" (UniqueName: \"kubernetes.io/projected/3fde57fd-85e3-4930-a4d4-467bc7accec7-kube-api-access-xmb72\") on node \"ci-4012.0.0-a-bcd7e269e6\" DevicePath \"\"" Jun 25 18:45:16.892777 kubelet[3456]: I0625 18:45:16.892684 3456 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3fde57fd-85e3-4930-a4d4-467bc7accec7-policysync\") on node \"ci-4012.0.0-a-bcd7e269e6\" DevicePath \"\"" Jun 25 18:45:16.944630 kubelet[3456]: I0625 18:45:16.943819 3456 scope.go:117] "RemoveContainer" containerID="e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8" Jun 25 18:45:16.948228 containerd[1829]: time="2024-06-25T18:45:16.948184895Z" level=info msg="RemoveContainer for \"e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8\"" Jun 25 18:45:16.964373 containerd[1829]: time="2024-06-25T18:45:16.964316623Z" level=info msg="RemoveContainer for \"e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8\" returns successfully" Jun 25 18:45:16.965864 kubelet[3456]: I0625 18:45:16.965828 3456 scope.go:117] "RemoveContainer" 
containerID="e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8" Jun 25 18:45:16.967086 containerd[1829]: time="2024-06-25T18:45:16.966162426Z" level=error msg="ContainerStatus for \"e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8\": not found" Jun 25 18:45:16.967957 kubelet[3456]: E0625 18:45:16.967698 3456 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8\": not found" containerID="e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8" Jun 25 18:45:16.968223 kubelet[3456]: I0625 18:45:16.968075 3456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8"} err="failed to get container status \"e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8\": rpc error: code = NotFound desc = an error occurred when try to find container \"e65e189a24462323cecb0c1d0d19599bbe17a1168d22ec43341fdc5e2474aea8\": not found" Jun 25 18:45:16.987693 kubelet[3456]: I0625 18:45:16.981456 3456 topology_manager.go:215] "Topology Admit Handler" podUID="60be0771-b6fb-47ef-958d-308b69fccbf1" podNamespace="calico-system" podName="calico-node-zx8wb" Jun 25 18:45:16.987693 kubelet[3456]: E0625 18:45:16.981535 3456 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3fde57fd-85e3-4930-a4d4-467bc7accec7" containerName="flexvol-driver" Jun 25 18:45:16.987693 kubelet[3456]: I0625 18:45:16.981590 3456 memory_manager.go:346] "RemoveStaleState removing state" podUID="3fde57fd-85e3-4930-a4d4-467bc7accec7" containerName="flexvol-driver" Jun 25 18:45:16.994277 kubelet[3456]: I0625 18:45:16.994249 3456 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/60be0771-b6fb-47ef-958d-308b69fccbf1-node-certs\") pod \"calico-node-zx8wb\" (UID: \"60be0771-b6fb-47ef-958d-308b69fccbf1\") " pod="calico-system/calico-node-zx8wb" Jun 25 18:45:16.994503 kubelet[3456]: I0625 18:45:16.994490 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/60be0771-b6fb-47ef-958d-308b69fccbf1-var-run-calico\") pod \"calico-node-zx8wb\" (UID: \"60be0771-b6fb-47ef-958d-308b69fccbf1\") " pod="calico-system/calico-node-zx8wb" Jun 25 18:45:16.994629 kubelet[3456]: I0625 18:45:16.994619 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/60be0771-b6fb-47ef-958d-308b69fccbf1-policysync\") pod \"calico-node-zx8wb\" (UID: \"60be0771-b6fb-47ef-958d-308b69fccbf1\") " pod="calico-system/calico-node-zx8wb" Jun 25 18:45:16.995711 kubelet[3456]: I0625 18:45:16.995687 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w54nr\" (UniqueName: \"kubernetes.io/projected/60be0771-b6fb-47ef-958d-308b69fccbf1-kube-api-access-w54nr\") pod \"calico-node-zx8wb\" (UID: \"60be0771-b6fb-47ef-958d-308b69fccbf1\") " pod="calico-system/calico-node-zx8wb" Jun 25 18:45:16.995858 kubelet[3456]: I0625 18:45:16.995847 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/60be0771-b6fb-47ef-958d-308b69fccbf1-var-lib-calico\") pod \"calico-node-zx8wb\" (UID: \"60be0771-b6fb-47ef-958d-308b69fccbf1\") " pod="calico-system/calico-node-zx8wb" Jun 25 18:45:16.995985 kubelet[3456]: I0625 18:45:16.995975 3456 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/60be0771-b6fb-47ef-958d-308b69fccbf1-cni-bin-dir\") pod \"calico-node-zx8wb\" (UID: \"60be0771-b6fb-47ef-958d-308b69fccbf1\") " pod="calico-system/calico-node-zx8wb" Jun 25 18:45:16.996110 kubelet[3456]: I0625 18:45:16.996099 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60be0771-b6fb-47ef-958d-308b69fccbf1-lib-modules\") pod \"calico-node-zx8wb\" (UID: \"60be0771-b6fb-47ef-958d-308b69fccbf1\") " pod="calico-system/calico-node-zx8wb" Jun 25 18:45:16.996365 kubelet[3456]: I0625 18:45:16.996352 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60be0771-b6fb-47ef-958d-308b69fccbf1-tigera-ca-bundle\") pod \"calico-node-zx8wb\" (UID: \"60be0771-b6fb-47ef-958d-308b69fccbf1\") " pod="calico-system/calico-node-zx8wb" Jun 25 18:45:16.996491 kubelet[3456]: I0625 18:45:16.996482 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/60be0771-b6fb-47ef-958d-308b69fccbf1-cni-log-dir\") pod \"calico-node-zx8wb\" (UID: \"60be0771-b6fb-47ef-958d-308b69fccbf1\") " pod="calico-system/calico-node-zx8wb" Jun 25 18:45:16.996601 kubelet[3456]: I0625 18:45:16.996589 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/60be0771-b6fb-47ef-958d-308b69fccbf1-flexvol-driver-host\") pod \"calico-node-zx8wb\" (UID: \"60be0771-b6fb-47ef-958d-308b69fccbf1\") " pod="calico-system/calico-node-zx8wb" Jun 25 18:45:16.996943 kubelet[3456]: I0625 18:45:16.996798 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60be0771-b6fb-47ef-958d-308b69fccbf1-xtables-lock\") pod \"calico-node-zx8wb\" (UID: \"60be0771-b6fb-47ef-958d-308b69fccbf1\") " pod="calico-system/calico-node-zx8wb" Jun 25 18:45:16.997236 kubelet[3456]: I0625 18:45:16.997168 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/60be0771-b6fb-47ef-958d-308b69fccbf1-cni-net-dir\") pod \"calico-node-zx8wb\" (UID: \"60be0771-b6fb-47ef-958d-308b69fccbf1\") " pod="calico-system/calico-node-zx8wb" Jun 25 18:45:17.286603 containerd[1829]: time="2024-06-25T18:45:17.286507680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zx8wb,Uid:60be0771-b6fb-47ef-958d-308b69fccbf1,Namespace:calico-system,Attempt:0,}" Jun 25 18:45:17.320933 containerd[1829]: time="2024-06-25T18:45:17.320654739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:17.320933 containerd[1829]: time="2024-06-25T18:45:17.320716539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:17.320933 containerd[1829]: time="2024-06-25T18:45:17.320742939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:17.320933 containerd[1829]: time="2024-06-25T18:45:17.320762739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:17.356032 containerd[1829]: time="2024-06-25T18:45:17.355990400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zx8wb,Uid:60be0771-b6fb-47ef-958d-308b69fccbf1,Namespace:calico-system,Attempt:0,} returns sandbox id \"07cab1a6f2f46df5020f49a9b66d403637d37d4a883765db0cb8d91138f4663b\"" Jun 25 18:45:17.360892 containerd[1829]: time="2024-06-25T18:45:17.360857008Z" level=info msg="CreateContainer within sandbox \"07cab1a6f2f46df5020f49a9b66d403637d37d4a883765db0cb8d91138f4663b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:45:17.394674 containerd[1829]: time="2024-06-25T18:45:17.394622367Z" level=info msg="CreateContainer within sandbox \"07cab1a6f2f46df5020f49a9b66d403637d37d4a883765db0cb8d91138f4663b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"239525721f094b0e70ff24fc37e34d3c72273a77c2526510b4dc1eb758308884\"" Jun 25 18:45:17.395367 containerd[1829]: time="2024-06-25T18:45:17.395316368Z" level=info msg="StartContainer for \"239525721f094b0e70ff24fc37e34d3c72273a77c2526510b4dc1eb758308884\"" Jun 25 18:45:17.459930 containerd[1829]: time="2024-06-25T18:45:17.459797079Z" level=info msg="StartContainer for \"239525721f094b0e70ff24fc37e34d3c72273a77c2526510b4dc1eb758308884\" returns successfully" Jun 25 18:45:17.543647 containerd[1829]: time="2024-06-25T18:45:17.542920423Z" level=info msg="shim disconnected" id=239525721f094b0e70ff24fc37e34d3c72273a77c2526510b4dc1eb758308884 namespace=k8s.io Jun 25 18:45:17.543647 containerd[1829]: time="2024-06-25T18:45:17.542997023Z" level=warning msg="cleaning up after shim disconnected" id=239525721f094b0e70ff24fc37e34d3c72273a77c2526510b4dc1eb758308884 namespace=k8s.io Jun 25 18:45:17.543647 containerd[1829]: time="2024-06-25T18:45:17.543013423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:45:17.805203 kubelet[3456]: E0625 18:45:17.804069 3456 
pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9x5m4" podUID="1b8cd264-e868-4cd7-89a2-2e1d11e52069" Jun 25 18:45:17.807329 kubelet[3456]: I0625 18:45:17.807296 3456 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3fde57fd-85e3-4930-a4d4-467bc7accec7" path="/var/lib/kubelet/pods/3fde57fd-85e3-4930-a4d4-467bc7accec7/volumes" Jun 25 18:45:19.018590 containerd[1829]: time="2024-06-25T18:45:19.018536172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:19.028515 containerd[1829]: time="2024-06-25T18:45:19.026846086Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:19.028515 containerd[1829]: time="2024-06-25T18:45:19.027065786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 18:45:19.035328 containerd[1829]: time="2024-06-25T18:45:19.035279500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:19.036025 containerd[1829]: time="2024-06-25T18:45:19.035982702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 4.623577687s" Jun 25 18:45:19.036416 containerd[1829]: 
time="2024-06-25T18:45:19.036028902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 18:45:19.037598 containerd[1829]: time="2024-06-25T18:45:19.037482704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 18:45:19.055380 containerd[1829]: time="2024-06-25T18:45:19.055332435Z" level=info msg="CreateContainer within sandbox \"f9884837411d58fef3253ce016b5dd8115fab006addda9a45105db0ca30d2e60\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:45:19.087479 containerd[1829]: time="2024-06-25T18:45:19.087430091Z" level=info msg="CreateContainer within sandbox \"f9884837411d58fef3253ce016b5dd8115fab006addda9a45105db0ca30d2e60\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34\"" Jun 25 18:45:19.088372 containerd[1829]: time="2024-06-25T18:45:19.088136792Z" level=info msg="StartContainer for \"96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34\"" Jun 25 18:45:19.159392 containerd[1829]: time="2024-06-25T18:45:19.159335315Z" level=info msg="StartContainer for \"96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34\" returns successfully" Jun 25 18:45:19.804820 kubelet[3456]: E0625 18:45:19.804390 3456 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9x5m4" podUID="1b8cd264-e868-4cd7-89a2-2e1d11e52069" Jun 25 18:45:19.956868 containerd[1829]: time="2024-06-25T18:45:19.954238490Z" level=info msg="StopContainer for \"96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34\" with timeout 300 (s)" Jun 25 18:45:19.956868 containerd[1829]: 
time="2024-06-25T18:45:19.954652691Z" level=info msg="Stop container \"96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34\" with signal terminated" Jun 25 18:45:19.974492 kubelet[3456]: I0625 18:45:19.972866 3456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5d6564b7b-zshk4" podStartSLOduration=3.233517985 podCreationTimestamp="2024-06-25 18:45:10 +0000 UTC" firstStartedPulling="2024-06-25 18:45:12.297295162 +0000 UTC m=+22.581524944" lastFinishedPulling="2024-06-25 18:45:19.036590703 +0000 UTC m=+29.320820585" observedRunningTime="2024-06-25 18:45:19.971608923 +0000 UTC m=+30.255838705" watchObservedRunningTime="2024-06-25 18:45:19.972813626 +0000 UTC m=+30.257043408" Jun 25 18:45:20.006971 containerd[1829]: time="2024-06-25T18:45:20.006922790Z" level=error msg="collecting metrics for 96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34" error="cgroups: cgroup deleted: unknown" Jun 25 18:45:20.044538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34-rootfs.mount: Deactivated successfully. 
Jun 25 18:45:20.551924 containerd[1829]: time="2024-06-25T18:45:20.551814026Z" level=info msg="shim disconnected" id=96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34 namespace=k8s.io Jun 25 18:45:20.551924 containerd[1829]: time="2024-06-25T18:45:20.551879926Z" level=warning msg="cleaning up after shim disconnected" id=96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34 namespace=k8s.io Jun 25 18:45:20.551924 containerd[1829]: time="2024-06-25T18:45:20.551893526Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:45:20.607240 containerd[1829]: time="2024-06-25T18:45:20.607180632Z" level=info msg="StopContainer for \"96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34\" returns successfully" Jun 25 18:45:20.607912 containerd[1829]: time="2024-06-25T18:45:20.607806633Z" level=info msg="StopPodSandbox for \"f9884837411d58fef3253ce016b5dd8115fab006addda9a45105db0ca30d2e60\"" Jun 25 18:45:20.607912 containerd[1829]: time="2024-06-25T18:45:20.607859933Z" level=info msg="Container to stop \"96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:45:20.613854 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f9884837411d58fef3253ce016b5dd8115fab006addda9a45105db0ca30d2e60-shm.mount: Deactivated successfully. Jun 25 18:45:20.648154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9884837411d58fef3253ce016b5dd8115fab006addda9a45105db0ca30d2e60-rootfs.mount: Deactivated successfully. 
Jun 25 18:45:20.654267 containerd[1829]: time="2024-06-25T18:45:20.654037421Z" level=info msg="shim disconnected" id=f9884837411d58fef3253ce016b5dd8115fab006addda9a45105db0ca30d2e60 namespace=k8s.io Jun 25 18:45:20.654267 containerd[1829]: time="2024-06-25T18:45:20.654101421Z" level=warning msg="cleaning up after shim disconnected" id=f9884837411d58fef3253ce016b5dd8115fab006addda9a45105db0ca30d2e60 namespace=k8s.io Jun 25 18:45:20.654267 containerd[1829]: time="2024-06-25T18:45:20.654112921Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:45:20.667625 containerd[1829]: time="2024-06-25T18:45:20.667545546Z" level=info msg="TearDown network for sandbox \"f9884837411d58fef3253ce016b5dd8115fab006addda9a45105db0ca30d2e60\" successfully" Jun 25 18:45:20.667625 containerd[1829]: time="2024-06-25T18:45:20.667607746Z" level=info msg="StopPodSandbox for \"f9884837411d58fef3253ce016b5dd8115fab006addda9a45105db0ca30d2e60\" returns successfully" Jun 25 18:45:20.688920 kubelet[3456]: I0625 18:45:20.688630 3456 topology_manager.go:215] "Topology Admit Handler" podUID="4115cc77-cb1a-4c48-a0e3-2dab9f76ed15" podNamespace="calico-system" podName="calico-typha-554f9d7f5b-h6qvk" Jun 25 18:45:20.689908 kubelet[3456]: E0625 18:45:20.689242 3456 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e8a4125a-0f75-4bd3-bb8f-3c11db445f16" containerName="calico-typha" Jun 25 18:45:20.689908 kubelet[3456]: I0625 18:45:20.689315 3456 memory_manager.go:346] "RemoveStaleState removing state" podUID="e8a4125a-0f75-4bd3-bb8f-3c11db445f16" containerName="calico-typha" Jun 25 18:45:20.723541 kubelet[3456]: I0625 18:45:20.723330 3456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w675l\" (UniqueName: \"kubernetes.io/projected/e8a4125a-0f75-4bd3-bb8f-3c11db445f16-kube-api-access-w675l\") pod \"e8a4125a-0f75-4bd3-bb8f-3c11db445f16\" (UID: \"e8a4125a-0f75-4bd3-bb8f-3c11db445f16\") " Jun 25 18:45:20.724243 kubelet[3456]: 
I0625 18:45:20.723437 3456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8a4125a-0f75-4bd3-bb8f-3c11db445f16-tigera-ca-bundle\") pod \"e8a4125a-0f75-4bd3-bb8f-3c11db445f16\" (UID: \"e8a4125a-0f75-4bd3-bb8f-3c11db445f16\") " Jun 25 18:45:20.724243 kubelet[3456]: I0625 18:45:20.723808 3456 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e8a4125a-0f75-4bd3-bb8f-3c11db445f16-typha-certs\") pod \"e8a4125a-0f75-4bd3-bb8f-3c11db445f16\" (UID: \"e8a4125a-0f75-4bd3-bb8f-3c11db445f16\") " Jun 25 18:45:20.724243 kubelet[3456]: I0625 18:45:20.723906 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4115cc77-cb1a-4c48-a0e3-2dab9f76ed15-tigera-ca-bundle\") pod \"calico-typha-554f9d7f5b-h6qvk\" (UID: \"4115cc77-cb1a-4c48-a0e3-2dab9f76ed15\") " pod="calico-system/calico-typha-554f9d7f5b-h6qvk" Jun 25 18:45:20.724243 kubelet[3456]: I0625 18:45:20.723939 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4115cc77-cb1a-4c48-a0e3-2dab9f76ed15-typha-certs\") pod \"calico-typha-554f9d7f5b-h6qvk\" (UID: \"4115cc77-cb1a-4c48-a0e3-2dab9f76ed15\") " pod="calico-system/calico-typha-554f9d7f5b-h6qvk" Jun 25 18:45:20.724243 kubelet[3456]: I0625 18:45:20.723974 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h5cx\" (UniqueName: \"kubernetes.io/projected/4115cc77-cb1a-4c48-a0e3-2dab9f76ed15-kube-api-access-7h5cx\") pod \"calico-typha-554f9d7f5b-h6qvk\" (UID: \"4115cc77-cb1a-4c48-a0e3-2dab9f76ed15\") " pod="calico-system/calico-typha-554f9d7f5b-h6qvk" Jun 25 18:45:20.734084 systemd[1]: 
var-lib-kubelet-pods-e8a4125a\x2d0f75\x2d4bd3\x2dbb8f\x2d3c11db445f16-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw675l.mount: Deactivated successfully. Jun 25 18:45:20.735828 kubelet[3456]: I0625 18:45:20.734376 3456 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8a4125a-0f75-4bd3-bb8f-3c11db445f16-kube-api-access-w675l" (OuterVolumeSpecName: "kube-api-access-w675l") pod "e8a4125a-0f75-4bd3-bb8f-3c11db445f16" (UID: "e8a4125a-0f75-4bd3-bb8f-3c11db445f16"). InnerVolumeSpecName "kube-api-access-w675l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:45:20.735828 kubelet[3456]: I0625 18:45:20.735291 3456 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8a4125a-0f75-4bd3-bb8f-3c11db445f16-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "e8a4125a-0f75-4bd3-bb8f-3c11db445f16" (UID: "e8a4125a-0f75-4bd3-bb8f-3c11db445f16"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:45:20.736185 kubelet[3456]: I0625 18:45:20.736073 3456 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a4125a-0f75-4bd3-bb8f-3c11db445f16-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "e8a4125a-0f75-4bd3-bb8f-3c11db445f16" (UID: "e8a4125a-0f75-4bd3-bb8f-3c11db445f16"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 18:45:20.739148 systemd[1]: var-lib-kubelet-pods-e8a4125a\x2d0f75\x2d4bd3\x2dbb8f\x2d3c11db445f16-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jun 25 18:45:20.739314 systemd[1]: var-lib-kubelet-pods-e8a4125a\x2d0f75\x2d4bd3\x2dbb8f\x2d3c11db445f16-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Jun 25 18:45:20.826173 kubelet[3456]: I0625 18:45:20.824770 3456 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w675l\" (UniqueName: \"kubernetes.io/projected/e8a4125a-0f75-4bd3-bb8f-3c11db445f16-kube-api-access-w675l\") on node \"ci-4012.0.0-a-bcd7e269e6\" DevicePath \"\"" Jun 25 18:45:20.826173 kubelet[3456]: I0625 18:45:20.824816 3456 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8a4125a-0f75-4bd3-bb8f-3c11db445f16-tigera-ca-bundle\") on node \"ci-4012.0.0-a-bcd7e269e6\" DevicePath \"\"" Jun 25 18:45:20.826173 kubelet[3456]: I0625 18:45:20.824833 3456 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e8a4125a-0f75-4bd3-bb8f-3c11db445f16-typha-certs\") on node \"ci-4012.0.0-a-bcd7e269e6\" DevicePath \"\"" Jun 25 18:45:20.957857 kubelet[3456]: I0625 18:45:20.957823 3456 scope.go:117] "RemoveContainer" containerID="96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34" Jun 25 18:45:20.961380 containerd[1829]: time="2024-06-25T18:45:20.960646904Z" level=info msg="RemoveContainer for \"96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34\"" Jun 25 18:45:20.972168 containerd[1829]: time="2024-06-25T18:45:20.971601324Z" level=info msg="RemoveContainer for \"96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34\" returns successfully" Jun 25 18:45:20.972314 kubelet[3456]: I0625 18:45:20.971794 3456 scope.go:117] "RemoveContainer" containerID="96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34" Jun 25 18:45:20.973120 containerd[1829]: time="2024-06-25T18:45:20.972723426Z" level=error msg="ContainerStatus for \"96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34\": not found" Jun 25 18:45:20.973340 
kubelet[3456]: E0625 18:45:20.973216 3456 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34\": not found" containerID="96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34" Jun 25 18:45:20.973340 kubelet[3456]: I0625 18:45:20.973277 3456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34"} err="failed to get container status \"96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34\": rpc error: code = NotFound desc = an error occurred when try to find container \"96dcd1742dc4e84ea22b5b285b236706f8deac509e4a456a5313ca9662bb5f34\": not found" Jun 25 18:45:21.000664 containerd[1829]: time="2024-06-25T18:45:21.000625380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-554f9d7f5b-h6qvk,Uid:4115cc77-cb1a-4c48-a0e3-2dab9f76ed15,Namespace:calico-system,Attempt:0,}" Jun 25 18:45:21.044672 containerd[1829]: time="2024-06-25T18:45:21.042396159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:21.044672 containerd[1829]: time="2024-06-25T18:45:21.043138660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:21.044672 containerd[1829]: time="2024-06-25T18:45:21.043623261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:21.044672 containerd[1829]: time="2024-06-25T18:45:21.043770962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:21.136624 containerd[1829]: time="2024-06-25T18:45:21.135731836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-554f9d7f5b-h6qvk,Uid:4115cc77-cb1a-4c48-a0e3-2dab9f76ed15,Namespace:calico-system,Attempt:0,} returns sandbox id \"a892057ea90cae305232a29711a1ca77fd678aa8b7d27dab350bf46c3716679a\"" Jun 25 18:45:21.160766 containerd[1829]: time="2024-06-25T18:45:21.160726084Z" level=info msg="CreateContainer within sandbox \"a892057ea90cae305232a29711a1ca77fd678aa8b7d27dab350bf46c3716679a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:45:21.196294 containerd[1829]: time="2024-06-25T18:45:21.196239951Z" level=info msg="CreateContainer within sandbox \"a892057ea90cae305232a29711a1ca77fd678aa8b7d27dab350bf46c3716679a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"752da4f97e7890bbe101e8b9e0ade2faa2bd69c72b53e1ee3b6575ebbbfb9793\"" Jun 25 18:45:21.201603 containerd[1829]: time="2024-06-25T18:45:21.199195957Z" level=info msg="StartContainer for \"752da4f97e7890bbe101e8b9e0ade2faa2bd69c72b53e1ee3b6575ebbbfb9793\"" Jun 25 18:45:21.311960 containerd[1829]: time="2024-06-25T18:45:21.311915971Z" level=info msg="StartContainer for \"752da4f97e7890bbe101e8b9e0ade2faa2bd69c72b53e1ee3b6575ebbbfb9793\" returns successfully" Jun 25 18:45:21.804059 kubelet[3456]: E0625 18:45:21.804007 3456 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9x5m4" podUID="1b8cd264-e868-4cd7-89a2-2e1d11e52069" Jun 25 18:45:21.807285 kubelet[3456]: I0625 18:45:21.807253 3456 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e8a4125a-0f75-4bd3-bb8f-3c11db445f16" 
path="/var/lib/kubelet/pods/e8a4125a-0f75-4bd3-bb8f-3c11db445f16/volumes" Jun 25 18:45:21.976323 kubelet[3456]: I0625 18:45:21.976290 3456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-554f9d7f5b-h6qvk" podStartSLOduration=11.976240534 podCreationTimestamp="2024-06-25 18:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:45:21.975473033 +0000 UTC m=+32.259702915" watchObservedRunningTime="2024-06-25 18:45:21.976240534 +0000 UTC m=+32.260470316" Jun 25 18:45:23.804982 kubelet[3456]: E0625 18:45:23.804509 3456 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9x5m4" podUID="1b8cd264-e868-4cd7-89a2-2e1d11e52069" Jun 25 18:45:23.954501 containerd[1829]: time="2024-06-25T18:45:23.954441795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:23.956387 containerd[1829]: time="2024-06-25T18:45:23.956320999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 18:45:23.960446 containerd[1829]: time="2024-06-25T18:45:23.960346706Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:23.966240 containerd[1829]: time="2024-06-25T18:45:23.965938617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:23.967758 containerd[1829]: 
time="2024-06-25T18:45:23.967519820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 4.930000516s" Jun 25 18:45:23.967758 containerd[1829]: time="2024-06-25T18:45:23.967582220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 18:45:23.970015 containerd[1829]: time="2024-06-25T18:45:23.969746424Z" level=info msg="CreateContainer within sandbox \"07cab1a6f2f46df5020f49a9b66d403637d37d4a883765db0cb8d91138f4663b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 18:45:24.015515 containerd[1829]: time="2024-06-25T18:45:24.015456511Z" level=info msg="CreateContainer within sandbox \"07cab1a6f2f46df5020f49a9b66d403637d37d4a883765db0cb8d91138f4663b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"02be27eb049299116d4d98346846babacaf7000617fb503cbe3931644044ee1f\"" Jun 25 18:45:24.017253 containerd[1829]: time="2024-06-25T18:45:24.016168912Z" level=info msg="StartContainer for \"02be27eb049299116d4d98346846babacaf7000617fb503cbe3931644044ee1f\"" Jun 25 18:45:24.082685 containerd[1829]: time="2024-06-25T18:45:24.081996138Z" level=info msg="StartContainer for \"02be27eb049299116d4d98346846babacaf7000617fb503cbe3931644044ee1f\" returns successfully" Jun 25 18:45:25.467028 kubelet[3456]: I0625 18:45:25.466954 3456 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 18:45:25.478561 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02be27eb049299116d4d98346846babacaf7000617fb503cbe3931644044ee1f-rootfs.mount: Deactivated successfully. 
Jun 25 18:45:25.516503 kubelet[3456]: I0625 18:45:25.515276 3456 topology_manager.go:215] "Topology Admit Handler" podUID="e8be7a40-e96b-4fb8-bda5-b7efa142ed7a" podNamespace="kube-system" podName="coredns-5dd5756b68-fwlqc" Jun 25 18:45:25.520326 kubelet[3456]: I0625 18:45:25.519405 3456 topology_manager.go:215] "Topology Admit Handler" podUID="237fb894-19a1-415f-a0b5-869cf0ed9074" podNamespace="kube-system" podName="coredns-5dd5756b68-ztgmd" Jun 25 18:45:25.524785 kubelet[3456]: I0625 18:45:25.523574 3456 topology_manager.go:215] "Topology Admit Handler" podUID="d23d5958-6421-407c-b29b-eb96cbc2a5d1" podNamespace="calico-system" podName="calico-kube-controllers-598f97fd4f-fntx8" Jun 25 18:45:25.559037 kubelet[3456]: I0625 18:45:25.558998 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d23d5958-6421-407c-b29b-eb96cbc2a5d1-tigera-ca-bundle\") pod \"calico-kube-controllers-598f97fd4f-fntx8\" (UID: \"d23d5958-6421-407c-b29b-eb96cbc2a5d1\") " pod="calico-system/calico-kube-controllers-598f97fd4f-fntx8" Jun 25 18:45:25.559282 kubelet[3456]: I0625 18:45:25.559267 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvqtx\" (UniqueName: \"kubernetes.io/projected/d23d5958-6421-407c-b29b-eb96cbc2a5d1-kube-api-access-mvqtx\") pod \"calico-kube-controllers-598f97fd4f-fntx8\" (UID: \"d23d5958-6421-407c-b29b-eb96cbc2a5d1\") " pod="calico-system/calico-kube-controllers-598f97fd4f-fntx8" Jun 25 18:45:25.559416 kubelet[3456]: I0625 18:45:25.559403 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8be7a40-e96b-4fb8-bda5-b7efa142ed7a-config-volume\") pod \"coredns-5dd5756b68-fwlqc\" (UID: \"e8be7a40-e96b-4fb8-bda5-b7efa142ed7a\") " pod="kube-system/coredns-5dd5756b68-fwlqc" Jun 25 18:45:25.559552 
kubelet[3456]: I0625 18:45:25.559539 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk2zb\" (UniqueName: \"kubernetes.io/projected/e8be7a40-e96b-4fb8-bda5-b7efa142ed7a-kube-api-access-mk2zb\") pod \"coredns-5dd5756b68-fwlqc\" (UID: \"e8be7a40-e96b-4fb8-bda5-b7efa142ed7a\") " pod="kube-system/coredns-5dd5756b68-fwlqc" Jun 25 18:45:25.559732 kubelet[3456]: I0625 18:45:25.559718 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnds7\" (UniqueName: \"kubernetes.io/projected/237fb894-19a1-415f-a0b5-869cf0ed9074-kube-api-access-dnds7\") pod \"coredns-5dd5756b68-ztgmd\" (UID: \"237fb894-19a1-415f-a0b5-869cf0ed9074\") " pod="kube-system/coredns-5dd5756b68-ztgmd" Jun 25 18:45:25.559798 kubelet[3456]: I0625 18:45:25.559763 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/237fb894-19a1-415f-a0b5-869cf0ed9074-config-volume\") pod \"coredns-5dd5756b68-ztgmd\" (UID: \"237fb894-19a1-415f-a0b5-869cf0ed9074\") " pod="kube-system/coredns-5dd5756b68-ztgmd" Jun 25 18:45:25.811758 containerd[1829]: time="2024-06-25T18:45:25.811245325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9x5m4,Uid:1b8cd264-e868-4cd7-89a2-2e1d11e52069,Namespace:calico-system,Attempt:0,}" Jun 25 18:45:25.833359 containerd[1829]: time="2024-06-25T18:45:25.833254967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598f97fd4f-fntx8,Uid:d23d5958-6421-407c-b29b-eb96cbc2a5d1,Namespace:calico-system,Attempt:0,}" Jun 25 18:45:25.833359 containerd[1829]: time="2024-06-25T18:45:25.833302467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fwlqc,Uid:e8be7a40-e96b-4fb8-bda5-b7efa142ed7a,Namespace:kube-system,Attempt:0,}" Jun 25 18:45:25.834140 containerd[1829]: 
time="2024-06-25T18:45:25.834039768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-ztgmd,Uid:237fb894-19a1-415f-a0b5-869cf0ed9074,Namespace:kube-system,Attempt:0,}" Jun 25 18:45:27.106516 containerd[1829]: time="2024-06-25T18:45:27.106442487Z" level=info msg="shim disconnected" id=02be27eb049299116d4d98346846babacaf7000617fb503cbe3931644044ee1f namespace=k8s.io Jun 25 18:45:27.106516 containerd[1829]: time="2024-06-25T18:45:27.106520588Z" level=warning msg="cleaning up after shim disconnected" id=02be27eb049299116d4d98346846babacaf7000617fb503cbe3931644044ee1f namespace=k8s.io Jun 25 18:45:27.107368 containerd[1829]: time="2024-06-25T18:45:27.106533188Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:45:27.331422 containerd[1829]: time="2024-06-25T18:45:27.331366315Z" level=error msg="Failed to destroy network for sandbox \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:27.332203 containerd[1829]: time="2024-06-25T18:45:27.332066516Z" level=error msg="encountered an error cleaning up failed sandbox \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:27.332203 containerd[1829]: time="2024-06-25T18:45:27.332142216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9x5m4,Uid:1b8cd264-e868-4cd7-89a2-2e1d11e52069,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:27.333645 kubelet[3456]: E0625 18:45:27.332412 3456 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:27.333645 kubelet[3456]: E0625 18:45:27.332699 3456 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9x5m4" Jun 25 18:45:27.333645 kubelet[3456]: E0625 18:45:27.332743 3456 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9x5m4" Jun 25 18:45:27.334170 kubelet[3456]: E0625 18:45:27.332822 3456 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9x5m4_calico-system(1b8cd264-e868-4cd7-89a2-2e1d11e52069)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9x5m4_calico-system(1b8cd264-e868-4cd7-89a2-2e1d11e52069)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9x5m4" podUID="1b8cd264-e868-4cd7-89a2-2e1d11e52069" Jun 25 18:45:27.356886 containerd[1829]: time="2024-06-25T18:45:27.356663063Z" level=error msg="Failed to destroy network for sandbox \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:27.358583 containerd[1829]: time="2024-06-25T18:45:27.358410766Z" level=error msg="encountered an error cleaning up failed sandbox \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:27.358892 containerd[1829]: time="2024-06-25T18:45:27.358766367Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fwlqc,Uid:e8be7a40-e96b-4fb8-bda5-b7efa142ed7a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:27.359402 kubelet[3456]: E0625 18:45:27.359370 3456 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:27.359672 kubelet[3456]: E0625 18:45:27.359438 3456 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-fwlqc" Jun 25 18:45:27.359672 kubelet[3456]: E0625 18:45:27.359467 3456 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-fwlqc" Jun 25 18:45:27.359672 kubelet[3456]: E0625 18:45:27.359591 3456 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-fwlqc_kube-system(e8be7a40-e96b-4fb8-bda5-b7efa142ed7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-fwlqc_kube-system(e8be7a40-e96b-4fb8-bda5-b7efa142ed7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-fwlqc" 
podUID="e8be7a40-e96b-4fb8-bda5-b7efa142ed7a" Jun 25 18:45:27.362283 containerd[1829]: time="2024-06-25T18:45:27.360636971Z" level=error msg="Failed to destroy network for sandbox \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:27.362911 containerd[1829]: time="2024-06-25T18:45:27.362845975Z" level=error msg="encountered an error cleaning up failed sandbox \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:27.363774 containerd[1829]: time="2024-06-25T18:45:27.363741177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598f97fd4f-fntx8,Uid:d23d5958-6421-407c-b29b-eb96cbc2a5d1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:27.364068 kubelet[3456]: E0625 18:45:27.364040 3456 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:27.364186 kubelet[3456]: E0625 18:45:27.364091 3456 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-598f97fd4f-fntx8" Jun 25 18:45:27.364186 kubelet[3456]: E0625 18:45:27.364118 3456 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-598f97fd4f-fntx8" Jun 25 18:45:27.364186 kubelet[3456]: E0625 18:45:27.364175 3456 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-598f97fd4f-fntx8_calico-system(d23d5958-6421-407c-b29b-eb96cbc2a5d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-598f97fd4f-fntx8_calico-system(d23d5958-6421-407c-b29b-eb96cbc2a5d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-598f97fd4f-fntx8" podUID="d23d5958-6421-407c-b29b-eb96cbc2a5d1" Jun 25 18:45:27.377845 containerd[1829]: time="2024-06-25T18:45:27.377792503Z" level=error msg="Failed to destroy network for sandbox \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:27.378153 containerd[1829]: time="2024-06-25T18:45:27.378120203Z" level=error msg="encountered an error cleaning up failed sandbox \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:27.378273 containerd[1829]: time="2024-06-25T18:45:27.378177904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-ztgmd,Uid:237fb894-19a1-415f-a0b5-869cf0ed9074,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:27.378445 kubelet[3456]: E0625 18:45:27.378426 3456 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:27.378526 kubelet[3456]: E0625 18:45:27.378486 3456 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-ztgmd" Jun 25 18:45:27.378526 kubelet[3456]: E0625 18:45:27.378517 3456 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-ztgmd" Jun 25 18:45:27.378634 kubelet[3456]: E0625 18:45:27.378595 3456 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-ztgmd_kube-system(237fb894-19a1-415f-a0b5-869cf0ed9074)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-ztgmd_kube-system(237fb894-19a1-415f-a0b5-869cf0ed9074)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-ztgmd" podUID="237fb894-19a1-415f-a0b5-869cf0ed9074" Jun 25 18:45:27.976987 kubelet[3456]: I0625 18:45:27.976945 3456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Jun 25 18:45:27.978056 containerd[1829]: time="2024-06-25T18:45:27.977774043Z" level=info msg="StopPodSandbox for \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\"" Jun 25 18:45:27.978265 containerd[1829]: time="2024-06-25T18:45:27.978181343Z" level=info msg="Ensure that sandbox 9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209 in task-service has 
been cleanup successfully" Jun 25 18:45:27.987233 containerd[1829]: time="2024-06-25T18:45:27.985876157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 18:45:27.988045 kubelet[3456]: I0625 18:45:27.988020 3456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Jun 25 18:45:27.988923 containerd[1829]: time="2024-06-25T18:45:27.988890962Z" level=info msg="StopPodSandbox for \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\"" Jun 25 18:45:27.989283 containerd[1829]: time="2024-06-25T18:45:27.989254563Z" level=info msg="Ensure that sandbox 78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720 in task-service has been cleanup successfully" Jun 25 18:45:27.991917 kubelet[3456]: I0625 18:45:27.991899 3456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Jun 25 18:45:27.992975 containerd[1829]: time="2024-06-25T18:45:27.992948169Z" level=info msg="StopPodSandbox for \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\"" Jun 25 18:45:27.994845 containerd[1829]: time="2024-06-25T18:45:27.994818972Z" level=info msg="Ensure that sandbox 7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea in task-service has been cleanup successfully" Jun 25 18:45:27.997189 kubelet[3456]: I0625 18:45:27.997166 3456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Jun 25 18:45:28.000185 containerd[1829]: time="2024-06-25T18:45:27.999707681Z" level=info msg="StopPodSandbox for \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\"" Jun 25 18:45:28.000185 containerd[1829]: time="2024-06-25T18:45:27.999934081Z" level=info msg="Ensure that sandbox 
2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3 in task-service has been cleanup successfully" Jun 25 18:45:28.052720 containerd[1829]: time="2024-06-25T18:45:28.052660473Z" level=error msg="StopPodSandbox for \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\" failed" error="failed to destroy network for sandbox \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:28.053360 kubelet[3456]: E0625 18:45:28.053182 3456 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Jun 25 18:45:28.053360 kubelet[3456]: E0625 18:45:28.053241 3456 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209"} Jun 25 18:45:28.053360 kubelet[3456]: E0625 18:45:28.053289 3456 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8be7a40-e96b-4fb8-bda5-b7efa142ed7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:45:28.053360 kubelet[3456]: E0625 18:45:28.053327 3456 
pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e8be7a40-e96b-4fb8-bda5-b7efa142ed7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-fwlqc" podUID="e8be7a40-e96b-4fb8-bda5-b7efa142ed7a" Jun 25 18:45:28.085190 containerd[1829]: time="2024-06-25T18:45:28.085134729Z" level=error msg="StopPodSandbox for \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\" failed" error="failed to destroy network for sandbox \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:28.085774 kubelet[3456]: E0625 18:45:28.085602 3456 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Jun 25 18:45:28.085774 kubelet[3456]: E0625 18:45:28.085656 3456 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720"} Jun 25 18:45:28.085774 kubelet[3456]: E0625 18:45:28.085704 3456 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"237fb894-19a1-415f-a0b5-869cf0ed9074\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:45:28.085774 kubelet[3456]: E0625 18:45:28.085741 3456 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"237fb894-19a1-415f-a0b5-869cf0ed9074\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-ztgmd" podUID="237fb894-19a1-415f-a0b5-869cf0ed9074" Jun 25 18:45:28.086718 containerd[1829]: time="2024-06-25T18:45:28.086279831Z" level=error msg="StopPodSandbox for \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\" failed" error="failed to destroy network for sandbox \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:28.086807 kubelet[3456]: E0625 18:45:28.086549 3456 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Jun 25 18:45:28.086807 kubelet[3456]: E0625 18:45:28.086615 3456 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea"} Jun 25 18:45:28.086807 kubelet[3456]: E0625 18:45:28.086660 3456 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1b8cd264-e868-4cd7-89a2-2e1d11e52069\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:45:28.086807 kubelet[3456]: E0625 18:45:28.086696 3456 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1b8cd264-e868-4cd7-89a2-2e1d11e52069\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9x5m4" podUID="1b8cd264-e868-4cd7-89a2-2e1d11e52069" Jun 25 18:45:28.089129 containerd[1829]: time="2024-06-25T18:45:28.089084036Z" level=error msg="StopPodSandbox for \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\" failed" error="failed to destroy network for sandbox \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jun 25 18:45:28.089336 kubelet[3456]: E0625 18:45:28.089318 3456 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Jun 25 18:45:28.089426 kubelet[3456]: E0625 18:45:28.089351 3456 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3"} Jun 25 18:45:28.089426 kubelet[3456]: E0625 18:45:28.089389 3456 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d23d5958-6421-407c-b29b-eb96cbc2a5d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:45:28.089519 kubelet[3456]: E0625 18:45:28.089425 3456 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d23d5958-6421-407c-b29b-eb96cbc2a5d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-598f97fd4f-fntx8" podUID="d23d5958-6421-407c-b29b-eb96cbc2a5d1" Jun 25 
18:45:28.182496 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209-shm.mount: Deactivated successfully. Jun 25 18:45:28.182734 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720-shm.mount: Deactivated successfully. Jun 25 18:45:28.182873 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3-shm.mount: Deactivated successfully. Jun 25 18:45:28.183008 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea-shm.mount: Deactivated successfully. Jun 25 18:45:33.535831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3900844279.mount: Deactivated successfully. Jun 25 18:45:33.587955 containerd[1829]: time="2024-06-25T18:45:33.587894766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:33.589966 containerd[1829]: time="2024-06-25T18:45:33.589898769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 18:45:33.594799 containerd[1829]: time="2024-06-25T18:45:33.594731278Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:33.598393 containerd[1829]: time="2024-06-25T18:45:33.598337584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:33.599285 containerd[1829]: time="2024-06-25T18:45:33.598995985Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id 
\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 5.613077328s" Jun 25 18:45:33.599285 containerd[1829]: time="2024-06-25T18:45:33.599037485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 18:45:33.617001 containerd[1829]: time="2024-06-25T18:45:33.616945216Z" level=info msg="CreateContainer within sandbox \"07cab1a6f2f46df5020f49a9b66d403637d37d4a883765db0cb8d91138f4663b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 18:45:33.653335 containerd[1829]: time="2024-06-25T18:45:33.653285479Z" level=info msg="CreateContainer within sandbox \"07cab1a6f2f46df5020f49a9b66d403637d37d4a883765db0cb8d91138f4663b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b0c66b6bcec15aa5966647e4a874db74e857499a32f74222084a608f3906c029\"" Jun 25 18:45:33.654778 containerd[1829]: time="2024-06-25T18:45:33.653877680Z" level=info msg="StartContainer for \"b0c66b6bcec15aa5966647e4a874db74e857499a32f74222084a608f3906c029\"" Jun 25 18:45:33.720092 containerd[1829]: time="2024-06-25T18:45:33.720034895Z" level=info msg="StartContainer for \"b0c66b6bcec15aa5966647e4a874db74e857499a32f74222084a608f3906c029\" returns successfully" Jun 25 18:45:33.929504 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 18:45:33.929696 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jun 25 18:45:34.041951 kubelet[3456]: I0625 18:45:34.041453 3456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-zx8wb" podStartSLOduration=2.393611795 podCreationTimestamp="2024-06-25 18:45:16 +0000 UTC" firstStartedPulling="2024-06-25 18:45:17.951623929 +0000 UTC m=+28.235853711" lastFinishedPulling="2024-06-25 18:45:33.599290186 +0000 UTC m=+43.883520068" observedRunningTime="2024-06-25 18:45:34.038398947 +0000 UTC m=+44.322628829" watchObservedRunningTime="2024-06-25 18:45:34.041278152 +0000 UTC m=+44.325508034" Jun 25 18:45:35.731392 systemd-networkd[1397]: vxlan.calico: Link UP Jun 25 18:45:35.731613 systemd-networkd[1397]: vxlan.calico: Gained carrier Jun 25 18:45:37.529814 systemd-networkd[1397]: vxlan.calico: Gained IPv6LL Jun 25 18:45:40.805356 containerd[1829]: time="2024-06-25T18:45:40.805288680Z" level=info msg="StopPodSandbox for \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\"" Jun 25 18:45:40.806586 containerd[1829]: time="2024-06-25T18:45:40.805513280Z" level=info msg="StopPodSandbox for \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\"" Jun 25 18:45:40.910916 containerd[1829]: 2024-06-25 18:45:40.870 [INFO][4932] k8s.go 608: Cleaning up netns ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Jun 25 18:45:40.910916 containerd[1829]: 2024-06-25 18:45:40.871 [INFO][4932] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" iface="eth0" netns="/var/run/netns/cni-457be7c5-aca5-705c-67e4-8d90bf3fa6ec" Jun 25 18:45:40.910916 containerd[1829]: 2024-06-25 18:45:40.871 [INFO][4932] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" iface="eth0" netns="/var/run/netns/cni-457be7c5-aca5-705c-67e4-8d90bf3fa6ec" Jun 25 18:45:40.910916 containerd[1829]: 2024-06-25 18:45:40.872 [INFO][4932] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" iface="eth0" netns="/var/run/netns/cni-457be7c5-aca5-705c-67e4-8d90bf3fa6ec" Jun 25 18:45:40.910916 containerd[1829]: 2024-06-25 18:45:40.872 [INFO][4932] k8s.go 615: Releasing IP address(es) ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Jun 25 18:45:40.910916 containerd[1829]: 2024-06-25 18:45:40.872 [INFO][4932] utils.go 188: Calico CNI releasing IP address ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Jun 25 18:45:40.910916 containerd[1829]: 2024-06-25 18:45:40.901 [INFO][4941] ipam_plugin.go 411: Releasing address using handleID ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" HandleID="k8s-pod-network.9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" Jun 25 18:45:40.910916 containerd[1829]: 2024-06-25 18:45:40.901 [INFO][4941] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:40.910916 containerd[1829]: 2024-06-25 18:45:40.901 [INFO][4941] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:40.910916 containerd[1829]: 2024-06-25 18:45:40.906 [WARNING][4941] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" HandleID="k8s-pod-network.9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" Jun 25 18:45:40.910916 containerd[1829]: 2024-06-25 18:45:40.906 [INFO][4941] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" HandleID="k8s-pod-network.9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" Jun 25 18:45:40.910916 containerd[1829]: 2024-06-25 18:45:40.908 [INFO][4941] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:40.910916 containerd[1829]: 2024-06-25 18:45:40.909 [INFO][4932] k8s.go 621: Teardown processing complete. ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Jun 25 18:45:40.910916 containerd[1829]: time="2024-06-25T18:45:40.910899991Z" level=info msg="TearDown network for sandbox \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\" successfully" Jun 25 18:45:40.913038 containerd[1829]: time="2024-06-25T18:45:40.910931691Z" level=info msg="StopPodSandbox for \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\" returns successfully" Jun 25 18:45:40.917366 containerd[1829]: time="2024-06-25T18:45:40.916316796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fwlqc,Uid:e8be7a40-e96b-4fb8-bda5-b7efa142ed7a,Namespace:kube-system,Attempt:1,}" Jun 25 18:45:40.919552 systemd[1]: run-netns-cni\x2d457be7c5\x2daca5\x2d705c\x2d67e4\x2d8d90bf3fa6ec.mount: Deactivated successfully. 
Jun 25 18:45:40.930600 containerd[1829]: 2024-06-25 18:45:40.865 [INFO][4921] k8s.go 608: Cleaning up netns ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Jun 25 18:45:40.930600 containerd[1829]: 2024-06-25 18:45:40.865 [INFO][4921] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" iface="eth0" netns="/var/run/netns/cni-78eb7e4e-c88c-88c9-a6b4-44cfecd5511c" Jun 25 18:45:40.930600 containerd[1829]: 2024-06-25 18:45:40.866 [INFO][4921] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" iface="eth0" netns="/var/run/netns/cni-78eb7e4e-c88c-88c9-a6b4-44cfecd5511c" Jun 25 18:45:40.930600 containerd[1829]: 2024-06-25 18:45:40.867 [INFO][4921] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" iface="eth0" netns="/var/run/netns/cni-78eb7e4e-c88c-88c9-a6b4-44cfecd5511c" Jun 25 18:45:40.930600 containerd[1829]: 2024-06-25 18:45:40.867 [INFO][4921] k8s.go 615: Releasing IP address(es) ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Jun 25 18:45:40.930600 containerd[1829]: 2024-06-25 18:45:40.867 [INFO][4921] utils.go 188: Calico CNI releasing IP address ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Jun 25 18:45:40.930600 containerd[1829]: 2024-06-25 18:45:40.901 [INFO][4940] ipam_plugin.go 411: Releasing address using handleID ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" HandleID="k8s-pod-network.78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" Jun 25 18:45:40.930600 containerd[1829]: 2024-06-25 18:45:40.901 [INFO][4940] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jun 25 18:45:40.930600 containerd[1829]: 2024-06-25 18:45:40.908 [INFO][4940] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:40.930600 containerd[1829]: 2024-06-25 18:45:40.918 [WARNING][4940] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" HandleID="k8s-pod-network.78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" Jun 25 18:45:40.930600 containerd[1829]: 2024-06-25 18:45:40.918 [INFO][4940] ipam_plugin.go 439: Releasing address using workloadID ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" HandleID="k8s-pod-network.78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" Jun 25 18:45:40.930600 containerd[1829]: 2024-06-25 18:45:40.923 [INFO][4940] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:40.930600 containerd[1829]: 2024-06-25 18:45:40.927 [INFO][4921] k8s.go 621: Teardown processing complete. 
ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Jun 25 18:45:40.930600 containerd[1829]: time="2024-06-25T18:45:40.929332410Z" level=info msg="TearDown network for sandbox \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\" successfully" Jun 25 18:45:40.930600 containerd[1829]: time="2024-06-25T18:45:40.929362610Z" level=info msg="StopPodSandbox for \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\" returns successfully" Jun 25 18:45:40.936491 containerd[1829]: time="2024-06-25T18:45:40.933513614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-ztgmd,Uid:237fb894-19a1-415f-a0b5-869cf0ed9074,Namespace:kube-system,Attempt:1,}" Jun 25 18:45:40.937990 systemd[1]: run-netns-cni\x2d78eb7e4e\x2dc88c\x2d88c9\x2da6b4\x2d44cfecd5511c.mount: Deactivated successfully. Jun 25 18:45:41.203998 systemd-networkd[1397]: cali5ae3b61ec12: Link UP Jun 25 18:45:41.207774 systemd-networkd[1397]: cali5ae3b61ec12: Gained carrier Jun 25 18:45:41.236640 containerd[1829]: 2024-06-25 18:45:41.092 [INFO][4963] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0 coredns-5dd5756b68- kube-system 237fb894-19a1-415f-a0b5-869cf0ed9074 777 0 2024-06-25 18:45:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.0.0-a-bcd7e269e6 coredns-5dd5756b68-ztgmd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5ae3b61ec12 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" Namespace="kube-system" Pod="coredns-5dd5756b68-ztgmd" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-" Jun 25 18:45:41.236640 containerd[1829]: 
2024-06-25 18:45:41.092 [INFO][4963] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" Namespace="kube-system" Pod="coredns-5dd5756b68-ztgmd" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" Jun 25 18:45:41.236640 containerd[1829]: 2024-06-25 18:45:41.140 [INFO][4983] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" HandleID="k8s-pod-network.5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" Jun 25 18:45:41.236640 containerd[1829]: 2024-06-25 18:45:41.156 [INFO][4983] ipam_plugin.go 264: Auto assigning IP ContainerID="5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" HandleID="k8s-pod-network.5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291660), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.0.0-a-bcd7e269e6", "pod":"coredns-5dd5756b68-ztgmd", "timestamp":"2024-06-25 18:45:41.140562332 +0000 UTC"}, Hostname:"ci-4012.0.0-a-bcd7e269e6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:41.236640 containerd[1829]: 2024-06-25 18:45:41.157 [INFO][4983] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:41.236640 containerd[1829]: 2024-06-25 18:45:41.157 [INFO][4983] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:45:41.236640 containerd[1829]: 2024-06-25 18:45:41.157 [INFO][4983] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-bcd7e269e6' Jun 25 18:45:41.236640 containerd[1829]: 2024-06-25 18:45:41.161 [INFO][4983] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:41.236640 containerd[1829]: 2024-06-25 18:45:41.169 [INFO][4983] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:41.236640 containerd[1829]: 2024-06-25 18:45:41.173 [INFO][4983] ipam.go 489: Trying affinity for 192.168.45.64/26 host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:41.236640 containerd[1829]: 2024-06-25 18:45:41.175 [INFO][4983] ipam.go 155: Attempting to load block cidr=192.168.45.64/26 host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:41.236640 containerd[1829]: 2024-06-25 18:45:41.180 [INFO][4983] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.64/26 host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:41.236640 containerd[1829]: 2024-06-25 18:45:41.180 [INFO][4983] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.64/26 handle="k8s-pod-network.5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:41.236640 containerd[1829]: 2024-06-25 18:45:41.182 [INFO][4983] ipam.go 1685: Creating new handle: k8s-pod-network.5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572 Jun 25 18:45:41.236640 containerd[1829]: 2024-06-25 18:45:41.189 [INFO][4983] ipam.go 1203: Writing block in order to claim IPs block=192.168.45.64/26 handle="k8s-pod-network.5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:41.236640 containerd[1829]: 2024-06-25 18:45:41.193 [INFO][4983] ipam.go 1216: Successfully claimed IPs: [192.168.45.65/26] 
block=192.168.45.64/26 handle="k8s-pod-network.5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:41.236640 containerd[1829]: 2024-06-25 18:45:41.194 [INFO][4983] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.65/26] handle="k8s-pod-network.5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:41.236640 containerd[1829]: 2024-06-25 18:45:41.194 [INFO][4983] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:41.236640 containerd[1829]: 2024-06-25 18:45:41.194 [INFO][4983] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.45.65/26] IPv6=[] ContainerID="5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" HandleID="k8s-pod-network.5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" Jun 25 18:45:41.239008 containerd[1829]: 2024-06-25 18:45:41.197 [INFO][4963] k8s.go 386: Populated endpoint ContainerID="5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" Namespace="kube-system" Pod="coredns-5dd5756b68-ztgmd" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"237fb894-19a1-415f-a0b5-869cf0ed9074", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"", Pod:"coredns-5dd5756b68-ztgmd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ae3b61ec12", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:41.239008 containerd[1829]: 2024-06-25 18:45:41.198 [INFO][4963] k8s.go 387: Calico CNI using IPs: [192.168.45.65/32] ContainerID="5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" Namespace="kube-system" Pod="coredns-5dd5756b68-ztgmd" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" Jun 25 18:45:41.239008 containerd[1829]: 2024-06-25 18:45:41.198 [INFO][4963] dataplane_linux.go 68: Setting the host side veth name to cali5ae3b61ec12 ContainerID="5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" Namespace="kube-system" Pod="coredns-5dd5756b68-ztgmd" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" Jun 25 18:45:41.239008 containerd[1829]: 2024-06-25 18:45:41.203 [INFO][4963] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" Namespace="kube-system" 
Pod="coredns-5dd5756b68-ztgmd" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" Jun 25 18:45:41.239008 containerd[1829]: 2024-06-25 18:45:41.205 [INFO][4963] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" Namespace="kube-system" Pod="coredns-5dd5756b68-ztgmd" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"237fb894-19a1-415f-a0b5-869cf0ed9074", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572", Pod:"coredns-5dd5756b68-ztgmd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ae3b61ec12", MAC:"62:af:12:9e:6c:f1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:41.239008 containerd[1829]: 2024-06-25 18:45:41.232 [INFO][4963] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572" Namespace="kube-system" Pod="coredns-5dd5756b68-ztgmd" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" Jun 25 18:45:41.284586 systemd-networkd[1397]: caliaa46e8a6ba1: Link UP Jun 25 18:45:41.287772 systemd-networkd[1397]: caliaa46e8a6ba1: Gained carrier Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.095 [INFO][4958] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0 coredns-5dd5756b68- kube-system e8be7a40-e96b-4fb8-bda5-b7efa142ed7a 778 0 2024-06-25 18:45:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.0.0-a-bcd7e269e6 coredns-5dd5756b68-fwlqc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaa46e8a6ba1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" Namespace="kube-system" Pod="coredns-5dd5756b68-fwlqc" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-" Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.095 [INFO][4958] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" Namespace="kube-system" 
Pod="coredns-5dd5756b68-fwlqc" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.157 [INFO][4987] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" HandleID="k8s-pod-network.cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.168 [INFO][4987] ipam_plugin.go 264: Auto assigning IP ContainerID="cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" HandleID="k8s-pod-network.cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051db0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.0.0-a-bcd7e269e6", "pod":"coredns-5dd5756b68-fwlqc", "timestamp":"2024-06-25 18:45:41.15794205 +0000 UTC"}, Hostname:"ci-4012.0.0-a-bcd7e269e6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.168 [INFO][4987] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.194 [INFO][4987] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.195 [INFO][4987] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-bcd7e269e6' Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.197 [INFO][4987] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.208 [INFO][4987] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.214 [INFO][4987] ipam.go 489: Trying affinity for 192.168.45.64/26 host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.217 [INFO][4987] ipam.go 155: Attempting to load block cidr=192.168.45.64/26 host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.228 [INFO][4987] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.64/26 host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.228 [INFO][4987] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.64/26 handle="k8s-pod-network.cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.233 [INFO][4987] ipam.go 1685: Creating new handle: k8s-pod-network.cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.252 [INFO][4987] ipam.go 1203: Writing block in order to claim IPs block=192.168.45.64/26 handle="k8s-pod-network.cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.270 [INFO][4987] ipam.go 1216: Successfully claimed IPs: [192.168.45.66/26] 
block=192.168.45.64/26 handle="k8s-pod-network.cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.270 [INFO][4987] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.66/26] handle="k8s-pod-network.cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.270 [INFO][4987] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:41.314889 containerd[1829]: 2024-06-25 18:45:41.270 [INFO][4987] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.45.66/26] IPv6=[] ContainerID="cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" HandleID="k8s-pod-network.cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" Jun 25 18:45:41.316375 containerd[1829]: 2024-06-25 18:45:41.274 [INFO][4958] k8s.go 386: Populated endpoint ContainerID="cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" Namespace="kube-system" Pod="coredns-5dd5756b68-fwlqc" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e8be7a40-e96b-4fb8-bda5-b7efa142ed7a", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"", Pod:"coredns-5dd5756b68-fwlqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa46e8a6ba1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:41.316375 containerd[1829]: 2024-06-25 18:45:41.274 [INFO][4958] k8s.go 387: Calico CNI using IPs: [192.168.45.66/32] ContainerID="cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" Namespace="kube-system" Pod="coredns-5dd5756b68-fwlqc" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" Jun 25 18:45:41.316375 containerd[1829]: 2024-06-25 18:45:41.274 [INFO][4958] dataplane_linux.go 68: Setting the host side veth name to caliaa46e8a6ba1 ContainerID="cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" Namespace="kube-system" Pod="coredns-5dd5756b68-fwlqc" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" Jun 25 18:45:41.316375 containerd[1829]: 2024-06-25 18:45:41.288 [INFO][4958] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" Namespace="kube-system" 
Pod="coredns-5dd5756b68-fwlqc" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" Jun 25 18:45:41.316375 containerd[1829]: 2024-06-25 18:45:41.291 [INFO][4958] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" Namespace="kube-system" Pod="coredns-5dd5756b68-fwlqc" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e8be7a40-e96b-4fb8-bda5-b7efa142ed7a", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e", Pod:"coredns-5dd5756b68-fwlqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa46e8a6ba1", MAC:"c6:3f:ad:e6:34:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:41.316375 containerd[1829]: 2024-06-25 18:45:41.304 [INFO][4958] k8s.go 500: Wrote updated endpoint to datastore ContainerID="cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e" Namespace="kube-system" Pod="coredns-5dd5756b68-fwlqc" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" Jun 25 18:45:41.339608 containerd[1829]: time="2024-06-25T18:45:41.339378041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:41.342083 containerd[1829]: time="2024-06-25T18:45:41.341053343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:41.342083 containerd[1829]: time="2024-06-25T18:45:41.341084743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:41.342083 containerd[1829]: time="2024-06-25T18:45:41.341099243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:41.381272 containerd[1829]: time="2024-06-25T18:45:41.380101384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:41.381776 containerd[1829]: time="2024-06-25T18:45:41.381378085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:41.381776 containerd[1829]: time="2024-06-25T18:45:41.381419685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:41.381776 containerd[1829]: time="2024-06-25T18:45:41.381435285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:41.496919 containerd[1829]: time="2024-06-25T18:45:41.496858707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-ztgmd,Uid:237fb894-19a1-415f-a0b5-869cf0ed9074,Namespace:kube-system,Attempt:1,} returns sandbox id \"5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572\"" Jun 25 18:45:41.501213 containerd[1829]: time="2024-06-25T18:45:41.501071111Z" level=info msg="CreateContainer within sandbox \"5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:45:41.526924 containerd[1829]: time="2024-06-25T18:45:41.526872538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fwlqc,Uid:e8be7a40-e96b-4fb8-bda5-b7efa142ed7a,Namespace:kube-system,Attempt:1,} returns sandbox id \"cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e\"" Jun 25 18:45:41.531648 containerd[1829]: time="2024-06-25T18:45:41.531395643Z" level=info msg="CreateContainer within sandbox \"cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:45:41.536761 containerd[1829]: time="2024-06-25T18:45:41.536724549Z" level=info msg="CreateContainer within sandbox \"5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a701db345eb484a66fbdc291cafd9fb722f7334a2c347adc9afc16ca075e7d21\"" Jun 25 18:45:41.537638 
containerd[1829]: time="2024-06-25T18:45:41.537608550Z" level=info msg="StartContainer for \"a701db345eb484a66fbdc291cafd9fb722f7334a2c347adc9afc16ca075e7d21\"" Jun 25 18:45:41.573400 containerd[1829]: time="2024-06-25T18:45:41.573264287Z" level=info msg="CreateContainer within sandbox \"cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c2c6ad2f7532a0ca00b52ec4690d2ed4231aa39583705a344be8290b45ba9a97\"" Jun 25 18:45:41.577295 containerd[1829]: time="2024-06-25T18:45:41.577042591Z" level=info msg="StartContainer for \"c2c6ad2f7532a0ca00b52ec4690d2ed4231aa39583705a344be8290b45ba9a97\"" Jun 25 18:45:41.657965 containerd[1829]: time="2024-06-25T18:45:41.657913076Z" level=info msg="StartContainer for \"a701db345eb484a66fbdc291cafd9fb722f7334a2c347adc9afc16ca075e7d21\" returns successfully" Jun 25 18:45:41.698211 containerd[1829]: time="2024-06-25T18:45:41.697819718Z" level=info msg="StartContainer for \"c2c6ad2f7532a0ca00b52ec4690d2ed4231aa39583705a344be8290b45ba9a97\" returns successfully" Jun 25 18:45:41.810686 containerd[1829]: time="2024-06-25T18:45:41.807676534Z" level=info msg="StopPodSandbox for \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\"" Jun 25 18:45:41.961674 containerd[1829]: 2024-06-25 18:45:41.890 [INFO][5194] k8s.go 608: Cleaning up netns ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Jun 25 18:45:41.961674 containerd[1829]: 2024-06-25 18:45:41.891 [INFO][5194] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" iface="eth0" netns="/var/run/netns/cni-aec509cc-9415-7997-d9eb-f1090a75ee89" Jun 25 18:45:41.961674 containerd[1829]: 2024-06-25 18:45:41.891 [INFO][5194] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" iface="eth0" netns="/var/run/netns/cni-aec509cc-9415-7997-d9eb-f1090a75ee89" Jun 25 18:45:41.961674 containerd[1829]: 2024-06-25 18:45:41.892 [INFO][5194] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" iface="eth0" netns="/var/run/netns/cni-aec509cc-9415-7997-d9eb-f1090a75ee89" Jun 25 18:45:41.961674 containerd[1829]: 2024-06-25 18:45:41.892 [INFO][5194] k8s.go 615: Releasing IP address(es) ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Jun 25 18:45:41.961674 containerd[1829]: 2024-06-25 18:45:41.892 [INFO][5194] utils.go 188: Calico CNI releasing IP address ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Jun 25 18:45:41.961674 containerd[1829]: 2024-06-25 18:45:41.951 [INFO][5200] ipam_plugin.go 411: Releasing address using handleID ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" HandleID="k8s-pod-network.7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" Jun 25 18:45:41.961674 containerd[1829]: 2024-06-25 18:45:41.952 [INFO][5200] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:41.961674 containerd[1829]: 2024-06-25 18:45:41.952 [INFO][5200] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:41.961674 containerd[1829]: 2024-06-25 18:45:41.957 [WARNING][5200] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" HandleID="k8s-pod-network.7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" Jun 25 18:45:41.961674 containerd[1829]: 2024-06-25 18:45:41.957 [INFO][5200] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" HandleID="k8s-pod-network.7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" Jun 25 18:45:41.961674 containerd[1829]: 2024-06-25 18:45:41.958 [INFO][5200] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:41.961674 containerd[1829]: 2024-06-25 18:45:41.960 [INFO][5194] k8s.go 621: Teardown processing complete. ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Jun 25 18:45:41.965613 containerd[1829]: time="2024-06-25T18:45:41.962636996Z" level=info msg="TearDown network for sandbox \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\" successfully" Jun 25 18:45:41.965613 containerd[1829]: time="2024-06-25T18:45:41.962705797Z" level=info msg="StopPodSandbox for \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\" returns successfully" Jun 25 18:45:41.965613 containerd[1829]: time="2024-06-25T18:45:41.963460197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9x5m4,Uid:1b8cd264-e868-4cd7-89a2-2e1d11e52069,Namespace:calico-system,Attempt:1,}" Jun 25 18:45:41.970953 systemd[1]: run-netns-cni\x2daec509cc\x2d9415\x2d7997\x2dd9eb\x2df1090a75ee89.mount: Deactivated successfully. 
Jun 25 18:45:42.103069 kubelet[3456]: I0625 18:45:42.101546 3456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-fwlqc" podStartSLOduration=39.101493242 podCreationTimestamp="2024-06-25 18:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:45:42.100872642 +0000 UTC m=+52.385102424" watchObservedRunningTime="2024-06-25 18:45:42.101493242 +0000 UTC m=+52.385723024" Jun 25 18:45:42.106173 kubelet[3456]: I0625 18:45:42.104645 3456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-ztgmd" podStartSLOduration=39.104592446 podCreationTimestamp="2024-06-25 18:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:45:42.071694211 +0000 UTC m=+52.355924093" watchObservedRunningTime="2024-06-25 18:45:42.104592446 +0000 UTC m=+52.388822328" Jun 25 18:45:42.236019 systemd-networkd[1397]: calidf8c9567c5d: Link UP Jun 25 18:45:42.240958 systemd-networkd[1397]: calidf8c9567c5d: Gained carrier Jun 25 18:45:42.254799 containerd[1829]: 2024-06-25 18:45:42.119 [INFO][5206] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0 csi-node-driver- calico-system 1b8cd264-e868-4cd7-89a2-2e1d11e52069 793 0 2024-06-25 18:45:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4012.0.0-a-bcd7e269e6 csi-node-driver-9x5m4 eth0 default [] [] [kns.calico-system ksa.calico-system.default] calidf8c9567c5d [] []}} 
ContainerID="9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" Namespace="calico-system" Pod="csi-node-driver-9x5m4" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-" Jun 25 18:45:42.254799 containerd[1829]: 2024-06-25 18:45:42.119 [INFO][5206] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" Namespace="calico-system" Pod="csi-node-driver-9x5m4" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" Jun 25 18:45:42.254799 containerd[1829]: 2024-06-25 18:45:42.183 [INFO][5221] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" HandleID="k8s-pod-network.9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" Jun 25 18:45:42.254799 containerd[1829]: 2024-06-25 18:45:42.193 [INFO][5221] ipam_plugin.go 264: Auto assigning IP ContainerID="9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" HandleID="k8s-pod-network.9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002013b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.0.0-a-bcd7e269e6", "pod":"csi-node-driver-9x5m4", "timestamp":"2024-06-25 18:45:42.183231328 +0000 UTC"}, Hostname:"ci-4012.0.0-a-bcd7e269e6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:42.254799 containerd[1829]: 2024-06-25 18:45:42.194 [INFO][5221] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jun 25 18:45:42.254799 containerd[1829]: 2024-06-25 18:45:42.194 [INFO][5221] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:42.254799 containerd[1829]: 2024-06-25 18:45:42.194 [INFO][5221] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-bcd7e269e6' Jun 25 18:45:42.254799 containerd[1829]: 2024-06-25 18:45:42.196 [INFO][5221] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:42.254799 containerd[1829]: 2024-06-25 18:45:42.200 [INFO][5221] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:42.254799 containerd[1829]: 2024-06-25 18:45:42.204 [INFO][5221] ipam.go 489: Trying affinity for 192.168.45.64/26 host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:42.254799 containerd[1829]: 2024-06-25 18:45:42.206 [INFO][5221] ipam.go 155: Attempting to load block cidr=192.168.45.64/26 host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:42.254799 containerd[1829]: 2024-06-25 18:45:42.209 [INFO][5221] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.64/26 host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:42.254799 containerd[1829]: 2024-06-25 18:45:42.209 [INFO][5221] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.64/26 handle="k8s-pod-network.9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:42.254799 containerd[1829]: 2024-06-25 18:45:42.211 [INFO][5221] ipam.go 1685: Creating new handle: k8s-pod-network.9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8 Jun 25 18:45:42.254799 containerd[1829]: 2024-06-25 18:45:42.215 [INFO][5221] ipam.go 1203: Writing block in order to claim IPs block=192.168.45.64/26 handle="k8s-pod-network.9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:42.254799 
containerd[1829]: 2024-06-25 18:45:42.226 [INFO][5221] ipam.go 1216: Successfully claimed IPs: [192.168.45.67/26] block=192.168.45.64/26 handle="k8s-pod-network.9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:42.254799 containerd[1829]: 2024-06-25 18:45:42.227 [INFO][5221] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.67/26] handle="k8s-pod-network.9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:42.254799 containerd[1829]: 2024-06-25 18:45:42.227 [INFO][5221] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:42.254799 containerd[1829]: 2024-06-25 18:45:42.227 [INFO][5221] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.45.67/26] IPv6=[] ContainerID="9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" HandleID="k8s-pod-network.9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" Jun 25 18:45:42.256909 containerd[1829]: 2024-06-25 18:45:42.230 [INFO][5206] k8s.go 386: Populated endpoint ContainerID="9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" Namespace="calico-system" Pod="csi-node-driver-9x5m4" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1b8cd264-e868-4cd7-89a2-2e1d11e52069", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"", Pod:"csi-node-driver-9x5m4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.45.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calidf8c9567c5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:42.256909 containerd[1829]: 2024-06-25 18:45:42.231 [INFO][5206] k8s.go 387: Calico CNI using IPs: [192.168.45.67/32] ContainerID="9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" Namespace="calico-system" Pod="csi-node-driver-9x5m4" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" Jun 25 18:45:42.256909 containerd[1829]: 2024-06-25 18:45:42.231 [INFO][5206] dataplane_linux.go 68: Setting the host side veth name to calidf8c9567c5d ContainerID="9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" Namespace="calico-system" Pod="csi-node-driver-9x5m4" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" Jun 25 18:45:42.256909 containerd[1829]: 2024-06-25 18:45:42.234 [INFO][5206] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" Namespace="calico-system" Pod="csi-node-driver-9x5m4" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" Jun 25 18:45:42.256909 containerd[1829]: 2024-06-25 18:45:42.235 [INFO][5206] k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" Namespace="calico-system" Pod="csi-node-driver-9x5m4" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1b8cd264-e868-4cd7-89a2-2e1d11e52069", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8", Pod:"csi-node-driver-9x5m4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.45.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calidf8c9567c5d", MAC:"5a:dd:4e:01:f4:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:42.256909 containerd[1829]: 2024-06-25 18:45:42.250 [INFO][5206] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8" Namespace="calico-system" Pod="csi-node-driver-9x5m4" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" Jun 25 18:45:42.298974 containerd[1829]: time="2024-06-25T18:45:42.298680450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:42.298974 containerd[1829]: time="2024-06-25T18:45:42.298741550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:42.298974 containerd[1829]: time="2024-06-25T18:45:42.298781350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:42.298974 containerd[1829]: time="2024-06-25T18:45:42.298802450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:42.362015 containerd[1829]: time="2024-06-25T18:45:42.361875516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9x5m4,Uid:1b8cd264-e868-4cd7-89a2-2e1d11e52069,Namespace:calico-system,Attempt:1,} returns sandbox id \"9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8\"" Jun 25 18:45:42.364913 containerd[1829]: time="2024-06-25T18:45:42.364884019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 18:45:42.809124 containerd[1829]: time="2024-06-25T18:45:42.808674486Z" level=info msg="StopPodSandbox for \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\"" Jun 25 18:45:42.906276 systemd-networkd[1397]: cali5ae3b61ec12: Gained IPv6LL Jun 25 18:45:42.936246 containerd[1829]: 2024-06-25 18:45:42.885 [INFO][5296] k8s.go 608: Cleaning up netns ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Jun 25 18:45:42.936246 
containerd[1829]: 2024-06-25 18:45:42.886 [INFO][5296] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" iface="eth0" netns="/var/run/netns/cni-c846fa6d-4421-c704-ec19-843f28a5177c" Jun 25 18:45:42.936246 containerd[1829]: 2024-06-25 18:45:42.886 [INFO][5296] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" iface="eth0" netns="/var/run/netns/cni-c846fa6d-4421-c704-ec19-843f28a5177c" Jun 25 18:45:42.936246 containerd[1829]: 2024-06-25 18:45:42.886 [INFO][5296] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" iface="eth0" netns="/var/run/netns/cni-c846fa6d-4421-c704-ec19-843f28a5177c" Jun 25 18:45:42.936246 containerd[1829]: 2024-06-25 18:45:42.886 [INFO][5296] k8s.go 615: Releasing IP address(es) ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Jun 25 18:45:42.936246 containerd[1829]: 2024-06-25 18:45:42.886 [INFO][5296] utils.go 188: Calico CNI releasing IP address ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Jun 25 18:45:42.936246 containerd[1829]: 2024-06-25 18:45:42.919 [INFO][5303] ipam_plugin.go 411: Releasing address using handleID ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" HandleID="k8s-pod-network.2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" Jun 25 18:45:42.936246 containerd[1829]: 2024-06-25 18:45:42.920 [INFO][5303] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:42.936246 containerd[1829]: 2024-06-25 18:45:42.920 [INFO][5303] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:45:42.936246 containerd[1829]: 2024-06-25 18:45:42.930 [WARNING][5303] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" HandleID="k8s-pod-network.2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" Jun 25 18:45:42.936246 containerd[1829]: 2024-06-25 18:45:42.930 [INFO][5303] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" HandleID="k8s-pod-network.2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" Jun 25 18:45:42.936246 containerd[1829]: 2024-06-25 18:45:42.932 [INFO][5303] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:42.936246 containerd[1829]: 2024-06-25 18:45:42.934 [INFO][5296] k8s.go 621: Teardown processing complete. ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Jun 25 18:45:42.938490 containerd[1829]: time="2024-06-25T18:45:42.937356021Z" level=info msg="TearDown network for sandbox \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\" successfully" Jun 25 18:45:42.938490 containerd[1829]: time="2024-06-25T18:45:42.937408821Z" level=info msg="StopPodSandbox for \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\" returns successfully" Jun 25 18:45:42.940518 containerd[1829]: time="2024-06-25T18:45:42.939869724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598f97fd4f-fntx8,Uid:d23d5958-6421-407c-b29b-eb96cbc2a5d1,Namespace:calico-system,Attempt:1,}" Jun 25 18:45:42.941371 systemd[1]: run-netns-cni\x2dc846fa6d\x2d4421\x2dc704\x2dec19\x2d843f28a5177c.mount: Deactivated successfully. 
Jun 25 18:45:43.100973 systemd-networkd[1397]: cali85c20294c2d: Link UP Jun 25 18:45:43.105048 systemd-networkd[1397]: cali85c20294c2d: Gained carrier Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.026 [INFO][5310] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0 calico-kube-controllers-598f97fd4f- calico-system d23d5958-6421-407c-b29b-eb96cbc2a5d1 814 0 2024-06-25 18:45:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:598f97fd4f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4012.0.0-a-bcd7e269e6 calico-kube-controllers-598f97fd4f-fntx8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali85c20294c2d [] []}} ContainerID="3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" Namespace="calico-system" Pod="calico-kube-controllers-598f97fd4f-fntx8" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-" Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.026 [INFO][5310] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" Namespace="calico-system" Pod="calico-kube-controllers-598f97fd4f-fntx8" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.053 [INFO][5321] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" HandleID="k8s-pod-network.3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" 
Workload="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.068 [INFO][5321] ipam_plugin.go 264: Auto assigning IP ContainerID="3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" HandleID="k8s-pod-network.3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005a1770), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.0.0-a-bcd7e269e6", "pod":"calico-kube-controllers-598f97fd4f-fntx8", "timestamp":"2024-06-25 18:45:43.053788644 +0000 UTC"}, Hostname:"ci-4012.0.0-a-bcd7e269e6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.068 [INFO][5321] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.068 [INFO][5321] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.068 [INFO][5321] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-bcd7e269e6' Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.070 [INFO][5321] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.076 [INFO][5321] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.080 [INFO][5321] ipam.go 489: Trying affinity for 192.168.45.64/26 host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.081 [INFO][5321] ipam.go 155: Attempting to load block cidr=192.168.45.64/26 host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.083 [INFO][5321] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.64/26 host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.083 [INFO][5321] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.64/26 handle="k8s-pod-network.3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.084 [INFO][5321] ipam.go 1685: Creating new handle: k8s-pod-network.3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.087 [INFO][5321] ipam.go 1203: Writing block in order to claim IPs block=192.168.45.64/26 handle="k8s-pod-network.3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.096 [INFO][5321] ipam.go 1216: Successfully claimed IPs: [192.168.45.68/26] 
block=192.168.45.64/26 handle="k8s-pod-network.3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.096 [INFO][5321] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.68/26] handle="k8s-pod-network.3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.096 [INFO][5321] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:43.121587 containerd[1829]: 2024-06-25 18:45:43.096 [INFO][5321] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.45.68/26] IPv6=[] ContainerID="3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" HandleID="k8s-pod-network.3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" Jun 25 18:45:43.122435 containerd[1829]: 2024-06-25 18:45:43.098 [INFO][5310] k8s.go 386: Populated endpoint ContainerID="3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" Namespace="calico-system" Pod="calico-kube-controllers-598f97fd4f-fntx8" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0", GenerateName:"calico-kube-controllers-598f97fd4f-", Namespace:"calico-system", SelfLink:"", UID:"d23d5958-6421-407c-b29b-eb96cbc2a5d1", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"598f97fd4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"", Pod:"calico-kube-controllers-598f97fd4f-fntx8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85c20294c2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:43.122435 containerd[1829]: 2024-06-25 18:45:43.098 [INFO][5310] k8s.go 387: Calico CNI using IPs: [192.168.45.68/32] ContainerID="3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" Namespace="calico-system" Pod="calico-kube-controllers-598f97fd4f-fntx8" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" Jun 25 18:45:43.122435 containerd[1829]: 2024-06-25 18:45:43.098 [INFO][5310] dataplane_linux.go 68: Setting the host side veth name to cali85c20294c2d ContainerID="3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" Namespace="calico-system" Pod="calico-kube-controllers-598f97fd4f-fntx8" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" Jun 25 18:45:43.122435 containerd[1829]: 2024-06-25 18:45:43.102 [INFO][5310] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" Namespace="calico-system" Pod="calico-kube-controllers-598f97fd4f-fntx8" 
WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" Jun 25 18:45:43.122435 containerd[1829]: 2024-06-25 18:45:43.102 [INFO][5310] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" Namespace="calico-system" Pod="calico-kube-controllers-598f97fd4f-fntx8" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0", GenerateName:"calico-kube-controllers-598f97fd4f-", Namespace:"calico-system", SelfLink:"", UID:"d23d5958-6421-407c-b29b-eb96cbc2a5d1", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598f97fd4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb", Pod:"calico-kube-controllers-598f97fd4f-fntx8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85c20294c2d", 
MAC:"8e:7b:a6:5e:2a:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:43.122435 containerd[1829]: 2024-06-25 18:45:43.118 [INFO][5310] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb" Namespace="calico-system" Pod="calico-kube-controllers-598f97fd4f-fntx8" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" Jun 25 18:45:43.154937 containerd[1829]: time="2024-06-25T18:45:43.154853050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:43.155225 containerd[1829]: time="2024-06-25T18:45:43.154951250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:43.155225 containerd[1829]: time="2024-06-25T18:45:43.154990950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:43.155225 containerd[1829]: time="2024-06-25T18:45:43.155017550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:43.220614 containerd[1829]: time="2024-06-25T18:45:43.220523419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598f97fd4f-fntx8,Uid:d23d5958-6421-407c-b29b-eb96cbc2a5d1,Namespace:calico-system,Attempt:1,} returns sandbox id \"3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb\"" Jun 25 18:45:43.289720 systemd-networkd[1397]: caliaa46e8a6ba1: Gained IPv6LL Jun 25 18:45:43.354134 systemd-networkd[1397]: calidf8c9567c5d: Gained IPv6LL Jun 25 18:45:43.631882 kubelet[3456]: I0625 18:45:43.630075 3456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:45:44.204509 containerd[1829]: time="2024-06-25T18:45:44.204464320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:44.207401 containerd[1829]: time="2024-06-25T18:45:44.207353022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 18:45:44.213056 containerd[1829]: time="2024-06-25T18:45:44.212993325Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:44.218062 containerd[1829]: time="2024-06-25T18:45:44.218009328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:44.219180 containerd[1829]: time="2024-06-25T18:45:44.218674329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.853614809s" Jun 25 18:45:44.219180 containerd[1829]: time="2024-06-25T18:45:44.218714929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 18:45:44.219688 containerd[1829]: time="2024-06-25T18:45:44.219661729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 18:45:44.221033 containerd[1829]: time="2024-06-25T18:45:44.221003130Z" level=info msg="CreateContainer within sandbox \"9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 18:45:44.254920 containerd[1829]: time="2024-06-25T18:45:44.254711052Z" level=info msg="CreateContainer within sandbox \"9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a1c28e317da1e713fed0ac2cbd7967601835458f41dc95c44d472cc3a2b0a729\"" Jun 25 18:45:44.258359 containerd[1829]: time="2024-06-25T18:45:44.256759753Z" level=info msg="StartContainer for \"a1c28e317da1e713fed0ac2cbd7967601835458f41dc95c44d472cc3a2b0a729\"" Jun 25 18:45:44.351202 systemd[1]: run-containerd-runc-k8s.io-a1c28e317da1e713fed0ac2cbd7967601835458f41dc95c44d472cc3a2b0a729-runc.GPYw0M.mount: Deactivated successfully. 
Jun 25 18:45:44.428191 containerd[1829]: time="2024-06-25T18:45:44.428140862Z" level=info msg="StartContainer for \"a1c28e317da1e713fed0ac2cbd7967601835458f41dc95c44d472cc3a2b0a729\" returns successfully" Jun 25 18:45:44.954332 systemd-networkd[1397]: cali85c20294c2d: Gained IPv6LL Jun 25 18:45:46.763003 containerd[1829]: time="2024-06-25T18:45:46.762934450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:46.765001 containerd[1829]: time="2024-06-25T18:45:46.764944751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 18:45:46.768689 containerd[1829]: time="2024-06-25T18:45:46.768625254Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:46.774268 containerd[1829]: time="2024-06-25T18:45:46.774206857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:46.775094 containerd[1829]: time="2024-06-25T18:45:46.774958558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 2.555170028s" Jun 25 18:45:46.775094 containerd[1829]: time="2024-06-25T18:45:46.774996958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" 
Jun 25 18:45:46.777090 containerd[1829]: time="2024-06-25T18:45:46.776019958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 18:45:46.789910 containerd[1829]: time="2024-06-25T18:45:46.789867067Z" level=info msg="CreateContainer within sandbox \"3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 18:45:46.828691 containerd[1829]: time="2024-06-25T18:45:46.828641992Z" level=info msg="CreateContainer within sandbox \"3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f8c08540c79f1c839d72354bef7340754bdd0ce88ff5c54338b8f0f07ae36cc3\"" Jun 25 18:45:46.829291 containerd[1829]: time="2024-06-25T18:45:46.829216892Z" level=info msg="StartContainer for \"f8c08540c79f1c839d72354bef7340754bdd0ce88ff5c54338b8f0f07ae36cc3\"" Jun 25 18:45:46.919709 containerd[1829]: time="2024-06-25T18:45:46.919663850Z" level=info msg="StartContainer for \"f8c08540c79f1c839d72354bef7340754bdd0ce88ff5c54338b8f0f07ae36cc3\" returns successfully" Jun 25 18:45:47.099684 kubelet[3456]: I0625 18:45:47.098521 3456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-598f97fd4f-fntx8" podStartSLOduration=33.545209127 podCreationTimestamp="2024-06-25 18:45:10 +0000 UTC" firstStartedPulling="2024-06-25 18:45:43.222249121 +0000 UTC m=+53.506478903" lastFinishedPulling="2024-06-25 18:45:46.775510158 +0000 UTC m=+57.059739940" observedRunningTime="2024-06-25 18:45:47.097876564 +0000 UTC m=+57.382106346" watchObservedRunningTime="2024-06-25 18:45:47.098470164 +0000 UTC m=+57.382699946" Jun 25 18:45:48.480396 containerd[1829]: time="2024-06-25T18:45:48.480343945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jun 25 18:45:48.482232 containerd[1829]: time="2024-06-25T18:45:48.482169646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 18:45:48.486019 containerd[1829]: time="2024-06-25T18:45:48.485964448Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:48.490099 containerd[1829]: time="2024-06-25T18:45:48.490047051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:48.490852 containerd[1829]: time="2024-06-25T18:45:48.490667151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.714593492s" Jun 25 18:45:48.490852 containerd[1829]: time="2024-06-25T18:45:48.490709651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 18:45:48.493004 containerd[1829]: time="2024-06-25T18:45:48.492970353Z" level=info msg="CreateContainer within sandbox \"9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 18:45:48.531472 containerd[1829]: time="2024-06-25T18:45:48.531424477Z" level=info msg="CreateContainer within sandbox 
\"9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"514bc0b0c869061b8d66e6b3ea646b5cfa51bb0feba1451817b377207b6218c3\"" Jun 25 18:45:48.532122 containerd[1829]: time="2024-06-25T18:45:48.532085978Z" level=info msg="StartContainer for \"514bc0b0c869061b8d66e6b3ea646b5cfa51bb0feba1451817b377207b6218c3\"" Jun 25 18:45:48.594435 containerd[1829]: time="2024-06-25T18:45:48.594019817Z" level=info msg="StartContainer for \"514bc0b0c869061b8d66e6b3ea646b5cfa51bb0feba1451817b377207b6218c3\" returns successfully" Jun 25 18:45:48.978366 kubelet[3456]: I0625 18:45:48.978272 3456 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 18:45:48.978366 kubelet[3456]: I0625 18:45:48.978308 3456 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 18:45:49.102712 kubelet[3456]: I0625 18:45:49.102676 3456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-9x5m4" podStartSLOduration=32.975214508 podCreationTimestamp="2024-06-25 18:45:10 +0000 UTC" firstStartedPulling="2024-06-25 18:45:42.363542018 +0000 UTC m=+52.647771900" lastFinishedPulling="2024-06-25 18:45:48.490955551 +0000 UTC m=+58.775185433" observedRunningTime="2024-06-25 18:45:49.102328241 +0000 UTC m=+59.386558123" watchObservedRunningTime="2024-06-25 18:45:49.102628041 +0000 UTC m=+59.386857823" Jun 25 18:45:49.797839 containerd[1829]: time="2024-06-25T18:45:49.797781984Z" level=info msg="StopPodSandbox for \"5a60bb87ce600832c93a960d5dda635f8c2b3d27afde0f38afab76dfc0d453c8\"" Jun 25 18:45:49.798335 containerd[1829]: time="2024-06-25T18:45:49.797953984Z" level=info msg="TearDown network for sandbox 
\"5a60bb87ce600832c93a960d5dda635f8c2b3d27afde0f38afab76dfc0d453c8\" successfully" Jun 25 18:45:49.798335 containerd[1829]: time="2024-06-25T18:45:49.797973984Z" level=info msg="StopPodSandbox for \"5a60bb87ce600832c93a960d5dda635f8c2b3d27afde0f38afab76dfc0d453c8\" returns successfully" Jun 25 18:45:49.800601 containerd[1829]: time="2024-06-25T18:45:49.799581985Z" level=info msg="RemovePodSandbox for \"5a60bb87ce600832c93a960d5dda635f8c2b3d27afde0f38afab76dfc0d453c8\"" Jun 25 18:45:49.800601 containerd[1829]: time="2024-06-25T18:45:49.799621385Z" level=info msg="Forcibly stopping sandbox \"5a60bb87ce600832c93a960d5dda635f8c2b3d27afde0f38afab76dfc0d453c8\"" Jun 25 18:45:49.800601 containerd[1829]: time="2024-06-25T18:45:49.799692085Z" level=info msg="TearDown network for sandbox \"5a60bb87ce600832c93a960d5dda635f8c2b3d27afde0f38afab76dfc0d453c8\" successfully" Jun 25 18:45:49.810451 containerd[1829]: time="2024-06-25T18:45:49.810288892Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a60bb87ce600832c93a960d5dda635f8c2b3d27afde0f38afab76dfc0d453c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:45:49.810451 containerd[1829]: time="2024-06-25T18:45:49.810357892Z" level=info msg="RemovePodSandbox \"5a60bb87ce600832c93a960d5dda635f8c2b3d27afde0f38afab76dfc0d453c8\" returns successfully" Jun 25 18:45:49.811308 containerd[1829]: time="2024-06-25T18:45:49.810955593Z" level=info msg="StopPodSandbox for \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\"" Jun 25 18:45:49.919595 containerd[1829]: 2024-06-25 18:45:49.864 [WARNING][5592] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e8be7a40-e96b-4fb8-bda5-b7efa142ed7a", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e", Pod:"coredns-5dd5756b68-fwlqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa46e8a6ba1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:49.919595 containerd[1829]: 2024-06-25 18:45:49.866 [INFO][5592] k8s.go 608: 
Cleaning up netns ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Jun 25 18:45:49.919595 containerd[1829]: 2024-06-25 18:45:49.867 [INFO][5592] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" iface="eth0" netns="" Jun 25 18:45:49.919595 containerd[1829]: 2024-06-25 18:45:49.867 [INFO][5592] k8s.go 615: Releasing IP address(es) ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Jun 25 18:45:49.919595 containerd[1829]: 2024-06-25 18:45:49.867 [INFO][5592] utils.go 188: Calico CNI releasing IP address ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Jun 25 18:45:49.919595 containerd[1829]: 2024-06-25 18:45:49.902 [INFO][5598] ipam_plugin.go 411: Releasing address using handleID ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" HandleID="k8s-pod-network.9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" Jun 25 18:45:49.919595 containerd[1829]: 2024-06-25 18:45:49.903 [INFO][5598] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:49.919595 containerd[1829]: 2024-06-25 18:45:49.903 [INFO][5598] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:49.919595 containerd[1829]: 2024-06-25 18:45:49.911 [WARNING][5598] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" HandleID="k8s-pod-network.9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" Jun 25 18:45:49.919595 containerd[1829]: 2024-06-25 18:45:49.911 [INFO][5598] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" HandleID="k8s-pod-network.9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" Jun 25 18:45:49.919595 containerd[1829]: 2024-06-25 18:45:49.916 [INFO][5598] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:49.919595 containerd[1829]: 2024-06-25 18:45:49.917 [INFO][5592] k8s.go 621: Teardown processing complete. ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Jun 25 18:45:49.921069 containerd[1829]: time="2024-06-25T18:45:49.920740962Z" level=info msg="TearDown network for sandbox \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\" successfully" Jun 25 18:45:49.921069 containerd[1829]: time="2024-06-25T18:45:49.920789763Z" level=info msg="StopPodSandbox for \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\" returns successfully" Jun 25 18:45:49.922205 containerd[1829]: time="2024-06-25T18:45:49.922096063Z" level=info msg="RemovePodSandbox for \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\"" Jun 25 18:45:49.922873 containerd[1829]: time="2024-06-25T18:45:49.922690964Z" level=info msg="Forcibly stopping sandbox \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\"" Jun 25 18:45:49.966441 kubelet[3456]: I0625 18:45:49.964754 3456 topology_manager.go:215] "Topology Admit Handler" podUID="a950e326-947a-4913-9b8b-f081f690e731" podNamespace="calico-apiserver" podName="calico-apiserver-5457598f49-svlgk" Jun 25 
18:45:49.994826 kubelet[3456]: I0625 18:45:49.994688 3456 topology_manager.go:215] "Topology Admit Handler" podUID="d515cb8b-51e2-411f-9225-8b0dd2c88cc9" podNamespace="calico-apiserver" podName="calico-apiserver-5457598f49-tznh7" Jun 25 18:45:50.026139 kubelet[3456]: I0625 18:45:50.025791 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfqhd\" (UniqueName: \"kubernetes.io/projected/d515cb8b-51e2-411f-9225-8b0dd2c88cc9-kube-api-access-sfqhd\") pod \"calico-apiserver-5457598f49-tznh7\" (UID: \"d515cb8b-51e2-411f-9225-8b0dd2c88cc9\") " pod="calico-apiserver/calico-apiserver-5457598f49-tznh7" Jun 25 18:45:50.026139 kubelet[3456]: I0625 18:45:50.025929 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a950e326-947a-4913-9b8b-f081f690e731-calico-apiserver-certs\") pod \"calico-apiserver-5457598f49-svlgk\" (UID: \"a950e326-947a-4913-9b8b-f081f690e731\") " pod="calico-apiserver/calico-apiserver-5457598f49-svlgk" Jun 25 18:45:50.026139 kubelet[3456]: I0625 18:45:50.026002 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d515cb8b-51e2-411f-9225-8b0dd2c88cc9-calico-apiserver-certs\") pod \"calico-apiserver-5457598f49-tznh7\" (UID: \"d515cb8b-51e2-411f-9225-8b0dd2c88cc9\") " pod="calico-apiserver/calico-apiserver-5457598f49-tznh7" Jun 25 18:45:50.026139 kubelet[3456]: I0625 18:45:50.026074 3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtp2h\" (UniqueName: \"kubernetes.io/projected/a950e326-947a-4913-9b8b-f081f690e731-kube-api-access-wtp2h\") pod \"calico-apiserver-5457598f49-svlgk\" (UID: \"a950e326-947a-4913-9b8b-f081f690e731\") " pod="calico-apiserver/calico-apiserver-5457598f49-svlgk" Jun 25 18:45:50.121537 
containerd[1829]: 2024-06-25 18:45:50.053 [WARNING][5617] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e8be7a40-e96b-4fb8-bda5-b7efa142ed7a", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"cb328de7141c21524e25f073f289dfc3e4a14c13a586d5fb4881e061505bb03e", Pod:"coredns-5dd5756b68-fwlqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa46e8a6ba1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:50.121537 containerd[1829]: 2024-06-25 18:45:50.053 [INFO][5617] k8s.go 608: Cleaning up netns ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Jun 25 18:45:50.121537 containerd[1829]: 2024-06-25 18:45:50.053 [INFO][5617] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" iface="eth0" netns="" Jun 25 18:45:50.121537 containerd[1829]: 2024-06-25 18:45:50.054 [INFO][5617] k8s.go 615: Releasing IP address(es) ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Jun 25 18:45:50.121537 containerd[1829]: 2024-06-25 18:45:50.054 [INFO][5617] utils.go 188: Calico CNI releasing IP address ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Jun 25 18:45:50.121537 containerd[1829]: 2024-06-25 18:45:50.107 [INFO][5625] ipam_plugin.go 411: Releasing address using handleID ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" HandleID="k8s-pod-network.9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" Jun 25 18:45:50.121537 containerd[1829]: 2024-06-25 18:45:50.107 [INFO][5625] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:50.121537 containerd[1829]: 2024-06-25 18:45:50.107 [INFO][5625] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:50.121537 containerd[1829]: 2024-06-25 18:45:50.115 [WARNING][5625] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" HandleID="k8s-pod-network.9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" Jun 25 18:45:50.121537 containerd[1829]: 2024-06-25 18:45:50.115 [INFO][5625] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" HandleID="k8s-pod-network.9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--fwlqc-eth0" Jun 25 18:45:50.121537 containerd[1829]: 2024-06-25 18:45:50.117 [INFO][5625] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:50.121537 containerd[1829]: 2024-06-25 18:45:50.119 [INFO][5617] k8s.go 621: Teardown processing complete. ContainerID="9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209" Jun 25 18:45:50.121537 containerd[1829]: time="2024-06-25T18:45:50.121467690Z" level=info msg="TearDown network for sandbox \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\" successfully" Jun 25 18:45:50.128087 kubelet[3456]: E0625 18:45:50.127481 3456 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 18:45:50.128087 kubelet[3456]: E0625 18:45:50.127814 3456 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 18:45:50.128714 kubelet[3456]: E0625 18:45:50.127880 3456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a950e326-947a-4913-9b8b-f081f690e731-calico-apiserver-certs podName:a950e326-947a-4913-9b8b-f081f690e731 nodeName:}" failed. No retries permitted until 2024-06-25 18:45:50.627739694 +0000 UTC m=+60.911969576 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/a950e326-947a-4913-9b8b-f081f690e731-calico-apiserver-certs") pod "calico-apiserver-5457598f49-svlgk" (UID: "a950e326-947a-4913-9b8b-f081f690e731") : secret "calico-apiserver-certs" not found Jun 25 18:45:50.128714 kubelet[3456]: E0625 18:45:50.128683 3456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d515cb8b-51e2-411f-9225-8b0dd2c88cc9-calico-apiserver-certs podName:d515cb8b-51e2-411f-9225-8b0dd2c88cc9 nodeName:}" failed. No retries permitted until 2024-06-25 18:45:50.628662295 +0000 UTC m=+60.912892077 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/d515cb8b-51e2-411f-9225-8b0dd2c88cc9-calico-apiserver-certs") pod "calico-apiserver-5457598f49-tznh7" (UID: "d515cb8b-51e2-411f-9225-8b0dd2c88cc9") : secret "calico-apiserver-certs" not found Jun 25 18:45:50.129795 containerd[1829]: time="2024-06-25T18:45:50.129668896Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:45:50.129795 containerd[1829]: time="2024-06-25T18:45:50.129743996Z" level=info msg="RemovePodSandbox \"9b5f76f88339f6bcba68a787fef2e8d88ecdd1b372c2fe546ba407aa5310d209\" returns successfully" Jun 25 18:45:50.131401 containerd[1829]: time="2024-06-25T18:45:50.130731696Z" level=info msg="StopPodSandbox for \"f9884837411d58fef3253ce016b5dd8115fab006addda9a45105db0ca30d2e60\"" Jun 25 18:45:50.131401 containerd[1829]: time="2024-06-25T18:45:50.130829296Z" level=info msg="TearDown network for sandbox \"f9884837411d58fef3253ce016b5dd8115fab006addda9a45105db0ca30d2e60\" successfully" Jun 25 18:45:50.131401 containerd[1829]: time="2024-06-25T18:45:50.130845396Z" level=info msg="StopPodSandbox for \"f9884837411d58fef3253ce016b5dd8115fab006addda9a45105db0ca30d2e60\" returns successfully" Jun 25 18:45:50.133149 containerd[1829]: time="2024-06-25T18:45:50.132816398Z" level=info msg="RemovePodSandbox for \"f9884837411d58fef3253ce016b5dd8115fab006addda9a45105db0ca30d2e60\"" Jun 25 18:45:50.133149 containerd[1829]: time="2024-06-25T18:45:50.132852998Z" level=info msg="Forcibly stopping sandbox \"f9884837411d58fef3253ce016b5dd8115fab006addda9a45105db0ca30d2e60\"" Jun 25 18:45:50.133149 containerd[1829]: time="2024-06-25T18:45:50.132926398Z" level=info msg="TearDown network for sandbox \"f9884837411d58fef3253ce016b5dd8115fab006addda9a45105db0ca30d2e60\" successfully" Jun 25 18:45:50.150367 containerd[1829]: time="2024-06-25T18:45:50.149665708Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9884837411d58fef3253ce016b5dd8115fab006addda9a45105db0ca30d2e60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:45:50.150367 containerd[1829]: time="2024-06-25T18:45:50.149758008Z" level=info msg="RemovePodSandbox \"f9884837411d58fef3253ce016b5dd8115fab006addda9a45105db0ca30d2e60\" returns successfully" Jun 25 18:45:50.154235 containerd[1829]: time="2024-06-25T18:45:50.153981311Z" level=info msg="StopPodSandbox for \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\"" Jun 25 18:45:50.285665 containerd[1829]: 2024-06-25 18:45:50.224 [WARNING][5648] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0", GenerateName:"calico-kube-controllers-598f97fd4f-", Namespace:"calico-system", SelfLink:"", UID:"d23d5958-6421-407c-b29b-eb96cbc2a5d1", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598f97fd4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb", Pod:"calico-kube-controllers-598f97fd4f-fntx8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85c20294c2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:50.285665 containerd[1829]: 2024-06-25 18:45:50.224 [INFO][5648] k8s.go 608: Cleaning up netns ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Jun 25 18:45:50.285665 containerd[1829]: 2024-06-25 18:45:50.224 [INFO][5648] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" iface="eth0" netns="" Jun 25 18:45:50.285665 containerd[1829]: 2024-06-25 18:45:50.224 [INFO][5648] k8s.go 615: Releasing IP address(es) ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Jun 25 18:45:50.285665 containerd[1829]: 2024-06-25 18:45:50.224 [INFO][5648] utils.go 188: Calico CNI releasing IP address ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Jun 25 18:45:50.285665 containerd[1829]: 2024-06-25 18:45:50.264 [INFO][5654] ipam_plugin.go 411: Releasing address using handleID ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" HandleID="k8s-pod-network.2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" Jun 25 18:45:50.285665 containerd[1829]: 2024-06-25 18:45:50.264 [INFO][5654] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:50.285665 containerd[1829]: 2024-06-25 18:45:50.264 [INFO][5654] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:50.285665 containerd[1829]: 2024-06-25 18:45:50.277 [WARNING][5654] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" HandleID="k8s-pod-network.2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" Jun 25 18:45:50.285665 containerd[1829]: 2024-06-25 18:45:50.277 [INFO][5654] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" HandleID="k8s-pod-network.2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" Jun 25 18:45:50.285665 containerd[1829]: 2024-06-25 18:45:50.279 [INFO][5654] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:50.285665 containerd[1829]: 2024-06-25 18:45:50.282 [INFO][5648] k8s.go 621: Teardown processing complete. ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Jun 25 18:45:50.285665 containerd[1829]: time="2024-06-25T18:45:50.285759195Z" level=info msg="TearDown network for sandbox \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\" successfully" Jun 25 18:45:50.285665 containerd[1829]: time="2024-06-25T18:45:50.285795495Z" level=info msg="StopPodSandbox for \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\" returns successfully" Jun 25 18:45:50.286762 containerd[1829]: time="2024-06-25T18:45:50.286360795Z" level=info msg="RemovePodSandbox for \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\"" Jun 25 18:45:50.286762 containerd[1829]: time="2024-06-25T18:45:50.286395795Z" level=info msg="Forcibly stopping sandbox \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\"" Jun 25 18:45:50.354548 containerd[1829]: 2024-06-25 18:45:50.326 [WARNING][5673] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0", GenerateName:"calico-kube-controllers-598f97fd4f-", Namespace:"calico-system", SelfLink:"", UID:"d23d5958-6421-407c-b29b-eb96cbc2a5d1", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598f97fd4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"3d7d459820dbeaf39570e53199ac8e6e70552b526aa4f69e1560f66c0aa7c5fb", Pod:"calico-kube-controllers-598f97fd4f-fntx8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85c20294c2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:50.354548 containerd[1829]: 2024-06-25 18:45:50.326 [INFO][5673] k8s.go 608: Cleaning up netns ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Jun 25 18:45:50.354548 containerd[1829]: 2024-06-25 18:45:50.326 [INFO][5673] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" iface="eth0" netns="" Jun 25 18:45:50.354548 containerd[1829]: 2024-06-25 18:45:50.327 [INFO][5673] k8s.go 615: Releasing IP address(es) ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Jun 25 18:45:50.354548 containerd[1829]: 2024-06-25 18:45:50.327 [INFO][5673] utils.go 188: Calico CNI releasing IP address ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Jun 25 18:45:50.354548 containerd[1829]: 2024-06-25 18:45:50.346 [INFO][5679] ipam_plugin.go 411: Releasing address using handleID ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" HandleID="k8s-pod-network.2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" Jun 25 18:45:50.354548 containerd[1829]: 2024-06-25 18:45:50.346 [INFO][5679] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:50.354548 containerd[1829]: 2024-06-25 18:45:50.346 [INFO][5679] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:50.354548 containerd[1829]: 2024-06-25 18:45:50.351 [WARNING][5679] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" HandleID="k8s-pod-network.2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" Jun 25 18:45:50.354548 containerd[1829]: 2024-06-25 18:45:50.351 [INFO][5679] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" HandleID="k8s-pod-network.2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-calico--kube--controllers--598f97fd4f--fntx8-eth0" Jun 25 18:45:50.354548 containerd[1829]: 2024-06-25 18:45:50.352 [INFO][5679] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:50.354548 containerd[1829]: 2024-06-25 18:45:50.353 [INFO][5673] k8s.go 621: Teardown processing complete. ContainerID="2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3" Jun 25 18:45:50.355394 containerd[1829]: time="2024-06-25T18:45:50.354541239Z" level=info msg="TearDown network for sandbox \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\" successfully" Jun 25 18:45:50.360491 containerd[1829]: time="2024-06-25T18:45:50.360449543Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:45:50.360791 containerd[1829]: time="2024-06-25T18:45:50.360524143Z" level=info msg="RemovePodSandbox \"2547328064042eb80643dd2fb8273604ecf7a60f2b2e0125c271806151b96be3\" returns successfully" Jun 25 18:45:50.361223 containerd[1829]: time="2024-06-25T18:45:50.361113343Z" level=info msg="StopPodSandbox for \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\"" Jun 25 18:45:50.427104 containerd[1829]: 2024-06-25 18:45:50.394 [WARNING][5697] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1b8cd264-e868-4cd7-89a2-2e1d11e52069", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8", Pod:"csi-node-driver-9x5m4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.45.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calidf8c9567c5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:50.427104 containerd[1829]: 2024-06-25 18:45:50.394 [INFO][5697] k8s.go 608: Cleaning up netns ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Jun 25 18:45:50.427104 containerd[1829]: 2024-06-25 18:45:50.394 [INFO][5697] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" iface="eth0" netns="" Jun 25 18:45:50.427104 containerd[1829]: 2024-06-25 18:45:50.394 [INFO][5697] k8s.go 615: Releasing IP address(es) ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Jun 25 18:45:50.427104 containerd[1829]: 2024-06-25 18:45:50.394 [INFO][5697] utils.go 188: Calico CNI releasing IP address ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Jun 25 18:45:50.427104 containerd[1829]: 2024-06-25 18:45:50.418 [INFO][5703] ipam_plugin.go 411: Releasing address using handleID ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" HandleID="k8s-pod-network.7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" Jun 25 18:45:50.427104 containerd[1829]: 2024-06-25 18:45:50.418 [INFO][5703] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:50.427104 containerd[1829]: 2024-06-25 18:45:50.418 [INFO][5703] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:50.427104 containerd[1829]: 2024-06-25 18:45:50.423 [WARNING][5703] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" HandleID="k8s-pod-network.7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" Jun 25 18:45:50.427104 containerd[1829]: 2024-06-25 18:45:50.423 [INFO][5703] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" HandleID="k8s-pod-network.7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" Jun 25 18:45:50.427104 containerd[1829]: 2024-06-25 18:45:50.424 [INFO][5703] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:50.427104 containerd[1829]: 2024-06-25 18:45:50.426 [INFO][5697] k8s.go 621: Teardown processing complete. ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Jun 25 18:45:50.427803 containerd[1829]: time="2024-06-25T18:45:50.427163985Z" level=info msg="TearDown network for sandbox \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\" successfully" Jun 25 18:45:50.427803 containerd[1829]: time="2024-06-25T18:45:50.427206485Z" level=info msg="StopPodSandbox for \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\" returns successfully" Jun 25 18:45:50.427803 containerd[1829]: time="2024-06-25T18:45:50.427775786Z" level=info msg="RemovePodSandbox for \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\"" Jun 25 18:45:50.427923 containerd[1829]: time="2024-06-25T18:45:50.427815286Z" level=info msg="Forcibly stopping sandbox \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\"" Jun 25 18:45:50.499677 containerd[1829]: 2024-06-25 18:45:50.466 [WARNING][5721] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1b8cd264-e868-4cd7-89a2-2e1d11e52069", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"9603724088123144b1298dd96b22929f340da7124b0c510ff94c2d68b0822ad8", Pod:"csi-node-driver-9x5m4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.45.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calidf8c9567c5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:50.499677 containerd[1829]: 2024-06-25 18:45:50.466 [INFO][5721] k8s.go 608: Cleaning up netns ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Jun 25 18:45:50.499677 containerd[1829]: 2024-06-25 18:45:50.467 [INFO][5721] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" iface="eth0" netns="" Jun 25 18:45:50.499677 containerd[1829]: 2024-06-25 18:45:50.467 [INFO][5721] k8s.go 615: Releasing IP address(es) ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Jun 25 18:45:50.499677 containerd[1829]: 2024-06-25 18:45:50.467 [INFO][5721] utils.go 188: Calico CNI releasing IP address ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Jun 25 18:45:50.499677 containerd[1829]: 2024-06-25 18:45:50.488 [INFO][5727] ipam_plugin.go 411: Releasing address using handleID ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" HandleID="k8s-pod-network.7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" Jun 25 18:45:50.499677 containerd[1829]: 2024-06-25 18:45:50.488 [INFO][5727] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:50.499677 containerd[1829]: 2024-06-25 18:45:50.488 [INFO][5727] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:50.499677 containerd[1829]: 2024-06-25 18:45:50.495 [WARNING][5727] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" HandleID="k8s-pod-network.7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" Jun 25 18:45:50.499677 containerd[1829]: 2024-06-25 18:45:50.495 [INFO][5727] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" HandleID="k8s-pod-network.7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-csi--node--driver--9x5m4-eth0" Jun 25 18:45:50.499677 containerd[1829]: 2024-06-25 18:45:50.497 [INFO][5727] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:50.499677 containerd[1829]: 2024-06-25 18:45:50.498 [INFO][5721] k8s.go 621: Teardown processing complete. ContainerID="7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea" Jun 25 18:45:50.500324 containerd[1829]: time="2024-06-25T18:45:50.499715631Z" level=info msg="TearDown network for sandbox \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\" successfully" Jun 25 18:45:50.507130 containerd[1829]: time="2024-06-25T18:45:50.507079836Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:45:50.507265 containerd[1829]: time="2024-06-25T18:45:50.507161336Z" level=info msg="RemovePodSandbox \"7e08938b1d69058d3fe504729ee350281969478e1a8795602be556dd89e223ea\" returns successfully" Jun 25 18:45:50.508105 containerd[1829]: time="2024-06-25T18:45:50.507986737Z" level=info msg="StopPodSandbox for \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\"" Jun 25 18:45:50.594423 containerd[1829]: 2024-06-25 18:45:50.546 [WARNING][5746] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"237fb894-19a1-415f-a0b5-869cf0ed9074", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572", Pod:"coredns-5dd5756b68-ztgmd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ae3b61ec12", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:50.594423 containerd[1829]: 2024-06-25 18:45:50.547 [INFO][5746] k8s.go 608: Cleaning up netns ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Jun 25 18:45:50.594423 containerd[1829]: 2024-06-25 18:45:50.547 [INFO][5746] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" iface="eth0" netns="" Jun 25 18:45:50.594423 containerd[1829]: 2024-06-25 18:45:50.547 [INFO][5746] k8s.go 615: Releasing IP address(es) ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Jun 25 18:45:50.594423 containerd[1829]: 2024-06-25 18:45:50.547 [INFO][5746] utils.go 188: Calico CNI releasing IP address ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Jun 25 18:45:50.594423 containerd[1829]: 2024-06-25 18:45:50.582 [INFO][5752] ipam_plugin.go 411: Releasing address using handleID ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" HandleID="k8s-pod-network.78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" Jun 25 18:45:50.594423 containerd[1829]: 2024-06-25 18:45:50.582 [INFO][5752] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:50.594423 containerd[1829]: 2024-06-25 18:45:50.582 [INFO][5752] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:45:50.594423 containerd[1829]: 2024-06-25 18:45:50.589 [WARNING][5752] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" HandleID="k8s-pod-network.78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" Jun 25 18:45:50.594423 containerd[1829]: 2024-06-25 18:45:50.589 [INFO][5752] ipam_plugin.go 439: Releasing address using workloadID ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" HandleID="k8s-pod-network.78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" Jun 25 18:45:50.594423 containerd[1829]: 2024-06-25 18:45:50.591 [INFO][5752] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:50.594423 containerd[1829]: 2024-06-25 18:45:50.593 [INFO][5746] k8s.go 621: Teardown processing complete. 
ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Jun 25 18:45:50.595358 containerd[1829]: time="2024-06-25T18:45:50.594493692Z" level=info msg="TearDown network for sandbox \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\" successfully" Jun 25 18:45:50.595358 containerd[1829]: time="2024-06-25T18:45:50.594525392Z" level=info msg="StopPodSandbox for \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\" returns successfully" Jun 25 18:45:50.595358 containerd[1829]: time="2024-06-25T18:45:50.595024492Z" level=info msg="RemovePodSandbox for \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\"" Jun 25 18:45:50.595358 containerd[1829]: time="2024-06-25T18:45:50.595058092Z" level=info msg="Forcibly stopping sandbox \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\"" Jun 25 18:45:50.702069 containerd[1829]: 2024-06-25 18:45:50.652 [WARNING][5771] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"237fb894-19a1-415f-a0b5-869cf0ed9074", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"5a6b399d4be809255582744e282f23e43057f7cf98441bac2f4d44ce5d0f2572", Pod:"coredns-5dd5756b68-ztgmd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ae3b61ec12", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:50.702069 containerd[1829]: 2024-06-25 18:45:50.654 [INFO][5771] k8s.go 608: 
Cleaning up netns ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Jun 25 18:45:50.702069 containerd[1829]: 2024-06-25 18:45:50.654 [INFO][5771] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" iface="eth0" netns="" Jun 25 18:45:50.702069 containerd[1829]: 2024-06-25 18:45:50.654 [INFO][5771] k8s.go 615: Releasing IP address(es) ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Jun 25 18:45:50.702069 containerd[1829]: 2024-06-25 18:45:50.654 [INFO][5771] utils.go 188: Calico CNI releasing IP address ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Jun 25 18:45:50.702069 containerd[1829]: 2024-06-25 18:45:50.689 [INFO][5779] ipam_plugin.go 411: Releasing address using handleID ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" HandleID="k8s-pod-network.78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" Jun 25 18:45:50.702069 containerd[1829]: 2024-06-25 18:45:50.689 [INFO][5779] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:50.702069 containerd[1829]: 2024-06-25 18:45:50.689 [INFO][5779] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:50.702069 containerd[1829]: 2024-06-25 18:45:50.697 [WARNING][5779] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" HandleID="k8s-pod-network.78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" Jun 25 18:45:50.702069 containerd[1829]: 2024-06-25 18:45:50.697 [INFO][5779] ipam_plugin.go 439: Releasing address using workloadID ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" HandleID="k8s-pod-network.78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-coredns--5dd5756b68--ztgmd-eth0" Jun 25 18:45:50.702069 containerd[1829]: 2024-06-25 18:45:50.698 [INFO][5779] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:50.702069 containerd[1829]: 2024-06-25 18:45:50.700 [INFO][5771] k8s.go 621: Teardown processing complete. ContainerID="78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720" Jun 25 18:45:50.702069 containerd[1829]: time="2024-06-25T18:45:50.701075360Z" level=info msg="TearDown network for sandbox \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\" successfully" Jun 25 18:45:50.710698 containerd[1829]: time="2024-06-25T18:45:50.710640066Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:45:50.710876 containerd[1829]: time="2024-06-25T18:45:50.710720566Z" level=info msg="RemovePodSandbox \"78de158640117ace04ceac21b5af3a7e547f2b57508b8333d1e4c21830fb6720\" returns successfully" Jun 25 18:45:50.883847 containerd[1829]: time="2024-06-25T18:45:50.883783476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5457598f49-svlgk,Uid:a950e326-947a-4913-9b8b-f081f690e731,Namespace:calico-apiserver,Attempt:0,}" Jun 25 18:45:50.917037 containerd[1829]: time="2024-06-25T18:45:50.916961697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5457598f49-tznh7,Uid:d515cb8b-51e2-411f-9225-8b0dd2c88cc9,Namespace:calico-apiserver,Attempt:0,}" Jun 25 18:45:51.087505 systemd-networkd[1397]: cali9b5722a6eee: Link UP Jun 25 18:45:51.088309 systemd-networkd[1397]: cali9b5722a6eee: Gained carrier Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:50.951 [INFO][5791] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--svlgk-eth0 calico-apiserver-5457598f49- calico-apiserver a950e326-947a-4913-9b8b-f081f690e731 896 0 2024-06-25 18:45:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5457598f49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012.0.0-a-bcd7e269e6 calico-apiserver-5457598f49-svlgk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9b5722a6eee [] []}} ContainerID="154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" Namespace="calico-apiserver" Pod="calico-apiserver-5457598f49-svlgk" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--svlgk-" Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:50.951 [INFO][5791] k8s.go 77: Extracted 
identifiers for CmdAddK8s ContainerID="154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" Namespace="calico-apiserver" Pod="calico-apiserver-5457598f49-svlgk" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--svlgk-eth0" Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:51.011 [INFO][5808] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" HandleID="k8s-pod-network.154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--svlgk-eth0" Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:51.026 [INFO][5808] ipam_plugin.go 264: Auto assigning IP ContainerID="154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" HandleID="k8s-pod-network.154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--svlgk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050420), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012.0.0-a-bcd7e269e6", "pod":"calico-apiserver-5457598f49-svlgk", "timestamp":"2024-06-25 18:45:51.011316057 +0000 UTC"}, Hostname:"ci-4012.0.0-a-bcd7e269e6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:51.026 [INFO][5808] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:51.026 [INFO][5808] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:51.027 [INFO][5808] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-bcd7e269e6' Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:51.029 [INFO][5808] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:51.034 [INFO][5808] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:51.042 [INFO][5808] ipam.go 489: Trying affinity for 192.168.45.64/26 host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:51.047 [INFO][5808] ipam.go 155: Attempting to load block cidr=192.168.45.64/26 host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:51.051 [INFO][5808] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.64/26 host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:51.052 [INFO][5808] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.64/26 handle="k8s-pod-network.154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:51.055 [INFO][5808] ipam.go 1685: Creating new handle: k8s-pod-network.154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6 Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:51.069 [INFO][5808] ipam.go 1203: Writing block in order to claim IPs block=192.168.45.64/26 handle="k8s-pod-network.154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:51.079 [INFO][5808] ipam.go 1216: Successfully claimed IPs: [192.168.45.69/26] 
block=192.168.45.64/26 handle="k8s-pod-network.154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:51.079 [INFO][5808] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.69/26] handle="k8s-pod-network.154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:51.079 [INFO][5808] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:51.122488 containerd[1829]: 2024-06-25 18:45:51.079 [INFO][5808] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.45.69/26] IPv6=[] ContainerID="154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" HandleID="k8s-pod-network.154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--svlgk-eth0" Jun 25 18:45:51.125220 containerd[1829]: 2024-06-25 18:45:51.083 [INFO][5791] k8s.go 386: Populated endpoint ContainerID="154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" Namespace="calico-apiserver" Pod="calico-apiserver-5457598f49-svlgk" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--svlgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--svlgk-eth0", GenerateName:"calico-apiserver-5457598f49-", Namespace:"calico-apiserver", SelfLink:"", UID:"a950e326-947a-4913-9b8b-f081f690e731", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5457598f49", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"", Pod:"calico-apiserver-5457598f49-svlgk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b5722a6eee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:51.125220 containerd[1829]: 2024-06-25 18:45:51.084 [INFO][5791] k8s.go 387: Calico CNI using IPs: [192.168.45.69/32] ContainerID="154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" Namespace="calico-apiserver" Pod="calico-apiserver-5457598f49-svlgk" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--svlgk-eth0" Jun 25 18:45:51.125220 containerd[1829]: 2024-06-25 18:45:51.084 [INFO][5791] dataplane_linux.go 68: Setting the host side veth name to cali9b5722a6eee ContainerID="154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" Namespace="calico-apiserver" Pod="calico-apiserver-5457598f49-svlgk" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--svlgk-eth0" Jun 25 18:45:51.125220 containerd[1829]: 2024-06-25 18:45:51.088 [INFO][5791] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" Namespace="calico-apiserver" Pod="calico-apiserver-5457598f49-svlgk" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--svlgk-eth0" Jun 25 18:45:51.125220 containerd[1829]: 2024-06-25 
18:45:51.088 [INFO][5791] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" Namespace="calico-apiserver" Pod="calico-apiserver-5457598f49-svlgk" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--svlgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--svlgk-eth0", GenerateName:"calico-apiserver-5457598f49-", Namespace:"calico-apiserver", SelfLink:"", UID:"a950e326-947a-4913-9b8b-f081f690e731", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5457598f49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6", Pod:"calico-apiserver-5457598f49-svlgk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b5722a6eee", MAC:"7a:a4:b8:6f:3e:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:51.125220 containerd[1829]: 2024-06-25 18:45:51.104 [INFO][5791] k8s.go 500: 
Wrote updated endpoint to datastore ContainerID="154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6" Namespace="calico-apiserver" Pod="calico-apiserver-5457598f49-svlgk" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--svlgk-eth0" Jun 25 18:45:51.184916 systemd-networkd[1397]: calia3de5770beb: Link UP Jun 25 18:45:51.185663 systemd-networkd[1397]: calia3de5770beb: Gained carrier Jun 25 18:45:51.191295 containerd[1829]: time="2024-06-25T18:45:51.190878372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:51.191295 containerd[1829]: time="2024-06-25T18:45:51.190949772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:51.191295 containerd[1829]: time="2024-06-25T18:45:51.191026272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:51.191295 containerd[1829]: time="2024-06-25T18:45:51.191046772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.008 [INFO][5799] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--tznh7-eth0 calico-apiserver-5457598f49- calico-apiserver d515cb8b-51e2-411f-9225-8b0dd2c88cc9 901 0 2024-06-25 18:45:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5457598f49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012.0.0-a-bcd7e269e6 calico-apiserver-5457598f49-tznh7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia3de5770beb [] []}} ContainerID="e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" Namespace="calico-apiserver" Pod="calico-apiserver-5457598f49-tznh7" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--tznh7-" Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.008 [INFO][5799] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" Namespace="calico-apiserver" Pod="calico-apiserver-5457598f49-tznh7" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--tznh7-eth0" Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.066 [INFO][5816] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" HandleID="k8s-pod-network.e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--tznh7-eth0" Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.089 [INFO][5816] ipam_plugin.go 264: Auto assigning IP 
ContainerID="e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" HandleID="k8s-pod-network.e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--tznh7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001029e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012.0.0-a-bcd7e269e6", "pod":"calico-apiserver-5457598f49-tznh7", "timestamp":"2024-06-25 18:45:51.066952393 +0000 UTC"}, Hostname:"ci-4012.0.0-a-bcd7e269e6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.089 [INFO][5816] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.090 [INFO][5816] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.090 [INFO][5816] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-bcd7e269e6' Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.101 [INFO][5816] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.125 [INFO][5816] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.144 [INFO][5816] ipam.go 489: Trying affinity for 192.168.45.64/26 host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.147 [INFO][5816] ipam.go 155: Attempting to load block cidr=192.168.45.64/26 host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.155 [INFO][5816] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.64/26 host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.155 [INFO][5816] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.64/26 handle="k8s-pod-network.e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.160 [INFO][5816] ipam.go 1685: Creating new handle: k8s-pod-network.e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4 Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.169 [INFO][5816] ipam.go 1203: Writing block in order to claim IPs block=192.168.45.64/26 handle="k8s-pod-network.e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.176 [INFO][5816] ipam.go 1216: Successfully claimed IPs: [192.168.45.70/26] 
block=192.168.45.64/26 handle="k8s-pod-network.e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.176 [INFO][5816] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.70/26] handle="k8s-pod-network.e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" host="ci-4012.0.0-a-bcd7e269e6" Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.176 [INFO][5816] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:51.211277 containerd[1829]: 2024-06-25 18:45:51.176 [INFO][5816] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.45.70/26] IPv6=[] ContainerID="e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" HandleID="k8s-pod-network.e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" Workload="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--tznh7-eth0" Jun 25 18:45:51.216577 containerd[1829]: 2024-06-25 18:45:51.179 [INFO][5799] k8s.go 386: Populated endpoint ContainerID="e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" Namespace="calico-apiserver" Pod="calico-apiserver-5457598f49-tznh7" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--tznh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--tznh7-eth0", GenerateName:"calico-apiserver-5457598f49-", Namespace:"calico-apiserver", SelfLink:"", UID:"d515cb8b-51e2-411f-9225-8b0dd2c88cc9", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5457598f49", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"", Pod:"calico-apiserver-5457598f49-tznh7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3de5770beb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:51.216577 containerd[1829]: 2024-06-25 18:45:51.180 [INFO][5799] k8s.go 387: Calico CNI using IPs: [192.168.45.70/32] ContainerID="e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" Namespace="calico-apiserver" Pod="calico-apiserver-5457598f49-tznh7" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--tznh7-eth0" Jun 25 18:45:51.216577 containerd[1829]: 2024-06-25 18:45:51.180 [INFO][5799] dataplane_linux.go 68: Setting the host side veth name to calia3de5770beb ContainerID="e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" Namespace="calico-apiserver" Pod="calico-apiserver-5457598f49-tznh7" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--tznh7-eth0" Jun 25 18:45:51.216577 containerd[1829]: 2024-06-25 18:45:51.186 [INFO][5799] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" Namespace="calico-apiserver" Pod="calico-apiserver-5457598f49-tznh7" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--tznh7-eth0" Jun 25 18:45:51.216577 containerd[1829]: 2024-06-25 
18:45:51.186 [INFO][5799] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" Namespace="calico-apiserver" Pod="calico-apiserver-5457598f49-tznh7" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--tznh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--tznh7-eth0", GenerateName:"calico-apiserver-5457598f49-", Namespace:"calico-apiserver", SelfLink:"", UID:"d515cb8b-51e2-411f-9225-8b0dd2c88cc9", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5457598f49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-bcd7e269e6", ContainerID:"e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4", Pod:"calico-apiserver-5457598f49-tznh7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3de5770beb", MAC:"1a:b5:09:f3:df:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:51.216577 containerd[1829]: 2024-06-25 18:45:51.199 [INFO][5799] k8s.go 500: 
Wrote updated endpoint to datastore ContainerID="e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4" Namespace="calico-apiserver" Pod="calico-apiserver-5457598f49-tznh7" WorkloadEndpoint="ci--4012.0.0--a--bcd7e269e6-k8s-calico--apiserver--5457598f49--tznh7-eth0" Jun 25 18:45:51.270713 containerd[1829]: time="2024-06-25T18:45:51.268297621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:51.270713 containerd[1829]: time="2024-06-25T18:45:51.268368121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:51.270713 containerd[1829]: time="2024-06-25T18:45:51.268396621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:51.270713 containerd[1829]: time="2024-06-25T18:45:51.268417521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:51.329905 containerd[1829]: time="2024-06-25T18:45:51.329864260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5457598f49-svlgk,Uid:a950e326-947a-4913-9b8b-f081f690e731,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6\"" Jun 25 18:45:51.332523 containerd[1829]: time="2024-06-25T18:45:51.332324662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 18:45:51.370233 containerd[1829]: time="2024-06-25T18:45:51.370088986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5457598f49-tznh7,Uid:d515cb8b-51e2-411f-9225-8b0dd2c88cc9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4\"" Jun 25 18:45:52.569707 systemd-networkd[1397]: calia3de5770beb: Gained IPv6LL Jun 25 18:45:52.633836 systemd-networkd[1397]: cali9b5722a6eee: Gained IPv6LL Jun 25 18:45:54.778653 containerd[1829]: time="2024-06-25T18:45:54.778601876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:54.781490 containerd[1829]: time="2024-06-25T18:45:54.781427778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 18:45:54.788388 containerd[1829]: time="2024-06-25T18:45:54.786999183Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:54.791458 containerd[1829]: time="2024-06-25T18:45:54.791417786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jun 25 18:45:54.792236 containerd[1829]: time="2024-06-25T18:45:54.792203587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 3.459827225s" Jun 25 18:45:54.792418 containerd[1829]: time="2024-06-25T18:45:54.792242887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 18:45:54.793806 containerd[1829]: time="2024-06-25T18:45:54.793777588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 18:45:54.795269 containerd[1829]: time="2024-06-25T18:45:54.795209190Z" level=info msg="CreateContainer within sandbox \"154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 18:45:54.827710 containerd[1829]: time="2024-06-25T18:45:54.827664417Z" level=info msg="CreateContainer within sandbox \"154c9e64b30ea611f5c1fa63dcce643223bd8c71c0d914ad0cade5ab46d1c5e6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6333b0ab14a99098a3def89550ae24827618a289a7ff80ffbbab3e6e6c4ab6de\"" Jun 25 18:45:54.828401 containerd[1829]: time="2024-06-25T18:45:54.828367318Z" level=info msg="StartContainer for \"6333b0ab14a99098a3def89550ae24827618a289a7ff80ffbbab3e6e6c4ab6de\"" Jun 25 18:45:54.918872 containerd[1829]: time="2024-06-25T18:45:54.918819995Z" level=info msg="StartContainer for \"6333b0ab14a99098a3def89550ae24827618a289a7ff80ffbbab3e6e6c4ab6de\" returns successfully" Jun 25 18:45:55.456875 containerd[1829]: time="2024-06-25T18:45:55.456828652Z" level=info msg="ImageUpdate event 
name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:55.459864 containerd[1829]: time="2024-06-25T18:45:55.459780055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jun 25 18:45:55.461973 containerd[1829]: time="2024-06-25T18:45:55.461938057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 668.122369ms" Jun 25 18:45:55.461973 containerd[1829]: time="2024-06-25T18:45:55.461973257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 18:45:55.464167 containerd[1829]: time="2024-06-25T18:45:55.464132959Z" level=info msg="CreateContainer within sandbox \"e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 18:45:55.498121 containerd[1829]: time="2024-06-25T18:45:55.498071987Z" level=info msg="CreateContainer within sandbox \"e2a8126b32274968b68e2504ff31fc20045f441446b21a9d19d3ad655915bcf4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"af6adab9a1987cc826793167a3d7ef2c655ea76779f2f8f4c5bcbfc07ce7b802\"" Jun 25 18:45:55.498856 containerd[1829]: time="2024-06-25T18:45:55.498700888Z" level=info msg="StartContainer for \"af6adab9a1987cc826793167a3d7ef2c655ea76779f2f8f4c5bcbfc07ce7b802\"" Jun 25 18:45:55.570391 containerd[1829]: time="2024-06-25T18:45:55.570279349Z" level=info msg="StartContainer for \"af6adab9a1987cc826793167a3d7ef2c655ea76779f2f8f4c5bcbfc07ce7b802\" returns 
successfully" Jun 25 18:45:55.872322 systemd[1]: run-containerd-runc-k8s.io-f8c08540c79f1c839d72354bef7340754bdd0ce88ff5c54338b8f0f07ae36cc3-runc.ppk1c8.mount: Deactivated successfully. Jun 25 18:45:56.183766 kubelet[3456]: I0625 18:45:56.183728 3456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5457598f49-svlgk" podStartSLOduration=3.722802645 podCreationTimestamp="2024-06-25 18:45:49 +0000 UTC" firstStartedPulling="2024-06-25 18:45:51.331958062 +0000 UTC m=+61.616187844" lastFinishedPulling="2024-06-25 18:45:54.792834088 +0000 UTC m=+65.077063870" observedRunningTime="2024-06-25 18:45:55.185315321 +0000 UTC m=+65.469545203" watchObservedRunningTime="2024-06-25 18:45:56.183678671 +0000 UTC m=+66.467908553" Jun 25 18:45:56.199011 kubelet[3456]: I0625 18:45:56.198973 3456 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5457598f49-tznh7" podStartSLOduration=3.113566217 podCreationTimestamp="2024-06-25 18:45:49 +0000 UTC" firstStartedPulling="2024-06-25 18:45:51.37687069 +0000 UTC m=+61.661100472" lastFinishedPulling="2024-06-25 18:45:55.462229657 +0000 UTC m=+65.746459539" observedRunningTime="2024-06-25 18:45:56.184583571 +0000 UTC m=+66.468813353" watchObservedRunningTime="2024-06-25 18:45:56.198925284 +0000 UTC m=+66.483155166" Jun 25 18:46:13.666245 systemd[1]: run-containerd-runc-k8s.io-b0c66b6bcec15aa5966647e4a874db74e857499a32f74222084a608f3906c029-runc.mZm082.mount: Deactivated successfully. Jun 25 18:46:38.880167 systemd[1]: run-containerd-runc-k8s.io-f8c08540c79f1c839d72354bef7340754bdd0ce88ff5c54338b8f0f07ae36cc3-runc.AuixEo.mount: Deactivated successfully. Jun 25 18:46:50.229937 systemd[1]: Started sshd@7-10.200.8.42:22-10.200.16.10:46138.service - OpenSSH per-connection server daemon (10.200.16.10:46138). 
Jun 25 18:46:50.890754 sshd[6184]: Accepted publickey for core from 10.200.16.10 port 46138 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:46:50.892174 sshd[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:50.896640 systemd-logind[1805]: New session 10 of user core. Jun 25 18:46:50.901806 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 18:46:51.410346 sshd[6184]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:51.415178 systemd[1]: sshd@7-10.200.8.42:22-10.200.16.10:46138.service: Deactivated successfully. Jun 25 18:46:51.420230 systemd-logind[1805]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:46:51.420668 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 18:46:51.422040 systemd-logind[1805]: Removed session 10. Jun 25 18:46:56.524898 systemd[1]: Started sshd@8-10.200.8.42:22-10.200.16.10:56936.service - OpenSSH per-connection server daemon (10.200.16.10:56936). Jun 25 18:46:57.167526 sshd[6223]: Accepted publickey for core from 10.200.16.10 port 56936 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:46:57.169435 sshd[6223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:57.173645 systemd-logind[1805]: New session 11 of user core. Jun 25 18:46:57.180022 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:46:57.680704 sshd[6223]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:57.685656 systemd[1]: sshd@8-10.200.8.42:22-10.200.16.10:56936.service: Deactivated successfully. Jun 25 18:46:57.690687 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:46:57.691625 systemd-logind[1805]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:46:57.692545 systemd-logind[1805]: Removed session 11. 
Jun 25 18:47:02.792868 systemd[1]: Started sshd@9-10.200.8.42:22-10.200.16.10:56944.service - OpenSSH per-connection server daemon (10.200.16.10:56944). Jun 25 18:47:03.434333 sshd[6243]: Accepted publickey for core from 10.200.16.10 port 56944 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:03.436072 sshd[6243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:03.440529 systemd-logind[1805]: New session 12 of user core. Jun 25 18:47:03.445830 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:47:03.948216 sshd[6243]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:03.952306 systemd[1]: sshd@9-10.200.8.42:22-10.200.16.10:56944.service: Deactivated successfully. Jun 25 18:47:03.957499 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:47:03.958755 systemd-logind[1805]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:47:03.959789 systemd-logind[1805]: Removed session 12. Jun 25 18:47:09.057360 systemd[1]: Started sshd@10-10.200.8.42:22-10.200.16.10:57256.service - OpenSSH per-connection server daemon (10.200.16.10:57256). Jun 25 18:47:09.711481 sshd[6265]: Accepted publickey for core from 10.200.16.10 port 57256 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:09.713017 sshd[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:09.717253 systemd-logind[1805]: New session 13 of user core. Jun 25 18:47:09.721835 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:47:10.238826 sshd[6265]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:10.243981 systemd[1]: sshd@10-10.200.8.42:22-10.200.16.10:57256.service: Deactivated successfully. Jun 25 18:47:10.248140 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:47:10.249254 systemd-logind[1805]: Session 13 logged out. Waiting for processes to exit. 
Jun 25 18:47:10.250363 systemd-logind[1805]: Removed session 13. Jun 25 18:47:10.351954 systemd[1]: Started sshd@11-10.200.8.42:22-10.200.16.10:57260.service - OpenSSH per-connection server daemon (10.200.16.10:57260). Jun 25 18:47:11.000656 sshd[6293]: Accepted publickey for core from 10.200.16.10 port 57260 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:11.002598 sshd[6293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:11.009197 systemd-logind[1805]: New session 14 of user core. Jun 25 18:47:11.013906 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 18:47:12.165244 sshd[6293]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:12.170345 systemd[1]: sshd@11-10.200.8.42:22-10.200.16.10:57260.service: Deactivated successfully. Jun 25 18:47:12.174172 systemd-logind[1805]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:47:12.174501 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:47:12.176882 systemd-logind[1805]: Removed session 14. Jun 25 18:47:12.276867 systemd[1]: Started sshd@12-10.200.8.42:22-10.200.16.10:57272.service - OpenSSH per-connection server daemon (10.200.16.10:57272). Jun 25 18:47:12.914032 sshd[6307]: Accepted publickey for core from 10.200.16.10 port 57272 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:12.915677 sshd[6307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:12.919632 systemd-logind[1805]: New session 15 of user core. Jun 25 18:47:12.925865 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 18:47:13.445642 sshd[6307]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:13.450510 systemd[1]: sshd@12-10.200.8.42:22-10.200.16.10:57272.service: Deactivated successfully. Jun 25 18:47:13.456259 systemd[1]: session-15.scope: Deactivated successfully. 
Jun 25 18:47:13.457146 systemd-logind[1805]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:47:13.458155 systemd-logind[1805]: Removed session 15. Jun 25 18:47:18.557868 systemd[1]: Started sshd@13-10.200.8.42:22-10.200.16.10:56872.service - OpenSSH per-connection server daemon (10.200.16.10:56872). Jun 25 18:47:19.194319 sshd[6342]: Accepted publickey for core from 10.200.16.10 port 56872 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:19.196049 sshd[6342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:19.200774 systemd-logind[1805]: New session 16 of user core. Jun 25 18:47:19.205979 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:47:19.704914 sshd[6342]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:19.707924 systemd[1]: sshd@13-10.200.8.42:22-10.200.16.10:56872.service: Deactivated successfully. Jun 25 18:47:19.713455 systemd-logind[1805]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:47:19.714582 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:47:19.716030 systemd-logind[1805]: Removed session 16. Jun 25 18:47:24.817901 systemd[1]: Started sshd@14-10.200.8.42:22-10.200.16.10:35220.service - OpenSSH per-connection server daemon (10.200.16.10:35220). Jun 25 18:47:25.480266 sshd[6365]: Accepted publickey for core from 10.200.16.10 port 35220 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:25.481879 sshd[6365]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:25.486758 systemd-logind[1805]: New session 17 of user core. Jun 25 18:47:25.489149 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 18:47:26.022637 sshd[6365]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:26.027484 systemd[1]: sshd@14-10.200.8.42:22-10.200.16.10:35220.service: Deactivated successfully. 
Jun 25 18:47:26.031410 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:47:26.032398 systemd-logind[1805]: Session 17 logged out. Waiting for processes to exit. Jun 25 18:47:26.033705 systemd-logind[1805]: Removed session 17. Jun 25 18:47:31.138880 systemd[1]: Started sshd@15-10.200.8.42:22-10.200.16.10:35234.service - OpenSSH per-connection server daemon (10.200.16.10:35234). Jun 25 18:47:31.786920 sshd[6403]: Accepted publickey for core from 10.200.16.10 port 35234 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:31.788721 sshd[6403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:31.792943 systemd-logind[1805]: New session 18 of user core. Jun 25 18:47:31.797963 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 18:47:32.302664 sshd[6403]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:32.306949 systemd[1]: sshd@15-10.200.8.42:22-10.200.16.10:35234.service: Deactivated successfully. Jun 25 18:47:32.312901 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:47:32.314200 systemd-logind[1805]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:47:32.315274 systemd-logind[1805]: Removed session 18. Jun 25 18:47:37.413977 systemd[1]: Started sshd@16-10.200.8.42:22-10.200.16.10:53318.service - OpenSSH per-connection server daemon (10.200.16.10:53318). Jun 25 18:47:38.056217 sshd[6418]: Accepted publickey for core from 10.200.16.10 port 53318 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:38.057823 sshd[6418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:38.062820 systemd-logind[1805]: New session 19 of user core. Jun 25 18:47:38.069247 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jun 25 18:47:38.566707 sshd[6418]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:38.570057 systemd[1]: sshd@16-10.200.8.42:22-10.200.16.10:53318.service: Deactivated successfully. Jun 25 18:47:38.575797 systemd-logind[1805]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:47:38.576733 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:47:38.578503 systemd-logind[1805]: Removed session 19. Jun 25 18:47:38.679862 systemd[1]: Started sshd@17-10.200.8.42:22-10.200.16.10:53322.service - OpenSSH per-connection server daemon (10.200.16.10:53322). Jun 25 18:47:38.878551 systemd[1]: run-containerd-runc-k8s.io-f8c08540c79f1c839d72354bef7340754bdd0ce88ff5c54338b8f0f07ae36cc3-runc.b4t9qx.mount: Deactivated successfully. Jun 25 18:47:39.322302 sshd[6431]: Accepted publickey for core from 10.200.16.10 port 53322 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:39.323994 sshd[6431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:39.329223 systemd-logind[1805]: New session 20 of user core. Jun 25 18:47:39.334942 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 18:47:39.897032 sshd[6431]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:39.904917 systemd-logind[1805]: Session 20 logged out. Waiting for processes to exit. Jun 25 18:47:39.906468 systemd[1]: sshd@17-10.200.8.42:22-10.200.16.10:53322.service: Deactivated successfully. Jun 25 18:47:39.915115 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 18:47:39.916490 systemd-logind[1805]: Removed session 20. Jun 25 18:47:40.007880 systemd[1]: Started sshd@18-10.200.8.42:22-10.200.16.10:53326.service - OpenSSH per-connection server daemon (10.200.16.10:53326). 
Jun 25 18:47:40.673927 sshd[6462]: Accepted publickey for core from 10.200.16.10 port 53326 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:40.675475 sshd[6462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:40.679610 systemd-logind[1805]: New session 21 of user core. Jun 25 18:47:40.683890 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 18:47:42.028276 sshd[6462]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:42.033249 systemd[1]: sshd@18-10.200.8.42:22-10.200.16.10:53326.service: Deactivated successfully. Jun 25 18:47:42.037816 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 18:47:42.038726 systemd-logind[1805]: Session 21 logged out. Waiting for processes to exit. Jun 25 18:47:42.039833 systemd-logind[1805]: Removed session 21. Jun 25 18:47:42.138871 systemd[1]: Started sshd@19-10.200.8.42:22-10.200.16.10:53340.service - OpenSSH per-connection server daemon (10.200.16.10:53340). Jun 25 18:47:42.781141 sshd[6486]: Accepted publickey for core from 10.200.16.10 port 53340 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:42.784080 sshd[6486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:42.788852 systemd-logind[1805]: New session 22 of user core. Jun 25 18:47:42.792813 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 18:47:43.497663 sshd[6486]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:43.501137 systemd[1]: sshd@19-10.200.8.42:22-10.200.16.10:53340.service: Deactivated successfully. Jun 25 18:47:43.507522 systemd-logind[1805]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:47:43.508265 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 18:47:43.510283 systemd-logind[1805]: Removed session 22. 
Jun 25 18:47:43.611297 systemd[1]: Started sshd@20-10.200.8.42:22-10.200.16.10:53342.service - OpenSSH per-connection server daemon (10.200.16.10:53342). Jun 25 18:47:44.254092 sshd[6498]: Accepted publickey for core from 10.200.16.10 port 53342 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:44.255703 sshd[6498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:44.259794 systemd-logind[1805]: New session 23 of user core. Jun 25 18:47:44.264140 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 18:47:44.803763 sshd[6498]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:44.808866 systemd[1]: sshd@20-10.200.8.42:22-10.200.16.10:53342.service: Deactivated successfully. Jun 25 18:47:44.815317 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 18:47:44.817270 systemd-logind[1805]: Session 23 logged out. Waiting for processes to exit. Jun 25 18:47:44.818950 systemd-logind[1805]: Removed session 23. Jun 25 18:47:49.916170 systemd[1]: Started sshd@21-10.200.8.42:22-10.200.16.10:45568.service - OpenSSH per-connection server daemon (10.200.16.10:45568). Jun 25 18:47:50.562024 sshd[6539]: Accepted publickey for core from 10.200.16.10 port 45568 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:50.563725 sshd[6539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:50.568366 systemd-logind[1805]: New session 24 of user core. Jun 25 18:47:50.572856 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 18:47:51.093334 sshd[6539]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:51.096974 systemd[1]: sshd@21-10.200.8.42:22-10.200.16.10:45568.service: Deactivated successfully. Jun 25 18:47:51.102309 systemd-logind[1805]: Session 24 logged out. Waiting for processes to exit. Jun 25 18:47:51.103561 systemd[1]: session-24.scope: Deactivated successfully. 
Jun 25 18:47:51.105880 systemd-logind[1805]: Removed session 24. Jun 25 18:47:56.203950 systemd[1]: Started sshd@22-10.200.8.42:22-10.200.16.10:35130.service - OpenSSH per-connection server daemon (10.200.16.10:35130). Jun 25 18:47:56.843137 sshd[6577]: Accepted publickey for core from 10.200.16.10 port 35130 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:56.844786 sshd[6577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:56.848768 systemd-logind[1805]: New session 25 of user core. Jun 25 18:47:56.854474 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 18:47:57.351780 sshd[6577]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:57.356265 systemd[1]: sshd@22-10.200.8.42:22-10.200.16.10:35130.service: Deactivated successfully. Jun 25 18:47:57.360486 systemd-logind[1805]: Session 25 logged out. Waiting for processes to exit. Jun 25 18:47:57.361558 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 18:47:57.363371 systemd-logind[1805]: Removed session 25. Jun 25 18:48:02.463881 systemd[1]: Started sshd@23-10.200.8.42:22-10.200.16.10:35136.service - OpenSSH per-connection server daemon (10.200.16.10:35136). Jun 25 18:48:03.098897 sshd[6597]: Accepted publickey for core from 10.200.16.10 port 35136 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:48:03.100394 sshd[6597]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:03.106458 systemd-logind[1805]: New session 26 of user core. Jun 25 18:48:03.109841 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 25 18:48:03.608116 sshd[6597]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:03.611946 systemd[1]: sshd@23-10.200.8.42:22-10.200.16.10:35136.service: Deactivated successfully. Jun 25 18:48:03.616840 systemd[1]: session-26.scope: Deactivated successfully. 
Jun 25 18:48:03.617800 systemd-logind[1805]: Session 26 logged out. Waiting for processes to exit. Jun 25 18:48:03.618717 systemd-logind[1805]: Removed session 26. Jun 25 18:48:08.719214 systemd[1]: Started sshd@24-10.200.8.42:22-10.200.16.10:59716.service - OpenSSH per-connection server daemon (10.200.16.10:59716). Jun 25 18:48:09.368485 sshd[6615]: Accepted publickey for core from 10.200.16.10 port 59716 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:48:09.370089 sshd[6615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:09.374547 systemd-logind[1805]: New session 27 of user core. Jun 25 18:48:09.376854 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 25 18:48:09.884442 sshd[6615]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:09.889649 systemd[1]: sshd@24-10.200.8.42:22-10.200.16.10:59716.service: Deactivated successfully. Jun 25 18:48:09.894027 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 18:48:09.895064 systemd-logind[1805]: Session 27 logged out. Waiting for processes to exit. Jun 25 18:48:09.896029 systemd-logind[1805]: Removed session 27. Jun 25 18:48:14.996890 systemd[1]: Started sshd@25-10.200.8.42:22-10.200.16.10:34400.service - OpenSSH per-connection server daemon (10.200.16.10:34400). Jun 25 18:48:15.645577 sshd[6656]: Accepted publickey for core from 10.200.16.10 port 34400 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:48:15.647095 sshd[6656]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:15.651753 systemd-logind[1805]: New session 28 of user core. Jun 25 18:48:15.657977 systemd[1]: Started session-28.scope - Session 28 of User core. Jun 25 18:48:16.152640 sshd[6656]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:16.156371 systemd[1]: sshd@25-10.200.8.42:22-10.200.16.10:34400.service: Deactivated successfully. 
Jun 25 18:48:16.162040 systemd-logind[1805]: Session 28 logged out. Waiting for processes to exit. Jun 25 18:48:16.162961 systemd[1]: session-28.scope: Deactivated successfully. Jun 25 18:48:16.164150 systemd-logind[1805]: Removed session 28. Jun 25 18:48:30.952926 containerd[1829]: time="2024-06-25T18:48:30.952710986Z" level=info msg="shim disconnected" id=2648e5ddb7c6be231c0ece8f17270145e817e507d564e6bd589dd3df85f44ca1 namespace=k8s.io Jun 25 18:48:30.952926 containerd[1829]: time="2024-06-25T18:48:30.952778086Z" level=warning msg="cleaning up after shim disconnected" id=2648e5ddb7c6be231c0ece8f17270145e817e507d564e6bd589dd3df85f44ca1 namespace=k8s.io Jun 25 18:48:30.952926 containerd[1829]: time="2024-06-25T18:48:30.952790186Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:48:30.955978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2648e5ddb7c6be231c0ece8f17270145e817e507d564e6bd589dd3df85f44ca1-rootfs.mount: Deactivated successfully. Jun 25 18:48:31.516490 kubelet[3456]: I0625 18:48:31.516401 3456 scope.go:117] "RemoveContainer" containerID="2648e5ddb7c6be231c0ece8f17270145e817e507d564e6bd589dd3df85f44ca1" Jun 25 18:48:31.518860 containerd[1829]: time="2024-06-25T18:48:31.518817652Z" level=info msg="CreateContainer within sandbox \"1f3aa2dc49ca631d8d2044ce40dba33a13fb2fc4375c7f6de6bbfad0ea3d28f6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jun 25 18:48:31.549781 containerd[1829]: time="2024-06-25T18:48:31.549735477Z" level=info msg="CreateContainer within sandbox \"1f3aa2dc49ca631d8d2044ce40dba33a13fb2fc4375c7f6de6bbfad0ea3d28f6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d9c1dfbb53262447da7d9d737de1aa62b0cb6f72c803671a89efa61264db83de\"" Jun 25 18:48:31.550388 containerd[1829]: time="2024-06-25T18:48:31.550287277Z" level=info msg="StartContainer for \"d9c1dfbb53262447da7d9d737de1aa62b0cb6f72c803671a89efa61264db83de\"" Jun 25 18:48:31.626206 
containerd[1829]: time="2024-06-25T18:48:31.626156746Z" level=info msg="StartContainer for \"d9c1dfbb53262447da7d9d737de1aa62b0cb6f72c803671a89efa61264db83de\" returns successfully" Jun 25 18:48:32.145436 containerd[1829]: time="2024-06-25T18:48:32.145284017Z" level=info msg="shim disconnected" id=ae6a6bed9fbfd94b6320918f28c817fe6428557a1e162fbc7e4d79a48834c5b6 namespace=k8s.io Jun 25 18:48:32.145436 containerd[1829]: time="2024-06-25T18:48:32.145378517Z" level=warning msg="cleaning up after shim disconnected" id=ae6a6bed9fbfd94b6320918f28c817fe6428557a1e162fbc7e4d79a48834c5b6 namespace=k8s.io Jun 25 18:48:32.145436 containerd[1829]: time="2024-06-25T18:48:32.145391517Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:48:32.151506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae6a6bed9fbfd94b6320918f28c817fe6428557a1e162fbc7e4d79a48834c5b6-rootfs.mount: Deactivated successfully. Jun 25 18:48:32.520415 kubelet[3456]: I0625 18:48:32.520376 3456 scope.go:117] "RemoveContainer" containerID="ae6a6bed9fbfd94b6320918f28c817fe6428557a1e162fbc7e4d79a48834c5b6" Jun 25 18:48:32.522745 containerd[1829]: time="2024-06-25T18:48:32.522696033Z" level=info msg="CreateContainer within sandbox \"e5627da17dc02ef9a0a84a51e0c0139ed72e0247eeffcddcb16e52a6a2f750c5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jun 25 18:48:32.552850 containerd[1829]: time="2024-06-25T18:48:32.552803966Z" level=info msg="CreateContainer within sandbox \"e5627da17dc02ef9a0a84a51e0c0139ed72e0247eeffcddcb16e52a6a2f750c5\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"32d76a29caf6d077fc45794a62bc6ffc2eb3ebe91ddbe35bec10b13000c91bc4\"" Jun 25 18:48:32.553393 containerd[1829]: time="2024-06-25T18:48:32.553354066Z" level=info msg="StartContainer for \"32d76a29caf6d077fc45794a62bc6ffc2eb3ebe91ddbe35bec10b13000c91bc4\"" Jun 25 18:48:32.612467 containerd[1829]: time="2024-06-25T18:48:32.612062631Z" level=info 
msg="StartContainer for \"32d76a29caf6d077fc45794a62bc6ffc2eb3ebe91ddbe35bec10b13000c91bc4\" returns successfully" Jun 25 18:48:34.511043 kubelet[3456]: E0625 18:48:34.510966 3456 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-bcd7e269e6?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 25 18:48:35.002581 kubelet[3456]: E0625 18:48:35.002532 3456 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.42:35820->10.200.8.27:2379: read: connection timed out" Jun 25 18:48:35.033028 containerd[1829]: time="2024-06-25T18:48:35.032933896Z" level=info msg="shim disconnected" id=e710dbaa369489a41a00230332fd79bc7f171d7039ed21509cf8f80ee622ab0a namespace=k8s.io Jun 25 18:48:35.034621 containerd[1829]: time="2024-06-25T18:48:35.033049996Z" level=warning msg="cleaning up after shim disconnected" id=e710dbaa369489a41a00230332fd79bc7f171d7039ed21509cf8f80ee622ab0a namespace=k8s.io Jun 25 18:48:35.034621 containerd[1829]: time="2024-06-25T18:48:35.033063596Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:48:35.037770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e710dbaa369489a41a00230332fd79bc7f171d7039ed21509cf8f80ee622ab0a-rootfs.mount: Deactivated successfully. 
Jun 25 18:48:35.533251 kubelet[3456]: I0625 18:48:35.533216 3456 scope.go:117] "RemoveContainer" containerID="e710dbaa369489a41a00230332fd79bc7f171d7039ed21509cf8f80ee622ab0a" Jun 25 18:48:35.535499 containerd[1829]: time="2024-06-25T18:48:35.535462350Z" level=info msg="CreateContainer within sandbox \"608da6e0e796d3f8e3fb9ca514f8ad5b909d4cbe344eeff7e99dd17996a4b037\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jun 25 18:48:35.568074 containerd[1829]: time="2024-06-25T18:48:35.568032085Z" level=info msg="CreateContainer within sandbox \"608da6e0e796d3f8e3fb9ca514f8ad5b909d4cbe344eeff7e99dd17996a4b037\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"713d7c6ee4b5a0c272c4601d9ef65cbd19a65cd45a31fdc1829dc85c43fd2859\"" Jun 25 18:48:35.568634 containerd[1829]: time="2024-06-25T18:48:35.568591886Z" level=info msg="StartContainer for \"713d7c6ee4b5a0c272c4601d9ef65cbd19a65cd45a31fdc1829dc85c43fd2859\"" Jun 25 18:48:35.652650 containerd[1829]: time="2024-06-25T18:48:35.652594278Z" level=info msg="StartContainer for \"713d7c6ee4b5a0c272c4601d9ef65cbd19a65cd45a31fdc1829dc85c43fd2859\" returns successfully" Jun 25 18:48:36.411229 kubelet[3456]: E0625 18:48:36.411046 3456 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-4012.0.0-a-bcd7e269e6.17dc53ceeba8fec0", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-4012.0.0-a-bcd7e269e6", UID:"3af617eab2a141a9d19096e6443cc1bf", 
APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-4012.0.0-a-bcd7e269e6"}, FirstTimestamp:time.Date(2024, time.June, 25, 18, 48, 25, 930776256, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 18, 48, 25, 930776256, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-4012.0.0-a-bcd7e269e6"}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.42:35626->10.200.8.27:2379: read: connection timed out' (will not retry!) Jun 25 18:48:36.668198 kubelet[3456]: I0625 18:48:36.666970 3456 status_manager.go:853] "Failed to get status for pod" podUID="52158b1cac27fcbb07c7ef803b924efe" pod="kube-system/kube-controller-manager-ci-4012.0.0-a-bcd7e269e6" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.42:35754->10.200.8.27:2379: read: connection timed out" Jun 25 18:48:44.121038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32d76a29caf6d077fc45794a62bc6ffc2eb3ebe91ddbe35bec10b13000c91bc4-rootfs.mount: Deactivated successfully. 
Jun 25 18:48:44.141614 containerd[1829]: time="2024-06-25T18:48:44.141520065Z" level=info msg="shim disconnected" id=32d76a29caf6d077fc45794a62bc6ffc2eb3ebe91ddbe35bec10b13000c91bc4 namespace=k8s.io
Jun 25 18:48:44.141614 containerd[1829]: time="2024-06-25T18:48:44.141608665Z" level=warning msg="cleaning up after shim disconnected" id=32d76a29caf6d077fc45794a62bc6ffc2eb3ebe91ddbe35bec10b13000c91bc4 namespace=k8s.io
Jun 25 18:48:44.141614 containerd[1829]: time="2024-06-25T18:48:44.141620865Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 18:48:44.559694 kubelet[3456]: I0625 18:48:44.559520 3456 scope.go:117] "RemoveContainer" containerID="ae6a6bed9fbfd94b6320918f28c817fe6428557a1e162fbc7e4d79a48834c5b6"
Jun 25 18:48:44.560336 kubelet[3456]: I0625 18:48:44.559926 3456 scope.go:117] "RemoveContainer" containerID="32d76a29caf6d077fc45794a62bc6ffc2eb3ebe91ddbe35bec10b13000c91bc4"
Jun 25 18:48:44.560336 kubelet[3456]: E0625 18:48:44.560310 3456 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-76c4974c85-kllfl_tigera-operator(f5d21d28-9493-4888-842d-d6974c892614)\"" pod="tigera-operator/tigera-operator-76c4974c85-kllfl" podUID="f5d21d28-9493-4888-842d-d6974c892614"
Jun 25 18:48:44.561845 containerd[1829]: time="2024-06-25T18:48:44.561549993Z" level=info msg="RemoveContainer for \"ae6a6bed9fbfd94b6320918f28c817fe6428557a1e162fbc7e4d79a48834c5b6\""
Jun 25 18:48:44.570932 containerd[1829]: time="2024-06-25T18:48:44.570894000Z" level=info msg="RemoveContainer for \"ae6a6bed9fbfd94b6320918f28c817fe6428557a1e162fbc7e4d79a48834c5b6\" returns successfully"
Jun 25 18:48:45.003136 kubelet[3456]: E0625 18:48:45.002833 3456 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-bcd7e269e6?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jun 25 18:48:52.996661 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.010870 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.021647 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.032350 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.042578 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.053537 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.057017 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.060224 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.077113 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.083454 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.083833 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.084033 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.089092 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.089398 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.094524 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.097373 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.100244 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.100502 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.105676 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.105953 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.111592 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.114812 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.114988 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.120794 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.124084 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.124369 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.129771 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.220756 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.224229 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.224931 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.229790 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.230290 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.235400 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.238735 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.242003 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.245114 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.245345 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.250815 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.251065 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.256493 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.259478 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.262494 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.265485 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.268458 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.268777 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.362110 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.362501 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.367737 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.368096 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.373561 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.373829 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.378829 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.381811 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.384830 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.387679 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.390820 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.393776 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.396874 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.401585 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.408470 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.408775 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.411765 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.502322 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.508114 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.508440 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.508602 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.514117 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.514437 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.520609 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.523433 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.526659 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.532773 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.533009 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.533165 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.538296 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.538606 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.544434 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.551396 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.551717 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.551887 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.644352 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.644758 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.649989 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.650283 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.655871 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.658848 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.661666 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.664694 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.664977 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.670420 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.670691 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.675789 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.678814 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.681600 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.681804 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.687337 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.690391 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.690701 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.784106 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 18:48:53.784523 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001