Feb 9 19:00:19.027273 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:00:19.027306 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:00:19.027321 kernel: BIOS-provided physical RAM map:
Feb 9 19:00:19.027331 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 19:00:19.027341 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 9 19:00:19.027351 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 9 19:00:19.027367 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 9 19:00:19.027378 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 9 19:00:19.027389 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 9 19:00:19.027399 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 9 19:00:19.027410 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 9 19:00:19.027421 kernel: printk: bootconsole [earlyser0] enabled
Feb 9 19:00:19.027432 kernel: NX (Execute Disable) protection: active
Feb 9 19:00:19.027443 kernel: efi: EFI v2.70 by Microsoft
Feb 9 19:00:19.027459 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018
Feb 9 19:00:19.027471 kernel: random: crng init done
Feb 9 19:00:19.027482 kernel: SMBIOS 3.1.0 present.
Feb 9 19:00:19.027494 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 19:00:19.027506 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 9 19:00:19.027517 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 9 19:00:19.027529 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 9 19:00:19.027541 kernel: Hyper-V: Nested features: 0x1e0101
Feb 9 19:00:19.027554 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 9 19:00:19.027566 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 9 19:00:19.027578 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 9 19:00:19.027590 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 9 19:00:19.027602 kernel: tsc: Detected 2593.907 MHz processor
Feb 9 19:00:19.027614 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:00:19.027626 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:00:19.027638 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 9 19:00:19.027651 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:00:19.027663 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 9 19:00:19.027677 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 9 19:00:19.027689 kernel: Using GB pages for direct mapping
Feb 9 19:00:19.027701 kernel: Secure boot disabled
Feb 9 19:00:19.027713 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:00:19.027725 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 9 19:00:19.027737 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:19.027749 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:19.027762 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 19:00:19.027781 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 9 19:00:19.027794 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:19.027807 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:19.027820 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:19.027832 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:19.027845 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:19.027861 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:19.027874 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:19.027887 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 9 19:00:19.027900 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 9 19:00:19.027913 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 9 19:00:19.027926 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 9 19:00:19.027939 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 9 19:00:19.027952 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 9 19:00:19.027967 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 9 19:00:19.027980 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 9 19:00:19.027993 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 9 19:00:19.028006 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 9 19:00:19.028019 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 9 19:00:19.028032 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 9 19:00:19.028055 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 9 19:00:19.028068 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 9 19:00:19.028081 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 9 19:00:19.028097 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 9 19:00:19.028110 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 9 19:00:19.028123 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 9 19:00:19.028135 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 9 19:00:19.028148 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 9 19:00:19.028161 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 9 19:00:19.028174 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 9 19:00:19.028188 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 9 19:00:19.028200 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 9 19:00:19.028215 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 9 19:00:19.028228 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 9 19:00:19.028241 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 9 19:00:19.028254 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 9 19:00:19.028267 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 9 19:00:19.028280 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 9 19:00:19.028293 kernel: Zone ranges:
Feb 9 19:00:19.028307 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:00:19.028320 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 19:00:19.028335 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 9 19:00:19.028348 kernel: Movable zone start for each node
Feb 9 19:00:19.028361 kernel: Early memory node ranges
Feb 9 19:00:19.028374 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 9 19:00:19.028387 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 9 19:00:19.028399 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 9 19:00:19.028412 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 9 19:00:19.028425 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 9 19:00:19.028438 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:00:19.028453 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 9 19:00:19.028466 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 9 19:00:19.028479 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 9 19:00:19.028492 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 9 19:00:19.028505 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:00:19.028519 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 19:00:19.028532 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:00:19.028544 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 9 19:00:19.028557 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 9 19:00:19.028572 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 9 19:00:19.028586 kernel: Booting paravirtualized kernel on Hyper-V
Feb 9 19:00:19.028599 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:00:19.028612 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 9 19:00:19.028625 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 9 19:00:19.028638 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 9 19:00:19.028651 kernel: pcpu-alloc: [0] 0 1
Feb 9 19:00:19.028663 kernel: Hyper-V: PV spinlocks enabled
Feb 9 19:00:19.028677 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 19:00:19.028692 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 9 19:00:19.028705 kernel: Policy zone: Normal
Feb 9 19:00:19.028719 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:00:19.028734 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:00:19.028746 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 9 19:00:19.028759 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 19:00:19.028772 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:00:19.028785 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved)
Feb 9 19:00:19.028801 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:00:19.028814 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:00:19.028837 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:00:19.028853 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:00:19.028867 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:00:19.028881 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:00:19.028895 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:00:19.028909 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:00:19.028922 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:00:19.028940 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:00:19.028954 kernel: Using NULL legacy PIC
Feb 9 19:00:19.028970 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 9 19:00:19.028984 kernel: Console: colour dummy device 80x25
Feb 9 19:00:19.028998 kernel: printk: console [tty1] enabled
Feb 9 19:00:19.029011 kernel: printk: console [ttyS0] enabled
Feb 9 19:00:19.029025 kernel: printk: bootconsole [earlyser0] disabled
Feb 9 19:00:19.029041 kernel: ACPI: Core revision 20210730
Feb 9 19:00:19.029060 kernel: Failed to register legacy timer interrupt
Feb 9 19:00:19.029071 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:00:19.029083 kernel: Hyper-V: Using IPI hypercalls
Feb 9 19:00:19.029095 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Feb 9 19:00:19.029106 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 19:00:19.029113 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 19:00:19.029121 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:00:19.029128 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 19:00:19.029135 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:00:19.029145 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 19:00:19.029153 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 9 19:00:19.029160 kernel: RETBleed: Vulnerable
Feb 9 19:00:19.029167 kernel: Speculative Store Bypass: Vulnerable
Feb 9 19:00:19.029174 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:00:19.029181 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:00:19.029189 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 9 19:00:19.036091 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 19:00:19.036107 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 19:00:19.036121 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 19:00:19.036139 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 9 19:00:19.036153 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 9 19:00:19.036166 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 9 19:00:19.036180 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 19:00:19.036194 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 9 19:00:19.036208 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 9 19:00:19.036221 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 9 19:00:19.036235 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 9 19:00:19.036249 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:00:19.036262 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:00:19.036276 kernel: LSM: Security Framework initializing
Feb 9 19:00:19.036289 kernel: SELinux: Initializing.
Feb 9 19:00:19.036305 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:00:19.036320 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:00:19.036333 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 9 19:00:19.036347 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 9 19:00:19.036362 kernel: signal: max sigframe size: 3632
Feb 9 19:00:19.036375 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:00:19.036389 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 9 19:00:19.036402 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:00:19.036416 kernel: x86: Booting SMP configuration:
Feb 9 19:00:19.036430 kernel: .... node #0, CPUs: #1
Feb 9 19:00:19.036447 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 9 19:00:19.036461 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 19:00:19.036475 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:00:19.036489 kernel: smpboot: Max logical packages: 1
Feb 9 19:00:19.036503 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 9 19:00:19.036516 kernel: devtmpfs: initialized
Feb 9 19:00:19.036530 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:00:19.036544 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 9 19:00:19.036561 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:00:19.036574 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:00:19.036589 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:00:19.036602 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:00:19.036616 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:00:19.036629 kernel: audit: type=2000 audit(1707505218.024:1): state=initialized audit_enabled=0 res=1
Feb 9 19:00:19.036643 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:00:19.036657 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:00:19.036670 kernel: cpuidle: using governor menu
Feb 9 19:00:19.036687 kernel: ACPI: bus type PCI registered
Feb 9 19:00:19.036701 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:00:19.036714 kernel: dca service started, version 1.12.1
Feb 9 19:00:19.036728 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:00:19.036741 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:00:19.036755 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:00:19.036769 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:00:19.036783 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:00:19.036797 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:00:19.036813 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:00:19.036826 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:00:19.036840 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:00:19.036854 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:00:19.036868 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:00:19.036881 kernel: ACPI: Interpreter enabled
Feb 9 19:00:19.036895 kernel: ACPI: PM: (supports S0 S5)
Feb 9 19:00:19.036909 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:00:19.036922 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:00:19.036939 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 9 19:00:19.036953 kernel: iommu: Default domain type: Translated
Feb 9 19:00:19.036967 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:00:19.036981 kernel: vgaarb: loaded
Feb 9 19:00:19.036994 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:00:19.037008 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 9 19:00:19.037022 kernel: PTP clock support registered
Feb 9 19:00:19.037036 kernel: Registered efivars operations
Feb 9 19:00:19.037059 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:00:19.037073 kernel: PCI: System does not support PCI
Feb 9 19:00:19.037089 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 9 19:00:19.037103 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:00:19.037117 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:00:19.037130 kernel: pnp: PnP ACPI init
Feb 9 19:00:19.037144 kernel: pnp: PnP ACPI: found 3 devices
Feb 9 19:00:19.037158 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:00:19.037172 kernel: NET: Registered PF_INET protocol family
Feb 9 19:00:19.037185 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 19:00:19.037201 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 9 19:00:19.037215 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:00:19.037229 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 19:00:19.037242 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 9 19:00:19.037257 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 9 19:00:19.037270 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:00:19.037284 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:00:19.037298 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:00:19.037312 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:00:19.037328 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:00:19.037341 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 9 19:00:19.037356 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 9 19:00:19.037369 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 9 19:00:19.037383 kernel: Initialise system trusted keyrings
Feb 9 19:00:19.037396 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 9 19:00:19.037410 kernel: Key type asymmetric registered
Feb 9 19:00:19.037423 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:00:19.037437 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:00:19.037453 kernel: io scheduler mq-deadline registered
Feb 9 19:00:19.037466 kernel: io scheduler kyber registered
Feb 9 19:00:19.037480 kernel: io scheduler bfq registered
Feb 9 19:00:19.037494 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 19:00:19.037508 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:00:19.037521 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 19:00:19.037535 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 9 19:00:19.037548 kernel: i8042: PNP: No PS/2 controller found.
Feb 9 19:00:19.037698 kernel: rtc_cmos 00:02: registered as rtc0
Feb 9 19:00:19.037815 kernel: rtc_cmos 00:02: setting system clock to 2024-02-09T19:00:18 UTC (1707505218)
Feb 9 19:00:19.037922 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 9 19:00:19.037940 kernel: fail to initialize ptp_kvm
Feb 9 19:00:19.037954 kernel: intel_pstate: CPU model not supported
Feb 9 19:00:19.037968 kernel: efifb: probing for efifb
Feb 9 19:00:19.037981 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 19:00:19.037995 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 19:00:19.038009 kernel: efifb: scrolling: redraw
Feb 9 19:00:19.038025 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 19:00:19.038039 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 19:00:19.040552 kernel: fb0: EFI VGA frame buffer device
Feb 9 19:00:19.040571 kernel: pstore: Registered efi as persistent store backend
Feb 9 19:00:19.040590 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:00:19.040604 kernel: Segment Routing with IPv6
Feb 9 19:00:19.040619 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:00:19.040632 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:00:19.040645 kernel: Key type dns_resolver registered
Feb 9 19:00:19.040674 kernel: IPI shorthand broadcast: enabled
Feb 9 19:00:19.040688 kernel: sched_clock: Marking stable (748577400, 21892000)->(958152000, -187682600)
Feb 9 19:00:19.040701 kernel: registered taskstats version 1
Feb 9 19:00:19.040720 kernel: Loading compiled-in X.509 certificates
Feb 9 19:00:19.040734 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 19:00:19.040747 kernel: Key type .fscrypt registered
Feb 9 19:00:19.040760 kernel: Key type fscrypt-provisioning registered
Feb 9 19:00:19.040779 kernel: pstore: Using crash dump compression: deflate
Feb 9 19:00:19.040796 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:00:19.040810 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:00:19.040823 kernel: ima: No architecture policies found
Feb 9 19:00:19.040836 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 19:00:19.040850 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 19:00:19.040863 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 19:00:19.040877 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 19:00:19.040891 kernel: Run /init as init process
Feb 9 19:00:19.040904 kernel: with arguments:
Feb 9 19:00:19.040918 kernel: /init
Feb 9 19:00:19.040933 kernel: with environment:
Feb 9 19:00:19.040952 kernel: HOME=/
Feb 9 19:00:19.040965 kernel: TERM=linux
Feb 9 19:00:19.040979 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:00:19.040996 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:00:19.041012 systemd[1]: Detected virtualization microsoft.
Feb 9 19:00:19.041028 systemd[1]: Detected architecture x86-64.
Feb 9 19:00:19.041062 systemd[1]: Running in initrd.
Feb 9 19:00:19.041076 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:00:19.041089 systemd[1]: Hostname set to .
Feb 9 19:00:19.041103 systemd[1]: Initializing machine ID from random generator.
Feb 9 19:00:19.041123 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:00:19.041137 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:00:19.041152 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:00:19.041166 systemd[1]: Reached target paths.target.
Feb 9 19:00:19.041179 systemd[1]: Reached target slices.target.
Feb 9 19:00:19.041196 systemd[1]: Reached target swap.target.
Feb 9 19:00:19.041210 systemd[1]: Reached target timers.target.
Feb 9 19:00:19.041225 systemd[1]: Listening on iscsid.socket.
Feb 9 19:00:19.041245 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:00:19.041260 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:00:19.041275 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:00:19.041289 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:00:19.041306 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:00:19.041320 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:00:19.041335 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:00:19.041349 systemd[1]: Reached target sockets.target.
Feb 9 19:00:19.041364 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:00:19.041378 systemd[1]: Finished network-cleanup.service.
Feb 9 19:00:19.041392 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:00:19.041406 systemd[1]: Starting systemd-journald.service...
Feb 9 19:00:19.041427 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:00:19.041444 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:00:19.041458 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:00:19.041475 systemd-journald[183]: Journal started
Feb 9 19:00:19.041545 systemd-journald[183]: Runtime Journal (/run/log/journal/9a4d0b5f8cfe408aa884790ffd219253) is 8.0M, max 159.0M, 151.0M free.
Feb 9 19:00:19.013780 systemd-modules-load[184]: Inserted module 'overlay'
Feb 9 19:00:19.050346 systemd[1]: Started systemd-journald.service.
Feb 9 19:00:19.057730 systemd-resolved[185]: Positive Trust Anchors:
Feb 9 19:00:19.057746 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:00:19.057788 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:00:19.060448 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 9 19:00:19.119974 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:00:19.120023 kernel: Bridge firewalling registered
Feb 9 19:00:19.120040 kernel: audit: type=1130 audit(1707505219.093:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.120072 kernel: audit: type=1130 audit(1707505219.105:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.093444 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 9 19:00:19.123813 kernel: SCSI subsystem initialized
Feb 9 19:00:19.103535 systemd[1]: Started systemd-resolved.service.
Feb 9 19:00:19.105942 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:00:19.127716 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:00:19.131448 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:00:19.154990 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:00:19.155039 kernel: audit: type=1130 audit(1707505219.127:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.155060 kernel: device-mapper: uevent: version 1.0.3
Feb 9 19:00:19.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.155319 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:00:19.188865 kernel: audit: type=1130 audit(1707505219.131:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.188902 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 19:00:19.188918 kernel: audit: type=1130 audit(1707505219.154:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.191351 systemd-modules-load[184]: Inserted module 'dm_multipath'
Feb 9 19:00:19.194303 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:00:19.196161 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:00:19.205399 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:00:19.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.210344 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:00:19.225448 kernel: audit: type=1130 audit(1707505219.209:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.225639 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:00:19.241334 kernel: audit: type=1130 audit(1707505219.225:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.242395 systemd[1]: Starting dracut-cmdline.service...
Feb 9 19:00:19.257152 kernel: audit: type=1130 audit(1707505219.241:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.257228 dracut-cmdline[203]: dracut-dracut-053
Feb 9 19:00:19.257228 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA
Feb 9 19:00:19.257228 dracut-cmdline[203]: BEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:00:19.260973 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:00:19.289511 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:00:19.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.308068 kernel: audit: type=1130 audit(1707505219.293:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.327065 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 19:00:19.340067 kernel: iscsi: registered transport (tcp)
Feb 9 19:00:19.364599 kernel: iscsi: registered transport (qla4xxx)
Feb 9 19:00:19.364669 kernel: QLogic iSCSI HBA Driver
Feb 9 19:00:19.393859 systemd[1]: Finished dracut-cmdline.service.
Feb 9 19:00:19.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.397057 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 19:00:19.448071 kernel: raid6: avx512x4 gen() 18565 MB/s
Feb 9 19:00:19.468061 kernel: raid6: avx512x4 xor() 8208 MB/s
Feb 9 19:00:19.488057 kernel: raid6: avx512x2 gen() 18575 MB/s
Feb 9 19:00:19.509063 kernel: raid6: avx512x2 xor() 29955 MB/s
Feb 9 19:00:19.529057 kernel: raid6: avx512x1 gen() 18571 MB/s
Feb 9 19:00:19.549057 kernel: raid6: avx512x1 xor() 27028 MB/s
Feb 9 19:00:19.569071 kernel: raid6: avx2x4 gen() 18490 MB/s
Feb 9 19:00:19.589057 kernel: raid6: avx2x4 xor() 8074 MB/s
Feb 9 19:00:19.609054 kernel: raid6: avx2x2 gen() 18520 MB/s
Feb 9 19:00:19.629062 kernel: raid6: avx2x2 xor() 22386 MB/s
Feb 9 19:00:19.648056 kernel: raid6: avx2x1 gen() 14181 MB/s
Feb 9 19:00:19.667057 kernel: raid6: avx2x1 xor() 19524 MB/s
Feb 9 19:00:19.687059 kernel: raid6: sse2x4 gen() 11775 MB/s
Feb 9 19:00:19.707063 kernel: raid6: sse2x4 xor() 7177 MB/s
Feb 9 19:00:19.726058 kernel: raid6: sse2x2 gen() 12987 MB/s
Feb 9 19:00:19.746055 kernel: raid6: sse2x2 xor() 7478 MB/s
Feb 9 19:00:19.765056 kernel: raid6: sse2x1 gen() 11709 MB/s
Feb 9 19:00:19.787869 kernel: raid6: sse2x1 xor() 5958 MB/s
Feb 9 19:00:19.787916 kernel: raid6: using algorithm avx512x2 gen() 18575 MB/s
Feb 9 19:00:19.787931 kernel: raid6: .... xor() 29955 MB/s, rmw enabled
Feb 9 19:00:19.791213 kernel: raid6: using avx512x2 recovery algorithm
Feb 9 19:00:19.809068 kernel: xor: automatically using best checksumming function avx
Feb 9 19:00:19.904073 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 9 19:00:19.911859 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 19:00:19.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.915000 audit: BPF prog-id=7 op=LOAD
Feb 9 19:00:19.915000 audit: BPF prog-id=8 op=LOAD
Feb 9 19:00:19.915956 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:00:19.930088 systemd-udevd[383]: Using default interface naming scheme 'v252'.
Feb 9 19:00:19.936782 systemd[1]: Started systemd-udevd.service.
Feb 9 19:00:19.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.939886 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 19:00:19.960304 dracut-pre-trigger[396]: rd.md=0: removing MD RAID activation
Feb 9 19:00:19.990516 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 19:00:19.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:19.996243 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:00:20.029115 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:00:20.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:20.075068 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 19:00:20.110067 kernel: hv_vmbus: Vmbus version:5.2
Feb 9 19:00:20.115061 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 9 19:00:20.127062 kernel: AES CTR mode by8 optimization enabled
Feb 9 19:00:20.133072 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 9 19:00:20.137062 kernel: hv_vmbus: registering driver hv_storvsc
Feb 9 19:00:20.148535 kernel: scsi host1: storvsc_host_t
Feb 9 19:00:20.148710 kernel: scsi host0: storvsc_host_t
Feb 9 19:00:20.148834 kernel: hv_vmbus: registering driver hv_netvsc
Feb 9 19:00:20.165593 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 9 19:00:20.165670 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 9 19:00:20.166071 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 9 19:00:20.175058 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 19:00:20.196060 kernel: hv_vmbus: registering driver hid_hyperv
Feb 9 19:00:20.196098 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 9 19:00:20.205466 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 9 19:00:20.205670 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 9 19:00:20.205788 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 9 19:00:20.217363 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 9 19:00:20.217569 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 9 19:00:20.224068 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 9 19:00:20.229065 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:00:20.233062 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 9 19:00:20.245330 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 9 19:00:20.245541 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 9 19:00:20.254034 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 9 19:00:20.347503 kernel: hv_netvsc 000d3add-dea7-000d-3add-dea7000d3add eth0: VF slot 1 added
Feb 9 19:00:20.357063 kernel: hv_vmbus: registering driver hv_pci
Feb 9 19:00:20.363066 kernel: hv_pci 82e571b4-e7e8-46bc-b884-996f4af1da65: PCI VMBus probing: Using version 0x10004
Feb 9 19:00:20.375107 kernel: hv_pci 82e571b4-e7e8-46bc-b884-996f4af1da65: PCI host bridge to bus e7e8:00
Feb 9 19:00:20.375261 kernel: pci_bus e7e8:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Feb 9 19:00:20.375393 kernel: pci_bus e7e8:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 9 19:00:20.385311 kernel: pci e7e8:00:02.0: [15b3:1016] type 00 class 0x020000
Feb 9 19:00:20.395023 kernel: pci e7e8:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 9 19:00:20.412063 kernel: pci e7e8:00:02.0: enabling Extended Tags
Feb 9 19:00:20.430069 kernel: pci e7e8:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e7e8:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 9 19:00:20.438962 kernel: pci_bus e7e8:00: busn_res: [bus 00-ff] end is updated to 00
Feb 9 19:00:20.439166 kernel: pci e7e8:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 9 19:00:20.533071 kernel: mlx5_core e7e8:00:02.0: firmware version: 14.30.1224
Feb 9 19:00:20.691072 kernel: mlx5_core e7e8:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb 9 19:00:20.737203 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 19:00:20.767064 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (442)
Feb 9 19:00:20.780137 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:00:20.844548 kernel: mlx5_core e7e8:00:02.0: Supported tc offload range - chains: 1, prios: 1
Feb 9 19:00:20.844813 kernel: mlx5_core e7e8:00:02.0: mlx5e_tc_post_act_init:40:(pid 16): firmware level support is missing
Feb 9 19:00:20.856256 kernel: hv_netvsc 000d3add-dea7-000d-3add-dea7000d3add eth0: VF registering: eth1
Feb 9 19:00:20.856419 kernel: mlx5_core e7e8:00:02.0 eth1: joined to eth0
Feb 9 19:00:20.868061 kernel: mlx5_core e7e8:00:02.0 enP59368s1: renamed from eth1
Feb 9 19:00:20.959310 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 19:00:20.996288 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 19:00:21.002514 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 19:00:21.009679 systemd[1]: Starting disk-uuid.service...
Feb 9 19:00:21.023074 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:00:21.031074 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:00:22.038065 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:00:22.038335 disk-uuid[563]: The operation has completed successfully.
Feb 9 19:00:22.114088 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 19:00:22.114186 systemd[1]: Finished disk-uuid.service.
Feb 9 19:00:22.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:22.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:22.125012 systemd[1]: Starting verity-setup.service...
Feb 9 19:00:22.162066 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 9 19:00:22.511060 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 19:00:22.517176 systemd[1]: Finished verity-setup.service.
Feb 9 19:00:22.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:22.521683 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 19:00:22.596072 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 19:00:22.596151 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 19:00:22.600078 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 19:00:22.604203 systemd[1]: Starting ignition-setup.service...
Feb 9 19:00:22.609895 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 19:00:22.633628 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:00:22.633673 kernel: BTRFS info (device sda6): using free space tree
Feb 9 19:00:22.633693 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 19:00:22.682117 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 19:00:22.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:22.686000 audit: BPF prog-id=9 op=LOAD
Feb 9 19:00:22.687336 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:00:22.715316 systemd-networkd[801]: lo: Link UP
Feb 9 19:00:22.715326 systemd-networkd[801]: lo: Gained carrier
Feb 9 19:00:22.719791 systemd-networkd[801]: Enumeration completed
Feb 9 19:00:22.721517 systemd-networkd[801]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:00:22.721646 systemd[1]: Started systemd-networkd.service.
Feb 9 19:00:22.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:22.729156 systemd[1]: Reached target network.target.
Feb 9 19:00:22.733125 systemd[1]: Starting iscsiuio.service...
Feb 9 19:00:22.742832 systemd[1]: Started iscsiuio.service.
Feb 9 19:00:22.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:22.749684 systemd[1]: Starting iscsid.service...
Feb 9 19:00:22.755337 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 19:00:22.758499 iscsid[813]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:00:22.758499 iscsid[813]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 19:00:22.758499 iscsid[813]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 19:00:22.758499 iscsid[813]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 19:00:22.779950 iscsid[813]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:00:22.779950 iscsid[813]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 19:00:22.790215 systemd[1]: Started iscsid.service.
Feb 9 19:00:22.796922 kernel: mlx5_core e7e8:00:02.0 enP59368s1: Link up
Feb 9 19:00:22.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:22.793092 systemd[1]: Starting dracut-initqueue.service...
Feb 9 19:00:22.803851 systemd[1]: Finished dracut-initqueue.service.
Feb 9 19:00:22.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:22.806025 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 19:00:22.809266 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:00:22.811092 systemd[1]: Reached target remote-fs.target.
Feb 9 19:00:22.814655 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 19:00:22.824147 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 19:00:22.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:22.833611 systemd[1]: Finished ignition-setup.service.
Feb 9 19:00:22.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:22.838406 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 19:00:22.866185 kernel: hv_netvsc 000d3add-dea7-000d-3add-dea7000d3add eth0: Data path switched to VF: enP59368s1
Feb 9 19:00:22.866450 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:00:22.866562 systemd-networkd[801]: enP59368s1: Link UP
Feb 9 19:00:22.866701 systemd-networkd[801]: eth0: Link UP
Feb 9 19:00:22.866902 systemd-networkd[801]: eth0: Gained carrier
Feb 9 19:00:22.873026 systemd-networkd[801]: enP59368s1: Gained carrier
Feb 9 19:00:22.917143 systemd-networkd[801]: eth0: DHCPv4 address 10.200.8.38/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 9 19:00:24.689285 systemd-networkd[801]: eth0: Gained IPv6LL
Feb 9 19:00:26.334735 ignition[828]: Ignition 2.14.0
Feb 9 19:00:26.334753 ignition[828]: Stage: fetch-offline
Feb 9 19:00:26.334847 ignition[828]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:00:26.334897 ignition[828]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:00:26.506191 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:00:26.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:26.507574 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 19:00:26.531240 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 9 19:00:26.531273 kernel: audit: type=1130 audit(1707505226.511:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:26.506375 ignition[828]: parsed url from cmdline: ""
Feb 9 19:00:26.513207 systemd[1]: Starting ignition-fetch.service...
Feb 9 19:00:26.506379 ignition[828]: no config URL provided
Feb 9 19:00:26.506385 ignition[828]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:00:26.506393 ignition[828]: no config at "/usr/lib/ignition/user.ign"
Feb 9 19:00:26.506399 ignition[828]: failed to fetch config: resource requires networking
Feb 9 19:00:26.506507 ignition[828]: Ignition finished successfully
Feb 9 19:00:26.521959 ignition[834]: Ignition 2.14.0
Feb 9 19:00:26.521966 ignition[834]: Stage: fetch
Feb 9 19:00:26.522088 ignition[834]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:00:26.522112 ignition[834]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:00:26.540253 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:00:26.540425 ignition[834]: parsed url from cmdline: ""
Feb 9 19:00:26.540430 ignition[834]: no config URL provided
Feb 9 19:00:26.540447 ignition[834]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:00:26.540457 ignition[834]: no config at "/usr/lib/ignition/user.ign"
Feb 9 19:00:26.540488 ignition[834]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 9 19:00:26.636403 ignition[834]: GET result: OK
Feb 9 19:00:26.636650 ignition[834]: config has been read from IMDS userdata
Feb 9 19:00:26.636708 ignition[834]: parsing config with SHA512: d06f9d0903d0f6d44fd2a628603351569fceeffdf88c20da29723b408c56e821b96002b0fdb347a1474288c83df66782882f1d477e02515b418117b37931bef3
Feb 9 19:00:26.671298 unknown[834]: fetched base config from "system"
Feb 9 19:00:26.671310 unknown[834]: fetched base config from "system"
Feb 9 19:00:26.671989 ignition[834]: fetch: fetch complete
Feb 9 19:00:26.671317 unknown[834]: fetched user config from "azure"
Feb 9 19:00:26.696091 kernel: audit: type=1130 audit(1707505226.679:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:26.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:26.671995 ignition[834]: fetch: fetch passed
Feb 9 19:00:26.676376 systemd[1]: Finished ignition-fetch.service.
Feb 9 19:00:26.672032 ignition[834]: Ignition finished successfully
Feb 9 19:00:26.681060 systemd[1]: Starting ignition-kargs.service...
Feb 9 19:00:26.700146 ignition[840]: Ignition 2.14.0
Feb 9 19:00:26.700159 ignition[840]: Stage: kargs
Feb 9 19:00:26.700330 ignition[840]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:00:26.700361 ignition[840]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:00:26.702859 ignition[840]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:00:26.706780 ignition[840]: kargs: kargs passed
Feb 9 19:00:26.706824 ignition[840]: Ignition finished successfully
Feb 9 19:00:26.717124 systemd[1]: Finished ignition-kargs.service.
Feb 9 19:00:26.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:26.729811 ignition[846]: Ignition 2.14.0
Feb 9 19:00:26.736971 kernel: audit: type=1130 audit(1707505226.721:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:26.722183 systemd[1]: Starting ignition-disks.service...
Feb 9 19:00:26.729817 ignition[846]: Stage: disks
Feb 9 19:00:26.729927 ignition[846]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:00:26.729951 ignition[846]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:00:26.735688 ignition[846]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:00:26.748126 ignition[846]: disks: disks passed
Feb 9 19:00:26.750092 ignition[846]: Ignition finished successfully
Feb 9 19:00:26.752607 systemd[1]: Finished ignition-disks.service.
Feb 9 19:00:26.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:26.756708 systemd[1]: Reached target initrd-root-device.target.
Feb 9 19:00:26.772915 kernel: audit: type=1130 audit(1707505226.756:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:26.772922 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:00:26.776884 systemd[1]: Reached target local-fs.target.
Feb 9 19:00:26.780516 systemd[1]: Reached target sysinit.target.
Feb 9 19:00:26.784237 systemd[1]: Reached target basic.target.
Feb 9 19:00:26.788727 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 19:00:26.858102 systemd-fsck[854]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks
Feb 9 19:00:26.863033 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 19:00:26.880754 kernel: audit: type=1130 audit(1707505226.865:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:26.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:26.881040 systemd[1]: Mounting sysroot.mount...
Feb 9 19:00:26.896983 systemd[1]: Mounted sysroot.mount.
Feb 9 19:00:26.902419 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 19:00:26.898930 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 19:00:26.937398 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 19:00:26.940687 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 9 19:00:26.946143 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 19:00:26.946186 systemd[1]: Reached target ignition-diskful.target.
Feb 9 19:00:26.955440 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 19:00:26.988983 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:00:26.994175 systemd[1]: Starting initrd-setup-root.service...
Feb 9 19:00:27.006087 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (865) Feb 9 19:00:27.018367 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:00:27.018397 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:00:27.018414 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:00:27.022781 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:00:27.027191 initrd-setup-root[870]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:00:27.039586 initrd-setup-root[896]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:00:27.046219 initrd-setup-root[904]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:00:27.051826 initrd-setup-root[912]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:00:27.547536 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:00:27.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.550793 systemd[1]: Starting ignition-mount.service... Feb 9 19:00:27.571727 kernel: audit: type=1130 audit(1707505227.549:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.571742 systemd[1]: Starting sysroot-boot.service... Feb 9 19:00:27.578927 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 19:00:27.581497 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 19:00:27.597729 systemd[1]: Finished sysroot-boot.service. Feb 9 19:00:27.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.615061 kernel: audit: type=1130 audit(1707505227.599:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.615642 ignition[933]: INFO : Ignition 2.14.0 Feb 9 19:00:27.617940 ignition[933]: INFO : Stage: mount Feb 9 19:00:27.620073 ignition[933]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:27.620073 ignition[933]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:27.631711 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:27.637895 ignition[933]: INFO : mount: mount passed Feb 9 19:00:27.640131 ignition[933]: INFO : Ignition finished successfully Feb 9 19:00:27.643016 systemd[1]: Finished ignition-mount.service. Feb 9 19:00:27.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.658079 kernel: audit: type=1130 audit(1707505227.644:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:28.694056 coreos-metadata[864]: Feb 09 19:00:28.693 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 19:00:28.711054 coreos-metadata[864]: Feb 09 19:00:28.710 INFO Fetch successful Feb 9 19:00:28.748973 coreos-metadata[864]: Feb 09 19:00:28.748 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 9 19:00:28.769447 coreos-metadata[864]: Feb 09 19:00:28.769 INFO Fetch successful Feb 9 19:00:28.788521 coreos-metadata[864]: Feb 09 19:00:28.788 INFO wrote hostname ci-3510.3.2-a-c71e69a144 to /sysroot/etc/hostname Feb 9 19:00:28.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:28.790481 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 9 19:00:28.811422 kernel: audit: type=1130 audit(1707505228.794:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:28.796157 systemd[1]: Starting ignition-files.service... Feb 9 19:00:28.814668 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:00:28.828064 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (944) Feb 9 19:00:28.836804 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:00:28.836842 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:00:28.836853 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:00:28.845218 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:00:28.859314 ignition[963]: INFO : Ignition 2.14.0 Feb 9 19:00:28.859314 ignition[963]: INFO : Stage: files Feb 9 19:00:28.865268 ignition[963]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:28.865268 ignition[963]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:28.875711 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:28.915790 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:00:28.919267 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:00:28.919267 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:00:28.975282 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:00:28.978863 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:00:28.982631 unknown[963]: wrote ssh authorized keys file for user: core Feb 9 19:00:28.985206 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:00:28.988915 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:00:28.993395 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 19:00:34.481308 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 19:00:34.583687 ignition[963]: INFO : files: createFilesystemsFiles: 
createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:00:34.589114 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:00:34.589114 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 19:00:35.196380 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 19:00:35.338594 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 19:00:35.347706 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:00:35.347706 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:00:35.347706 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:00:35.347706 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:00:35.347706 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 19:00:35.834125 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 19:00:35.975370 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 19:00:35.983003 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:00:35.983003 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:00:35.983003 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 9 19:00:36.578462 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 19:00:37.042760 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:00:37.042760 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:00:37.053597 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 19:00:37.370060 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 19:01:00.685680 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 19:01:00.693963 
ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:01:00.693963 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:01:00.693963 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 19:01:01.434809 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 9 19:01:53.320296 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 19:01:53.320296 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:01:53.339827 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:01:53.339827 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:01:53.339827 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:01:53.339827 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 9 19:01:53.466093 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 9 19:01:53.654586 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(b): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 9 19:01:53.669512 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:01:53.669512 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:01:53.669512 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:01:53.669512 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:01:53.669512 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:01:53.669512 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:01:53.669512 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:01:53.669512 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:01:53.669512 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:01:53.669512 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:01:53.669512 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] 
writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:01:53.669512 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:01:53.669512 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(11): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:01:53.740174 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (965) Feb 9 19:01:53.733584 systemd[1]: mnt-oem3742830835.mount: Deactivated successfully. Feb 9 19:01:53.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:53.754612 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1427090907" Feb 9 19:01:53.754612 ignition[963]: CRITICAL : files: createFilesystemsFiles: createFiles: op(11): op(12): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1427090907": device or resource busy Feb 9 19:01:53.754612 ignition[963]: ERROR : files: createFilesystemsFiles: createFiles: op(11): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1427090907", trying btrfs: device or resource busy Feb 9 19:01:53.754612 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1427090907" Feb 9 19:01:53.754612 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1427090907" Feb 9 19:01:53.754612 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [started] unmounting "/mnt/oem1427090907" Feb 9 19:01:53.754612 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [finished] unmounting "/mnt/oem1427090907" Feb 9 19:01:53.754612 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:01:53.754612 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(15): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 19:01:53.754612 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(15): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:01:53.754612 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3742830835" Feb 9 19:01:53.754612 ignition[963]: CRITICAL : files: createFilesystemsFiles: createFiles: op(15): op(16): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3742830835": device or resource busy Feb 9 19:01:53.754612 ignition[963]: ERROR : files: createFilesystemsFiles: createFiles: op(15): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3742830835", trying btrfs: device or resource busy Feb 9 19:01:53.754612 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(17): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3742830835" Feb 9 19:01:53.871759 kernel: audit: type=1130 audit(1707505313.741:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:01:53.871792 kernel: audit: type=1130 audit(1707505313.780:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:53.871809 kernel: audit: type=1131 audit(1707505313.780:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:53.871825 kernel: audit: type=1130 audit(1707505313.827:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:53.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:53.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:53.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:53.739904 systemd[1]: Finished ignition-files.service. Feb 9 19:01:53.873888 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(17): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3742830835" Feb 9 19:01:53.873888 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(18): [started] unmounting "/mnt/oem3742830835" Feb 9 19:01:53.873888 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(18): [finished] unmounting "/mnt/oem3742830835" Feb 9 19:01:53.873888 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(15): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 19:01:53.873888 ignition[963]: INFO : files: op(19): [started] processing unit "waagent.service" Feb 9 19:01:53.873888 ignition[963]: INFO : files: op(19): [finished] processing unit "waagent.service" Feb 9 19:01:53.873888 ignition[963]: INFO : files: op(1a): [started] processing unit "nvidia.service" Feb 9 19:01:53.873888 ignition[963]: INFO : files: op(1a): [finished] processing unit "nvidia.service" Feb 9 19:01:53.873888 ignition[963]: INFO : files: op(1b): [started] processing unit "prepare-helm.service" Feb 9 19:01:53.873888 ignition[963]: INFO : files: op(1b): op(1c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:01:53.873888 ignition[963]: INFO : files: op(1b): op(1c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:01:53.873888 ignition[963]: INFO : files: op(1b): [finished] processing unit "prepare-helm.service" Feb 9 19:01:53.873888 ignition[963]: INFO : files: op(1d): [started] processing unit "containerd.service" Feb 9 19:01:53.873888 ignition[963]: INFO : files: op(1d): op(1e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:01:53.873888 ignition[963]: INFO : files: op(1d): op(1e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at 
"/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:01:53.873888 ignition[963]: INFO : files: op(1d): [finished] processing unit "containerd.service" Feb 9 19:01:53.873888 ignition[963]: INFO : files: op(1f): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:01:53.873888 ignition[963]: INFO : files: op(1f): op(20): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:01:53.998871 kernel: audit: type=1130 audit(1707505313.902:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:53.998913 kernel: audit: type=1131 audit(1707505313.902:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:53.998931 kernel: audit: type=1130 audit(1707505313.951:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:53.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:53.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:53.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:53.744503 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Feb 9 19:01:54.005156 ignition[963]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:01:54.005156 ignition[963]: INFO : files: op(1f): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:01:54.005156 ignition[963]: INFO : files: op(21): [started] processing unit "prepare-critools.service" Feb 9 19:01:54.005156 ignition[963]: INFO : files: op(21): op(22): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:01:54.005156 ignition[963]: INFO : files: op(21): op(22): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:01:54.005156 ignition[963]: INFO : files: op(21): [finished] processing unit "prepare-critools.service" Feb 9 19:01:54.005156 ignition[963]: INFO : files: op(23): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:01:54.005156 ignition[963]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:01:54.005156 ignition[963]: INFO : files: op(24): [started] setting preset to enabled for "waagent.service" Feb 9 19:01:54.005156 ignition[963]: INFO : files: op(24): [finished] setting preset to enabled for "waagent.service" Feb 9 19:01:54.005156 ignition[963]: INFO : files: op(25): [started] setting preset to enabled for "nvidia.service" Feb 9 19:01:54.005156 ignition[963]: INFO : files: op(25): [finished] setting preset to enabled for "nvidia.service" Feb 9 19:01:54.005156 ignition[963]: INFO : files: op(26): [started] setting preset to enabled for "prepare-helm.service" Feb 9 19:01:54.005156 ignition[963]: INFO : files: op(26): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 19:01:54.005156 ignition[963]: INFO : files: op(27): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:01:54.005156 ignition[963]: INFO : files: op(27): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:01:54.005156 ignition[963]: INFO : files: createResultFile: createFiles: op(28): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:01:54.005156 ignition[963]: INFO : files: createResultFile: createFiles: op(28): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:01:54.005156 ignition[963]: INFO : files: files passed Feb 9 19:01:54.005156 ignition[963]: INFO : Ignition finished successfully Feb 9 19:01:54.106002 kernel: audit: type=1130 audit(1707505314.027:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.106033 kernel: audit: type=1131 audit(1707505314.027:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.106054 kernel: audit: type=1131 audit(1707505314.075:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:01:54.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:53.761800 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:01:54.106433 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:01:53.762639 systemd[1]: Starting ignition-quench.service... Feb 9 19:01:53.772355 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:01:53.772446 systemd[1]: Finished ignition-quench.service. Feb 9 19:01:53.825434 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:01:53.828143 systemd[1]: Reached target ignition-complete.target. Feb 9 19:01:53.879260 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:01:53.899744 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:01:53.899838 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:01:53.902555 systemd[1]: Reached target initrd-fs.target. Feb 9 19:01:53.926496 systemd[1]: Reached target initrd.target. Feb 9 19:01:53.931393 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:01:53.932329 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:01:53.946976 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:01:53.978945 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:01:54.021096 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:01:54.021185 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:01:54.042406 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:01:54.056620 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:01:54.064251 systemd[1]: Stopped target timers.target. Feb 9 19:01:54.069626 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:01:54.069689 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:01:54.075331 systemd[1]: Stopped target initrd.target. Feb 9 19:01:54.092373 systemd[1]: Stopped target basic.target. Feb 9 19:01:54.097642 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:01:54.189362 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:01:54.193775 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:01:54.198210 systemd[1]: Stopped target remote-fs.target. Feb 9 19:01:54.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:01:54.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.237420 iscsid[813]: iscsid shutting down. Feb 9 19:01:54.200553 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:01:54.201486 systemd[1]: Stopped target sysinit.target. Feb 9 19:01:54.243666 ignition[1001]: INFO : Ignition 2.14.0 Feb 9 19:01:54.243666 ignition[1001]: INFO : Stage: umount Feb 9 19:01:54.243666 ignition[1001]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:01:54.243666 ignition[1001]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:01:54.243666 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:01:54.201871 systemd[1]: Stopped target local-fs.target. Feb 9 19:01:54.202297 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:01:54.202724 systemd[1]: Stopped target swap.target. Feb 9 19:01:54.203178 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:01:54.203244 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:01:54.203659 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:01:54.204017 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:01:54.204060 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:01:54.204523 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:01:54.204555 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:01:54.204903 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:01:54.204935 systemd[1]: Stopped ignition-files.service. Feb 9 19:01:54.205344 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 19:01:54.205378 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 19:01:54.206481 systemd[1]: Stopping ignition-mount.service... Feb 9 19:01:54.208832 systemd[1]: Stopping iscsid.service... Feb 9 19:01:54.210405 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:01:54.210615 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:01:54.210678 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:01:54.211093 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
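
[Editorial sketch] Earlier, in the files stage, the createFiles operations downloaded release binaries (helm, crictl, the CNI plugins, kubeadm, kubelet, kubectl) and, where the config pins a checksum, logged "file matches expected sum of: ..." before writing the file under /sysroot. A minimal sketch of that fetch-and-verify pattern, using the kubeadm URL and SHA512 taken verbatim from the log (illustrative only, not Ignition's implementation):

    # Sketch: download a file and refuse to install it unless its SHA512
    # matches the sum pinned in the config, as the checks above do.
    import hashlib
    import urllib.request

    URL = "https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm"
    EXPECTED_SHA512 = (
        "1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051"
        "ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660"
    )

    with urllib.request.urlopen(URL) as resp:
        data = resp.read()
    actual = hashlib.sha512(data).hexdigest()
    if actual != EXPECTED_SHA512:
        raise ValueError("checksum mismatch: got " + actual)
    with open("/sysroot/opt/bin/kubeadm", "wb") as f:  # target path from the log
        f.write(data)
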
Feb 9 19:01:54.211134 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:01:54.212174 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 19:01:54.212306 systemd[1]: Stopped iscsid.service. Feb 9 19:01:54.213286 systemd[1]: Stopping iscsiuio.service... Feb 9 19:01:54.228005 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 19:01:54.228109 systemd[1]: Stopped iscsiuio.service. Feb 9 19:01:54.334613 ignition[1001]: INFO : umount: umount passed Feb 9 19:01:54.337071 ignition[1001]: INFO : Ignition finished successfully Feb 9 19:01:54.336809 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:01:54.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.336935 systemd[1]: Stopped ignition-mount.service. Feb 9 19:01:54.341774 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:01:54.341828 systemd[1]: Stopped ignition-disks.service. Feb 9 19:01:54.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.345617 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:01:54.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.345668 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:01:54.347365 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 19:01:54.347410 systemd[1]: Stopped ignition-fetch.service. Feb 9 19:01:54.353317 systemd[1]: Stopped target network.target. Feb 9 19:01:54.357191 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:01:54.357250 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:01:54.359425 systemd[1]: Stopped target paths.target. Feb 9 19:01:54.363481 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:01:54.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.367087 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:01:54.374120 systemd[1]: Stopped target slices.target. Feb 9 19:01:54.376059 systemd[1]: Stopped target sockets.target. Feb 9 19:01:54.380445 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:01:54.380495 systemd[1]: Closed iscsid.socket. Feb 9 19:01:54.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.384399 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:01:54.384441 systemd[1]: Closed iscsiuio.socket. 
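
[Editorial sketch] The audit[1] records threaded through this teardown share one layout: a record type (SERVICE_START, SERVICE_STOP), key=value fields, and a quoted msg block naming the unit and ending in res=success or res=failed. A small parser sketch for exactly those records; the field names come from the lines in this log, everything else is illustrative:

    # Sketch: extract (type, unit, result) from the SERVICE_START/SERVICE_STOP
    # audit records in this log.
    import re

    AUDIT_RE = re.compile(
        r"audit\[(?P<pid>\d+)\]: (?P<type>SERVICE_START|SERVICE_STOP)"
        r" .*?msg='unit=(?P<unit>\S+) .*res=(?P<res>\w+)'"
    )

    sample = ("Feb 9 19:01:54.341000 audit[1]: SERVICE_STOP pid=1 uid=0"
              " auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount"
              " comm=\"systemd\" exe=\"/usr/lib/systemd/systemd\" hostname=? addr=?"
              " terminal=? res=success'")

    m = AUDIT_RE.search(sample)
    if m:
        print(m.group("type"), m.group("unit"), m.group("res"))
        # -> SERVICE_STOP ignition-mount success
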
Feb 9 19:01:54.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.387913 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:01:54.387964 systemd[1]: Stopped ignition-setup.service. Feb 9 19:01:54.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.393028 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:01:54.420000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:01:54.396952 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:01:54.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.399473 systemd-networkd[801]: eth0: DHCPv6 lease lost Feb 9 19:01:54.428000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:01:54.401762 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:01:54.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.402298 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:01:54.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.402424 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:01:54.410787 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:01:54.410882 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:01:54.417103 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:01:54.417195 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:01:54.420953 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:01:54.420990 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:01:54.424771 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:01:54.424821 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:01:54.429761 systemd[1]: Stopping network-cleanup.service... Feb 9 19:01:54.432865 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:01:54.432922 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:01:54.437500 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:01:54.437555 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:01:54.443662 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:01:54.443988 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:01:54.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.476767 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:01:54.481638 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:01:54.485478 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:01:54.487994 systemd[1]: Stopped systemd-udevd.service. 
Feb 9 19:01:54.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.492747 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:01:54.492823 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:01:54.497670 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:01:54.497714 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:01:54.505956 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:01:54.506015 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:01:54.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.512133 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:01:54.512187 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:01:54.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.518720 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:01:54.518768 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:01:54.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.525864 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:01:54.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.528179 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 19:01:54.528241 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 19:01:54.530983 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:01:54.531040 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:01:54.535414 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:01:54.535465 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:01:54.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.551475 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:01:54.554143 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:01:54.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:01:54.567062 kernel: hv_netvsc 000d3add-dea7-000d-3add-dea7000d3add eth0: Data path switched from VF: enP59368s1 Feb 9 19:01:54.585453 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:01:54.585577 systemd[1]: Stopped network-cleanup.service. Feb 9 19:01:54.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.592872 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:01:54.598434 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:01:54.605295 systemd[1]: Switching root. Feb 9 19:01:54.606000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:01:54.607000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:01:54.607000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:01:54.607000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:01:54.607000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:01:54.628464 systemd-journald[183]: Journal stopped Feb 9 19:02:08.883850 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Feb 9 19:02:08.883880 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:02:08.883894 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 19:02:08.883904 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:02:08.883913 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:02:08.883921 kernel: SELinux: policy capability open_perms=1 Feb 9 19:02:08.883935 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:02:08.883946 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:02:08.883955 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:02:08.883963 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:02:08.883973 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:02:08.883983 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:02:08.883993 systemd[1]: Successfully loaded SELinux policy in 239.974ms. Feb 9 19:02:08.884004 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.521ms. Feb 9 19:02:08.884018 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:02:08.884031 systemd[1]: Detected virtualization microsoft. Feb 9 19:02:08.884040 systemd[1]: Detected architecture x86-64. Feb 9 19:02:08.884059 systemd[1]: Detected first boot. Feb 9 19:02:08.884074 systemd[1]: Hostname set to <ci-3510.3.2-a-c71e69a144>. Feb 9 19:02:08.884084 systemd[1]: Initializing machine ID from random generator. Feb 9 19:02:08.884095 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 19:02:08.884107 kernel: kauditd_printk_skb: 42 callbacks suppressed Feb 9 19:02:08.884117 kernel: audit: type=1400 audit(1707505320.121:90): avc: denied { associate } for pid=1051 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:02:08.884131 kernel: audit: type=1300 audit(1707505320.121:90): arch=c000003e syscall=188 success=yes exit=0 a0=c00014f672 a1=c0000d0af8 a2=c0000d8a00 a3=32 items=0 ppid=1034 pid=1051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:08.884145 kernel: audit: type=1327 audit(1707505320.121:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:02:08.884155 kernel: audit: type=1400 audit(1707505320.129:91): avc: denied { associate } for pid=1051 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 19:02:08.884166 kernel: audit: type=1300 audit(1707505320.129:91): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014f749 a2=1ed a3=0 items=2 ppid=1034 pid=1051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:08.884176 kernel: audit: type=1307 audit(1707505320.129:91): cwd="/" Feb 9 19:02:08.884188 kernel: audit: type=1302 audit(1707505320.129:91): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:08.884197 kernel: audit: type=1302 audit(1707505320.129:91): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:08.884210 kernel: audit: type=1327 audit(1707505320.129:91): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:02:08.884221 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:02:08.884231 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:02:08.884242 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:02:08.884254 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:02:08.884265 systemd[1]: Queued start job for default target multi-user.target. 
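
[Editorial sketch] The proctitle values in the torcx-generator audit records above are hex-encoded because the recorded command line contains NUL bytes separating argv entries. Decoding the string from the log recovers the generator invocation; a sketch:

    # Sketch: decode the hex-encoded proctitle from the audit records above.
    # argv entries are NUL-separated in the raw value.
    hex_proctitle = "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61"

    argv = bytes.fromhex(hex_proctitle).split(b"\x00")
    print([a.decode() for a in argv])
    # -> ['/usr/lib/systemd/system-generators/torcx-generator',
    #     '/run/systemd/generator', '/run/systemd/generator.early',
    #     '/run/systemd/generator.la']   # final argument is cut off in the log
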
Feb 9 19:02:08.884275 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:02:08.884289 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:02:08.884300 systemd[1]: Created slice system-getty.slice. Feb 9 19:02:08.884312 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:02:08.884325 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:02:08.884337 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:02:08.884350 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:02:08.884360 systemd[1]: Created slice user.slice. Feb 9 19:02:08.884370 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:02:08.884381 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:02:08.884395 systemd[1]: Set up automount boot.automount. Feb 9 19:02:08.884407 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:02:08.884419 systemd[1]: Reached target integritysetup.target. Feb 9 19:02:08.884430 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:02:08.884441 systemd[1]: Reached target remote-fs.target. Feb 9 19:02:08.884451 systemd[1]: Reached target slices.target. Feb 9 19:02:08.884461 systemd[1]: Reached target swap.target. Feb 9 19:02:08.884472 systemd[1]: Reached target torcx.target. Feb 9 19:02:08.884487 systemd[1]: Reached target veritysetup.target. Feb 9 19:02:08.884497 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:02:08.884509 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:02:08.884519 kernel: audit: type=1400 audit(1707505328.579:92): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:02:08.884531 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:02:08.884541 kernel: audit: type=1335 audit(1707505328.579:93): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 19:02:08.884553 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:02:08.884565 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:02:08.884577 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:02:08.884589 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:02:08.884600 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:02:08.884614 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:02:08.884627 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:02:08.884639 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:02:08.884651 systemd[1]: Mounting media.mount... Feb 9 19:02:08.884661 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:02:08.884674 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:02:08.884686 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:02:08.884697 systemd[1]: Mounting tmp.mount... Feb 9 19:02:08.884708 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:02:08.884719 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:02:08.884733 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:02:08.884743 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:02:08.884755 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:02:08.884766 systemd[1]: Starting modprobe@drm.service... 
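
[Editorial sketch] The slice names just above ("system-addon\x2dconfig.slice", "system-serial\x2dgetty.slice") show systemd's unit-name escaping: a literal "-" inside a name component is written as "\x2d", since "-" acts as a hierarchy separator in slice names. A simplified sketch of that escaping rule, covering only the case visible here; the real escaper (systemd-escape) also handles "/", leading ".", and arbitrary bytes:

    # Sketch: C-style \xXX escaping of unit-name components, as seen in the
    # "system-addon\x2dconfig.slice" names above. Simplified assumption: only
    # alphanumerics and ":_." pass through; everything else is hex-escaped.
    def escape_component(name: str) -> str:
        out = []
        for ch in name:
            if ch.isalnum() or ch in ":_.":
                out.append(ch)
            else:
                out.extend("\\x%02x" % b for b in ch.encode())
        return "".join(out)

    print("system-%s.slice" % escape_component("addon-config"))
    # -> system-addon\x2dconfig.slice
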
Feb 9 19:02:08.884779 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:02:08.884789 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:02:08.884799 systemd[1]: Starting modprobe@loop.service... Feb 9 19:02:08.884812 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:02:08.884825 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 19:02:08.884839 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 19:02:08.884850 systemd[1]: Starting systemd-journald.service... Feb 9 19:02:08.884863 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:02:08.884875 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:02:08.884887 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:02:08.884898 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:02:08.884911 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:02:08.884922 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:02:08.884936 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:02:08.884949 systemd[1]: Mounted media.mount. Feb 9 19:02:08.884962 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:02:08.884971 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:02:08.884981 kernel: audit: type=1305 audit(1707505328.877:94): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:02:08.884997 systemd-journald[1147]: Journal started Feb 9 19:02:08.897765 systemd-journald[1147]: Runtime Journal (/run/log/journal/57bdfa0b6aaf4730be639379384ac7cb) is 8.0M, max 159.0M, 151.0M free. Feb 9 19:02:08.897834 kernel: loop: module loaded Feb 9 19:02:08.897862 systemd[1]: Mounted tmp.mount. Feb 9 19:02:08.897879 kernel: audit: type=1300 audit(1707505328.877:94): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdaf4e3000 a2=4000 a3=7ffdaf4e309c items=0 ppid=1 pid=1147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:08.579000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 19:02:08.877000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:02:08.877000 audit[1147]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdaf4e3000 a2=4000 a3=7ffdaf4e309c items=0 ppid=1 pid=1147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:08.927073 systemd[1]: Started systemd-journald.service. Feb 9 19:02:08.927470 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:02:08.877000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:02:08.930602 systemd[1]: Finished flatcar-tmpfiles.service. 
Feb 9 19:02:08.936118 kernel: audit: type=1327 audit(1707505328.877:94): proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 19:02:08.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:08.938542 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 19:02:08.938753 systemd[1]: Finished modprobe@configfs.service.
Feb 9 19:02:08.952818 kernel: audit: type=1130 audit(1707505328.925:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:08.951810 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 19:02:08.952024 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 19:02:08.954618 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 19:02:08.954784 systemd[1]: Finished modprobe@drm.service.
Feb 9 19:02:08.959862 kernel: fuse: init (API version 7.34)
Feb 9 19:02:08.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:08.960701 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 19:02:08.960904 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 19:02:08.974503 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 19:02:08.974824 systemd[1]: Finished modprobe@loop.service.
Feb 9 19:02:08.976737 kernel: audit: type=1130 audit(1707505328.930:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:08.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:08.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:08.989840 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 19:02:08.990013 systemd[1]: Finished modprobe@fuse.service.
Feb 9 19:02:09.001903 kernel: audit: type=1130 audit(1707505328.938:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:09.001950 kernel: audit: type=1130 audit(1707505328.951:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:08.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:08.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:08.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:08.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:08.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:08.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:08.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:08.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:08.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:09.014811 kernel: audit: type=1131 audit(1707505328.951:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:09.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:09.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:09.017530 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:02:09.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:09.020014 systemd[1]: Finished systemd-network-generator.service.
Feb 9 19:02:09.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:09.022726 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 19:02:09.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:09.025336 systemd[1]: Reached target network-pre.target.
Feb 9 19:02:09.028685 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 19:02:09.032163 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 19:02:09.034328 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 19:02:09.072391 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 19:02:09.077001 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 19:02:09.079360 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 19:02:09.080905 systemd[1]: Starting systemd-random-seed.service...
Feb 9 19:02:09.083019 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 19:02:09.084559 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:02:09.088216 systemd[1]: Starting systemd-sysusers.service...
Feb 9 19:02:09.095673 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:02:09.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:09.098397 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 19:02:09.100963 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 19:02:09.104436 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 19:02:09.113396 systemd-journald[1147]: Time spent on flushing to /var/log/journal/57bdfa0b6aaf4730be639379384ac7cb is 30.811ms for 1141 entries.
Feb 9 19:02:09.113396 systemd-journald[1147]: System Journal (/var/log/journal/57bdfa0b6aaf4730be639379384ac7cb) is 8.0M, max 2.6G, 2.6G free.
Feb 9 19:02:09.222155 systemd-journald[1147]: Received client request to flush runtime journal.
Feb 9 19:02:09.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:09.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:09.145016 systemd[1]: Finished systemd-random-seed.service.
Feb 9 19:02:09.223060 udevadm[1203]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 9 19:02:09.148214 systemd[1]: Reached target first-boot-complete.target.
Feb 9 19:02:09.208235 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:02:09.223162 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 19:02:09.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:09.826713 systemd[1]: Finished systemd-sysusers.service.
Feb 9 19:02:09.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:09.831226 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:02:10.188360 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:02:10.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.424069 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 19:02:10.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.427974 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:02:10.447317 systemd-udevd[1214]: Using default interface naming scheme 'v252'.
Feb 9 19:02:10.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.668599 systemd[1]: Started systemd-udevd.service.
Feb 9 19:02:10.673548 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:02:10.707886 systemd[1]: Found device dev-ttyS0.device.
Feb 9 19:02:10.781088 kernel: hv_utils: Registering HyperV Utility Driver
Feb 9 19:02:10.781185 kernel: hv_vmbus: registering driver hv_utils
Feb 9 19:02:10.803189 kernel: hv_vmbus: registering driver hyperv_fb
Feb 9 19:02:10.814160 kernel: hv_utils: Shutdown IC version 3.2
Feb 9 19:02:10.814210 kernel: hv_utils: Heartbeat IC version 3.0
Feb 9 19:02:10.814236 kernel: hv_utils: TimeSync IC version 4.0
Feb 9 19:02:10.766000 audit[1225]: AVC avc: denied { confidentiality } for pid=1225 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:02:11.644898 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 19:02:11.644983 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 9 19:02:11.642522 systemd[1]: Starting systemd-userdbd.service...
Feb 9 19:02:11.654058 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 9 19:02:11.661589 kernel: Console: switching to colour dummy device 80x25
Feb 9 19:02:11.663047 kernel: hv_vmbus: registering driver hv_balloon
Feb 9 19:02:11.663122 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 9 19:02:11.681253 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 19:02:10.766000 audit[1225]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5567b9f039a0 a1=f884 a2=7fa463a58bc5 a3=5 items=12 ppid=1214 pid=1225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:02:10.766000 audit: CWD cwd="/"
Feb 9 19:02:10.766000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:10.766000 audit: PATH item=1 name=(null) inode=14964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:10.766000 audit: PATH item=2 name=(null) inode=14964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:10.766000 audit: PATH item=3 name=(null) inode=14965 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:10.766000 audit: PATH item=4 name=(null) inode=14964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:10.766000 audit: PATH item=5 name=(null) inode=14966 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:10.766000 audit: PATH item=6 name=(null) inode=14964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:10.766000 audit: PATH item=7 name=(null) inode=14967 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:10.766000 audit: PATH item=8 name=(null) inode=14964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:10.766000 audit: PATH item=9 name=(null) inode=14968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:10.766000 audit: PATH item=10 name=(null) inode=14964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:10.766000 audit: PATH item=11 name=(null) inode=14969 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:10.766000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 19:02:11.728090 systemd[1]: Started systemd-userdbd.service.
Feb 9 19:02:11.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:11.821050 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1235)
Feb 9 19:02:11.937767 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate).
Feb 9 19:02:11.969056 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Feb 9 19:02:12.025556 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 19:02:12.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:12.029880 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 19:02:12.114639 systemd-networkd[1226]: lo: Link UP
Feb 9 19:02:12.114653 systemd-networkd[1226]: lo: Gained carrier
Feb 9 19:02:12.115478 systemd-networkd[1226]: Enumeration completed
Feb 9 19:02:12.115657 systemd[1]: Started systemd-networkd.service.
Feb 9 19:02:12.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:12.119441 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 19:02:12.156859 systemd-networkd[1226]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:02:12.210056 kernel: mlx5_core e7e8:00:02.0 enP59368s1: Link up
Feb 9 19:02:12.248268 kernel: hv_netvsc 000d3add-dea7-000d-3add-dea7000d3add eth0: Data path switched to VF: enP59368s1
Feb 9 19:02:12.247639 systemd-networkd[1226]: enP59368s1: Link UP
Feb 9 19:02:12.247812 systemd-networkd[1226]: eth0: Link UP
Feb 9 19:02:12.247818 systemd-networkd[1226]: eth0: Gained carrier
Feb 9 19:02:12.253327 systemd-networkd[1226]: enP59368s1: Gained carrier
Feb 9 19:02:12.290191 systemd-networkd[1226]: eth0: DHCPv4 address 10.200.8.38/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 9 19:02:12.397100 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:02:12.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:12.423306 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 19:02:12.426102 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:02:12.430120 systemd[1]: Starting lvm2-activation.service...
Feb 9 19:02:12.434745 lvm[1296]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:02:12.457344 systemd[1]: Finished lvm2-activation.service.
Feb 9 19:02:12.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:12.460168 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:02:12.462508 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 19:02:12.462546 systemd[1]: Reached target local-fs.target.
Feb 9 19:02:12.464859 systemd[1]: Reached target machines.target.
Feb 9 19:02:12.468310 systemd[1]: Starting ldconfig.service...
Feb 9 19:02:12.470426 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 19:02:12.470536 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:02:12.471808 systemd[1]: Starting systemd-boot-update.service...
Feb 9 19:02:12.475179 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 19:02:12.479434 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 19:02:12.482095 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:02:12.482270 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:02:12.483516 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 19:02:12.509647 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1299 (bootctl)
Feb 9 19:02:12.511397 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 19:02:12.517658 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 19:02:12.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:12.533355 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 19:02:12.537891 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 19:02:12.573619 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 19:02:12.934005 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 19:02:12.935276 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 19:02:12.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:13.589382 systemd-fsck[1308]: fsck.fat 4.2 (2021-01-31)
Feb 9 19:02:13.589382 systemd-fsck[1308]: /dev/sda1: 789 files, 115339/258078 clusters
Feb 9 19:02:13.591712 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 19:02:13.597291 systemd[1]: Mounting boot.mount...
Feb 9 19:02:13.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:13.599185 systemd-networkd[1226]: eth0: Gained IPv6LL
Feb 9 19:02:13.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:13.603128 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 19:02:13.615328 systemd[1]: Mounted boot.mount.
Feb 9 19:02:13.630936 systemd[1]: Finished systemd-boot-update.service.
Feb 9 19:02:13.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:13.777037 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 19:02:13.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:13.781915 systemd[1]: Starting audit-rules.service...
Feb 9 19:02:13.785530 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 19:02:13.789643 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 19:02:13.794388 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:02:13.798982 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 19:02:13.804411 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 19:02:13.807482 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 19:02:13.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:13.813600 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 19:02:13.828000 audit[1327]: SYSTEM_BOOT pid=1327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:13.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:13.832425 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 19:02:13.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:13.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:13.913351 systemd[1]: Started systemd-timesyncd.service.
Feb 9 19:02:13.915857 systemd[1]: Reached target time-set.target.
Feb 9 19:02:13.918459 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 19:02:14.006990 systemd-resolved[1325]: Positive Trust Anchors:
Feb 9 19:02:14.007018 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:02:14.007090 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:02:14.095000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 19:02:14.095000 audit[1344]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd81d04e20 a2=420 a3=0 items=0 ppid=1320 pid=1344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:02:14.095000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 19:02:14.097379 augenrules[1344]: No rules
Feb 9 19:02:14.097953 systemd[1]: Finished audit-rules.service.
Feb 9 19:02:14.126435 systemd-resolved[1325]: Using system hostname 'ci-3510.3.2-a-c71e69a144'.
Feb 9 19:02:14.128065 systemd[1]: Started systemd-resolved.service.
Feb 9 19:02:14.130867 systemd[1]: Reached target network.target.
Feb 9 19:02:14.134076 systemd[1]: Reached target network-online.target.
Feb 9 19:02:14.136563 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:02:14.226837 systemd-timesyncd[1326]: Contacted time server 193.1.8.98:123 (0.flatcar.pool.ntp.org).
Feb 9 19:02:14.226917 systemd-timesyncd[1326]: Initial clock synchronization to Fri 2024-02-09 19:02:14.229376 UTC.
Feb 9 19:02:20.062049 ldconfig[1298]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 19:02:20.076587 systemd[1]: Finished ldconfig.service.
Feb 9 19:02:20.082531 systemd[1]: Starting systemd-update-done.service...
Feb 9 19:02:20.091926 systemd[1]: Finished systemd-update-done.service.
Feb 9 19:02:20.096529 systemd[1]: Reached target sysinit.target.
Feb 9 19:02:20.098809 systemd[1]: Started motdgen.path.
Feb 9 19:02:20.101018 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 19:02:20.104484 systemd[1]: Started logrotate.timer.
Feb 9 19:02:20.106791 systemd[1]: Started mdadm.timer.
Feb 9 19:02:20.109180 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 19:02:20.111968 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 19:02:20.112016 systemd[1]: Reached target paths.target.
Feb 9 19:02:20.114416 systemd[1]: Reached target timers.target.
Feb 9 19:02:20.117346 systemd[1]: Listening on dbus.socket.
Feb 9 19:02:20.121223 systemd[1]: Starting docker.socket...
Feb 9 19:02:20.125284 systemd[1]: Listening on sshd.socket.
Feb 9 19:02:20.127606 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:02:20.128390 systemd[1]: Listening on docker.socket.
Feb 9 19:02:20.130737 systemd[1]: Reached target sockets.target.
Feb 9 19:02:20.134324 systemd[1]: Reached target basic.target.
Feb 9 19:02:20.137192 systemd[1]: System is tainted: cgroupsv1
Feb 9 19:02:20.137259 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:02:20.137288 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:02:20.138891 systemd[1]: Starting containerd.service...
Feb 9 19:02:20.142897 systemd[1]: Starting dbus.service...
Feb 9 19:02:20.146685 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 19:02:20.151518 systemd[1]: Starting extend-filesystems.service...
Feb 9 19:02:20.155060 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 19:02:20.156649 systemd[1]: Starting motdgen.service...
Feb 9 19:02:20.161586 systemd[1]: Started nvidia.service.
Feb 9 19:02:20.166245 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 19:02:20.175520 systemd[1]: Starting prepare-critools.service...
Feb 9 19:02:20.186721 systemd[1]: Starting prepare-helm.service...
Feb 9 19:02:20.191387 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 19:02:20.196922 systemd[1]: Starting sshd-keygen.service...
Feb 9 19:02:20.202421 systemd[1]: Starting systemd-logind.service...
Feb 9 19:02:20.211620 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:02:20.211706 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 19:02:20.213276 systemd[1]: Starting update-engine.service...
Feb 9 19:02:20.218099 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 19:02:20.232429 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 19:02:20.232759 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 19:02:20.242648 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 19:02:20.243286 jq[1357]: false
Feb 9 19:02:20.242938 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 19:02:20.255682 jq[1374]: true
Feb 9 19:02:20.270726 jq[1385]: true
Feb 9 19:02:20.292453 extend-filesystems[1358]: Found sda
Feb 9 19:02:20.298221 extend-filesystems[1358]: Found sda1
Feb 9 19:02:20.298221 extend-filesystems[1358]: Found sda2
Feb 9 19:02:20.298221 extend-filesystems[1358]: Found sda3
Feb 9 19:02:20.298221 extend-filesystems[1358]: Found usr
Feb 9 19:02:20.298221 extend-filesystems[1358]: Found sda4
Feb 9 19:02:20.298221 extend-filesystems[1358]: Found sda6
Feb 9 19:02:20.298221 extend-filesystems[1358]: Found sda7
Feb 9 19:02:20.298221 extend-filesystems[1358]: Found sda9
Feb 9 19:02:20.298221 extend-filesystems[1358]: Checking size of /dev/sda9
Feb 9 19:02:20.319483 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 19:02:20.319790 systemd[1]: Finished motdgen.service.
Feb 9 19:02:20.367056 env[1401]: time="2024-02-09T19:02:20.366999764Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 19:02:20.381822 extend-filesystems[1358]: Old size kept for /dev/sda9
Feb 9 19:02:20.386772 extend-filesystems[1358]: Found sr0
Feb 9 19:02:20.382370 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 19:02:20.402135 tar[1381]: crictl
Feb 9 19:02:20.402488 tar[1380]: ./
Feb 9 19:02:20.402488 tar[1380]: ./macvlan
Feb 9 19:02:20.382662 systemd[1]: Finished extend-filesystems.service.
Feb 9 19:02:20.404310 systemd-logind[1372]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 9 19:02:20.409128 systemd-logind[1372]: New seat seat0.
Feb 9 19:02:20.409982 tar[1382]: linux-amd64/helm
Feb 9 19:02:20.514956 bash[1416]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 19:02:20.515708 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 19:02:20.520114 env[1401]: time="2024-02-09T19:02:20.520066882Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 19:02:20.520351 env[1401]: time="2024-02-09T19:02:20.520327413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:02:20.526582 env[1401]: time="2024-02-09T19:02:20.522506374Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:02:20.526582 env[1401]: time="2024-02-09T19:02:20.522545779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:02:20.526582 env[1401]: time="2024-02-09T19:02:20.522865517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:02:20.526582 env[1401]: time="2024-02-09T19:02:20.522891220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 19:02:20.526582 env[1401]: time="2024-02-09T19:02:20.522908622Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 19:02:20.526582 env[1401]: time="2024-02-09T19:02:20.522921724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 19:02:20.526582 env[1401]: time="2024-02-09T19:02:20.523015235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:02:20.526582 env[1401]: time="2024-02-09T19:02:20.525334112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:02:20.526582 env[1401]: time="2024-02-09T19:02:20.525574741Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:02:20.526582 env[1401]: time="2024-02-09T19:02:20.525596844Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 19:02:20.526988 env[1401]: time="2024-02-09T19:02:20.525660751Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 19:02:20.526988 env[1401]: time="2024-02-09T19:02:20.525676453Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 19:02:20.530848 tar[1380]: ./static
Feb 9 19:02:20.534533 dbus-daemon[1356]: [system] SELinux support is enabled
Feb 9 19:02:20.534765 systemd[1]: Started dbus.service.
Feb 9 19:02:20.539856 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 19:02:20.539896 systemd[1]: Reached target system-config.target.
Feb 9 19:02:20.541924 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 19:02:20.541953 systemd[1]: Reached target user-config.target.
Feb 9 19:02:20.545313 systemd[1]: Started systemd-logind.service.
Feb 9 19:02:20.546674 dbus-daemon[1356]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 9 19:02:20.560487 env[1401]: time="2024-02-09T19:02:20.560435913Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 19:02:20.560608 env[1401]: time="2024-02-09T19:02:20.560498721Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 19:02:20.560608 env[1401]: time="2024-02-09T19:02:20.560517223Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 19:02:20.560608 env[1401]: time="2024-02-09T19:02:20.560557228Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 19:02:20.560608 env[1401]: time="2024-02-09T19:02:20.560578830Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 19:02:20.560608 env[1401]: time="2024-02-09T19:02:20.560597132Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 19:02:20.560786 env[1401]: time="2024-02-09T19:02:20.560613534Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 19:02:20.560786 env[1401]: time="2024-02-09T19:02:20.560632037Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 19:02:20.560786 env[1401]: time="2024-02-09T19:02:20.560650839Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 19:02:20.560786 env[1401]: time="2024-02-09T19:02:20.560670441Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 19:02:20.560786 env[1401]: time="2024-02-09T19:02:20.560689343Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 19:02:20.560786 env[1401]: time="2024-02-09T19:02:20.560709246Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 19:02:20.560992 env[1401]: time="2024-02-09T19:02:20.560853663Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 19:02:20.560992 env[1401]: time="2024-02-09T19:02:20.560953075Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 19:02:20.562185 env[1401]: time="2024-02-09T19:02:20.562152219Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 19:02:20.562266 env[1401]: time="2024-02-09T19:02:20.562202325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 19:02:20.562266 env[1401]: time="2024-02-09T19:02:20.562220927Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 19:02:20.562350 env[1401]: time="2024-02-09T19:02:20.562280734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 19:02:20.562350 env[1401]: time="2024-02-09T19:02:20.562301536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 19:02:20.562350 env[1401]: time="2024-02-09T19:02:20.562319839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 19:02:20.562350 env[1401]: time="2024-02-09T19:02:20.562336041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 19:02:20.562489 env[1401]: time="2024-02-09T19:02:20.562352643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 19:02:20.562489 env[1401]: time="2024-02-09T19:02:20.562369545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 19:02:20.562489 env[1401]: time="2024-02-09T19:02:20.562387047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 19:02:20.562489 env[1401]: time="2024-02-09T19:02:20.562403749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 19:02:20.562489 env[1401]: time="2024-02-09T19:02:20.562424051Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 19:02:20.562682 env[1401]: time="2024-02-09T19:02:20.562581070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 19:02:20.562682 env[1401]: time="2024-02-09T19:02:20.562600972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 19:02:20.562682 env[1401]: time="2024-02-09T19:02:20.562618474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 19:02:20.562682 env[1401]: time="2024-02-09T19:02:20.562641177Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 19:02:20.562682 env[1401]: time="2024-02-09T19:02:20.562663080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 19:02:20.562854 env[1401]: time="2024-02-09T19:02:20.562682082Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 19:02:20.562854 env[1401]: time="2024-02-09T19:02:20.562706985Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 19:02:20.562854 env[1401]: time="2024-02-09T19:02:20.562753891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 19:02:20.563103 env[1401]: time="2024-02-09T19:02:20.563015522Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 19:02:20.564353 systemd[1]: Started containerd.service.
Feb 9 19:02:20.597325 env[1401]: time="2024-02-09T19:02:20.563121935Z" level=info msg="Connect containerd service"
Feb 9 19:02:20.597325 env[1401]: time="2024-02-09T19:02:20.563177341Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 19:02:20.597325 env[1401]: time="2024-02-09T19:02:20.563838920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 19:02:20.597325 env[1401]: time="2024-02-09T19:02:20.564147357Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 19:02:20.597325 env[1401]: time="2024-02-09T19:02:20.564196363Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 19:02:20.597325 env[1401]: time="2024-02-09T19:02:20.565944972Z" level=info msg="containerd successfully booted in 0.199761s"
Feb 9 19:02:20.597325 env[1401]: time="2024-02-09T19:02:20.565972876Z" level=info msg="Start subscribing containerd event"
Feb 9 19:02:20.597325 env[1401]: time="2024-02-09T19:02:20.566018381Z" level=info msg="Start recovering state"
Feb 9 19:02:20.597325 env[1401]: time="2024-02-09T19:02:20.566093690Z" level=info msg="Start event monitor"
Feb 9 19:02:20.597325 env[1401]: time="2024-02-09T19:02:20.566104792Z" level=info msg="Start snapshots syncer"
Feb 9 19:02:20.597325 env[1401]: time="2024-02-09T19:02:20.566113293Z" level=info msg="Start cni network conf syncer for default"
Feb 9 19:02:20.597325 env[1401]: time="2024-02-09T19:02:20.566124994Z" level=info msg="Start streaming server"
Feb 9 19:02:20.579465 systemd[1]: nvidia.service: Deactivated successfully.
Feb 9 19:02:20.649111 tar[1380]: ./vlan
Feb 9 19:02:20.786869 tar[1380]: ./portmap
Feb 9 19:02:20.881676 tar[1380]: ./host-local
Feb 9 19:02:20.949590 tar[1380]: ./vrf
Feb 9 19:02:21.019222 tar[1380]: ./bridge
Feb 9 19:02:21.112476 tar[1380]: ./tuning
Feb 9 19:02:21.188148 tar[1380]: ./firewall
Feb 9 19:02:21.206066 update_engine[1373]: I0209 19:02:21.205361 1373 main.cc:92] Flatcar Update Engine starting
Feb 9 19:02:21.252803 systemd[1]: Started update-engine.service.
Feb 9 19:02:21.260490 update_engine[1373]: I0209 19:02:21.252873 1373 update_check_scheduler.cc:74] Next update check in 10m58s
Feb 9 19:02:21.257791 systemd[1]: Started locksmithd.service.
Feb 9 19:02:21.285195 tar[1380]: ./host-device
Feb 9 19:02:21.365419 tar[1380]: ./sbr
Feb 9 19:02:21.440949 tar[1380]: ./loopback
Feb 9 19:02:21.511276 tar[1380]: ./dhcp
Feb 9 19:02:21.574999 systemd[1]: Finished prepare-critools.service.
Feb 9 19:02:21.589603 tar[1382]: linux-amd64/LICENSE
Feb 9 19:02:21.590048 tar[1382]: linux-amd64/README.md
Feb 9 19:02:21.599177 systemd[1]: Finished prepare-helm.service.
Feb 9 19:02:21.662682 tar[1380]: ./ptp
Feb 9 19:02:21.705021 tar[1380]: ./ipvlan
Feb 9 19:02:21.746213 tar[1380]: ./bandwidth
Feb 9 19:02:21.833634 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 19:02:21.948302 sshd_keygen[1389]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 19:02:21.968646 systemd[1]: Finished sshd-keygen.service.
Feb 9 19:02:21.973250 systemd[1]: Starting issuegen.service...
Feb 9 19:02:21.976984 systemd[1]: Started waagent.service.
Feb 9 19:02:21.982485 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 19:02:21.982767 systemd[1]: Finished issuegen.service.
Feb 9 19:02:21.987373 systemd[1]: Starting systemd-user-sessions.service...
Feb 9 19:02:21.993323 systemd[1]: Finished systemd-user-sessions.service.
Feb 9 19:02:21.997335 systemd[1]: Started getty@tty1.service.
Feb 9 19:02:22.002191 systemd[1]: Started serial-getty@ttyS0.service.
Feb 9 19:02:22.004824 systemd[1]: Reached target getty.target.
Feb 9 19:02:22.006769 systemd[1]: Reached target multi-user.target.
Feb 9 19:02:22.010351 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 19:02:22.019891 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 19:02:22.020192 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 19:02:22.026857 systemd[1]: Startup finished in 903ms (firmware) + 29.707s (loader) + 1min 39.589s (kernel) + 23.819s (userspace) = 2min 34.020s.
Feb 9 19:02:22.351437 login[1504]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 19:02:22.353097 login[1505]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 19:02:22.375301 systemd[1]: Created slice user-500.slice.
Feb 9 19:02:22.376525 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 19:02:22.379710 systemd-logind[1372]: New session 2 of user core.
Feb 9 19:02:22.385005 systemd-logind[1372]: New session 1 of user core.
Feb 9 19:02:22.390583 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 19:02:22.392291 systemd[1]: Starting user@500.service...
Feb 9 19:02:22.410023 (systemd)[1514]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:02:22.546965 systemd[1514]: Queued start job for default target default.target.
Feb 9 19:02:22.547284 systemd[1514]: Reached target paths.target.
Feb 9 19:02:22.547308 systemd[1514]: Reached target sockets.target.
Feb 9 19:02:22.547325 systemd[1514]: Reached target timers.target.
Feb 9 19:02:22.547341 systemd[1514]: Reached target basic.target.
Feb 9 19:02:22.547500 systemd[1]: Started user@500.service.
Feb 9 19:02:22.548697 systemd[1]: Started session-1.scope.
Feb 9 19:02:22.549492 systemd[1]: Started session-2.scope.
Feb 9 19:02:22.549869 systemd[1514]: Reached target default.target.
Feb 9 19:02:22.550115 systemd[1514]: Startup finished in 133ms.
Feb 9 19:02:23.187360 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 19:02:27.871056 waagent[1497]: 2024-02-09T19:02:27.870900Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Feb 9 19:02:27.884605 waagent[1497]: 2024-02-09T19:02:27.872594Z INFO Daemon Daemon OS: flatcar 3510.3.2
Feb 9 19:02:27.884605 waagent[1497]: 2024-02-09T19:02:27.873585Z INFO Daemon Daemon Python: 3.9.16
Feb 9 19:02:27.884605 waagent[1497]: 2024-02-09T19:02:27.874815Z INFO Daemon Daemon Run daemon
Feb 9 19:02:27.884605 waagent[1497]: 2024-02-09T19:02:27.875727Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2'
Feb 9 19:02:27.888906 waagent[1497]: 2024-02-09T19:02:27.888783Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 9 19:02:27.897361 waagent[1497]: 2024-02-09T19:02:27.897242Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 9 19:02:27.902512 waagent[1497]: 2024-02-09T19:02:27.902369Z INFO Daemon Daemon cloud-init is enabled: False
Feb 9 19:02:27.912523 waagent[1497]: 2024-02-09T19:02:27.902633Z INFO Daemon Daemon Using waagent for provisioning
Feb 9 19:02:27.912523 waagent[1497]: 2024-02-09T19:02:27.904074Z INFO Daemon Daemon Activate resource disk
Feb 9 19:02:27.912523 waagent[1497]: 2024-02-09T19:02:27.904826Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Feb 9 19:02:27.912688 waagent[1497]: 2024-02-09T19:02:27.912538Z INFO Daemon Daemon Found device: None
Feb 9 19:02:27.941735 waagent[1497]: 2024-02-09T19:02:27.912849Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Feb 9 19:02:27.941735 waagent[1497]: 2024-02-09T19:02:27.913925Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Feb 9 19:02:27.941735 waagent[1497]: 2024-02-09T19:02:27.915914Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 9 19:02:27.941735 waagent[1497]: 2024-02-09T19:02:27.916835Z INFO Daemon Daemon Running default provisioning handler
Feb 9 19:02:27.941735 waagent[1497]: 2024-02-09T19:02:27.926616Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 9 19:02:27.941735 waagent[1497]: 2024-02-09T19:02:27.929539Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 9 19:02:27.941735 waagent[1497]: 2024-02-09T19:02:27.930367Z INFO Daemon Daemon cloud-init is enabled: False
Feb 9 19:02:27.941735 waagent[1497]: 2024-02-09T19:02:27.931349Z INFO Daemon Daemon Copying ovf-env.xml
Feb 9 19:02:28.024747 waagent[1497]: 2024-02-09T19:02:28.024570Z INFO Daemon Daemon Successfully mounted dvd
Feb 9 19:02:28.148114 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Feb 9 19:02:28.167334 waagent[1497]: 2024-02-09T19:02:28.167190Z INFO Daemon Daemon Detect protocol endpoint
Feb 9 19:02:28.170429 waagent[1497]: 2024-02-09T19:02:28.170344Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 9 19:02:28.173625 waagent[1497]: 2024-02-09T19:02:28.173550Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Feb 9 19:02:28.177220 waagent[1497]: 2024-02-09T19:02:28.177155Z INFO Daemon Daemon Test for route to 168.63.129.16
Feb 9 19:02:28.180236 waagent[1497]: 2024-02-09T19:02:28.180173Z INFO Daemon Daemon Route to 168.63.129.16 exists
Feb 9 19:02:28.184243 waagent[1497]: 2024-02-09T19:02:28.184171Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Feb 9 19:02:28.306302 waagent[1497]: 2024-02-09T19:02:28.306222Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Feb 9 19:02:28.311601 waagent[1497]: 2024-02-09T19:02:28.311546Z INFO Daemon Daemon Wire protocol version:2012-11-30
Feb 9 19:02:28.314824 waagent[1497]: 2024-02-09T19:02:28.314745Z INFO Daemon Daemon Server preferred version:2015-04-05
Feb 9 19:02:28.862204 waagent[1497]: 2024-02-09T19:02:28.862049Z INFO Daemon Daemon Initializing goal state during protocol detection
Feb 9 19:02:28.873901 waagent[1497]: 2024-02-09T19:02:28.873817Z INFO Daemon Daemon Forcing an update of the goal state..
Feb 9 19:02:28.876777 waagent[1497]: 2024-02-09T19:02:28.876707Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Feb 9 19:02:28.955749 waagent[1497]: 2024-02-09T19:02:28.955619Z INFO Daemon Daemon Found private key matching thumbprint 72599646ED232C05D754C75EB4D54D781DD81FA4
Feb 9 19:02:28.966321 waagent[1497]: 2024-02-09T19:02:28.956097Z INFO Daemon Daemon Certificate with thumbprint FBD11B18A7FED78CC5A4121ABAC1596A76B21A23 has no matching private key.
Feb 9 19:02:28.966321 waagent[1497]: 2024-02-09T19:02:28.957485Z INFO Daemon Daemon Fetch goal state completed
Feb 9 19:02:29.006928 waagent[1497]: 2024-02-09T19:02:29.006831Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 812b1465-aa10-4636-a8e2-c42a01d66088 New eTag: 11916785760191368043]
Feb 9 19:02:29.012365 waagent[1497]: 2024-02-09T19:02:29.012275Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Feb 9 19:02:29.024596 waagent[1497]: 2024-02-09T19:02:29.024521Z INFO Daemon Daemon Starting provisioning
Feb 9 19:02:29.027194 waagent[1497]: 2024-02-09T19:02:29.027115Z INFO Daemon Daemon Handle ovf-env.xml.
Feb 9 19:02:29.029495 waagent[1497]: 2024-02-09T19:02:29.029431Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-c71e69a144]
Feb 9 19:02:29.050553 waagent[1497]: 2024-02-09T19:02:29.050395Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-c71e69a144]
Feb 9 19:02:29.054628 waagent[1497]: 2024-02-09T19:02:29.054524Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Feb 9 19:02:29.057920 waagent[1497]: 2024-02-09T19:02:29.057848Z INFO Daemon Daemon Primary interface is [eth0]
Feb 9 19:02:29.073570 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Feb 9 19:02:29.073880 systemd[1]: Stopped systemd-networkd-wait-online.service.
Feb 9 19:02:29.073953 systemd[1]: Stopping systemd-networkd-wait-online.service...
Feb 9 19:02:29.074288 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:02:29.080086 systemd-networkd[1226]: eth0: DHCPv6 lease lost
Feb 9 19:02:29.081505 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:02:29.081796 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:02:29.084857 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:02:29.121141 systemd-networkd[1557]: enP59368s1: Link UP Feb 9 19:02:29.121151 systemd-networkd[1557]: enP59368s1: Gained carrier Feb 9 19:02:29.122565 systemd-networkd[1557]: eth0: Link UP Feb 9 19:02:29.122573 systemd-networkd[1557]: eth0: Gained carrier Feb 9 19:02:29.123005 systemd-networkd[1557]: lo: Link UP Feb 9 19:02:29.123014 systemd-networkd[1557]: lo: Gained carrier Feb 9 19:02:29.123348 systemd-networkd[1557]: eth0: Gained IPv6LL Feb 9 19:02:29.123623 systemd-networkd[1557]: Enumeration completed Feb 9 19:02:29.128563 waagent[1497]: 2024-02-09T19:02:29.126560Z INFO Daemon Daemon Create user account if not exists Feb 9 19:02:29.124194 systemd[1]: Started systemd-networkd.service. Feb 9 19:02:29.126772 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:02:29.129651 systemd-networkd[1557]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:02:29.131704 waagent[1497]: 2024-02-09T19:02:29.131396Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 19:02:29.132279 waagent[1497]: 2024-02-09T19:02:29.131911Z INFO Daemon Daemon Configure sudoer Feb 9 19:02:29.133394 waagent[1497]: 2024-02-09T19:02:29.133066Z INFO Daemon Daemon Configure sshd Feb 9 19:02:29.134157 waagent[1497]: 2024-02-09T19:02:29.133809Z INFO Daemon Daemon Deploy ssh public key. Feb 9 19:02:29.167151 systemd-networkd[1557]: eth0: DHCPv4 address 10.200.8.38/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:02:29.170719 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:02:30.410150 waagent[1497]: 2024-02-09T19:02:30.410024Z INFO Daemon Daemon Provisioning complete Feb 9 19:02:30.425084 waagent[1497]: 2024-02-09T19:02:30.424988Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 19:02:30.432367 waagent[1497]: 2024-02-09T19:02:30.425518Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 9 19:02:30.432367 waagent[1497]: 2024-02-09T19:02:30.427336Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 19:02:30.696472 waagent[1567]: 2024-02-09T19:02:30.696307Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 19:02:30.697213 waagent[1567]: 2024-02-09T19:02:30.697148Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:30.697362 waagent[1567]: 2024-02-09T19:02:30.697306Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:30.707904 waagent[1567]: 2024-02-09T19:02:30.707830Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 9 19:02:30.708076 waagent[1567]: 2024-02-09T19:02:30.708008Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 19:02:30.768580 waagent[1567]: 2024-02-09T19:02:30.768454Z INFO ExtHandler ExtHandler Found private key matching thumbprint 72599646ED232C05D754C75EB4D54D781DD81FA4 Feb 9 19:02:30.768809 waagent[1567]: 2024-02-09T19:02:30.768748Z INFO ExtHandler ExtHandler Certificate with thumbprint FBD11B18A7FED78CC5A4121ABAC1596A76B21A23 has no matching private key. 
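[Editor's note] The thumbprints in the goal-state lines above are how Azure names certificates: the SHA-1 digest of the DER-encoded certificate in upper-case hex. One certificate pairs with a private key the agent holds; the other (FBD1...) does not, which the agent reports but does not treat as fatal. A sketch of the digest computation, assuming a PEM input (helper name ours):

    import hashlib
    import ssl

    def cert_thumbprint(pem: str) -> str:
        # Goal-state certificates are identified by the SHA-1 hash of the
        # DER form, e.g. 72599646ED232C05D754C75EB4D54D781DD81FA4 above.
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha1(der).hexdigest().upper()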
Feb 9 19:02:30.769051 waagent[1567]: 2024-02-09T19:02:30.768988Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 19:02:30.782697 waagent[1567]: 2024-02-09T19:02:30.782635Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 553d72af-87ff-4784-b61a-591e46be9305 New eTag: 11916785760191368043] Feb 9 19:02:30.783283 waagent[1567]: 2024-02-09T19:02:30.783226Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 19:02:30.853658 waagent[1567]: 2024-02-09T19:02:30.853521Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 19:02:30.865189 waagent[1567]: 2024-02-09T19:02:30.865104Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1567 Feb 9 19:02:30.873753 waagent[1567]: 2024-02-09T19:02:30.869902Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 19:02:30.873753 waagent[1567]: 2024-02-09T19:02:30.871519Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 19:02:31.011340 waagent[1567]: 2024-02-09T19:02:31.011264Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 19:02:31.011834 waagent[1567]: 2024-02-09T19:02:31.011750Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 19:02:31.020830 waagent[1567]: 2024-02-09T19:02:31.020769Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 19:02:31.021341 waagent[1567]: 2024-02-09T19:02:31.021279Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 19:02:31.022420 waagent[1567]: 2024-02-09T19:02:31.022354Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 19:02:31.023678 waagent[1567]: 2024-02-09T19:02:31.023615Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 19:02:31.024267 waagent[1567]: 2024-02-09T19:02:31.024210Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 19:02:31.024583 waagent[1567]: 2024-02-09T19:02:31.024530Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:31.025269 waagent[1567]: 2024-02-09T19:02:31.025209Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 19:02:31.025462 waagent[1567]: 2024-02-09T19:02:31.025411Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:31.025600 waagent[1567]: 2024-02-09T19:02:31.025552Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:31.025909 waagent[1567]: 2024-02-09T19:02:31.025857Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 19:02:31.026015 waagent[1567]: 2024-02-09T19:02:31.025962Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:31.026744 waagent[1567]: 2024-02-09T19:02:31.026691Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
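[Editor's note] The [Errno 30] error above is expected on Flatcar: /lib/systemd/system sits on the read-only, verity-protected /usr image, so unit files can only be written under /etc/systemd/system. A sketch of the fallback an agent could take (the path choice is our assumption; the 2.6.0.2 agent evidently does not fall back and simply logs the error):

    import os

    UNIT = "waagent-network-setup.service"

    def writable_unit_path() -> str:
        # On Flatcar /lib/systemd/system is immutable (verity-backed /usr);
        # /etc/systemd/system is the conventional writable override location.
        for base in ("/lib/systemd/system", "/etc/systemd/system"):
            if os.access(base, os.W_OK):
                return os.path.join(base, UNIT)
        raise OSError("no writable systemd unit directory")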
Feb 9 19:02:31.027281 waagent[1567]: 2024-02-09T19:02:31.027223Z INFO EnvHandler ExtHandler Configure routes Feb 9 19:02:31.027741 waagent[1567]: 2024-02-09T19:02:31.027689Z INFO EnvHandler ExtHandler Gateway:None Feb 9 19:02:31.028110 waagent[1567]: 2024-02-09T19:02:31.028049Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 19:02:31.028110 waagent[1567]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 19:02:31.028110 waagent[1567]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 19:02:31.028110 waagent[1567]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 19:02:31.028110 waagent[1567]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:31.028110 waagent[1567]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:31.028110 waagent[1567]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:31.028389 waagent[1567]: 2024-02-09T19:02:31.028202Z INFO EnvHandler ExtHandler Routes:None Feb 9 19:02:31.029459 waagent[1567]: 2024-02-09T19:02:31.029398Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 19:02:31.031687 waagent[1567]: 2024-02-09T19:02:31.031603Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 19:02:31.033063 waagent[1567]: 2024-02-09T19:02:31.032994Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 19:02:31.048404 waagent[1567]: 2024-02-09T19:02:31.048317Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1557' Feb 9 19:02:31.049375 waagent[1567]: 2024-02-09T19:02:31.049319Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 19:02:31.050482 waagent[1567]: 2024-02-09T19:02:31.050434Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 19:02:31.054118 waagent[1567]: 2024-02-09T19:02:31.053974Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 9 19:02:31.121687 waagent[1567]: 2024-02-09T19:02:31.121607Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
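[Editor's note] The "invalid literal for int()" error above is a small parsing bug: the agent asks systemd for the DHCP client's PID, receives the literal line MainPID=1557 (1557 is systemd-networkd, which took the DHCPv4 lease earlier in this log), and passes the whole line to int(). A sketch of the missing parsing step (function name ours):

    def parse_main_pid(line: str) -> int:
        # `systemctl show -p MainPID <unit>` prints "MainPID=1557"; only
        # the value after '=' is an integer. int("MainPID=1557") raises
        # exactly the ValueError quoted in the log above.
        key, _, value = line.strip().partition("=")
        if key != "MainPID":
            raise ValueError(f"unexpected systemctl output: {line!r}")
        return int(value)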
Feb 9 19:02:31.207268 waagent[1567]: 2024-02-09T19:02:31.207148Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 19:02:31.207268 waagent[1567]: Executing ['ip', '-a', '-o', 'link']: Feb 9 19:02:31.207268 waagent[1567]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 19:02:31.207268 waagent[1567]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:dd:de:a7 brd ff:ff:ff:ff:ff:ff Feb 9 19:02:31.207268 waagent[1567]: 3: enP59368s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:dd:de:a7 brd ff:ff:ff:ff:ff:ff\ altname enP59368p0s2 Feb 9 19:02:31.207268 waagent[1567]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 19:02:31.207268 waagent[1567]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 19:02:31.207268 waagent[1567]: 2: eth0 inet 10.200.8.38/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 19:02:31.207268 waagent[1567]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 19:02:31.207268 waagent[1567]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 19:02:31.207268 waagent[1567]: 2: eth0 inet6 fe80::20d:3aff:fedd:dea7/64 scope link \ valid_lft forever preferred_lft forever Feb 9 19:02:31.388749 waagent[1567]: 2024-02-09T19:02:31.388556Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Feb 9 19:02:31.391848 waagent[1567]: 2024-02-09T19:02:31.391734Z INFO EnvHandler ExtHandler Firewall rules: Feb 9 19:02:31.391848 waagent[1567]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:31.391848 waagent[1567]: pkts bytes target prot opt in out source destination Feb 9 19:02:31.391848 waagent[1567]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:31.391848 waagent[1567]: pkts bytes target prot opt in out source destination Feb 9 19:02:31.391848 waagent[1567]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:31.391848 waagent[1567]: pkts bytes target prot opt in out source destination Feb 9 19:02:31.391848 waagent[1567]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 19:02:31.391848 waagent[1567]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:02:31.393258 waagent[1567]: 2024-02-09T19:02:31.393200Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 19:02:31.529612 waagent[1567]: 2024-02-09T19:02:31.529536Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 19:02:32.432616 waagent[1497]: 2024-02-09T19:02:32.432423Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 19:02:32.438516 waagent[1497]: 2024-02-09T19:02:32.438447Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 19:02:33.463205 waagent[1611]: 2024-02-09T19:02:33.463084Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 19:02:33.463933 waagent[1611]: 2024-02-09T19:02:33.463865Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 19:02:33.464090 waagent[1611]: 2024-02-09T19:02:33.464021Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 19:02:33.473649 waagent[1611]: 2024-02-09T19:02:33.473538Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; 
OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 19:02:33.474058 waagent[1611]: 2024-02-09T19:02:33.473982Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:33.474220 waagent[1611]: 2024-02-09T19:02:33.474168Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:33.485673 waagent[1611]: 2024-02-09T19:02:33.485595Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 19:02:33.494309 waagent[1611]: 2024-02-09T19:02:33.494241Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 9 19:02:33.495255 waagent[1611]: 2024-02-09T19:02:33.495196Z INFO ExtHandler Feb 9 19:02:33.495409 waagent[1611]: 2024-02-09T19:02:33.495359Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 14380c83-9f1f-4434-9337-2e75703efbfa eTag: 11916785760191368043 source: Fabric] Feb 9 19:02:33.496109 waagent[1611]: 2024-02-09T19:02:33.496054Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 9 19:02:33.497190 waagent[1611]: 2024-02-09T19:02:33.497132Z INFO ExtHandler Feb 9 19:02:33.497326 waagent[1611]: 2024-02-09T19:02:33.497278Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 19:02:33.503945 waagent[1611]: 2024-02-09T19:02:33.503891Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 19:02:33.504405 waagent[1611]: 2024-02-09T19:02:33.504357Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 19:02:33.523361 waagent[1611]: 2024-02-09T19:02:33.523279Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 9 19:02:33.588372 waagent[1611]: 2024-02-09T19:02:33.588242Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FBD11B18A7FED78CC5A4121ABAC1596A76B21A23', 'hasPrivateKey': False} Feb 9 19:02:33.589357 waagent[1611]: 2024-02-09T19:02:33.589286Z INFO ExtHandler Downloaded certificate {'thumbprint': '72599646ED232C05D754C75EB4D54D781DD81FA4', 'hasPrivateKey': True} Feb 9 19:02:33.590340 waagent[1611]: 2024-02-09T19:02:33.590278Z INFO ExtHandler Fetch goal state completed Feb 9 19:02:33.615689 waagent[1611]: 2024-02-09T19:02:33.615590Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1611 Feb 9 19:02:33.619052 waagent[1611]: 2024-02-09T19:02:33.618967Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 19:02:33.620490 waagent[1611]: 2024-02-09T19:02:33.620430Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 19:02:33.625541 waagent[1611]: 2024-02-09T19:02:33.625486Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 19:02:33.625907 waagent[1611]: 2024-02-09T19:02:33.625850Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 19:02:33.633946 waagent[1611]: 2024-02-09T19:02:33.633889Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 9 19:02:33.634459 waagent[1611]: 2024-02-09T19:02:33.634400Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 19:02:33.662285 waagent[1611]: 2024-02-09T19:02:33.662146Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. Feb 9 19:02:33.665867 waagent[1611]: 2024-02-09T19:02:33.665740Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Feb 9 19:02:33.670870 waagent[1611]: 2024-02-09T19:02:33.670802Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 19:02:33.672354 waagent[1611]: 2024-02-09T19:02:33.672292Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 19:02:33.672702 waagent[1611]: 2024-02-09T19:02:33.672642Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:33.673099 waagent[1611]: 2024-02-09T19:02:33.673043Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:33.673640 waagent[1611]: 2024-02-09T19:02:33.673580Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 19:02:33.673919 waagent[1611]: 2024-02-09T19:02:33.673862Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 19:02:33.673919 waagent[1611]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 19:02:33.673919 waagent[1611]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 19:02:33.673919 waagent[1611]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 19:02:33.673919 waagent[1611]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:33.673919 waagent[1611]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:33.673919 waagent[1611]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:33.676325 waagent[1611]: 2024-02-09T19:02:33.676231Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 19:02:33.677275 waagent[1611]: 2024-02-09T19:02:33.677210Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:33.677597 waagent[1611]: 2024-02-09T19:02:33.677494Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 19:02:33.678331 waagent[1611]: 2024-02-09T19:02:33.678272Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:33.680514 waagent[1611]: 2024-02-09T19:02:33.680394Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 19:02:33.681085 waagent[1611]: 2024-02-09T19:02:33.681003Z INFO EnvHandler ExtHandler Configure routes Feb 9 19:02:33.681530 waagent[1611]: 2024-02-09T19:02:33.681478Z INFO EnvHandler ExtHandler Gateway:None Feb 9 19:02:33.681961 waagent[1611]: 2024-02-09T19:02:33.681903Z INFO EnvHandler ExtHandler Routes:None Feb 9 19:02:33.687523 waagent[1611]: 2024-02-09T19:02:33.687222Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
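[Editor's note] The firewall work above amounts to three OUTPUT rules, visible with packet counters in the dump that follows: allow anyone to reach the wireserver on TCP 53 (the DNS rule just added), allow root (UID 0, i.e. the agent itself) to reach it at all, and drop new connections from everyone else. A reconstruction with plain iptables calls, assuming the default filter table; rule order matters, as the ACCEPTs must precede the DROP:

    import subprocess

    WIRESERVER = "168.63.129.16"
    RULES = [
        ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
        ["-p", "tcp", "-d", WIRESERVER,
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-p", "tcp", "-d", WIRESERVER,
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]

    for rule in RULES:
        # -w waits for the xtables lock instead of failing on contention.
        subprocess.run(["iptables", "-w", "-A", "OUTPUT"] + rule, check=True)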
Feb 9 19:02:33.692067 waagent[1611]: 2024-02-09T19:02:33.684898Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 19:02:33.693599 waagent[1611]: 2024-02-09T19:02:33.693524Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 19:02:33.693599 waagent[1611]: Executing ['ip', '-a', '-o', 'link']: Feb 9 19:02:33.693599 waagent[1611]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 19:02:33.693599 waagent[1611]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:dd:de:a7 brd ff:ff:ff:ff:ff:ff Feb 9 19:02:33.693599 waagent[1611]: 3: enP59368s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:dd:de:a7 brd ff:ff:ff:ff:ff:ff\ altname enP59368p0s2 Feb 9 19:02:33.693599 waagent[1611]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 19:02:33.693599 waagent[1611]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 19:02:33.693599 waagent[1611]: 2: eth0 inet 10.200.8.38/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 19:02:33.693599 waagent[1611]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 19:02:33.693599 waagent[1611]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 19:02:33.693599 waagent[1611]: 2: eth0 inet6 fe80::20d:3aff:fedd:dea7/64 scope link \ valid_lft forever preferred_lft forever Feb 9 19:02:33.698760 waagent[1611]: 2024-02-09T19:02:33.698584Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 19:02:33.710315 waagent[1611]: 2024-02-09T19:02:33.710254Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 19:02:33.710556 waagent[1611]: 2024-02-09T19:02:33.710502Z INFO ExtHandler ExtHandler Downloading manifest Feb 9 19:02:33.780249 waagent[1611]: 2024-02-09T19:02:33.780176Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 19:02:33.780249 waagent[1611]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:33.780249 waagent[1611]: pkts bytes target prot opt in out source destination Feb 9 19:02:33.780249 waagent[1611]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:33.780249 waagent[1611]: pkts bytes target prot opt in out source destination Feb 9 19:02:33.780249 waagent[1611]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:33.780249 waagent[1611]: pkts bytes target prot opt in out source destination Feb 9 19:02:33.780249 waagent[1611]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 19:02:33.780249 waagent[1611]: 139 15646 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 19:02:33.780249 waagent[1611]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:02:33.784214 waagent[1611]: 2024-02-09T19:02:33.784161Z INFO ExtHandler ExtHandler Feb 9 19:02:33.784375 waagent[1611]: 2024-02-09T19:02:33.784323Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: fd042acf-1f13-4508-976a-efa30e0af619 correlation 06d36a79-0216-40f7-8b0e-aba05fe49fde created: 2024-02-09T18:59:38.655830Z] Feb 9 19:02:33.785506 waagent[1611]: 2024-02-09T19:02:33.785447Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
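[Editor's note] The interface dumps above are produced by shelling out to iproute2 with -o (one record per line), using exactly the three commands the log quotes. A minimal reproduction:

    import subprocess

    def dump_interfaces() -> str:
        # The same three commands the MonitorHandler logs: links, then
        # IPv4 and IPv6 addresses, one record per line thanks to -o.
        cmds = (["ip", "-a", "-o", "link"],
                ["ip", "-4", "-a", "-o", "address"],
                ["ip", "-6", "-a", "-o", "address"])
        out = []
        for cmd in cmds:
            r = subprocess.run(cmd, capture_output=True, text=True, check=True)
            out.append(f"Executing {cmd}:\n{r.stdout}")
        return "\n".join(out)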
Feb 9 19:02:33.787219 waagent[1611]: 2024-02-09T19:02:33.787164Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Feb 9 19:02:33.808146 waagent[1611]: 2024-02-09T19:02:33.808071Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 9 19:02:33.818048 waagent[1611]: 2024-02-09T19:02:33.817952Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 19E25EB8-F128-4E13-B5DA-FDE119E1D26D;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 19:02:59.782364 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 9 19:03:06.641403 update_engine[1373]: I0209 19:03:06.641261 1373 update_attempter.cc:509] Updating boot flags... Feb 9 19:03:16.503450 systemd[1]: Created slice system-sshd.slice. Feb 9 19:03:16.505360 systemd[1]: Started sshd@0-10.200.8.38:22-10.200.12.6:36424.service. Feb 9 19:03:17.386683 sshd[1745]: Accepted publickey for core from 10.200.12.6 port 36424 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:17.388301 sshd[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:17.393301 systemd-logind[1372]: New session 3 of user core. Feb 9 19:03:17.393955 systemd[1]: Started session-3.scope. Feb 9 19:03:17.924672 systemd[1]: Started sshd@1-10.200.8.38:22-10.200.12.6:51594.service. Feb 9 19:03:18.560300 sshd[1750]: Accepted publickey for core from 10.200.12.6 port 51594 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:18.561948 sshd[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:18.567743 systemd[1]: Started session-4.scope. Feb 9 19:03:18.568879 systemd-logind[1372]: New session 4 of user core. Feb 9 19:03:18.999228 sshd[1750]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:19.002585 systemd[1]: sshd@1-10.200.8.38:22-10.200.12.6:51594.service: Deactivated successfully. Feb 9 19:03:19.004999 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:03:19.005806 systemd-logind[1372]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:03:19.007444 systemd-logind[1372]: Removed session 4. Feb 9 19:03:19.101979 systemd[1]: Started sshd@2-10.200.8.38:22-10.200.12.6:51608.service. Feb 9 19:03:19.729083 sshd[1757]: Accepted publickey for core from 10.200.12.6 port 51608 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:19.730459 sshd[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:19.734931 systemd-logind[1372]: New session 5 of user core. Feb 9 19:03:19.735559 systemd[1]: Started session-5.scope. Feb 9 19:03:20.164024 sshd[1757]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:20.166927 systemd[1]: sshd@2-10.200.8.38:22-10.200.12.6:51608.service: Deactivated successfully. Feb 9 19:03:20.168558 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:03:20.169457 systemd-logind[1372]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:03:20.170472 systemd-logind[1372]: Removed session 5. Feb 9 19:03:20.264569 systemd[1]: Started sshd@3-10.200.8.38:22-10.200.12.6:51624.service. Feb 9 19:03:20.881481 sshd[1764]: Accepted publickey for core from 10.200.12.6 port 51624 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:20.883018 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:20.887717 systemd[1]: Started session-6.scope. 
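[Editor's note] The sshd accept lines above identify the client key by its OpenSSH fingerprint: the SHA-256 of the raw key blob, base64-encoded with the padding stripped. A sketch that reproduces the logged format from an authorized_keys line (helper name ours):

    import base64
    import hashlib

    def ssh_fingerprint(authorized_key_line: str) -> str:
        # authorized_keys fields: <type> <base64 blob> [comment]; sshd logs
        # "SHA256:" + base64(sha256(blob)) with the '=' padding stripped.
        blob = base64.b64decode(authorized_key_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")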
Feb 9 19:03:20.888016 systemd-logind[1372]: New session 6 of user core. Feb 9 19:03:21.318167 sshd[1764]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:21.320962 systemd[1]: sshd@3-10.200.8.38:22-10.200.12.6:51624.service: Deactivated successfully. Feb 9 19:03:21.322363 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:03:21.323687 systemd-logind[1372]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:03:21.324676 systemd-logind[1372]: Removed session 6. Feb 9 19:03:21.420561 systemd[1]: Started sshd@4-10.200.8.38:22-10.200.12.6:51640.service. Feb 9 19:03:22.035578 sshd[1771]: Accepted publickey for core from 10.200.12.6 port 51640 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:22.036958 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:22.042143 systemd[1]: Started session-7.scope. Feb 9 19:03:22.042432 systemd-logind[1372]: New session 7 of user core. Feb 9 19:03:22.658664 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:03:22.658946 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:03:23.405360 systemd[1]: Starting docker.service... Feb 9 19:03:23.456429 env[1790]: time="2024-02-09T19:03:23.456367780Z" level=info msg="Starting up" Feb 9 19:03:23.457569 env[1790]: time="2024-02-09T19:03:23.457533710Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:03:23.457569 env[1790]: time="2024-02-09T19:03:23.457554511Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:03:23.457735 env[1790]: time="2024-02-09T19:03:23.457576211Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:03:23.457735 env[1790]: time="2024-02-09T19:03:23.457589212Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:03:23.459461 env[1790]: time="2024-02-09T19:03:23.459436859Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:03:23.459461 env[1790]: time="2024-02-09T19:03:23.459451959Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:03:23.459604 env[1790]: time="2024-02-09T19:03:23.459469260Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:03:23.459604 env[1790]: time="2024-02-09T19:03:23.459482560Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:03:23.561400 env[1790]: time="2024-02-09T19:03:23.561359862Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 19:03:23.561400 env[1790]: time="2024-02-09T19:03:23.561386163Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 19:03:23.561674 env[1790]: time="2024-02-09T19:03:23.561559667Z" level=info msg="Loading containers: start." Feb 9 19:03:23.684056 kernel: Initializing XFRM netlink socket Feb 9 19:03:23.724925 env[1790]: time="2024-02-09T19:03:23.724884339Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 19:03:23.816479 systemd-networkd[1557]: docker0: Link UP Feb 9 19:03:23.831844 env[1790]: time="2024-02-09T19:03:23.831803270Z" level=info msg="Loading containers: done." 
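[Editor's note] The dockerd startup above points out that docker0 defaults to 172.17.0.0/16 and that --bip overrides it. The same knob is available as the "bip" key in /etc/docker/daemon.json; a sketch that writes it (the chosen subnet is only an example, not a default):

    import json

    # "bip" is the daemon.json equivalent of the --bip flag mentioned in
    # the log; 172.18.0.1/24 here is an arbitrary illustrative value.
    with open("/etc/docker/daemon.json", "w") as f:
        json.dump({"bip": "172.18.0.1/24"}, f, indent=2)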
Feb 9 19:03:23.848978 env[1790]: time="2024-02-09T19:03:23.848930508Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:03:23.849192 env[1790]: time="2024-02-09T19:03:23.849152714Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:03:23.849285 env[1790]: time="2024-02-09T19:03:23.849263116Z" level=info msg="Daemon has completed initialization" Feb 9 19:03:23.874418 systemd[1]: Started docker.service. Feb 9 19:03:23.884409 env[1790]: time="2024-02-09T19:03:23.884246910Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:03:23.901286 systemd[1]: Reloading. Feb 9 19:03:23.977549 /usr/lib/systemd/system-generators/torcx-generator[1918]: time="2024-02-09T19:03:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:03:23.986077 /usr/lib/systemd/system-generators/torcx-generator[1918]: time="2024-02-09T19:03:23Z" level=info msg="torcx already run" Feb 9 19:03:24.070116 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:03:24.070137 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:03:24.088322 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:03:24.167738 systemd[1]: Started kubelet.service. Feb 9 19:03:24.246803 kubelet[1986]: E0209 19:03:24.246743 1986 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:03:24.248605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:03:24.248822 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:03:28.331928 env[1401]: time="2024-02-09T19:03:28.331846281Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 19:03:28.996876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3992329723.mount: Deactivated successfully. 
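[Editor's note] The kubelet exit above is a flag-validation failure, not a crash: in this kubelet generation a CRI socket must be given explicitly via --container-runtime-endpoint. The runtime on this host is containerd (the surrounding PullImage/ImageCreate events are containerd's, and the kubelet later reports containerRuntime="containerd" version 1.6.16), so the fix the error message asks for would be along the lines of:

    --container-runtime-endpoint=unix:///run/containerd/containerd.sock

The socket path is containerd's customary default; the log itself does not show where this flag eventually comes from.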
Feb 9 19:03:30.935132 env[1401]: time="2024-02-09T19:03:30.935078037Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:30.939370 env[1401]: time="2024-02-09T19:03:30.939330427Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:30.942776 env[1401]: time="2024-02-09T19:03:30.942740899Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:30.946846 env[1401]: time="2024-02-09T19:03:30.946760783Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:30.947749 env[1401]: time="2024-02-09T19:03:30.947716604Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 19:03:30.957451 env[1401]: time="2024-02-09T19:03:30.957422508Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 19:03:33.166721 env[1401]: time="2024-02-09T19:03:33.166660075Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:33.175594 env[1401]: time="2024-02-09T19:03:33.175549948Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:33.179742 env[1401]: time="2024-02-09T19:03:33.179706729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:33.183357 env[1401]: time="2024-02-09T19:03:33.183318499Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:33.184109 env[1401]: time="2024-02-09T19:03:33.184077714Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 9 19:03:33.194777 env[1401]: time="2024-02-09T19:03:33.194745522Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 19:03:34.336390 env[1401]: time="2024-02-09T19:03:34.336329865Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:34.341623 env[1401]: time="2024-02-09T19:03:34.341576165Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:34.345368 env[1401]: 
time="2024-02-09T19:03:34.345336136Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:34.349174 env[1401]: time="2024-02-09T19:03:34.349142008Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:34.349746 env[1401]: time="2024-02-09T19:03:34.349716819Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 9 19:03:34.359199 env[1401]: time="2024-02-09T19:03:34.359172598Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:03:34.482624 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:03:34.482949 systemd[1]: Stopped kubelet.service. Feb 9 19:03:34.485209 systemd[1]: Started kubelet.service. Feb 9 19:03:34.531723 kubelet[2021]: E0209 19:03:34.531681 2021 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:03:34.534881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:03:34.535161 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:03:35.464884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount948847088.mount: Deactivated successfully. Feb 9 19:03:35.929684 env[1401]: time="2024-02-09T19:03:35.929628101Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:35.934358 env[1401]: time="2024-02-09T19:03:35.934318688Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:35.938567 env[1401]: time="2024-02-09T19:03:35.938535866Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:35.942369 env[1401]: time="2024-02-09T19:03:35.942326736Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:35.942802 env[1401]: time="2024-02-09T19:03:35.942774044Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:03:35.954052 env[1401]: time="2024-02-09T19:03:35.954016852Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:03:36.400911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2933124622.mount: Deactivated successfully. 
Feb 9 19:03:36.419352 env[1401]: time="2024-02-09T19:03:36.419311749Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:36.427068 env[1401]: time="2024-02-09T19:03:36.427024988Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:36.430398 env[1401]: time="2024-02-09T19:03:36.430367848Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:36.435038 env[1401]: time="2024-02-09T19:03:36.434997832Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:36.435492 env[1401]: time="2024-02-09T19:03:36.435462740Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 19:03:36.444882 env[1401]: time="2024-02-09T19:03:36.444851109Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 19:03:37.219975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1594474886.mount: Deactivated successfully. Feb 9 19:03:41.386100 env[1401]: time="2024-02-09T19:03:41.385808932Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:41.391286 env[1401]: time="2024-02-09T19:03:41.391249218Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:41.394743 env[1401]: time="2024-02-09T19:03:41.394710772Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:41.398173 env[1401]: time="2024-02-09T19:03:41.398143727Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:41.398737 env[1401]: time="2024-02-09T19:03:41.398706036Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 9 19:03:41.409480 env[1401]: time="2024-02-09T19:03:41.409451305Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 19:03:41.899174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3992854572.mount: Deactivated successfully. 
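[Editor's note] Each pull above resolves a tag to content-addressed storage: containerd reports both the tagged name (registry.k8s.io/pause:3.9) and sha256 references for the same image. A small parser for the two reference forms that appear in these events (ours, for illustration only):

    def parse_image_ref(ref: str):
        # Handles "registry.k8s.io/pause:3.9" and
        # "registry.k8s.io/pause@sha256:7031c1b2..." as logged above.
        name, _, digest = ref.partition("@")
        tag = ""
        if ":" in name.rsplit("/", 1)[-1]:
            name, _, tag = name.rpartition(":")
        return name, tag, digest

    assert parse_image_ref("registry.k8s.io/pause:3.9") == \
        ("registry.k8s.io/pause", "3.9", "")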
Feb 9 19:03:42.536425 env[1401]: time="2024-02-09T19:03:42.536375807Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:42.541935 env[1401]: time="2024-02-09T19:03:42.541895592Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:42.545536 env[1401]: time="2024-02-09T19:03:42.545503548Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:42.549110 env[1401]: time="2024-02-09T19:03:42.549077603Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:42.549518 env[1401]: time="2024-02-09T19:03:42.549484609Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 19:03:44.732012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 19:03:44.732300 systemd[1]: Stopped kubelet.service. Feb 9 19:03:44.734905 systemd[1]: Started kubelet.service. Feb 9 19:03:44.813994 kubelet[2096]: E0209 19:03:44.813933 2096 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:03:44.815966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:03:44.816184 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:03:44.833509 systemd[1]: Stopped kubelet.service. Feb 9 19:03:44.847499 systemd[1]: Reloading. Feb 9 19:03:44.912217 /usr/lib/systemd/system-generators/torcx-generator[2127]: time="2024-02-09T19:03:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:03:44.912258 /usr/lib/systemd/system-generators/torcx-generator[2127]: time="2024-02-09T19:03:44Z" level=info msg="torcx already run" Feb 9 19:03:45.023787 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:03:45.023807 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:03:45.042112 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:03:45.128971 systemd[1]: Started kubelet.service. Feb 9 19:03:45.183859 kubelet[2195]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
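[Editor's note] After the runtime-endpoint problem is resolved, the kubelet (now PID 2195) finally starts, but warns here and just below that --pod-infra-container-image and --volume-plugin-dir are deprecated flags that belong in the config file. For the latter, the KubeletConfiguration equivalent would look roughly like this (a sketch; the path matches the Flexvolume directory the kubelet mentions further down in this log):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/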
Feb 9 19:03:45.183859 kubelet[2195]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:03:45.184364 kubelet[2195]: I0209 19:03:45.183917 2195 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:03:45.185313 kubelet[2195]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:03:45.185313 kubelet[2195]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:03:45.474381 kubelet[2195]: I0209 19:03:45.474271 2195 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:03:45.474381 kubelet[2195]: I0209 19:03:45.474297 2195 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:03:45.474800 kubelet[2195]: I0209 19:03:45.474780 2195 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:03:45.477811 kubelet[2195]: E0209 19:03:45.477788 2195 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.38:6443: connect: connection refused Feb 9 19:03:45.477978 kubelet[2195]: I0209 19:03:45.477965 2195 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:03:45.480707 kubelet[2195]: I0209 19:03:45.480679 2195 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:03:45.481082 kubelet[2195]: I0209 19:03:45.481064 2195 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:03:45.481178 kubelet[2195]: I0209 19:03:45.481157 2195 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:03:45.481302 kubelet[2195]: I0209 19:03:45.481194 2195 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:03:45.481302 kubelet[2195]: I0209 19:03:45.481211 2195 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:03:45.481393 kubelet[2195]: I0209 19:03:45.481332 2195 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:03:45.483934 kubelet[2195]: I0209 19:03:45.483918 2195 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:03:45.484014 kubelet[2195]: I0209 19:03:45.483942 2195 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:03:45.484014 kubelet[2195]: I0209 19:03:45.483972 2195 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:03:45.484014 kubelet[2195]: I0209 19:03:45.483990 2195 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:03:45.485911 kubelet[2195]: W0209 19:03:45.485868 2195 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-c71e69a144&limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 9 19:03:45.486071 kubelet[2195]: E0209 19:03:45.486022 2195 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-c71e69a144&limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 9 19:03:45.486260 kubelet[2195]: W0209 19:03:45.486228 2195 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused Feb 9 19:03:45.486348 
kubelet[2195]: E0209 19:03:45.486338 2195 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused
Feb 9 19:03:45.486810 kubelet[2195]: I0209 19:03:45.486797 2195 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 19:03:45.487212 kubelet[2195]: W0209 19:03:45.487198 2195 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 19:03:45.487777 kubelet[2195]: I0209 19:03:45.487761 2195 server.go:1186] "Started kubelet"
Feb 9 19:03:45.496463 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 9 19:03:45.496566 kubelet[2195]: I0209 19:03:45.496171 2195 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 19:03:45.496964 kubelet[2195]: I0209 19:03:45.496937 2195 server.go:451] "Adding debug handlers to kubelet server"
Feb 9 19:03:45.498669 kubelet[2195]: I0209 19:03:45.498653 2195 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 19:03:45.499223 kubelet[2195]: E0209 19:03:45.499203 2195 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 19:03:45.499302 kubelet[2195]: E0209 19:03:45.499228 2195 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 19:03:45.499424 kubelet[2195]: E0209 19:03:45.498585 2195 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-c71e69a144.17b24723423f8751", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-c71e69a144", UID:"ci-3510.3.2-a-c71e69a144", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-c71e69a144"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 487726417, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 487726417, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.38:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.38:6443: connect: connection refused'(may retry after sleeping)
Feb 9 19:03:45.502376 kubelet[2195]: I0209 19:03:45.502351 2195 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 9 19:03:45.502459 kubelet[2195]: I0209 19:03:45.502420 2195 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 19:03:45.502738 kubelet[2195]: W0209 19:03:45.502703 2195 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused
Feb 9 19:03:45.502811 kubelet[2195]: E0209 19:03:45.502741 2195 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused
Feb 9 19:03:45.503348 kubelet[2195]: E0209 19:03:45.503321 2195 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-c71e69a144?timeout=10s": dial tcp 10.200.8.38:6443: connect: connection refused
Feb 9 19:03:45.552637 kubelet[2195]: I0209 19:03:45.552602 2195 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 19:03:45.552637 kubelet[2195]: I0209 19:03:45.552622 2195 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 19:03:45.552637 kubelet[2195]: I0209 19:03:45.552641 2195 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:03:45.558799 kubelet[2195]: I0209 19:03:45.558769 2195 policy_none.go:49] "None policy: Start"
Feb 9 19:03:45.559437 kubelet[2195]: I0209 19:03:45.559417 2195 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 19:03:45.559437 kubelet[2195]: I0209 19:03:45.559441 2195 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 19:03:45.567276 kubelet[2195]: I0209 19:03:45.567251 2195 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 19:03:45.567486 kubelet[2195]: I0209 19:03:45.567469 2195 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 19:03:45.570350 kubelet[2195]: E0209 19:03:45.570325 2195 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-c71e69a144\" not found"
Feb 9 19:03:45.604563 kubelet[2195]: I0209 19:03:45.604535 2195 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:45.605084 kubelet[2195]: E0209 19:03:45.605062 2195 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.38:6443/api/v1/nodes\": dial tcp 10.200.8.38:6443: connect: connection refused" node="ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:45.650929 kubelet[2195]: I0209 19:03:45.650899 2195 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 19:03:45.686044 kubelet[2195]: I0209 19:03:45.686003 2195 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 19:03:45.686044 kubelet[2195]: I0209 19:03:45.686039 2195 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 9 19:03:45.686249 kubelet[2195]: I0209 19:03:45.686064 2195 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 9 19:03:45.686249 kubelet[2195]: E0209 19:03:45.686112 2195 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 9 19:03:45.686839 kubelet[2195]: W0209 19:03:45.686790 2195 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused
Feb 9 19:03:45.687007 kubelet[2195]: E0209 19:03:45.686995 2195 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused
Feb 9 19:03:45.704286 kubelet[2195]: E0209 19:03:45.704246 2195 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-c71e69a144?timeout=10s": dial tcp 10.200.8.38:6443: connect: connection refused
Feb 9 19:03:45.786520 kubelet[2195]: I0209 19:03:45.786463 2195 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:03:45.788275 kubelet[2195]: I0209 19:03:45.788254 2195 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:03:45.789988 kubelet[2195]: I0209 19:03:45.789965 2195 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:03:45.792453 kubelet[2195]: I0209 19:03:45.792433 2195 status_manager.go:698] "Failed to get status for pod" podUID=b540ae43f4483224e4ebbfc9b9e771c3 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c71e69a144" err="Get \"https://10.200.8.38:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-c71e69a144\": dial tcp 10.200.8.38:6443: connect: connection refused"
Feb 9 19:03:45.797653 kubelet[2195]: I0209 19:03:45.797634 2195 status_manager.go:698] "Failed to get status for pod" podUID=7ca7495994c4d7c9d65ab19f15e8d1bb pod="kube-system/kube-scheduler-ci-3510.3.2-a-c71e69a144" err="Get \"https://10.200.8.38:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-c71e69a144\": dial tcp 10.200.8.38:6443: connect: connection refused"
Feb 9 19:03:45.798148 kubelet[2195]: I0209 19:03:45.798125 2195 status_manager.go:698] "Failed to get status for pod" podUID=b18b83774a9e432056951571196a0ed3 pod="kube-system/kube-apiserver-ci-3510.3.2-a-c71e69a144" err="Get \"https://10.200.8.38:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-c71e69a144\": dial tcp 10.200.8.38:6443: connect: connection refused"
Feb 9 19:03:45.806141 kubelet[2195]: I0209 19:03:45.806125 2195 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:45.806481 kubelet[2195]: E0209 19:03:45.806457 2195 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.38:6443/api/v1/nodes\": dial tcp 10.200.8.38:6443: connect: connection refused" node="ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:45.905983 kubelet[2195]: I0209 19:03:45.905925 2195 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b540ae43f4483224e4ebbfc9b9e771c3-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-c71e69a144\" (UID: \"b540ae43f4483224e4ebbfc9b9e771c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:45.906205 kubelet[2195]: I0209 19:03:45.905996 2195 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b540ae43f4483224e4ebbfc9b9e771c3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-c71e69a144\" (UID: \"b540ae43f4483224e4ebbfc9b9e771c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:45.906205 kubelet[2195]: I0209 19:03:45.906054 2195 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b18b83774a9e432056951571196a0ed3-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-c71e69a144\" (UID: \"b18b83774a9e432056951571196a0ed3\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:45.906205 kubelet[2195]: I0209 19:03:45.906087 2195 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b540ae43f4483224e4ebbfc9b9e771c3-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-c71e69a144\" (UID: \"b540ae43f4483224e4ebbfc9b9e771c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:45.906205 kubelet[2195]: I0209 19:03:45.906121 2195 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b540ae43f4483224e4ebbfc9b9e771c3-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-c71e69a144\" (UID: \"b540ae43f4483224e4ebbfc9b9e771c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:45.906205 kubelet[2195]: I0209 19:03:45.906158 2195 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b18b83774a9e432056951571196a0ed3-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-c71e69a144\" (UID: \"b18b83774a9e432056951571196a0ed3\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:45.906476 kubelet[2195]: I0209 19:03:45.906192 2195 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b18b83774a9e432056951571196a0ed3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-c71e69a144\" (UID: \"b18b83774a9e432056951571196a0ed3\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:45.906476 kubelet[2195]: I0209 19:03:45.906228 2195 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b540ae43f4483224e4ebbfc9b9e771c3-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-c71e69a144\" (UID: \"b540ae43f4483224e4ebbfc9b9e771c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:45.906476 kubelet[2195]: I0209 19:03:45.906268 2195 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7ca7495994c4d7c9d65ab19f15e8d1bb-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-c71e69a144\" (UID: \"7ca7495994c4d7c9d65ab19f15e8d1bb\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:46.094626 env[1401]: time="2024-02-09T19:03:46.093548248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-c71e69a144,Uid:b540ae43f4483224e4ebbfc9b9e771c3,Namespace:kube-system,Attempt:0,}"
Feb 9 19:03:46.097906 env[1401]: time="2024-02-09T19:03:46.097613305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-c71e69a144,Uid:7ca7495994c4d7c9d65ab19f15e8d1bb,Namespace:kube-system,Attempt:0,}"
Feb 9 19:03:46.098636 env[1401]: time="2024-02-09T19:03:46.098532918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-c71e69a144,Uid:b18b83774a9e432056951571196a0ed3,Namespace:kube-system,Attempt:0,}"
Feb 9 19:03:46.105330 kubelet[2195]: E0209 19:03:46.105298 2195 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-c71e69a144?timeout=10s": dial tcp 10.200.8.38:6443: connect: connection refused
Feb 9 19:03:46.208128 kubelet[2195]: I0209 19:03:46.208102 2195 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:46.208563 kubelet[2195]: E0209 19:03:46.208414 2195 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.38:6443/api/v1/nodes\": dial tcp 10.200.8.38:6443: connect: connection refused" node="ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:46.337537 kubelet[2195]: W0209 19:03:46.337484 2195 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused
Feb 9 19:03:46.337537 kubelet[2195]: E0209 19:03:46.337538 2195 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused
Feb 9 19:03:46.582022 kubelet[2195]: W0209 19:03:46.581966 2195 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused
Feb 9 19:03:46.582022 kubelet[2195]: E0209 19:03:46.582024 2195 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused
Feb 9 19:03:46.602460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3327751250.mount: Deactivated successfully.
Feb 9 19:03:46.627800 env[1401]: time="2024-02-09T19:03:46.627748102Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:46.631509 env[1401]: time="2024-02-09T19:03:46.631470954Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:46.640733 env[1401]: time="2024-02-09T19:03:46.640701982Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:46.646242 env[1401]: time="2024-02-09T19:03:46.646202459Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:46.649480 env[1401]: time="2024-02-09T19:03:46.649447004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:46.655820 env[1401]: time="2024-02-09T19:03:46.655788293Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:46.659326 env[1401]: time="2024-02-09T19:03:46.659295142Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:46.661901 env[1401]: time="2024-02-09T19:03:46.661870078Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:46.665248 env[1401]: time="2024-02-09T19:03:46.665218024Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:46.667963 env[1401]: time="2024-02-09T19:03:46.667930962Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:46.672559 env[1401]: time="2024-02-09T19:03:46.672526926Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:46.675566 env[1401]: time="2024-02-09T19:03:46.675534868Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:03:46.760686 kubelet[2195]: W0209 19:03:46.760563 2195 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-c71e69a144&limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused
Feb 9 19:03:46.760686 kubelet[2195]: E0209 19:03:46.760656 2195 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-c71e69a144&limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused
Feb 9 19:03:46.772806 env[1401]: time="2024-02-09T19:03:46.770126188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:03:46.772806 env[1401]: time="2024-02-09T19:03:46.770170789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:03:46.772806 env[1401]: time="2024-02-09T19:03:46.770189789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:03:46.772806 env[1401]: time="2024-02-09T19:03:46.770392492Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/071c01048a38aebbc41295d9cca5092fb1589882ae7bc444661674c7454ba719 pid=2269 runtime=io.containerd.runc.v2
Feb 9 19:03:46.784082 env[1401]: time="2024-02-09T19:03:46.782990868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:03:46.784261 env[1401]: time="2024-02-09T19:03:46.784231485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:03:46.784391 env[1401]: time="2024-02-09T19:03:46.784360687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:03:46.784740 env[1401]: time="2024-02-09T19:03:46.784703492Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea9c715e08a9d1bcd352c16ac71aa09f865d4da67358ed53ec740f824b36aa71 pid=2288 runtime=io.containerd.runc.v2
Feb 9 19:03:46.836411 env[1401]: time="2024-02-09T19:03:46.835464800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:03:46.836639 env[1401]: time="2024-02-09T19:03:46.836606216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:03:46.836751 env[1401]: time="2024-02-09T19:03:46.836731218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:03:46.836998 env[1401]: time="2024-02-09T19:03:46.836961121Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8f8fdde45c381e4d3d46fb0a55ca7667424e8ad6fa46405b6e62f9f83aaab31 pid=2336 runtime=io.containerd.runc.v2
Feb 9 19:03:46.900343 env[1401]: time="2024-02-09T19:03:46.900297504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-c71e69a144,Uid:7ca7495994c4d7c9d65ab19f15e8d1bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea9c715e08a9d1bcd352c16ac71aa09f865d4da67358ed53ec740f824b36aa71\""
Feb 9 19:03:46.905718 kubelet[2195]: E0209 19:03:46.905684 2195 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-c71e69a144?timeout=10s": dial tcp 10.200.8.38:6443: connect: connection refused
Feb 9 19:03:46.906159 env[1401]: time="2024-02-09T19:03:46.906129486Z" level=info msg="CreateContainer within sandbox \"ea9c715e08a9d1bcd352c16ac71aa09f865d4da67358ed53ec740f824b36aa71\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 9 19:03:46.908585 env[1401]: time="2024-02-09T19:03:46.908546520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-c71e69a144,Uid:b540ae43f4483224e4ebbfc9b9e771c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"071c01048a38aebbc41295d9cca5092fb1589882ae7bc444661674c7454ba719\""
Feb 9 19:03:46.912357 env[1401]: time="2024-02-09T19:03:46.912319972Z" level=info msg="CreateContainer within sandbox \"071c01048a38aebbc41295d9cca5092fb1589882ae7bc444661674c7454ba719\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 9 19:03:46.921144 env[1401]: time="2024-02-09T19:03:46.921106995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-c71e69a144,Uid:b18b83774a9e432056951571196a0ed3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8f8fdde45c381e4d3d46fb0a55ca7667424e8ad6fa46405b6e62f9f83aaab31\""
Feb 9 19:03:46.923330 env[1401]: time="2024-02-09T19:03:46.923302525Z" level=info msg="CreateContainer within sandbox \"e8f8fdde45c381e4d3d46fb0a55ca7667424e8ad6fa46405b6e62f9f83aaab31\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 9 19:03:46.970012 env[1401]: time="2024-02-09T19:03:46.969964376Z" level=info msg="CreateContainer within sandbox \"ea9c715e08a9d1bcd352c16ac71aa09f865d4da67358ed53ec740f824b36aa71\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fb1c5e2cd761d5aa6a8eefaf288ac93f0f8c98ed268676baedc6c75919fba32a\""
Feb 9 19:03:46.970631 env[1401]: time="2024-02-09T19:03:46.970602685Z" level=info msg="StartContainer for \"fb1c5e2cd761d5aa6a8eefaf288ac93f0f8c98ed268676baedc6c75919fba32a\""
Feb 9 19:03:47.000053 env[1401]: time="2024-02-09T19:03:46.998368973Z" level=info msg="CreateContainer within sandbox \"e8f8fdde45c381e4d3d46fb0a55ca7667424e8ad6fa46405b6e62f9f83aaab31\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bcc70c2eeadb5e8cd5cb57948077759ace1805a916811647b8af1f8e28a7d7e4\""
Feb 9 19:03:47.001346 env[1401]: time="2024-02-09T19:03:47.001318414Z" level=info msg="StartContainer for \"bcc70c2eeadb5e8cd5cb57948077759ace1805a916811647b8af1f8e28a7d7e4\""
Feb 9 19:03:47.013748 kubelet[2195]: I0209 19:03:47.013387 2195 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:47.013748 kubelet[2195]: E0209 19:03:47.013715 2195 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.38:6443/api/v1/nodes\": dial tcp 10.200.8.38:6443: connect: connection refused" node="ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:47.029959 env[1401]: time="2024-02-09T19:03:47.029904603Z" level=info msg="CreateContainer within sandbox \"071c01048a38aebbc41295d9cca5092fb1589882ae7bc444661674c7454ba719\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bcea027d155786f35819ef259d7b0430191d879595193bfd8281ee213c23cdb1\""
Feb 9 19:03:47.032248 env[1401]: time="2024-02-09T19:03:47.032222035Z" level=info msg="StartContainer for \"bcea027d155786f35819ef259d7b0430191d879595193bfd8281ee213c23cdb1\""
Feb 9 19:03:47.035040 kubelet[2195]: W0209 19:03:47.034951 2195 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused
Feb 9 19:03:47.035040 kubelet[2195]: E0209 19:03:47.035006 2195 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.38:6443: connect: connection refused
Feb 9 19:03:47.065471 env[1401]: time="2024-02-09T19:03:47.065413787Z" level=info msg="StartContainer for \"fb1c5e2cd761d5aa6a8eefaf288ac93f0f8c98ed268676baedc6c75919fba32a\" returns successfully"
Feb 9 19:03:47.144506 env[1401]: time="2024-02-09T19:03:47.143519350Z" level=info msg="StartContainer for \"bcc70c2eeadb5e8cd5cb57948077759ace1805a916811647b8af1f8e28a7d7e4\" returns successfully"
Feb 9 19:03:47.156049 env[1401]: time="2024-02-09T19:03:47.144568864Z" level=info msg="StartContainer for \"bcea027d155786f35819ef259d7b0430191d879595193bfd8281ee213c23cdb1\" returns successfully"
Feb 9 19:03:48.616358 kubelet[2195]: I0209 19:03:48.616320 2195 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:49.822390 kubelet[2195]: E0209 19:03:49.822323 2195 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-c71e69a144\" not found" node="ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:49.854317 kubelet[2195]: I0209 19:03:49.854282 2195 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:49.897400 kubelet[2195]: E0209 19:03:49.897280 2195 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-c71e69a144.17b24723423f8751", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-c71e69a144", UID:"ci-3510.3.2-a-c71e69a144", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-c71e69a144"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 487726417, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 487726417, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:03:49.954266 kubelet[2195]: E0209 19:03:49.954157 2195 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-c71e69a144.17b2472342eee03d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-c71e69a144", UID:"ci-3510.3.2-a-c71e69a144", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-c71e69a144"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 499217981, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 499217981, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:03:50.008078 kubelet[2195]: E0209 19:03:50.007953 2195 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-c71e69a144.17b247234614e12c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-c71e69a144", UID:"ci-3510.3.2-a-c71e69a144", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-c71e69a144 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-c71e69a144"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 552040236, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 552040236, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:03:50.061664 kubelet[2195]: E0209 19:03:50.061569 2195 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-c71e69a144.17b247234614fd4c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-c71e69a144", UID:"ci-3510.3.2-a-c71e69a144", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-c71e69a144 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-c71e69a144"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 552047436, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 552047436, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:03:50.115915 kubelet[2195]: E0209 19:03:50.115723 2195 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-c71e69a144.17b2472346150cec", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-c71e69a144", UID:"ci-3510.3.2-a-c71e69a144", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-c71e69a144 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-c71e69a144"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 552051436, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 552051436, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:03:50.169944 kubelet[2195]: E0209 19:03:50.169844 2195 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-c71e69a144.17b247234711ebd5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-c71e69a144", UID:"ci-3510.3.2-a-c71e69a144", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-c71e69a144"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 568623573, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 568623573, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:03:50.225327 kubelet[2195]: E0209 19:03:50.225229 2195 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-c71e69a144.17b247234614e12c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-c71e69a144", UID:"ci-3510.3.2-a-c71e69a144", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-c71e69a144 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-c71e69a144"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 552040236, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 604476586, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:03:50.281313 kubelet[2195]: E0209 19:03:50.281204 2195 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-c71e69a144.17b247234614fd4c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-c71e69a144", UID:"ci-3510.3.2-a-c71e69a144", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-c71e69a144 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-c71e69a144"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 552047436, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 604485286, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:03:50.337958 kubelet[2195]: E0209 19:03:50.337840 2195 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-c71e69a144.17b2472346150cec", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-c71e69a144", UID:"ci-3510.3.2-a-c71e69a144", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-c71e69a144 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-c71e69a144"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 552051436, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 604490286, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:03:50.487862 kubelet[2195]: I0209 19:03:50.487822 2195 apiserver.go:52] "Watching apiserver"
Feb 9 19:03:50.503054 kubelet[2195]: I0209 19:03:50.503013 2195 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 9 19:03:50.538220 kubelet[2195]: I0209 19:03:50.538172 2195 reconciler.go:41] "Reconciler: start to sync state"
Feb 9 19:03:50.552718 kubelet[2195]: E0209 19:03:50.552627 2195 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-c71e69a144.17b247234614e12c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-c71e69a144", UID:"ci-3510.3.2-a-c71e69a144", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-c71e69a144 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-c71e69a144"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 552040236, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 788158313, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:03:50.958369 kubelet[2195]: E0209 19:03:50.958160 2195 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-c71e69a144.17b247234614fd4c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-c71e69a144", UID:"ci-3510.3.2-a-c71e69a144", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-c71e69a144 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-c71e69a144"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 552047436, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 45, 788168913, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:03:53.212219 systemd[1]: Reloading.
Feb 9 19:03:53.291478 /usr/lib/systemd/system-generators/torcx-generator[2522]: time="2024-02-09T19:03:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:03:53.291961 /usr/lib/systemd/system-generators/torcx-generator[2522]: time="2024-02-09T19:03:53Z" level=info msg="torcx already run"
Feb 9 19:03:53.422261 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:03:53.422280 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:03:53.440985 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:03:53.541911 systemd[1]: Stopping kubelet.service...
Feb 9 19:03:53.542716 kubelet[2195]: I0209 19:03:53.542431 2195 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:03:53.560121 systemd[1]: kubelet.service: Deactivated successfully.
Feb 9 19:03:53.561518 systemd[1]: Stopped kubelet.service.
Feb 9 19:03:53.564275 systemd[1]: Started kubelet.service.
Feb 9 19:03:53.642586 kubelet[2592]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:03:53.642586 kubelet[2592]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:03:53.643042 kubelet[2592]: I0209 19:03:53.642633 2592 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 19:03:53.643964 kubelet[2592]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:03:53.643964 kubelet[2592]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:03:53.647566 kubelet[2592]: I0209 19:03:53.647476 2592 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 9 19:03:53.647566 kubelet[2592]: I0209 19:03:53.647496 2592 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 19:03:53.647808 kubelet[2592]: I0209 19:03:53.647787 2592 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 9 19:03:53.648885 kubelet[2592]: I0209 19:03:53.648856 2592 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 9 19:03:53.649853 kubelet[2592]: I0209 19:03:53.649836 2592 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:03:53.652667 kubelet[2592]: I0209 19:03:53.652645 2592 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 19:03:53.653066 kubelet[2592]: I0209 19:03:53.653046 2592 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 19:03:53.653145 kubelet[2592]: I0209 19:03:53.653121 2592 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 19:03:53.653256 kubelet[2592]: I0209 19:03:53.653146 2592 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 19:03:53.653256 kubelet[2592]: I0209 19:03:53.653160 2592 container_manager_linux.go:308] "Creating device plugin manager"
Feb 9 19:03:53.653256 kubelet[2592]: I0209 19:03:53.653201 2592 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:03:53.656497 kubelet[2592]: I0209 19:03:53.656478 2592 kubelet.go:398] "Attempting to sync node with API server"
Feb 9 19:03:53.656591 kubelet[2592]: I0209 19:03:53.656515 2592 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 19:03:53.656591 kubelet[2592]: I0209 19:03:53.656541 2592 kubelet.go:297] "Adding apiserver pod source"
Feb 9 19:03:53.656591 kubelet[2592]: I0209 19:03:53.656557 2592 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 19:03:53.669582 kubelet[2592]: I0209 19:03:53.669550 2592 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 19:03:53.670318 kubelet[2592]: I0209 19:03:53.670303 2592 server.go:1186] "Started kubelet"
Feb 9 19:03:53.672484 kubelet[2592]: I0209 19:03:53.672467 2592 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 19:03:53.675638 kubelet[2592]: I0209 19:03:53.675611 2592 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 19:03:53.681348 kubelet[2592]: E0209 19:03:53.681329 2592 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 19:03:53.681520 kubelet[2592]: E0209 19:03:53.681507 2592 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 19:03:53.690151 kubelet[2592]: I0209 19:03:53.689971 2592 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 9 19:03:53.691876 kubelet[2592]: I0209 19:03:53.691852 2592 server.go:451] "Adding debug handlers to kubelet server"
Feb 9 19:03:53.693785 kubelet[2592]: I0209 19:03:53.693771 2592 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 19:03:53.715093 kubelet[2592]: I0209 19:03:53.715073 2592 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 19:03:53.753392 kubelet[2592]: I0209 19:03:53.753373 2592 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 19:03:53.753550 kubelet[2592]: I0209 19:03:53.753541 2592 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 9 19:03:53.753606 kubelet[2592]: I0209 19:03:53.753600 2592 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 9 19:03:53.755899 kubelet[2592]: E0209 19:03:53.755878 2592 kubelet.go:2137] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 9 19:03:53.795312 kubelet[2592]: I0209 19:03:53.793618 2592 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:53.799343 kubelet[2592]: I0209 19:03:53.799326 2592 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 19:03:53.799459 kubelet[2592]: I0209 19:03:53.799452 2592 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 19:03:53.799520 kubelet[2592]: I0209 19:03:53.799515 2592 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:03:53.799686 kubelet[2592]: I0209 19:03:53.799676 2592 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 9 19:03:53.799749 kubelet[2592]: I0209 19:03:53.799744 2592 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 9 19:03:53.799798 kubelet[2592]: I0209 19:03:53.799793 2592 policy_none.go:49] "None policy: Start"
Feb 9 19:03:53.800474 kubelet[2592]: I0209 19:03:53.800415 2592 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 19:03:53.800474 kubelet[2592]: I0209 19:03:53.800460 2592 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 19:03:53.800666 kubelet[2592]: I0209 19:03:53.800649 2592 state_mem.go:75] "Updated machine memory state"
Feb 9 19:03:53.803227 kubelet[2592]: I0209 19:03:53.802513 2592 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 19:03:53.803227 kubelet[2592]: I0209 19:03:53.802778 2592 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 19:03:53.806039 kubelet[2592]: I0209 19:03:53.806012 2592 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:53.806204 kubelet[2592]: I0209 19:03:53.806182 2592 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:53.856576 kubelet[2592]: I0209 19:03:53.856540 2592 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:03:53.856755 kubelet[2592]: I0209 19:03:53.856639 2592 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:03:53.856755 kubelet[2592]: I0209 19:03:53.856672 2592 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:03:53.868364 kubelet[2592]: E0209 19:03:53.868337 2592 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-c71e69a144\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:53.896669 kubelet[2592]: I0209 19:03:53.896635 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b18b83774a9e432056951571196a0ed3-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-c71e69a144\" (UID: \"b18b83774a9e432056951571196a0ed3\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:53.896669 kubelet[2592]: I0209 19:03:53.896683 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b540ae43f4483224e4ebbfc9b9e771c3-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-c71e69a144\" (UID: \"b540ae43f4483224e4ebbfc9b9e771c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:53.896890 kubelet[2592]: I0209 19:03:53.896712 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b540ae43f4483224e4ebbfc9b9e771c3-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-c71e69a144\" (UID: \"b540ae43f4483224e4ebbfc9b9e771c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:53.896890 kubelet[2592]: I0209 19:03:53.896739 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7ca7495994c4d7c9d65ab19f15e8d1bb-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-c71e69a144\" (UID: \"7ca7495994c4d7c9d65ab19f15e8d1bb\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:53.896890 kubelet[2592]: I0209 19:03:53.896766 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b18b83774a9e432056951571196a0ed3-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-c71e69a144\" (UID: \"b18b83774a9e432056951571196a0ed3\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:53.896890 kubelet[2592]: I0209 19:03:53.896794 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b18b83774a9e432056951571196a0ed3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-c71e69a144\" (UID: \"b18b83774a9e432056951571196a0ed3\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:53.896890 kubelet[2592]: I0209 19:03:53.896822 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b540ae43f4483224e4ebbfc9b9e771c3-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-c71e69a144\" (UID: \"b540ae43f4483224e4ebbfc9b9e771c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:53.897102 kubelet[2592]: I0209 19:03:53.896847 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b540ae43f4483224e4ebbfc9b9e771c3-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-c71e69a144\" (UID: \"b540ae43f4483224e4ebbfc9b9e771c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:53.897102 kubelet[2592]: I0209 19:03:53.896888 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b540ae43f4483224e4ebbfc9b9e771c3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-c71e69a144\" (UID: \"b540ae43f4483224e4ebbfc9b9e771c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:54.343007 sudo[2647]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 9 19:03:54.343317 sudo[2647]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 9 19:03:54.657893 kubelet[2592]: I0209 19:03:54.657798 2592 apiserver.go:52] "Watching apiserver"
Feb 9 19:03:54.694663 kubelet[2592]: I0209 19:03:54.694625 2592 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 9 19:03:54.701097 kubelet[2592]: I0209 19:03:54.701067 2592 reconciler.go:41] "Reconciler: start to sync state"
Feb 9 19:03:54.899189 sudo[2647]: pam_unix(sudo:session): session closed for user root
Feb 9 19:03:55.065251 kubelet[2592]: E0209 19:03:55.065217 2592 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-c71e69a144\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:55.265070 kubelet[2592]: E0209 19:03:55.265023 2592 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-c71e69a144\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c71e69a144"
Feb 9 19:03:55.496294 kubelet[2592]: I0209 19:03:55.496248 2592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-c71e69a144" podStartSLOduration=2.496197736 pod.CreationTimestamp="2024-02-09 19:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:03:55.496121835 +0000 UTC m=+1.923512577" watchObservedRunningTime="2024-02-09 19:03:55.496197736 +0000 UTC m=+1.923588478"
Feb 9 19:03:55.864704 kubelet[2592]: I0209 19:03:55.864589 2592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-c71e69a144" podStartSLOduration=2.864547988 pod.CreationTimestamp="2024-02-09 19:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:03:55.863061972 +0000 UTC m=+2.290452714" watchObservedRunningTime="2024-02-09 19:03:55.864547988 +0000 UTC m=+2.291938830"
Feb 9 19:03:56.662966 sudo[1775]: pam_unix(sudo:session): session closed for user root
Feb 9 19:03:56.760153 sshd[1771]: pam_unix(sshd:session): session closed for user core
Feb 9 19:03:56.763380 systemd[1]: sshd@4-10.200.8.38:22-10.200.12.6:51640.service: Deactivated successfully.
Feb 9 19:03:56.764807 systemd[1]: session-7.scope: Deactivated successfully.
Feb 9 19:03:56.764826 systemd-logind[1372]: Session 7 logged out. Waiting for processes to exit.
Feb 9 19:03:56.766524 systemd-logind[1372]: Removed session 7.
Feb 9 19:03:58.518719 kubelet[2592]: I0209 19:03:58.518678 2592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-c71e69a144" podStartSLOduration=6.518622882 pod.CreationTimestamp="2024-02-09 19:03:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:03:56.263004614 +0000 UTC m=+2.690395456" watchObservedRunningTime="2024-02-09 19:03:58.518622882 +0000 UTC m=+4.946013724"
Feb 9 19:04:05.649390 kubelet[2592]: I0209 19:04:05.649347 2592 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 9 19:04:05.649885 env[1401]: time="2024-02-09T19:04:05.649757793Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 9 19:04:05.650240 kubelet[2592]: I0209 19:04:05.649983 2592 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 9 19:04:06.100763 kubelet[2592]: I0209 19:04:06.100727 2592 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:04:06.116537 kubelet[2592]: I0209 19:04:06.116505 2592 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:04:06.180194 kubelet[2592]: I0209 19:04:06.180169 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-xtables-lock\") pod \"cilium-zdf7c\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") " pod="kube-system/cilium-zdf7c"
Feb 9 19:04:06.180424 kubelet[2592]: I0209 19:04:06.180413 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-cilium-cgroup\") pod \"cilium-zdf7c\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") " pod="kube-system/cilium-zdf7c"
Feb 9 19:04:06.180529 kubelet[2592]: I0209 19:04:06.180519 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-cni-path\") pod \"cilium-zdf7c\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") " pod="kube-system/cilium-zdf7c"
Feb 9 19:04:06.180623 kubelet[2592]: I0209 19:04:06.180613 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-cilium-run\") pod \"cilium-zdf7c\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") " pod="kube-system/cilium-zdf7c"
Feb 9 19:04:06.180713 kubelet[2592]: I0209 19:04:06.180703 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10a89e47-869c-4875-8ea2-6e794b5ec825-cilium-config-path\") pod \"cilium-zdf7c\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") " pod="kube-system/cilium-zdf7c"
Feb 9 19:04:06.180817 kubelet[2592]: I0209 19:04:06.180805 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bk66\" (UniqueName: \"kubernetes.io/projected/c08c12c7-703d-49ba-959b-4551031bbc49-kube-api-access-9bk66\") pod \"kube-proxy-6ptbc\" (UID: \"c08c12c7-703d-49ba-959b-4551031bbc49\") " pod="kube-system/kube-proxy-6ptbc"
Feb 9 19:04:06.180897 kubelet[2592]: I0209 19:04:06.180887 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c08c12c7-703d-49ba-959b-4551031bbc49-kube-proxy\") pod \"kube-proxy-6ptbc\" (UID: \"c08c12c7-703d-49ba-959b-4551031bbc49\") " pod="kube-system/kube-proxy-6ptbc"
Feb 9 19:04:06.180979 kubelet[2592]: I0209 19:04:06.180971 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c08c12c7-703d-49ba-959b-4551031bbc49-lib-modules\") pod \"kube-proxy-6ptbc\" (UID: \"c08c12c7-703d-49ba-959b-4551031bbc49\") " pod="kube-system/kube-proxy-6ptbc"
Feb 9 19:04:06.181087 kubelet[2592]: I0209 19:04:06.181077 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-etc-cni-netd\") pod \"cilium-zdf7c\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") " pod="kube-system/cilium-zdf7c"
Feb 9 19:04:06.181185 kubelet[2592]: I0209 19:04:06.181177 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsq8w\" (UniqueName: \"kubernetes.io/projected/10a89e47-869c-4875-8ea2-6e794b5ec825-kube-api-access-jsq8w\") pod \"cilium-zdf7c\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") " pod="kube-system/cilium-zdf7c"
Feb 9 19:04:06.181278 kubelet[2592]: I0209 19:04:06.181269 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-host-proc-sys-net\") pod \"cilium-zdf7c\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") " pod="kube-system/cilium-zdf7c"
Feb 9 19:04:06.181369 kubelet[2592]: I0209 19:04:06.181360 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c08c12c7-703d-49ba-959b-4551031bbc49-xtables-lock\") pod \"kube-proxy-6ptbc\" (UID: \"c08c12c7-703d-49ba-959b-4551031bbc49\") " pod="kube-system/kube-proxy-6ptbc"
Feb 9 19:04:06.181454 kubelet[2592]: I0209 19:04:06.181444 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10a89e47-869c-4875-8ea2-6e794b5ec825-clustermesh-secrets\") pod \"cilium-zdf7c\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") " pod="kube-system/cilium-zdf7c"
Feb 9 19:04:06.181535 kubelet[2592]: I0209 19:04:06.181525 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-hostproc\") pod \"cilium-zdf7c\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") " pod="kube-system/cilium-zdf7c"
Feb 9 19:04:06.181620 kubelet[2592]: I0209 19:04:06.181607 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-lib-modules\") pod \"cilium-zdf7c\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") " pod="kube-system/cilium-zdf7c"
Feb 9 19:04:06.181703 kubelet[2592]: I0209 19:04:06.181694 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10a89e47-869c-4875-8ea2-6e794b5ec825-hubble-tls\") pod \"cilium-zdf7c\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") " pod="kube-system/cilium-zdf7c"
Feb 9 19:04:06.181788 kubelet[2592]: I0209 19:04:06.181777 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-bpf-maps\") pod \"cilium-zdf7c\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") " pod="kube-system/cilium-zdf7c"
Feb 9 19:04:06.181871 kubelet[2592]: I0209 19:04:06.181861 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-host-proc-sys-kernel\") pod \"cilium-zdf7c\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") " pod="kube-system/cilium-zdf7c"
Feb 9 19:04:06.368780 kubelet[2592]: I0209 19:04:06.368657 2592 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:04:06.405803 env[1401]: time="2024-02-09T19:04:06.405757376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6ptbc,Uid:c08c12c7-703d-49ba-959b-4551031bbc49,Namespace:kube-system,Attempt:0,}"
Feb 9 19:04:06.422377 env[1401]: time="2024-02-09T19:04:06.422340823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zdf7c,Uid:10a89e47-869c-4875-8ea2-6e794b5ec825,Namespace:kube-system,Attempt:0,}"
Feb 9 19:04:06.434525 env[1401]: time="2024-02-09T19:04:06.434458531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:04:06.434525 env[1401]: time="2024-02-09T19:04:06.434495931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:04:06.434714 env[1401]: time="2024-02-09T19:04:06.434509631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:04:06.434989 env[1401]: time="2024-02-09T19:04:06.434927635Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ace03f2404c9188bb2e944c6bc4a7431d1afba923e7735736bf29737bee31490 pid=2700 runtime=io.containerd.runc.v2
Feb 9 19:04:06.466447 env[1401]: time="2024-02-09T19:04:06.466259113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:04:06.466447 env[1401]: time="2024-02-09T19:04:06.466350714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:04:06.466447 env[1401]: time="2024-02-09T19:04:06.466383115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:04:06.466730 env[1401]: time="2024-02-09T19:04:06.466540116Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742 pid=2733 runtime=io.containerd.runc.v2
Feb 9 19:04:06.486072 kubelet[2592]: I0209 19:04:06.483586 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxsdn\" (UniqueName: \"kubernetes.io/projected/350a2779-d40b-4487-8264-f825d2fcf428-kube-api-access-jxsdn\") pod \"cilium-operator-f59cbd8c6-m8zpk\" (UID: \"350a2779-d40b-4487-8264-f825d2fcf428\") " pod="kube-system/cilium-operator-f59cbd8c6-m8zpk"
Feb 9 19:04:06.486072 kubelet[2592]: I0209 19:04:06.483640 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/350a2779-d40b-4487-8264-f825d2fcf428-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-m8zpk\" (UID: \"350a2779-d40b-4487-8264-f825d2fcf428\") " pod="kube-system/cilium-operator-f59cbd8c6-m8zpk"
Feb 9 19:04:06.515665 env[1401]: time="2024-02-09T19:04:06.515613352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6ptbc,Uid:c08c12c7-703d-49ba-959b-4551031bbc49,Namespace:kube-system,Attempt:0,} returns sandbox id \"ace03f2404c9188bb2e944c6bc4a7431d1afba923e7735736bf29737bee31490\""
Feb 9 19:04:06.520747 env[1401]: time="2024-02-09T19:04:06.520709097Z" level=info msg="CreateContainer within sandbox \"ace03f2404c9188bb2e944c6bc4a7431d1afba923e7735736bf29737bee31490\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 9 19:04:06.521698 env[1401]: time="2024-02-09T19:04:06.521665406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zdf7c,Uid:10a89e47-869c-4875-8ea2-6e794b5ec825,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\""
Feb 9 19:04:06.524909 env[1401]: time="2024-02-09T19:04:06.524874134Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 9 19:04:06.557775 env[1401]: time="2024-02-09T19:04:06.557738126Z" level=info msg="CreateContainer within sandbox \"ace03f2404c9188bb2e944c6bc4a7431d1afba923e7735736bf29737bee31490\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d0778c0ec1f9cd2be8b5bdaea5b42312e13048070a9505e64a45d7cb30b4aca8\""
Feb 9 19:04:06.560259 env[1401]: time="2024-02-09T19:04:06.560226849Z" level=info msg="StartContainer for \"d0778c0ec1f9cd2be8b5bdaea5b42312e13048070a9505e64a45d7cb30b4aca8\""
Feb 9 19:04:06.624411 env[1401]: time="2024-02-09T19:04:06.623495711Z" level=info msg="StartContainer for \"d0778c0ec1f9cd2be8b5bdaea5b42312e13048070a9505e64a45d7cb30b4aca8\" returns successfully"
Feb 9 19:04:06.974013 env[1401]: time="2024-02-09T19:04:06.973871024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-m8zpk,Uid:350a2779-d40b-4487-8264-f825d2fcf428,Namespace:kube-system,Attempt:0,}"
Feb 9 19:04:07.002850 env[1401]: time="2024-02-09T19:04:07.002542979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:04:07.002850 env[1401]: time="2024-02-09T19:04:07.002614680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:04:07.002850 env[1401]: time="2024-02-09T19:04:07.002625880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:04:07.003140 env[1401]: time="2024-02-09T19:04:07.002926982Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce8b2c51ba78af58d1df7e3d8e37375860c0b92280eafd59186bd2baefff1295 pid=2922 runtime=io.containerd.runc.v2
Feb 9 19:04:07.058540 env[1401]: time="2024-02-09T19:04:07.058504066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-m8zpk,Uid:350a2779-d40b-4487-8264-f825d2fcf428,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce8b2c51ba78af58d1df7e3d8e37375860c0b92280eafd59186bd2baefff1295\""
Feb 9 19:04:12.145445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3266329294.mount: Deactivated successfully.
Feb 9 19:04:13.790856 kubelet[2592]: I0209 19:04:13.790817 2592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6ptbc" podStartSLOduration=7.790775545 pod.CreationTimestamp="2024-02-09 19:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:07.312002174 +0000 UTC m=+13.739393016" watchObservedRunningTime="2024-02-09 19:04:13.790775545 +0000 UTC m=+20.218166387"
Feb 9 19:04:14.856300 env[1401]: time="2024-02-09T19:04:14.856252978Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:04:14.864097 env[1401]: time="2024-02-09T19:04:14.864058738Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:04:14.868401 env[1401]: time="2024-02-09T19:04:14.868366471Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:04:14.868842 env[1401]: time="2024-02-09T19:04:14.868810774Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 9 19:04:14.870241 env[1401]: time="2024-02-09T19:04:14.870213085Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 9 19:04:14.872044 env[1401]: time="2024-02-09T19:04:14.872003398Z" level=info msg="CreateContainer within sandbox \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 19:04:14.906264 env[1401]: time="2024-02-09T19:04:14.906224359Z" level=info msg="CreateContainer within sandbox \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"984fda7f7721b61ac971a6d9afd93d4c4f04d5e9749b859ac70cd245489b69c3\""
Feb 9 19:04:14.908741 env[1401]: time="2024-02-09T19:04:14.906871763Z" level=info msg="StartContainer for \"984fda7f7721b61ac971a6d9afd93d4c4f04d5e9749b859ac70cd245489b69c3\""
Feb 9 19:04:14.960830 env[1401]: time="2024-02-09T19:04:14.960774973Z" level=info msg="StartContainer for \"984fda7f7721b61ac971a6d9afd93d4c4f04d5e9749b859ac70cd245489b69c3\" returns successfully"
Feb 9 19:04:15.893011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-984fda7f7721b61ac971a6d9afd93d4c4f04d5e9749b859ac70cd245489b69c3-rootfs.mount: Deactivated successfully.
Feb 9 19:04:19.279059 env[1401]: time="2024-02-09T19:04:19.278990293Z" level=info msg="shim disconnected" id=984fda7f7721b61ac971a6d9afd93d4c4f04d5e9749b859ac70cd245489b69c3
Feb 9 19:04:19.279559 env[1401]: time="2024-02-09T19:04:19.279521297Z" level=warning msg="cleaning up after shim disconnected" id=984fda7f7721b61ac971a6d9afd93d4c4f04d5e9749b859ac70cd245489b69c3 namespace=k8s.io
Feb 9 19:04:19.279559 env[1401]: time="2024-02-09T19:04:19.279547197Z" level=info msg="cleaning up dead shim"
Feb 9 19:04:19.288415 env[1401]: time="2024-02-09T19:04:19.288372259Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3006 runtime=io.containerd.runc.v2\n"
Feb 9 19:04:19.873124 env[1401]: time="2024-02-09T19:04:19.873086225Z" level=info msg="CreateContainer within sandbox \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 19:04:20.053440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1096955311.mount: Deactivated successfully.
Feb 9 19:04:20.106233 env[1401]: time="2024-02-09T19:04:20.106177434Z" level=info msg="CreateContainer within sandbox \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2b6be50f0d7208d4fb91d4d68f3e192999b5413e53491bd51c1f62a56b169144\""
Feb 9 19:04:20.108502 env[1401]: time="2024-02-09T19:04:20.107048840Z" level=info msg="StartContainer for \"2b6be50f0d7208d4fb91d4d68f3e192999b5413e53491bd51c1f62a56b169144\""
Feb 9 19:04:20.210144 env[1401]: time="2024-02-09T19:04:20.210045044Z" level=info msg="StartContainer for \"2b6be50f0d7208d4fb91d4d68f3e192999b5413e53491bd51c1f62a56b169144\" returns successfully"
Feb 9 19:04:20.211846 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:04:20.212585 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:04:20.214658 systemd[1]: Stopping systemd-sysctl.service...
Feb 9 19:04:20.218294 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:04:20.233000 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:04:20.264747 env[1401]: time="2024-02-09T19:04:20.264694018Z" level=info msg="shim disconnected" id=2b6be50f0d7208d4fb91d4d68f3e192999b5413e53491bd51c1f62a56b169144
Feb 9 19:04:20.264747 env[1401]: time="2024-02-09T19:04:20.264742718Z" level=warning msg="cleaning up after shim disconnected" id=2b6be50f0d7208d4fb91d4d68f3e192999b5413e53491bd51c1f62a56b169144 namespace=k8s.io
Feb 9 19:04:20.264747 env[1401]: time="2024-02-09T19:04:20.264754118Z" level=info msg="cleaning up dead shim"
Feb 9 19:04:20.280198 env[1401]: time="2024-02-09T19:04:20.280151524Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3071 runtime=io.containerd.runc.v2\n"
Feb 9 19:04:20.838941 env[1401]: time="2024-02-09T19:04:20.837964637Z" level=info msg="CreateContainer within sandbox \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 19:04:20.849274 env[1401]: time="2024-02-09T19:04:20.849227814Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:04:20.862060 env[1401]: time="2024-02-09T19:04:20.861283897Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:04:20.870101 env[1401]: time="2024-02-09T19:04:20.870064357Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:04:20.870412 env[1401]: time="2024-02-09T19:04:20.870382459Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 9 19:04:20.872774 env[1401]: time="2024-02-09T19:04:20.872742575Z" level=info msg="CreateContainer within sandbox \"ce8b2c51ba78af58d1df7e3d8e37375860c0b92280eafd59186bd2baefff1295\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 9 19:04:20.896401 env[1401]: time="2024-02-09T19:04:20.896356537Z" level=info msg="CreateContainer within sandbox \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bb4f8c9c604276a0bb87566dbea61b1aa1311377a819eb8382ff59d529c0d2ce\""
Feb 9 19:04:20.898491 env[1401]: time="2024-02-09T19:04:20.897209642Z" level=info msg="StartContainer for \"bb4f8c9c604276a0bb87566dbea61b1aa1311377a819eb8382ff59d529c0d2ce\""
Feb 9 19:04:20.911582 env[1401]: time="2024-02-09T19:04:20.911540440Z" level=info msg="CreateContainer within sandbox \"ce8b2c51ba78af58d1df7e3d8e37375860c0b92280eafd59186bd2baefff1295\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118\""
Feb 9 19:04:20.914085 env[1401]: time="2024-02-09T19:04:20.912430347Z" level=info msg="StartContainer for \"9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118\""
Feb 9 19:04:20.991059 env[1401]: time="2024-02-09T19:04:20.990873383Z" level=info msg="StartContainer for \"bb4f8c9c604276a0bb87566dbea61b1aa1311377a819eb8382ff59d529c0d2ce\" returns successfully"
Feb 9 19:04:20.998054 env[1401]: time="2024-02-09T19:04:20.997694729Z" level=info msg="StartContainer for \"9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118\" returns successfully"
Feb 9 19:04:21.047255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b6be50f0d7208d4fb91d4d68f3e192999b5413e53491bd51c1f62a56b169144-rootfs.mount: Deactivated successfully.
Feb 9 19:04:21.436873 env[1401]: time="2024-02-09T19:04:21.433470259Z" level=info msg="shim disconnected" id=bb4f8c9c604276a0bb87566dbea61b1aa1311377a819eb8382ff59d529c0d2ce
Feb 9 19:04:21.436873 env[1401]: time="2024-02-09T19:04:21.433521860Z" level=warning msg="cleaning up after shim disconnected" id=bb4f8c9c604276a0bb87566dbea61b1aa1311377a819eb8382ff59d529c0d2ce namespace=k8s.io
Feb 9 19:04:21.436873 env[1401]: time="2024-02-09T19:04:21.433534660Z" level=info msg="cleaning up dead shim"
Feb 9 19:04:21.450160 env[1401]: time="2024-02-09T19:04:21.450099271Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3164 runtime=io.containerd.runc.v2\n"
Feb 9 19:04:21.856344 env[1401]: time="2024-02-09T19:04:21.856300902Z" level=info msg="CreateContainer within sandbox \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 19:04:21.892778 env[1401]: time="2024-02-09T19:04:21.892726247Z" level=info msg="CreateContainer within sandbox \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"283e4ace11ee7a10bea647b4273863ae59e732bbffdd557e369fe6b734294739\""
Feb 9 19:04:21.893498 env[1401]: time="2024-02-09T19:04:21.893462052Z" level=info msg="StartContainer for \"283e4ace11ee7a10bea647b4273863ae59e732bbffdd557e369fe6b734294739\""
Feb 9 19:04:22.037396 kubelet[2592]: I0209 19:04:22.037357 2592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-m8zpk" podStartSLOduration=-9.223372020817463e+09 pod.CreationTimestamp="2024-02-09 19:04:06 +0000 UTC" firstStartedPulling="2024-02-09 19:04:07.059759177 +0000 UTC m=+13.487149919" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:22.037190514 +0000 UTC m=+28.464581256" watchObservedRunningTime="2024-02-09 19:04:22.037312215 +0000 UTC m=+28.464703057"
Feb 9 19:04:22.072663 env[1401]: time="2024-02-09T19:04:22.072606248Z" level=info msg="StartContainer for \"283e4ace11ee7a10bea647b4273863ae59e732bbffdd557e369fe6b734294739\" returns successfully"
Feb 9 19:04:22.106643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-283e4ace11ee7a10bea647b4273863ae59e732bbffdd557e369fe6b734294739-rootfs.mount: Deactivated successfully.
Feb 9 19:04:22.121320 env[1401]: time="2024-02-09T19:04:22.121269770Z" level=info msg="shim disconnected" id=283e4ace11ee7a10bea647b4273863ae59e732bbffdd557e369fe6b734294739
Feb 9 19:04:22.121666 env[1401]: time="2024-02-09T19:04:22.121640972Z" level=warning msg="cleaning up after shim disconnected" id=283e4ace11ee7a10bea647b4273863ae59e732bbffdd557e369fe6b734294739 namespace=k8s.io
Feb 9 19:04:22.121802 env[1401]: time="2024-02-09T19:04:22.121786573Z" level=info msg="cleaning up dead shim"
Feb 9 19:04:22.143345 env[1401]: time="2024-02-09T19:04:22.143305515Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3220 runtime=io.containerd.runc.v2\n"
Feb 9 19:04:22.866774 env[1401]: time="2024-02-09T19:04:22.866727698Z" level=info msg="CreateContainer within sandbox \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 19:04:22.911190 env[1401]: time="2024-02-09T19:04:22.911141592Z" level=info msg="CreateContainer within sandbox \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85\""
Feb 9 19:04:22.912058 env[1401]: time="2024-02-09T19:04:22.911794396Z" level=info msg="StartContainer for \"28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85\""
Feb 9 19:04:22.973052 env[1401]: time="2024-02-09T19:04:22.967933768Z" level=info msg="StartContainer for \"28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85\" returns successfully"
Feb 9 19:04:23.078154 kubelet[2592]: I0209 19:04:23.077203 2592 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 19:04:23.118203 kubelet[2592]: I0209 19:04:23.118084 2592 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:04:23.127270 kubelet[2592]: I0209 19:04:23.127234 2592 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:04:23.207650 kubelet[2592]: I0209 19:04:23.207613 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxbnr\" (UniqueName: \"kubernetes.io/projected/decb5516-a569-4398-a0ef-09332dc36be6-kube-api-access-dxbnr\") pod \"coredns-787d4945fb-x4l7h\" (UID: \"decb5516-a569-4398-a0ef-09332dc36be6\") " pod="kube-system/coredns-787d4945fb-x4l7h"
Feb 9 19:04:23.207835 kubelet[2592]: I0209 19:04:23.207666 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42phd\" (UniqueName: \"kubernetes.io/projected/9071b537-93b1-4ca9-a40c-6c6846e9b117-kube-api-access-42phd\") pod \"coredns-787d4945fb-rdqcx\" (UID: \"9071b537-93b1-4ca9-a40c-6c6846e9b117\") " pod="kube-system/coredns-787d4945fb-rdqcx"
Feb 9 19:04:23.207835 kubelet[2592]: I0209 19:04:23.207706 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9071b537-93b1-4ca9-a40c-6c6846e9b117-config-volume\") pod \"coredns-787d4945fb-rdqcx\" (UID: \"9071b537-93b1-4ca9-a40c-6c6846e9b117\") " pod="kube-system/coredns-787d4945fb-rdqcx"
Feb 9 19:04:23.207835 kubelet[2592]: I0209 19:04:23.207734 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/decb5516-a569-4398-a0ef-09332dc36be6-config-volume\") pod \"coredns-787d4945fb-x4l7h\" (UID: \"decb5516-a569-4398-a0ef-09332dc36be6\") " pod="kube-system/coredns-787d4945fb-x4l7h"
Feb 9 19:04:23.424079 env[1401]: time="2024-02-09T19:04:23.423914537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-rdqcx,Uid:9071b537-93b1-4ca9-a40c-6c6846e9b117,Namespace:kube-system,Attempt:0,}"
Feb 9 19:04:23.433186 env[1401]: time="2024-02-09T19:04:23.431208684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-x4l7h,Uid:decb5516-a569-4398-a0ef-09332dc36be6,Namespace:kube-system,Attempt:0,}"
Feb 9 19:04:23.917707 kubelet[2592]: I0209 19:04:23.917677 2592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zdf7c" podStartSLOduration=-9.22337201893714e+09 pod.CreationTimestamp="2024-02-09 19:04:06 +0000 UTC" firstStartedPulling="2024-02-09 19:04:06.522531614 +0000 UTC m=+12.949922356" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:23.911295507 +0000 UTC m=+30.338686349" watchObservedRunningTime="2024-02-09 19:04:23.917635548 +0000 UTC m=+30.345026690"
Feb 9 19:04:25.232094 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 19:04:25.236552 systemd-networkd[1557]: cilium_host: Link UP
Feb 9 19:04:25.236725 systemd-networkd[1557]: cilium_net: Link UP
Feb 9 19:04:25.236731 systemd-networkd[1557]: cilium_net: Gained carrier
Feb 9 19:04:25.236911 systemd-networkd[1557]: cilium_host: Gained carrier
Feb 9 19:04:25.237198 systemd-networkd[1557]: cilium_host: Gained IPv6LL
Feb 9 19:04:25.407075 systemd-networkd[1557]: cilium_vxlan: Link UP
Feb 9 19:04:25.407086 systemd-networkd[1557]: cilium_vxlan: Gained carrier
Feb 9 19:04:25.678175 systemd-networkd[1557]: cilium_net: Gained IPv6LL
Feb 9 19:04:25.689055 kernel: NET: Registered PF_ALG protocol family
Feb 9 19:04:26.370477 systemd-networkd[1557]: lxc_health: Link UP
Feb 9 19:04:26.397061 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:04:26.399632 systemd-networkd[1557]: lxc_health: Gained carrier
Feb 9 19:04:27.005068 systemd-networkd[1557]: lxc4a3d1b66d6be: Link UP
Feb 9 19:04:27.014262 systemd-networkd[1557]: lxc66d45d58b34f: Link UP
Feb 9 19:04:27.022611 kernel: eth0: renamed from tmp790af
Feb 9 19:04:27.036948 kernel: eth0: renamed from tmpb683d
Feb 9 19:04:27.064215 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc66d45d58b34f: link becomes ready
Feb 9 19:04:27.069499 systemd-networkd[1557]: cilium_vxlan: Gained IPv6LL
Feb 9 19:04:27.069904 systemd-networkd[1557]: lxc66d45d58b34f: Gained carrier
Feb 9 19:04:27.086523 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4a3d1b66d6be: link becomes ready
Feb 9 19:04:27.086254 systemd-networkd[1557]: lxc4a3d1b66d6be: Gained carrier
Feb 9 19:04:27.614301 systemd-networkd[1557]: lxc_health: Gained IPv6LL
Feb 9 19:04:28.958291 systemd-networkd[1557]: lxc4a3d1b66d6be: Gained IPv6LL
Feb 9 19:04:29.086286 systemd-networkd[1557]: lxc66d45d58b34f: Gained IPv6LL
Feb 9 19:04:30.642444 env[1401]: time="2024-02-09T19:04:30.642349787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:04:30.642937 env[1401]: time="2024-02-09T19:04:30.642476688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:04:30.642937 env[1401]: time="2024-02-09T19:04:30.642506288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:04:30.642937 env[1401]: time="2024-02-09T19:04:30.642676289Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b683dcb35dca4c8124d043154c41b9415d8495e3f4ce4d741d47426f0f7fbb65 pid=3768 runtime=io.containerd.runc.v2
Feb 9 19:04:30.672124 env[1401]: time="2024-02-09T19:04:30.672056461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:04:30.672374 env[1401]: time="2024-02-09T19:04:30.672341462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:04:30.672556 env[1401]: time="2024-02-09T19:04:30.672523464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:04:30.672883 env[1401]: time="2024-02-09T19:04:30.672838465Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/790af88178db16194afdf88d6a5a6d55a53ce710a48946cbef1fae25e2fe1c73 pid=3788 runtime=io.containerd.runc.v2
Feb 9 19:04:30.780353 env[1401]: time="2024-02-09T19:04:30.780300793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-rdqcx,Uid:9071b537-93b1-4ca9-a40c-6c6846e9b117,Namespace:kube-system,Attempt:0,} returns sandbox id \"b683dcb35dca4c8124d043154c41b9415d8495e3f4ce4d741d47426f0f7fbb65\""
Feb 9 19:04:30.783439 env[1401]: time="2024-02-09T19:04:30.783389711Z" level=info msg="CreateContainer within sandbox \"b683dcb35dca4c8124d043154c41b9415d8495e3f4ce4d741d47426f0f7fbb65\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 19:04:30.817467 env[1401]: time="2024-02-09T19:04:30.817410810Z" level=info msg="CreateContainer within sandbox \"b683dcb35dca4c8124d043154c41b9415d8495e3f4ce4d741d47426f0f7fbb65\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5cfb198a74f7dba16b2731d886af168cddcb5c34d04a9ac9dd88041a88410b56\""
Feb 9 19:04:30.820815 env[1401]: time="2024-02-09T19:04:30.820780829Z" level=info msg="StartContainer for \"5cfb198a74f7dba16b2731d886af168cddcb5c34d04a9ac9dd88041a88410b56\""
Feb 9 19:04:30.834831 env[1401]: time="2024-02-09T19:04:30.834785211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-x4l7h,Uid:decb5516-a569-4398-a0ef-09332dc36be6,Namespace:kube-system,Attempt:0,} returns sandbox id \"790af88178db16194afdf88d6a5a6d55a53ce710a48946cbef1fae25e2fe1c73\""
Feb 9 19:04:30.840909 env[1401]: time="2024-02-09T19:04:30.840868847Z" level=info msg="CreateContainer within sandbox \"790af88178db16194afdf88d6a5a6d55a53ce710a48946cbef1fae25e2fe1c73\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 19:04:30.884573 env[1401]: time="2024-02-09T19:04:30.884525902Z" level=info msg="CreateContainer within sandbox \"790af88178db16194afdf88d6a5a6d55a53ce710a48946cbef1fae25e2fe1c73\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d79f1b7dd7c881d37ee1d87cd45a8dffd81a18404df30ce083275b67f687cabc\""
Feb 9 19:04:30.885656 env[1401]: time="2024-02-09T19:04:30.885624908Z" level=info msg="StartContainer for \"d79f1b7dd7c881d37ee1d87cd45a8dffd81a18404df30ce083275b67f687cabc\""
Feb 9 19:04:30.947927 env[1401]: time="2024-02-09T19:04:30.947119967Z" level=info msg="StartContainer for \"5cfb198a74f7dba16b2731d886af168cddcb5c34d04a9ac9dd88041a88410b56\" returns successfully"
Feb 9 19:04:31.011586 env[1401]: time="2024-02-09T19:04:31.011540543Z" level=info msg="StartContainer for \"d79f1b7dd7c881d37ee1d87cd45a8dffd81a18404df30ce083275b67f687cabc\" returns successfully"
Feb 9 19:04:31.927003 kubelet[2592]: I0209 19:04:31.926948 2592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-x4l7h" podStartSLOduration=25.926905813 pod.CreationTimestamp="2024-02-09 19:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:31.913434335 +0000 UTC m=+38.340825077" watchObservedRunningTime="2024-02-09 19:04:31.926905813 +0000 UTC m=+38.354296655"
Feb 9 19:04:31.982120 kubelet[2592]: I0209 19:04:31.982085 2592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-rdqcx" podStartSLOduration=25.98201813 pod.CreationTimestamp="2024-02-09 19:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:31.963884026 +0000 UTC m=+38.391274868" watchObservedRunningTime="2024-02-09 19:04:31.98201813 +0000 UTC m=+38.409408872"
Feb 9 19:05:59.528707 systemd[1]: Started sshd@5-10.200.8.38:22-10.200.12.6:54498.service.
Feb 9 19:06:00.154425 sshd[4005]: Accepted publickey for core from 10.200.12.6 port 54498 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:00.156157 sshd[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:00.161094 systemd[1]: Started session-8.scope.
Feb 9 19:06:00.161843 systemd-logind[1372]: New session 8 of user core.
Feb 9 19:06:00.717879 sshd[4005]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:00.723375 systemd[1]: sshd@5-10.200.8.38:22-10.200.12.6:54498.service: Deactivated successfully.
Feb 9 19:06:00.725596 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 19:06:00.726497 systemd-logind[1372]: Session 8 logged out. Waiting for processes to exit.
Feb 9 19:06:00.728430 systemd-logind[1372]: Removed session 8.
Feb 9 19:06:05.821788 systemd[1]: Started sshd@6-10.200.8.38:22-10.200.12.6:54510.service.
Feb 9 19:06:06.449862 sshd[4019]: Accepted publickey for core from 10.200.12.6 port 54510 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:06.451335 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:06.456395 systemd[1]: Started session-9.scope.
Feb 9 19:06:06.456642 systemd-logind[1372]: New session 9 of user core.
Feb 9 19:06:06.943395 sshd[4019]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:06.946365 systemd[1]: sshd@6-10.200.8.38:22-10.200.12.6:54510.service: Deactivated successfully.
Feb 9 19:06:06.947783 systemd[1]: session-9.scope: Deactivated successfully.
Feb 9 19:06:06.947797 systemd-logind[1372]: Session 9 logged out. Waiting for processes to exit.
Feb 9 19:06:06.949213 systemd-logind[1372]: Removed session 9.
Feb 9 19:06:12.046471 systemd[1]: Started sshd@7-10.200.8.38:22-10.200.12.6:37262.service.
Feb 9 19:06:12.672157 sshd[4034]: Accepted publickey for core from 10.200.12.6 port 37262 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:12.673819 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:12.678777 systemd[1]: Started session-10.scope.
Feb 9 19:06:12.679534 systemd-logind[1372]: New session 10 of user core.
Feb 9 19:06:13.170593 sshd[4034]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:13.173683 systemd[1]: sshd@7-10.200.8.38:22-10.200.12.6:37262.service: Deactivated successfully.
Feb 9 19:06:13.175219 systemd[1]: session-10.scope: Deactivated successfully.
Feb 9 19:06:13.175235 systemd-logind[1372]: Session 10 logged out. Waiting for processes to exit.
Feb 9 19:06:13.176461 systemd-logind[1372]: Removed session 10.
Feb 9 19:06:18.277281 systemd[1]: Started sshd@8-10.200.8.38:22-10.200.12.6:39606.service.
Feb 9 19:06:18.903506 sshd[4047]: Accepted publickey for core from 10.200.12.6 port 39606 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:18.905186 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:18.911348 systemd[1]: Started session-11.scope.
Feb 9 19:06:18.912928 systemd-logind[1372]: New session 11 of user core.
Feb 9 19:06:19.395648 sshd[4047]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:19.398743 systemd[1]: sshd@8-10.200.8.38:22-10.200.12.6:39606.service: Deactivated successfully.
Feb 9 19:06:19.400507 systemd-logind[1372]: Session 11 logged out. Waiting for processes to exit.
Feb 9 19:06:19.400586 systemd[1]: session-11.scope: Deactivated successfully.
Feb 9 19:06:19.401962 systemd-logind[1372]: Removed session 11.
Feb 9 19:06:24.499981 systemd[1]: Started sshd@9-10.200.8.38:22-10.200.12.6:39616.service.
Feb 9 19:06:25.122650 sshd[4061]: Accepted publickey for core from 10.200.12.6 port 39616 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:25.124260 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:25.129224 systemd[1]: Started session-12.scope.
Feb 9 19:06:25.129469 systemd-logind[1372]: New session 12 of user core.
Feb 9 19:06:25.622286 sshd[4061]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:25.625420 systemd[1]: sshd@9-10.200.8.38:22-10.200.12.6:39616.service: Deactivated successfully.
Feb 9 19:06:25.627498 systemd[1]: session-12.scope: Deactivated successfully.
Feb 9 19:06:25.628250 systemd-logind[1372]: Session 12 logged out. Waiting for processes to exit.
Feb 9 19:06:25.629384 systemd-logind[1372]: Removed session 12.
Feb 9 19:06:25.725988 systemd[1]: Started sshd@10-10.200.8.38:22-10.200.12.6:39630.service.
Feb 9 19:06:26.364290 sshd[4076]: Accepted publickey for core from 10.200.12.6 port 39630 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:26.366122 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:26.371477 systemd[1]: Started session-13.scope.
Feb 9 19:06:26.371725 systemd-logind[1372]: New session 13 of user core.
Feb 9 19:06:27.600493 sshd[4076]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:27.604274 systemd[1]: sshd@10-10.200.8.38:22-10.200.12.6:39630.service: Deactivated successfully.
Feb 9 19:06:27.606023 systemd[1]: session-13.scope: Deactivated successfully.
Feb 9 19:06:27.606075 systemd-logind[1372]: Session 13 logged out. Waiting for processes to exit.
Feb 9 19:06:27.607955 systemd-logind[1372]: Removed session 13.
Feb 9 19:06:27.705882 systemd[1]: Started sshd@11-10.200.8.38:22-10.200.12.6:39992.service.
Feb 9 19:06:28.337380 sshd[4087]: Accepted publickey for core from 10.200.12.6 port 39992 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:28.339531 sshd[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:28.344096 systemd-logind[1372]: New session 14 of user core.
Feb 9 19:06:28.345051 systemd[1]: Started session-14.scope.
Feb 9 19:06:28.854179 sshd[4087]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:28.857754 systemd[1]: sshd@11-10.200.8.38:22-10.200.12.6:39992.service: Deactivated successfully.
Feb 9 19:06:28.859277 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 19:06:28.859299 systemd-logind[1372]: Session 14 logged out. Waiting for processes to exit.
Feb 9 19:06:28.860818 systemd-logind[1372]: Removed session 14.
Feb 9 19:06:33.941785 systemd[1]: Started sshd@12-10.200.8.38:22-10.200.12.6:40004.service.
Feb 9 19:06:34.568059 sshd[4100]: Accepted publickey for core from 10.200.12.6 port 40004 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:34.569730 sshd[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:34.575761 systemd[1]: Started session-15.scope.
Feb 9 19:06:34.576898 systemd-logind[1372]: New session 15 of user core.
Feb 9 19:06:35.067288 sshd[4100]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:35.070845 systemd-logind[1372]: Session 15 logged out. Waiting for processes to exit.
Feb 9 19:06:35.071310 systemd[1]: sshd@12-10.200.8.38:22-10.200.12.6:40004.service: Deactivated successfully.
Feb 9 19:06:35.072607 systemd[1]: session-15.scope: Deactivated successfully.
Feb 9 19:06:35.073540 systemd-logind[1372]: Removed session 15.
Feb 9 19:06:40.172338 systemd[1]: Started sshd@13-10.200.8.38:22-10.200.12.6:35814.service.
Feb 9 19:06:40.800927 sshd[4115]: Accepted publickey for core from 10.200.12.6 port 35814 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:40.802488 sshd[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:40.807408 systemd[1]: Started session-16.scope.
Feb 9 19:06:40.808770 systemd-logind[1372]: New session 16 of user core.
Feb 9 19:06:41.300462 sshd[4115]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:41.303686 systemd[1]: sshd@13-10.200.8.38:22-10.200.12.6:35814.service: Deactivated successfully.
Feb 9 19:06:41.305416 systemd[1]: session-16.scope: Deactivated successfully.
Feb 9 19:06:41.306200 systemd-logind[1372]: Session 16 logged out. Waiting for processes to exit.
Feb 9 19:06:41.307604 systemd-logind[1372]: Removed session 16.
Feb 9 19:06:41.403656 systemd[1]: Started sshd@14-10.200.8.38:22-10.200.12.6:35826.service.
Feb 9 19:06:42.027530 sshd[4128]: Accepted publickey for core from 10.200.12.6 port 35826 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:42.028939 sshd[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:42.034353 systemd[1]: Started session-17.scope.
Feb 9 19:06:42.034594 systemd-logind[1372]: New session 17 of user core.
Feb 9 19:06:42.585898 sshd[4128]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:42.589457 systemd[1]: sshd@14-10.200.8.38:22-10.200.12.6:35826.service: Deactivated successfully.
Feb 9 19:06:42.591735 systemd[1]: session-17.scope: Deactivated successfully.
Feb 9 19:06:42.591789 systemd-logind[1372]: Session 17 logged out. Waiting for processes to exit.
Feb 9 19:06:42.596309 systemd-logind[1372]: Removed session 17.
Feb 9 19:06:42.689695 systemd[1]: Started sshd@15-10.200.8.38:22-10.200.12.6:35828.service.
Feb 9 19:06:43.315269 sshd[4139]: Accepted publickey for core from 10.200.12.6 port 35828 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:43.316936 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:43.322452 systemd[1]: Started session-18.scope.
Feb 9 19:06:43.323396 systemd-logind[1372]: New session 18 of user core.
Feb 9 19:06:44.811586 sshd[4139]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:44.815009 systemd[1]: sshd@15-10.200.8.38:22-10.200.12.6:35828.service: Deactivated successfully.
Feb 9 19:06:44.816586 systemd[1]: session-18.scope: Deactivated successfully.
Feb 9 19:06:44.816627 systemd-logind[1372]: Session 18 logged out. Waiting for processes to exit.
Feb 9 19:06:44.818938 systemd-logind[1372]: Removed session 18.
Feb 9 19:06:44.915240 systemd[1]: Started sshd@16-10.200.8.38:22-10.200.12.6:35842.service.
Feb 9 19:06:45.533325 sshd[4205]: Accepted publickey for core from 10.200.12.6 port 35842 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:45.535078 sshd[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:45.540221 systemd[1]: Started session-19.scope.
Feb 9 19:06:45.540623 systemd-logind[1372]: New session 19 of user core.
Feb 9 19:06:46.134191 sshd[4205]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:46.137477 systemd[1]: sshd@16-10.200.8.38:22-10.200.12.6:35842.service: Deactivated successfully.
Feb 9 19:06:46.139654 systemd[1]: session-19.scope: Deactivated successfully.
Feb 9 19:06:46.140382 systemd-logind[1372]: Session 19 logged out. Waiting for processes to exit.
Feb 9 19:06:46.141486 systemd-logind[1372]: Removed session 19.
Feb 9 19:06:46.237843 systemd[1]: Started sshd@17-10.200.8.38:22-10.200.12.6:35846.service.
Feb 9 19:06:46.862686 sshd[4217]: Accepted publickey for core from 10.200.12.6 port 35846 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:46.864161 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:46.869477 systemd[1]: Started session-20.scope.
Feb 9 19:06:46.869927 systemd-logind[1372]: New session 20 of user core.
Feb 9 19:06:47.360957 sshd[4217]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:47.364218 systemd[1]: sshd@17-10.200.8.38:22-10.200.12.6:35846.service: Deactivated successfully.
Feb 9 19:06:47.367349 systemd[1]: session-20.scope: Deactivated successfully.
Feb 9 19:06:47.367407 systemd-logind[1372]: Session 20 logged out. Waiting for processes to exit.
Feb 9 19:06:47.369336 systemd-logind[1372]: Removed session 20.
Feb 9 19:06:52.475613 systemd[1]: Started sshd@18-10.200.8.38:22-10.200.12.6:55960.service.
Feb 9 19:06:53.176822 sshd[4258]: Accepted publickey for core from 10.200.12.6 port 55960 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:53.178519 sshd[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:53.183972 systemd[1]: Started session-21.scope.
Feb 9 19:06:53.184249 systemd-logind[1372]: New session 21 of user core.
Feb 9 19:06:53.714361 sshd[4258]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:53.717280 systemd[1]: sshd@18-10.200.8.38:22-10.200.12.6:55960.service: Deactivated successfully.
Feb 9 19:06:53.718582 systemd-logind[1372]: Session 21 logged out. Waiting for processes to exit.
Feb 9 19:06:53.718668 systemd[1]: session-21.scope: Deactivated successfully.
Feb 9 19:06:53.720230 systemd-logind[1372]: Removed session 21.
Feb 9 19:06:58.817790 systemd[1]: Started sshd@19-10.200.8.38:22-10.200.12.6:35836.service.
Feb 9 19:06:59.515574 sshd[4273]: Accepted publickey for core from 10.200.12.6 port 35836 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:59.517258 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:59.521793 systemd-logind[1372]: New session 22 of user core.
Feb 9 19:06:59.522448 systemd[1]: Started session-22.scope.
Feb 9 19:07:00.025335 sshd[4273]: pam_unix(sshd:session): session closed for user core
Feb 9 19:07:00.030075 systemd[1]: sshd@19-10.200.8.38:22-10.200.12.6:35836.service: Deactivated successfully.
Feb 9 19:07:00.032026 systemd[1]: session-22.scope: Deactivated successfully.
Feb 9 19:07:00.032952 systemd-logind[1372]: Session 22 logged out. Waiting for processes to exit.
Feb 9 19:07:00.034134 systemd-logind[1372]: Removed session 22.
Feb 9 19:07:05.122026 systemd[1]: Started sshd@20-10.200.8.38:22-10.200.12.6:35844.service.
Feb 9 19:07:05.743886 sshd[4286]: Accepted publickey for core from 10.200.12.6 port 35844 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:07:05.745672 sshd[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:07:05.750089 systemd-logind[1372]: New session 23 of user core.
Feb 9 19:07:05.750708 systemd[1]: Started session-23.scope.
Feb 9 19:07:06.241579 sshd[4286]: pam_unix(sshd:session): session closed for user core
Feb 9 19:07:06.244318 systemd[1]: sshd@20-10.200.8.38:22-10.200.12.6:35844.service: Deactivated successfully.
Feb 9 19:07:06.245843 systemd[1]: session-23.scope: Deactivated successfully.
Feb 9 19:07:06.245860 systemd-logind[1372]: Session 23 logged out. Waiting for processes to exit.
Feb 9 19:07:06.247251 systemd-logind[1372]: Removed session 23.
Feb 9 19:07:06.346338 systemd[1]: Started sshd@21-10.200.8.38:22-10.200.12.6:35850.service.
Feb 9 19:07:07.002904 sshd[4298]: Accepted publickey for core from 10.200.12.6 port 35850 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:07:07.004604 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:07:07.010091 systemd-logind[1372]: New session 24 of user core.
Feb 9 19:07:07.010747 systemd[1]: Started session-24.scope.
Feb 9 19:07:08.643530 env[1401]: time="2024-02-09T19:07:08.643478739Z" level=info msg="StopContainer for \"9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118\" with timeout 30 (s)"
Feb 9 19:07:08.644108 env[1401]: time="2024-02-09T19:07:08.644046545Z" level=info msg="Stop container \"9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118\" with signal terminated"
Feb 9 19:07:08.683096 env[1401]: time="2024-02-09T19:07:08.682998933Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 19:07:08.689675 env[1401]: time="2024-02-09T19:07:08.689599399Z" level=info msg="StopContainer for \"28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85\" with timeout 1 (s)"
Feb 9 19:07:08.690347 env[1401]: time="2024-02-09T19:07:08.690312306Z" level=info msg="Stop container \"28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85\" with signal terminated"
Feb 9 19:07:08.693448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118-rootfs.mount: Deactivated successfully.
Feb 9 19:07:08.701830 systemd-networkd[1557]: lxc_health: Link DOWN
Feb 9 19:07:08.701838 systemd-networkd[1557]: lxc_health: Lost carrier
Feb 9 19:07:08.722401 env[1401]: time="2024-02-09T19:07:08.722348625Z" level=info msg="shim disconnected" id=9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118
Feb 9 19:07:08.722401 env[1401]: time="2024-02-09T19:07:08.722398625Z" level=warning msg="cleaning up after shim disconnected" id=9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118 namespace=k8s.io
Feb 9 19:07:08.722595 env[1401]: time="2024-02-09T19:07:08.722411625Z" level=info msg="cleaning up dead shim"
Feb 9 19:07:08.737436 env[1401]: time="2024-02-09T19:07:08.737396975Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4358 runtime=io.containerd.runc.v2\n"
Feb 9 19:07:08.741922 env[1401]: time="2024-02-09T19:07:08.741883919Z" level=info msg="StopContainer for \"9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118\" returns successfully"
Feb 9 19:07:08.742776 env[1401]: time="2024-02-09T19:07:08.742746928Z" level=info msg="StopPodSandbox for \"ce8b2c51ba78af58d1df7e3d8e37375860c0b92280eafd59186bd2baefff1295\""
Feb 9 19:07:08.742996 env[1401]: time="2024-02-09T19:07:08.742974930Z" level=info msg="Container to stop \"9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:07:08.745807 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce8b2c51ba78af58d1df7e3d8e37375860c0b92280eafd59186bd2baefff1295-shm.mount: Deactivated successfully.
Feb 9 19:07:08.749468 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85-rootfs.mount: Deactivated successfully.
Feb 9 19:07:08.765182 env[1401]: time="2024-02-09T19:07:08.765137951Z" level=info msg="shim disconnected" id=28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85
Feb 9 19:07:08.765441 env[1401]: time="2024-02-09T19:07:08.765416954Z" level=warning msg="cleaning up after shim disconnected" id=28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85 namespace=k8s.io
Feb 9 19:07:08.765555 env[1401]: time="2024-02-09T19:07:08.765540955Z" level=info msg="cleaning up dead shim"
Feb 9 19:07:08.792656 env[1401]: time="2024-02-09T19:07:08.789948598Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4392 runtime=io.containerd.runc.v2\n"
Feb 9 19:07:08.792467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce8b2c51ba78af58d1df7e3d8e37375860c0b92280eafd59186bd2baefff1295-rootfs.mount: Deactivated successfully.
Feb 9 19:07:08.797889 env[1401]: time="2024-02-09T19:07:08.797855977Z" level=info msg="StopContainer for \"28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85\" returns successfully"
Feb 9 19:07:08.798379 env[1401]: time="2024-02-09T19:07:08.798350982Z" level=info msg="StopPodSandbox for \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\""
Feb 9 19:07:08.799112 env[1401]: time="2024-02-09T19:07:08.798429983Z" level=info msg="Container to stop \"bb4f8c9c604276a0bb87566dbea61b1aa1311377a819eb8382ff59d529c0d2ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:07:08.799112 env[1401]: time="2024-02-09T19:07:08.798452083Z" level=info msg="Container to stop \"2b6be50f0d7208d4fb91d4d68f3e192999b5413e53491bd51c1f62a56b169144\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:07:08.799112 env[1401]: time="2024-02-09T19:07:08.798470483Z" level=info msg="Container to stop \"984fda7f7721b61ac971a6d9afd93d4c4f04d5e9749b859ac70cd245489b69c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:07:08.799112 env[1401]: time="2024-02-09T19:07:08.798486583Z" level=info msg="Container to stop \"283e4ace11ee7a10bea647b4273863ae59e732bbffdd557e369fe6b734294739\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:07:08.799112 env[1401]: time="2024-02-09T19:07:08.798496583Z" level=info msg="Container to stop \"28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:07:08.808507 env[1401]: time="2024-02-09T19:07:08.808461983Z" level=info msg="shim disconnected" id=ce8b2c51ba78af58d1df7e3d8e37375860c0b92280eafd59186bd2baefff1295
Feb 9 19:07:08.808680 env[1401]: time="2024-02-09T19:07:08.808512983Z" level=warning msg="cleaning up after shim disconnected" id=ce8b2c51ba78af58d1df7e3d8e37375860c0b92280eafd59186bd2baefff1295 namespace=k8s.io
Feb 9 19:07:08.808680 env[1401]: time="2024-02-09T19:07:08.808524983Z" level=info msg="cleaning up dead shim"
Feb 9 19:07:08.823754 env[1401]: time="2024-02-09T19:07:08.823713935Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4428 runtime=io.containerd.runc.v2\n"
Feb 9 19:07:08.824139 env[1401]: time="2024-02-09T19:07:08.824103938Z" level=info msg="TearDown network for sandbox \"ce8b2c51ba78af58d1df7e3d8e37375860c0b92280eafd59186bd2baefff1295\" successfully"
Feb 9 19:07:08.824260 env[1401]: time="2024-02-09T19:07:08.824138939Z" level=info msg="StopPodSandbox for \"ce8b2c51ba78af58d1df7e3d8e37375860c0b92280eafd59186bd2baefff1295\" returns successfully"
Feb 9 19:07:08.837880 env[1401]: time="2024-02-09T19:07:08.837838975Z" level=info msg="shim disconnected" id=d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742
Feb 9 19:07:08.838120 env[1401]: time="2024-02-09T19:07:08.838090978Z" level=warning msg="cleaning up after shim disconnected" id=d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742 namespace=k8s.io
Feb 9 19:07:08.838120 env[1401]: time="2024-02-09T19:07:08.838115978Z" level=info msg="cleaning up dead shim"
Feb 9 19:07:08.845198 kubelet[2592]: E0209 19:07:08.845174 2592 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 19:07:08.846220 env[1401]: time="2024-02-09T19:07:08.846191958Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4455 runtime=io.containerd.runc.v2\n"
Feb 9 19:07:08.846489 env[1401]: time="2024-02-09T19:07:08.846462961Z" level=info msg="TearDown network for sandbox \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\" successfully"
Feb 9 19:07:08.846564 env[1401]: time="2024-02-09T19:07:08.846490961Z" level=info msg="StopPodSandbox for \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\" returns successfully"
Feb 9 19:07:08.916588 kubelet[2592]: I0209 19:07:08.914534 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/350a2779-d40b-4487-8264-f825d2fcf428-cilium-config-path\") pod \"350a2779-d40b-4487-8264-f825d2fcf428\" (UID: \"350a2779-d40b-4487-8264-f825d2fcf428\") "
Feb 9 19:07:08.916588 kubelet[2592]: I0209 19:07:08.914595 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxsdn\" (UniqueName: \"kubernetes.io/projected/350a2779-d40b-4487-8264-f825d2fcf428-kube-api-access-jxsdn\") pod \"350a2779-d40b-4487-8264-f825d2fcf428\" (UID: \"350a2779-d40b-4487-8264-f825d2fcf428\") "
Feb 9 19:07:08.916588 kubelet[2592]: W0209 19:07:08.914779 2592 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/350a2779-d40b-4487-8264-f825d2fcf428/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 19:07:08.919525 kubelet[2592]: I0209 19:07:08.919475 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/350a2779-d40b-4487-8264-f825d2fcf428-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "350a2779-d40b-4487-8264-f825d2fcf428" (UID: "350a2779-d40b-4487-8264-f825d2fcf428"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 19:07:08.925860 kubelet[2592]: I0209 19:07:08.925827 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/350a2779-d40b-4487-8264-f825d2fcf428-kube-api-access-jxsdn" (OuterVolumeSpecName: "kube-api-access-jxsdn") pod "350a2779-d40b-4487-8264-f825d2fcf428" (UID: "350a2779-d40b-4487-8264-f825d2fcf428"). InnerVolumeSpecName "kube-api-access-jxsdn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:07:09.015530 kubelet[2592]: I0209 19:07:09.015480 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-lib-modules\") pod \"10a89e47-869c-4875-8ea2-6e794b5ec825\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") "
Feb 9 19:07:09.015530 kubelet[2592]: I0209 19:07:09.015533 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-cilium-run\") pod \"10a89e47-869c-4875-8ea2-6e794b5ec825\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") "
Feb 9 19:07:09.015779 kubelet[2592]: I0209 19:07:09.015569 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsq8w\" (UniqueName: \"kubernetes.io/projected/10a89e47-869c-4875-8ea2-6e794b5ec825-kube-api-access-jsq8w\") pod \"10a89e47-869c-4875-8ea2-6e794b5ec825\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") "
Feb 9 19:07:09.015779 kubelet[2592]: I0209 19:07:09.015594 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-xtables-lock\") pod \"10a89e47-869c-4875-8ea2-6e794b5ec825\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") "
Feb 9 19:07:09.015779 kubelet[2592]: I0209 19:07:09.015618 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10a89e47-869c-4875-8ea2-6e794b5ec825-hubble-tls\") pod \"10a89e47-869c-4875-8ea2-6e794b5ec825\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") "
Feb 9 19:07:09.015779 kubelet[2592]: I0209 19:07:09.015666 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-host-proc-sys-net\") pod \"10a89e47-869c-4875-8ea2-6e794b5ec825\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") "
Feb 9 19:07:09.015779 kubelet[2592]: I0209 19:07:09.015692 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-hostproc\") pod \"10a89e47-869c-4875-8ea2-6e794b5ec825\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") "
Feb 9 19:07:09.015779 kubelet[2592]: I0209 19:07:09.015713 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-bpf-maps\") pod \"10a89e47-869c-4875-8ea2-6e794b5ec825\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") "
Feb 9 19:07:09.016024 kubelet[2592]: I0209 19:07:09.015737 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-host-proc-sys-kernel\") pod \"10a89e47-869c-4875-8ea2-6e794b5ec825\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") "
Feb 9 19:07:09.016024 kubelet[2592]: I0209 19:07:09.015766 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-cilium-cgroup\") pod \"10a89e47-869c-4875-8ea2-6e794b5ec825\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") "
Feb 9 19:07:09.016024 kubelet[2592]: I0209 19:07:09.015794 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10a89e47-869c-4875-8ea2-6e794b5ec825-cilium-config-path\") pod \"10a89e47-869c-4875-8ea2-6e794b5ec825\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") "
Feb 9 19:07:09.016024 kubelet[2592]: I0209 19:07:09.015821 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-etc-cni-netd\") pod \"10a89e47-869c-4875-8ea2-6e794b5ec825\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") "
Feb 9 19:07:09.016024 kubelet[2592]: I0209 19:07:09.015851 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10a89e47-869c-4875-8ea2-6e794b5ec825-clustermesh-secrets\") pod \"10a89e47-869c-4875-8ea2-6e794b5ec825\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") "
Feb 9 19:07:09.016024 kubelet[2592]: I0209 19:07:09.015878 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-cni-path\") pod \"10a89e47-869c-4875-8ea2-6e794b5ec825\" (UID: \"10a89e47-869c-4875-8ea2-6e794b5ec825\") "
Feb 9 19:07:09.016397 kubelet[2592]: I0209 19:07:09.015925 2592 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/350a2779-d40b-4487-8264-f825d2fcf428-cilium-config-path\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\""
Feb 9 19:07:09.016397 kubelet[2592]: I0209 19:07:09.015944 2592 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-jxsdn\" (UniqueName: \"kubernetes.io/projected/350a2779-d40b-4487-8264-f825d2fcf428-kube-api-access-jxsdn\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\""
Feb 9 19:07:09.016397 kubelet[2592]: I0209 19:07:09.015986 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-cni-path" (OuterVolumeSpecName: "cni-path") pod "10a89e47-869c-4875-8ea2-6e794b5ec825" (UID: "10a89e47-869c-4875-8ea2-6e794b5ec825"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:07:09.016397 kubelet[2592]: I0209 19:07:09.016052 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "10a89e47-869c-4875-8ea2-6e794b5ec825" (UID: "10a89e47-869c-4875-8ea2-6e794b5ec825"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:07:09.016397 kubelet[2592]: I0209 19:07:09.016089 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "10a89e47-869c-4875-8ea2-6e794b5ec825" (UID: "10a89e47-869c-4875-8ea2-6e794b5ec825"). InnerVolumeSpecName "cilium-run".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:09.016753 kubelet[2592]: I0209 19:07:09.016660 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "10a89e47-869c-4875-8ea2-6e794b5ec825" (UID: "10a89e47-869c-4875-8ea2-6e794b5ec825"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:09.016895 kubelet[2592]: I0209 19:07:09.016874 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "10a89e47-869c-4875-8ea2-6e794b5ec825" (UID: "10a89e47-869c-4875-8ea2-6e794b5ec825"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:09.018083 kubelet[2592]: I0209 19:07:09.018054 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "10a89e47-869c-4875-8ea2-6e794b5ec825" (UID: "10a89e47-869c-4875-8ea2-6e794b5ec825"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:09.018738 kubelet[2592]: I0209 19:07:09.018254 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "10a89e47-869c-4875-8ea2-6e794b5ec825" (UID: "10a89e47-869c-4875-8ea2-6e794b5ec825"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:09.018859 kubelet[2592]: I0209 19:07:09.018276 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-hostproc" (OuterVolumeSpecName: "hostproc") pod "10a89e47-869c-4875-8ea2-6e794b5ec825" (UID: "10a89e47-869c-4875-8ea2-6e794b5ec825"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:09.018949 kubelet[2592]: W0209 19:07:09.018429 2592 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/10a89e47-869c-4875-8ea2-6e794b5ec825/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:07:09.019986 kubelet[2592]: I0209 19:07:09.018465 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "10a89e47-869c-4875-8ea2-6e794b5ec825" (UID: "10a89e47-869c-4875-8ea2-6e794b5ec825"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:09.020126 kubelet[2592]: I0209 19:07:09.018484 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "10a89e47-869c-4875-8ea2-6e794b5ec825" (UID: "10a89e47-869c-4875-8ea2-6e794b5ec825"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:09.020282 kubelet[2592]: I0209 19:07:09.020259 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a89e47-869c-4875-8ea2-6e794b5ec825-kube-api-access-jsq8w" (OuterVolumeSpecName: "kube-api-access-jsq8w") pod "10a89e47-869c-4875-8ea2-6e794b5ec825" (UID: "10a89e47-869c-4875-8ea2-6e794b5ec825"). InnerVolumeSpecName "kube-api-access-jsq8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:07:09.021828 kubelet[2592]: I0209 19:07:09.021803 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10a89e47-869c-4875-8ea2-6e794b5ec825-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "10a89e47-869c-4875-8ea2-6e794b5ec825" (UID: "10a89e47-869c-4875-8ea2-6e794b5ec825"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:07:09.023318 kubelet[2592]: I0209 19:07:09.023290 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a89e47-869c-4875-8ea2-6e794b5ec825-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "10a89e47-869c-4875-8ea2-6e794b5ec825" (UID: "10a89e47-869c-4875-8ea2-6e794b5ec825"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:07:09.024713 kubelet[2592]: I0209 19:07:09.024632 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a89e47-869c-4875-8ea2-6e794b5ec825-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "10a89e47-869c-4875-8ea2-6e794b5ec825" (UID: "10a89e47-869c-4875-8ea2-6e794b5ec825"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:07:09.117146 kubelet[2592]: I0209 19:07:09.117096 2592 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-xtables-lock\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:09.117146 kubelet[2592]: I0209 19:07:09.117139 2592 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-host-proc-sys-net\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:09.117146 kubelet[2592]: I0209 19:07:09.117157 2592 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10a89e47-869c-4875-8ea2-6e794b5ec825-hubble-tls\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:09.117432 kubelet[2592]: I0209 19:07:09.117173 2592 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-cilium-cgroup\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:09.117432 kubelet[2592]: I0209 19:07:09.117190 2592 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-hostproc\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:09.117432 kubelet[2592]: I0209 19:07:09.117204 2592 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-bpf-maps\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:09.117432 kubelet[2592]: I0209 19:07:09.117218 2592 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:09.117432 kubelet[2592]: I0209 19:07:09.117234 2592 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-cni-path\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:09.117432 kubelet[2592]: I0209 19:07:09.117250 2592 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10a89e47-869c-4875-8ea2-6e794b5ec825-cilium-config-path\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:09.117432 kubelet[2592]: I0209 19:07:09.117267 2592 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-etc-cni-netd\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:09.117432 kubelet[2592]: I0209 19:07:09.117284 2592 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10a89e47-869c-4875-8ea2-6e794b5ec825-clustermesh-secrets\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:09.117677 kubelet[2592]: I0209 19:07:09.117302 2592 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-lib-modules\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:09.117677 kubelet[2592]: I0209 19:07:09.117318 2592 reconciler_common.go:295] "Volume detached 
for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10a89e47-869c-4875-8ea2-6e794b5ec825-cilium-run\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:09.117677 kubelet[2592]: I0209 19:07:09.117334 2592 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-jsq8w\" (UniqueName: \"kubernetes.io/projected/10a89e47-869c-4875-8ea2-6e794b5ec825-kube-api-access-jsq8w\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:09.205919 kubelet[2592]: I0209 19:07:09.202862 2592 scope.go:115] "RemoveContainer" containerID="9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118" Feb 9 19:07:09.209947 env[1401]: time="2024-02-09T19:07:09.209904968Z" level=info msg="RemoveContainer for \"9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118\"" Feb 9 19:07:09.223791 env[1401]: time="2024-02-09T19:07:09.223754005Z" level=info msg="RemoveContainer for \"9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118\" returns successfully" Feb 9 19:07:09.224406 kubelet[2592]: I0209 19:07:09.224384 2592 scope.go:115] "RemoveContainer" containerID="9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118" Feb 9 19:07:09.224686 env[1401]: time="2024-02-09T19:07:09.224613114Z" level=error msg="ContainerStatus for \"9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118\": not found" Feb 9 19:07:09.224840 kubelet[2592]: E0209 19:07:09.224822 2592 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118\": not found" containerID="9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118" Feb 9 19:07:09.224930 kubelet[2592]: I0209 19:07:09.224858 2592 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118} err="failed to get container status \"9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a81947baedf25bb30af77cd08af9eb080e2d7454aed5a8f60d70bc3315fe118\": not found" Feb 9 19:07:09.224930 kubelet[2592]: I0209 19:07:09.224872 2592 scope.go:115] "RemoveContainer" containerID="28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85" Feb 9 19:07:09.228240 env[1401]: time="2024-02-09T19:07:09.228206749Z" level=info msg="RemoveContainer for \"28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85\"" Feb 9 19:07:09.235824 env[1401]: time="2024-02-09T19:07:09.235792424Z" level=info msg="RemoveContainer for \"28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85\" returns successfully" Feb 9 19:07:09.236116 kubelet[2592]: I0209 19:07:09.236102 2592 scope.go:115] "RemoveContainer" containerID="283e4ace11ee7a10bea647b4273863ae59e732bbffdd557e369fe6b734294739" Feb 9 19:07:09.237292 env[1401]: time="2024-02-09T19:07:09.237267839Z" level=info msg="RemoveContainer for \"283e4ace11ee7a10bea647b4273863ae59e732bbffdd557e369fe6b734294739\"" Feb 9 19:07:09.244270 env[1401]: time="2024-02-09T19:07:09.244243208Z" level=info msg="RemoveContainer for \"283e4ace11ee7a10bea647b4273863ae59e732bbffdd557e369fe6b734294739\" returns successfully" Feb 9 19:07:09.244491 kubelet[2592]: 
I0209 19:07:09.244450 2592 scope.go:115] "RemoveContainer" containerID="bb4f8c9c604276a0bb87566dbea61b1aa1311377a819eb8382ff59d529c0d2ce" Feb 9 19:07:09.245398 env[1401]: time="2024-02-09T19:07:09.245366919Z" level=info msg="RemoveContainer for \"bb4f8c9c604276a0bb87566dbea61b1aa1311377a819eb8382ff59d529c0d2ce\"" Feb 9 19:07:09.256069 env[1401]: time="2024-02-09T19:07:09.255798122Z" level=info msg="RemoveContainer for \"bb4f8c9c604276a0bb87566dbea61b1aa1311377a819eb8382ff59d529c0d2ce\" returns successfully" Feb 9 19:07:09.256302 kubelet[2592]: I0209 19:07:09.256278 2592 scope.go:115] "RemoveContainer" containerID="2b6be50f0d7208d4fb91d4d68f3e192999b5413e53491bd51c1f62a56b169144" Feb 9 19:07:09.257352 env[1401]: time="2024-02-09T19:07:09.257327037Z" level=info msg="RemoveContainer for \"2b6be50f0d7208d4fb91d4d68f3e192999b5413e53491bd51c1f62a56b169144\"" Feb 9 19:07:09.266079 env[1401]: time="2024-02-09T19:07:09.266046024Z" level=info msg="RemoveContainer for \"2b6be50f0d7208d4fb91d4d68f3e192999b5413e53491bd51c1f62a56b169144\" returns successfully" Feb 9 19:07:09.266241 kubelet[2592]: I0209 19:07:09.266224 2592 scope.go:115] "RemoveContainer" containerID="984fda7f7721b61ac971a6d9afd93d4c4f04d5e9749b859ac70cd245489b69c3" Feb 9 19:07:09.267088 env[1401]: time="2024-02-09T19:07:09.267061734Z" level=info msg="RemoveContainer for \"984fda7f7721b61ac971a6d9afd93d4c4f04d5e9749b859ac70cd245489b69c3\"" Feb 9 19:07:09.276474 env[1401]: time="2024-02-09T19:07:09.276440527Z" level=info msg="RemoveContainer for \"984fda7f7721b61ac971a6d9afd93d4c4f04d5e9749b859ac70cd245489b69c3\" returns successfully" Feb 9 19:07:09.276638 kubelet[2592]: I0209 19:07:09.276596 2592 scope.go:115] "RemoveContainer" containerID="28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85" Feb 9 19:07:09.276855 env[1401]: time="2024-02-09T19:07:09.276780430Z" level=error msg="ContainerStatus for \"28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85\": not found" Feb 9 19:07:09.276994 kubelet[2592]: E0209 19:07:09.276972 2592 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85\": not found" containerID="28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85" Feb 9 19:07:09.277081 kubelet[2592]: I0209 19:07:09.277016 2592 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85} err="failed to get container status \"28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85\": rpc error: code = NotFound desc = an error occurred when try to find container \"28e726f185fd4b1fcfd7b47364489b713418bd1249728f9366e7e3729e643b85\": not found" Feb 9 19:07:09.277081 kubelet[2592]: I0209 19:07:09.277043 2592 scope.go:115] "RemoveContainer" containerID="283e4ace11ee7a10bea647b4273863ae59e732bbffdd557e369fe6b734294739" Feb 9 19:07:09.277347 env[1401]: time="2024-02-09T19:07:09.277288235Z" level=error msg="ContainerStatus for \"283e4ace11ee7a10bea647b4273863ae59e732bbffdd557e369fe6b734294739\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"283e4ace11ee7a10bea647b4273863ae59e732bbffdd557e369fe6b734294739\": not found" Feb 9 
19:07:09.277469 kubelet[2592]: E0209 19:07:09.277451 2592 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"283e4ace11ee7a10bea647b4273863ae59e732bbffdd557e369fe6b734294739\": not found" containerID="283e4ace11ee7a10bea647b4273863ae59e732bbffdd557e369fe6b734294739" Feb 9 19:07:09.277548 kubelet[2592]: I0209 19:07:09.277488 2592 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:283e4ace11ee7a10bea647b4273863ae59e732bbffdd557e369fe6b734294739} err="failed to get container status \"283e4ace11ee7a10bea647b4273863ae59e732bbffdd557e369fe6b734294739\": rpc error: code = NotFound desc = an error occurred when try to find container \"283e4ace11ee7a10bea647b4273863ae59e732bbffdd557e369fe6b734294739\": not found" Feb 9 19:07:09.277548 kubelet[2592]: I0209 19:07:09.277504 2592 scope.go:115] "RemoveContainer" containerID="bb4f8c9c604276a0bb87566dbea61b1aa1311377a819eb8382ff59d529c0d2ce" Feb 9 19:07:09.277743 env[1401]: time="2024-02-09T19:07:09.277692539Z" level=error msg="ContainerStatus for \"bb4f8c9c604276a0bb87566dbea61b1aa1311377a819eb8382ff59d529c0d2ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb4f8c9c604276a0bb87566dbea61b1aa1311377a819eb8382ff59d529c0d2ce\": not found" Feb 9 19:07:09.277880 kubelet[2592]: E0209 19:07:09.277862 2592 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb4f8c9c604276a0bb87566dbea61b1aa1311377a819eb8382ff59d529c0d2ce\": not found" containerID="bb4f8c9c604276a0bb87566dbea61b1aa1311377a819eb8382ff59d529c0d2ce" Feb 9 19:07:09.277950 kubelet[2592]: I0209 19:07:09.277901 2592 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:bb4f8c9c604276a0bb87566dbea61b1aa1311377a819eb8382ff59d529c0d2ce} err="failed to get container status \"bb4f8c9c604276a0bb87566dbea61b1aa1311377a819eb8382ff59d529c0d2ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb4f8c9c604276a0bb87566dbea61b1aa1311377a819eb8382ff59d529c0d2ce\": not found" Feb 9 19:07:09.277950 kubelet[2592]: I0209 19:07:09.277913 2592 scope.go:115] "RemoveContainer" containerID="2b6be50f0d7208d4fb91d4d68f3e192999b5413e53491bd51c1f62a56b169144" Feb 9 19:07:09.278136 env[1401]: time="2024-02-09T19:07:09.278088543Z" level=error msg="ContainerStatus for \"2b6be50f0d7208d4fb91d4d68f3e192999b5413e53491bd51c1f62a56b169144\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b6be50f0d7208d4fb91d4d68f3e192999b5413e53491bd51c1f62a56b169144\": not found" Feb 9 19:07:09.278288 kubelet[2592]: E0209 19:07:09.278273 2592 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b6be50f0d7208d4fb91d4d68f3e192999b5413e53491bd51c1f62a56b169144\": not found" containerID="2b6be50f0d7208d4fb91d4d68f3e192999b5413e53491bd51c1f62a56b169144" Feb 9 19:07:09.278353 kubelet[2592]: I0209 19:07:09.278330 2592 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2b6be50f0d7208d4fb91d4d68f3e192999b5413e53491bd51c1f62a56b169144} err="failed to get container status \"2b6be50f0d7208d4fb91d4d68f3e192999b5413e53491bd51c1f62a56b169144\": rpc error: code = NotFound desc = an error occurred when try to 
find container \"2b6be50f0d7208d4fb91d4d68f3e192999b5413e53491bd51c1f62a56b169144\": not found" Feb 9 19:07:09.278353 kubelet[2592]: I0209 19:07:09.278345 2592 scope.go:115] "RemoveContainer" containerID="984fda7f7721b61ac971a6d9afd93d4c4f04d5e9749b859ac70cd245489b69c3" Feb 9 19:07:09.278560 env[1401]: time="2024-02-09T19:07:09.278514747Z" level=error msg="ContainerStatus for \"984fda7f7721b61ac971a6d9afd93d4c4f04d5e9749b859ac70cd245489b69c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"984fda7f7721b61ac971a6d9afd93d4c4f04d5e9749b859ac70cd245489b69c3\": not found" Feb 9 19:07:09.278669 kubelet[2592]: E0209 19:07:09.278652 2592 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"984fda7f7721b61ac971a6d9afd93d4c4f04d5e9749b859ac70cd245489b69c3\": not found" containerID="984fda7f7721b61ac971a6d9afd93d4c4f04d5e9749b859ac70cd245489b69c3" Feb 9 19:07:09.278735 kubelet[2592]: I0209 19:07:09.278682 2592 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:984fda7f7721b61ac971a6d9afd93d4c4f04d5e9749b859ac70cd245489b69c3} err="failed to get container status \"984fda7f7721b61ac971a6d9afd93d4c4f04d5e9749b859ac70cd245489b69c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"984fda7f7721b61ac971a6d9afd93d4c4f04d5e9749b859ac70cd245489b69c3\": not found" Feb 9 19:07:09.657311 systemd[1]: var-lib-kubelet-pods-350a2779\x2dd40b\x2d4487\x2d8264\x2df825d2fcf428-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djxsdn.mount: Deactivated successfully. Feb 9 19:07:09.657504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742-rootfs.mount: Deactivated successfully. Feb 9 19:07:09.657633 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742-shm.mount: Deactivated successfully. Feb 9 19:07:09.657748 systemd[1]: var-lib-kubelet-pods-10a89e47\x2d869c\x2d4875\x2d8ea2\x2d6e794b5ec825-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djsq8w.mount: Deactivated successfully. Feb 9 19:07:09.657872 systemd[1]: var-lib-kubelet-pods-10a89e47\x2d869c\x2d4875\x2d8ea2\x2d6e794b5ec825-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:07:09.657987 systemd[1]: var-lib-kubelet-pods-10a89e47\x2d869c\x2d4875\x2d8ea2\x2d6e794b5ec825-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 9 19:07:09.757569 kubelet[2592]: E0209 19:07:09.757226 2592 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-x4l7h" podUID=decb5516-a569-4398-a0ef-09332dc36be6 Feb 9 19:07:09.761055 kubelet[2592]: I0209 19:07:09.761011 2592 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=10a89e47-869c-4875-8ea2-6e794b5ec825 path="/var/lib/kubelet/pods/10a89e47-869c-4875-8ea2-6e794b5ec825/volumes" Feb 9 19:07:09.761628 kubelet[2592]: I0209 19:07:09.761608 2592 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=350a2779-d40b-4487-8264-f825d2fcf428 path="/var/lib/kubelet/pods/350a2779-d40b-4487-8264-f825d2fcf428/volumes" Feb 9 19:07:10.679302 sshd[4298]: pam_unix(sshd:session): session closed for user core Feb 9 19:07:10.682978 systemd[1]: sshd@21-10.200.8.38:22-10.200.12.6:35850.service: Deactivated successfully. Feb 9 19:07:10.685128 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 19:07:10.685930 systemd-logind[1372]: Session 24 logged out. Waiting for processes to exit. Feb 9 19:07:10.687421 systemd-logind[1372]: Removed session 24. Feb 9 19:07:10.782003 systemd[1]: Started sshd@22-10.200.8.38:22-10.200.12.6:47646.service. Feb 9 19:07:11.405932 sshd[4474]: Accepted publickey for core from 10.200.12.6 port 47646 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:07:11.407404 sshd[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:07:11.412454 systemd[1]: Started session-25.scope. Feb 9 19:07:11.413276 systemd-logind[1372]: New session 25 of user core. 
Feb 9 19:07:11.757746 kubelet[2592]: E0209 19:07:11.757716 2592 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-x4l7h" podUID=decb5516-a569-4398-a0ef-09332dc36be6 Feb 9 19:07:12.301186 kubelet[2592]: I0209 19:07:12.301144 2592 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:07:12.301386 kubelet[2592]: E0209 19:07:12.301229 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10a89e47-869c-4875-8ea2-6e794b5ec825" containerName="apply-sysctl-overwrites" Feb 9 19:07:12.301386 kubelet[2592]: E0209 19:07:12.301244 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10a89e47-869c-4875-8ea2-6e794b5ec825" containerName="mount-bpf-fs" Feb 9 19:07:12.301386 kubelet[2592]: E0209 19:07:12.301252 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10a89e47-869c-4875-8ea2-6e794b5ec825" containerName="cilium-agent" Feb 9 19:07:12.301386 kubelet[2592]: E0209 19:07:12.301261 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="350a2779-d40b-4487-8264-f825d2fcf428" containerName="cilium-operator" Feb 9 19:07:12.301386 kubelet[2592]: E0209 19:07:12.301281 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10a89e47-869c-4875-8ea2-6e794b5ec825" containerName="clean-cilium-state" Feb 9 19:07:12.301386 kubelet[2592]: E0209 19:07:12.301290 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10a89e47-869c-4875-8ea2-6e794b5ec825" containerName="mount-cgroup" Feb 9 19:07:12.301386 kubelet[2592]: I0209 19:07:12.301320 2592 memory_manager.go:346] "RemoveStaleState removing state" podUID="10a89e47-869c-4875-8ea2-6e794b5ec825" containerName="cilium-agent" Feb 9 19:07:12.301386 kubelet[2592]: I0209 19:07:12.301328 2592 memory_manager.go:346] "RemoveStaleState removing state" podUID="350a2779-d40b-4487-8264-f825d2fcf428" containerName="cilium-operator" Feb 9 19:07:12.371281 sshd[4474]: pam_unix(sshd:session): session closed for user core Feb 9 19:07:12.374517 systemd[1]: sshd@22-10.200.8.38:22-10.200.12.6:47646.service: Deactivated successfully. Feb 9 19:07:12.376181 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 19:07:12.376239 systemd-logind[1372]: Session 25 logged out. Waiting for processes to exit. Feb 9 19:07:12.378366 systemd-logind[1372]: Removed session 25. 
Feb 9 19:07:12.437139 kubelet[2592]: I0209 19:07:12.437094 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-hostproc\") pod \"cilium-w5v8r\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " pod="kube-system/cilium-w5v8r" Feb 9 19:07:12.437139 kubelet[2592]: I0209 19:07:12.437159 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cilium-cgroup\") pod \"cilium-w5v8r\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " pod="kube-system/cilium-w5v8r" Feb 9 19:07:12.437431 kubelet[2592]: I0209 19:07:12.437191 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cni-path\") pod \"cilium-w5v8r\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " pod="kube-system/cilium-w5v8r" Feb 9 19:07:12.437431 kubelet[2592]: I0209 19:07:12.437226 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-xtables-lock\") pod \"cilium-w5v8r\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " pod="kube-system/cilium-w5v8r" Feb 9 19:07:12.437431 kubelet[2592]: I0209 19:07:12.437258 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-host-proc-sys-kernel\") pod \"cilium-w5v8r\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " pod="kube-system/cilium-w5v8r" Feb 9 19:07:12.437431 kubelet[2592]: I0209 19:07:12.437285 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-lib-modules\") pod \"cilium-w5v8r\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " pod="kube-system/cilium-w5v8r" Feb 9 19:07:12.437431 kubelet[2592]: I0209 19:07:12.437320 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-bpf-maps\") pod \"cilium-w5v8r\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " pod="kube-system/cilium-w5v8r" Feb 9 19:07:12.437431 kubelet[2592]: I0209 19:07:12.437353 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-clustermesh-secrets\") pod \"cilium-w5v8r\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " pod="kube-system/cilium-w5v8r" Feb 9 19:07:12.437745 kubelet[2592]: I0209 19:07:12.437392 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-etc-cni-netd\") pod \"cilium-w5v8r\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " pod="kube-system/cilium-w5v8r" Feb 9 19:07:12.437745 kubelet[2592]: I0209 19:07:12.437432 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cilium-ipsec-secrets\") pod \"cilium-w5v8r\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " pod="kube-system/cilium-w5v8r" Feb 9 19:07:12.437745 kubelet[2592]: I0209 19:07:12.437467 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-hubble-tls\") pod \"cilium-w5v8r\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " pod="kube-system/cilium-w5v8r" Feb 9 19:07:12.437745 kubelet[2592]: I0209 19:07:12.437512 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cilium-config-path\") pod \"cilium-w5v8r\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " pod="kube-system/cilium-w5v8r" Feb 9 19:07:12.437745 kubelet[2592]: I0209 19:07:12.437550 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6gkk\" (UniqueName: \"kubernetes.io/projected/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-kube-api-access-v6gkk\") pod \"cilium-w5v8r\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " pod="kube-system/cilium-w5v8r" Feb 9 19:07:12.437995 kubelet[2592]: I0209 19:07:12.437595 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-host-proc-sys-net\") pod \"cilium-w5v8r\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " pod="kube-system/cilium-w5v8r" Feb 9 19:07:12.437995 kubelet[2592]: I0209 19:07:12.437634 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cilium-run\") pod \"cilium-w5v8r\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " pod="kube-system/cilium-w5v8r" Feb 9 19:07:12.474411 systemd[1]: Started sshd@23-10.200.8.38:22-10.200.12.6:47648.service. Feb 9 19:07:12.611100 env[1401]: time="2024-02-09T19:07:12.610542912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w5v8r,Uid:4bfda448-6bf1-4aa2-8d71-a1de8c8233d3,Namespace:kube-system,Attempt:0,}" Feb 9 19:07:12.648215 env[1401]: time="2024-02-09T19:07:12.648145877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:07:12.648215 env[1401]: time="2024-02-09T19:07:12.648181577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:07:12.648215 env[1401]: time="2024-02-09T19:07:12.648194977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:07:12.648600 env[1401]: time="2024-02-09T19:07:12.648557381Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2 pid=4498 runtime=io.containerd.runc.v2 Feb 9 19:07:12.689382 env[1401]: time="2024-02-09T19:07:12.689327377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w5v8r,Uid:4bfda448-6bf1-4aa2-8d71-a1de8c8233d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2\"" Feb 9 19:07:12.692319 env[1401]: time="2024-02-09T19:07:12.692282105Z" level=info msg="CreateContainer within sandbox \"a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:07:12.720115 env[1401]: time="2024-02-09T19:07:12.720080675Z" level=info msg="CreateContainer within sandbox \"a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d4e9af725e1f3202098bf4b84288547889950b49a03e6a505bd737ecbac0b8ff\"" Feb 9 19:07:12.720667 env[1401]: time="2024-02-09T19:07:12.720636980Z" level=info msg="StartContainer for \"d4e9af725e1f3202098bf4b84288547889950b49a03e6a505bd737ecbac0b8ff\"" Feb 9 19:07:12.790435 env[1401]: time="2024-02-09T19:07:12.790381457Z" level=info msg="StartContainer for \"d4e9af725e1f3202098bf4b84288547889950b49a03e6a505bd737ecbac0b8ff\" returns successfully" Feb 9 19:07:12.829604 env[1401]: time="2024-02-09T19:07:12.829553038Z" level=info msg="shim disconnected" id=d4e9af725e1f3202098bf4b84288547889950b49a03e6a505bd737ecbac0b8ff Feb 9 19:07:12.829604 env[1401]: time="2024-02-09T19:07:12.829604438Z" level=warning msg="cleaning up after shim disconnected" id=d4e9af725e1f3202098bf4b84288547889950b49a03e6a505bd737ecbac0b8ff namespace=k8s.io Feb 9 19:07:12.829909 env[1401]: time="2024-02-09T19:07:12.829616338Z" level=info msg="cleaning up dead shim" Feb 9 19:07:12.837546 env[1401]: time="2024-02-09T19:07:12.837505915Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4582 runtime=io.containerd.runc.v2\n" Feb 9 19:07:13.098893 sshd[4485]: Accepted publickey for core from 10.200.12.6 port 47648 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:07:13.100535 sshd[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:07:13.107438 systemd[1]: Started session-26.scope. Feb 9 19:07:13.107880 systemd-logind[1372]: New session 26 of user core. 
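[editor's note] The RunPodSandbox -> CreateContainer -> StartContainer sequence above is the standard CRI flow between kubelet and containerd. The same state can be inspected on the node with crictl; a sketch assuming containerd's default socket path, with the sandbox ID taken from the RunPodSandbox result above:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods --name cilium-w5v8r
    # list all containers (including exited init containers) in that sandbox:
    crictl ps -a --pod a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2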
Feb 9 19:07:13.231100 env[1401]: time="2024-02-09T19:07:13.231044420Z" level=info msg="CreateContainer within sandbox \"a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:07:13.264532 env[1401]: time="2024-02-09T19:07:13.264481542Z" level=info msg="CreateContainer within sandbox \"a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c13612b5e027d8495ead16784316d2dc0988144f4c56a88e7c4e7a237a52fdd2\"" Feb 9 19:07:13.265225 env[1401]: time="2024-02-09T19:07:13.265189449Z" level=info msg="StartContainer for \"c13612b5e027d8495ead16784316d2dc0988144f4c56a88e7c4e7a237a52fdd2\"" Feb 9 19:07:13.315041 env[1401]: time="2024-02-09T19:07:13.313909719Z" level=info msg="StartContainer for \"c13612b5e027d8495ead16784316d2dc0988144f4c56a88e7c4e7a237a52fdd2\" returns successfully" Feb 9 19:07:13.355175 env[1401]: time="2024-02-09T19:07:13.354999215Z" level=info msg="shim disconnected" id=c13612b5e027d8495ead16784316d2dc0988144f4c56a88e7c4e7a237a52fdd2 Feb 9 19:07:13.355175 env[1401]: time="2024-02-09T19:07:13.355099816Z" level=warning msg="cleaning up after shim disconnected" id=c13612b5e027d8495ead16784316d2dc0988144f4c56a88e7c4e7a237a52fdd2 namespace=k8s.io Feb 9 19:07:13.355175 env[1401]: time="2024-02-09T19:07:13.355113316Z" level=info msg="cleaning up dead shim" Feb 9 19:07:13.363838 env[1401]: time="2024-02-09T19:07:13.363797000Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4652 runtime=io.containerd.runc.v2\n" Feb 9 19:07:13.598572 sshd[4485]: pam_unix(sshd:session): session closed for user core Feb 9 19:07:13.601933 systemd[1]: sshd@23-10.200.8.38:22-10.200.12.6:47648.service: Deactivated successfully. Feb 9 19:07:13.604206 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 19:07:13.604261 systemd-logind[1372]: Session 26 logged out. Waiting for processes to exit. Feb 9 19:07:13.605779 systemd-logind[1372]: Removed session 26. Feb 9 19:07:13.703012 systemd[1]: Started sshd@24-10.200.8.38:22-10.200.12.6:47656.service. Feb 9 19:07:13.756469 kubelet[2592]: E0209 19:07:13.756422 2592 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-x4l7h" podUID=decb5516-a569-4398-a0ef-09332dc36be6 Feb 9 19:07:13.846009 kubelet[2592]: E0209 19:07:13.845973 2592 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:07:14.231663 env[1401]: time="2024-02-09T19:07:14.231618255Z" level=info msg="CreateContainer within sandbox \"a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:07:14.271322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2222435795.mount: Deactivated successfully. 
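[editor's note] The init-container names here follow Cilium's usual ordering (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, then cilium-agent, all named earlier in the RemoveStaleState entries). The mount-bpf-fs step about to run exists to ensure a BPF filesystem is mounted at /sys/fs/bpf; on the host this can be checked with:

    findmnt -t bpf /sys/fs/bpf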
Feb 9 19:07:14.287683 env[1401]: time="2024-02-09T19:07:14.287633792Z" level=info msg="CreateContainer within sandbox \"a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"70e47ec8770b0de2eb14826e74fa7a9cf865672932d753adfeb441b3cd9b08a3\"" Feb 9 19:07:14.289551 env[1401]: time="2024-02-09T19:07:14.289521810Z" level=info msg="StartContainer for \"70e47ec8770b0de2eb14826e74fa7a9cf865672932d753adfeb441b3cd9b08a3\"" Feb 9 19:07:14.335697 sshd[4672]: Accepted publickey for core from 10.200.12.6 port 47656 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:07:14.336925 sshd[4672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:07:14.347451 systemd[1]: Started session-27.scope. Feb 9 19:07:14.347697 systemd-logind[1372]: New session 27 of user core. Feb 9 19:07:14.382294 env[1401]: time="2024-02-09T19:07:14.382259698Z" level=info msg="StartContainer for \"70e47ec8770b0de2eb14826e74fa7a9cf865672932d753adfeb441b3cd9b08a3\" returns successfully" Feb 9 19:07:14.421773 env[1401]: time="2024-02-09T19:07:14.421727677Z" level=info msg="shim disconnected" id=70e47ec8770b0de2eb14826e74fa7a9cf865672932d753adfeb441b3cd9b08a3 Feb 9 19:07:14.421773 env[1401]: time="2024-02-09T19:07:14.421775377Z" level=warning msg="cleaning up after shim disconnected" id=70e47ec8770b0de2eb14826e74fa7a9cf865672932d753adfeb441b3cd9b08a3 namespace=k8s.io Feb 9 19:07:14.421773 env[1401]: time="2024-02-09T19:07:14.421786577Z" level=info msg="cleaning up dead shim" Feb 9 19:07:14.430239 env[1401]: time="2024-02-09T19:07:14.430197458Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4723 runtime=io.containerd.runc.v2\n" Feb 9 19:07:14.547830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70e47ec8770b0de2eb14826e74fa7a9cf865672932d753adfeb441b3cd9b08a3-rootfs.mount: Deactivated successfully. Feb 9 19:07:15.233574 env[1401]: time="2024-02-09T19:07:15.233525342Z" level=info msg="StopPodSandbox for \"a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2\"" Feb 9 19:07:15.234312 env[1401]: time="2024-02-09T19:07:15.234275949Z" level=info msg="Container to stop \"c13612b5e027d8495ead16784316d2dc0988144f4c56a88e7c4e7a237a52fdd2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:07:15.234440 env[1401]: time="2024-02-09T19:07:15.234417550Z" level=info msg="Container to stop \"70e47ec8770b0de2eb14826e74fa7a9cf865672932d753adfeb441b3cd9b08a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:07:15.234542 env[1401]: time="2024-02-09T19:07:15.234521151Z" level=info msg="Container to stop \"d4e9af725e1f3202098bf4b84288547889950b49a03e6a505bd737ecbac0b8ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:07:15.237366 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2-shm.mount: Deactivated successfully. Feb 9 19:07:15.274871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2-rootfs.mount: Deactivated successfully. 
Feb 9 19:07:15.290226 env[1401]: time="2024-02-09T19:07:15.290166581Z" level=info msg="shim disconnected" id=a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2 Feb 9 19:07:15.290466 env[1401]: time="2024-02-09T19:07:15.290230582Z" level=warning msg="cleaning up after shim disconnected" id=a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2 namespace=k8s.io Feb 9 19:07:15.290466 env[1401]: time="2024-02-09T19:07:15.290243382Z" level=info msg="cleaning up dead shim" Feb 9 19:07:15.298522 env[1401]: time="2024-02-09T19:07:15.298486061Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4763 runtime=io.containerd.runc.v2\n" Feb 9 19:07:15.298789 env[1401]: time="2024-02-09T19:07:15.298758963Z" level=info msg="TearDown network for sandbox \"a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2\" successfully" Feb 9 19:07:15.298789 env[1401]: time="2024-02-09T19:07:15.298785863Z" level=info msg="StopPodSandbox for \"a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2\" returns successfully" Feb 9 19:07:15.463723 kubelet[2592]: I0209 19:07:15.462778 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6gkk\" (UniqueName: \"kubernetes.io/projected/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-kube-api-access-v6gkk\") pod \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " Feb 9 19:07:15.463723 kubelet[2592]: I0209 19:07:15.462847 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-host-proc-sys-net\") pod \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " Feb 9 19:07:15.463723 kubelet[2592]: I0209 19:07:15.462882 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-hostproc\") pod \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " Feb 9 19:07:15.463723 kubelet[2592]: I0209 19:07:15.462914 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-hubble-tls\") pod \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " Feb 9 19:07:15.463723 kubelet[2592]: I0209 19:07:15.462945 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cilium-cgroup\") pod \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " Feb 9 19:07:15.463723 kubelet[2592]: I0209 19:07:15.462974 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cni-path\") pod \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " Feb 9 19:07:15.464826 kubelet[2592]: I0209 19:07:15.463010 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cilium-run\") pod \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") 
" Feb 9 19:07:15.464826 kubelet[2592]: I0209 19:07:15.463059 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-xtables-lock\") pod \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " Feb 9 19:07:15.464826 kubelet[2592]: I0209 19:07:15.463091 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-bpf-maps\") pod \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " Feb 9 19:07:15.464826 kubelet[2592]: I0209 19:07:15.463126 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-etc-cni-netd\") pod \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " Feb 9 19:07:15.464826 kubelet[2592]: I0209 19:07:15.463162 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cilium-ipsec-secrets\") pod \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " Feb 9 19:07:15.464826 kubelet[2592]: I0209 19:07:15.463198 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cilium-config-path\") pod \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " Feb 9 19:07:15.465185 kubelet[2592]: I0209 19:07:15.463229 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-host-proc-sys-kernel\") pod \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " Feb 9 19:07:15.465185 kubelet[2592]: I0209 19:07:15.463263 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-clustermesh-secrets\") pod \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " Feb 9 19:07:15.465185 kubelet[2592]: I0209 19:07:15.463309 2592 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-lib-modules\") pod \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\" (UID: \"4bfda448-6bf1-4aa2-8d71-a1de8c8233d3\") " Feb 9 19:07:15.465185 kubelet[2592]: I0209 19:07:15.463367 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" (UID: "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:15.465185 kubelet[2592]: I0209 19:07:15.463403 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" (UID: "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:15.465466 kubelet[2592]: I0209 19:07:15.463429 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-hostproc" (OuterVolumeSpecName: "hostproc") pod "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" (UID: "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:15.465466 kubelet[2592]: I0209 19:07:15.463548 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" (UID: "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:15.465466 kubelet[2592]: I0209 19:07:15.463585 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" (UID: "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:15.465466 kubelet[2592]: I0209 19:07:15.463632 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cni-path" (OuterVolumeSpecName: "cni-path") pod "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" (UID: "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:15.465466 kubelet[2592]: I0209 19:07:15.463658 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" (UID: "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:15.465744 kubelet[2592]: I0209 19:07:15.463704 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" (UID: "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:15.465744 kubelet[2592]: W0209 19:07:15.463898 2592 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:07:15.465744 kubelet[2592]: I0209 19:07:15.464460 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" (UID: "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:15.466848 kubelet[2592]: I0209 19:07:15.465951 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" (UID: "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:15.469228 kubelet[2592]: I0209 19:07:15.469200 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" (UID: "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:07:15.472573 systemd[1]: var-lib-kubelet-pods-4bfda448\x2d6bf1\x2d4aa2\x2d8d71\x2da1de8c8233d3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv6gkk.mount: Deactivated successfully. Feb 9 19:07:15.473913 kubelet[2592]: I0209 19:07:15.473636 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-kube-api-access-v6gkk" (OuterVolumeSpecName: "kube-api-access-v6gkk") pod "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" (UID: "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3"). InnerVolumeSpecName "kube-api-access-v6gkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:07:15.477778 systemd[1]: var-lib-kubelet-pods-4bfda448\x2d6bf1\x2d4aa2\x2d8d71\x2da1de8c8233d3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:07:15.477963 systemd[1]: var-lib-kubelet-pods-4bfda448\x2d6bf1\x2d4aa2\x2d8d71\x2da1de8c8233d3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:07:15.481887 kubelet[2592]: I0209 19:07:15.481647 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" (UID: "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:07:15.481887 kubelet[2592]: I0209 19:07:15.481722 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" (UID: "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:07:15.482369 kubelet[2592]: I0209 19:07:15.482346 2592 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" (UID: "4bfda448-6bf1-4aa2-8d71-a1de8c8233d3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:07:15.547716 systemd[1]: var-lib-kubelet-pods-4bfda448\x2d6bf1\x2d4aa2\x2d8d71\x2da1de8c8233d3-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 19:07:15.564238 kubelet[2592]: I0209 19:07:15.564210 2592 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-bpf-maps\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:15.564238 kubelet[2592]: I0209 19:07:15.564239 2592 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-etc-cni-netd\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:15.564434 kubelet[2592]: I0209 19:07:15.564259 2592 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:15.564434 kubelet[2592]: I0209 19:07:15.564272 2592 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cilium-config-path\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:15.564434 kubelet[2592]: I0209 19:07:15.564285 2592 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:15.564434 kubelet[2592]: I0209 19:07:15.564298 2592 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-clustermesh-secrets\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:15.564434 kubelet[2592]: I0209 19:07:15.564313 2592 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-lib-modules\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:15.564434 kubelet[2592]: I0209 19:07:15.564328 2592 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-v6gkk\" (UniqueName: \"kubernetes.io/projected/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-kube-api-access-v6gkk\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:15.564434 kubelet[2592]: I0209 19:07:15.564341 2592 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-host-proc-sys-net\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:15.564434 kubelet[2592]: I0209 19:07:15.564354 2592 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-hostproc\") on node \"ci-3510.3.2-a-c71e69a144\" 
DevicePath \"\"" Feb 9 19:07:15.564645 kubelet[2592]: I0209 19:07:15.564367 2592 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-hubble-tls\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:15.564645 kubelet[2592]: I0209 19:07:15.564380 2592 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cilium-cgroup\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:15.564645 kubelet[2592]: I0209 19:07:15.564392 2592 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cni-path\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:15.564645 kubelet[2592]: I0209 19:07:15.564407 2592 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-cilium-run\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:15.564645 kubelet[2592]: I0209 19:07:15.564423 2592 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3-xtables-lock\") on node \"ci-3510.3.2-a-c71e69a144\" DevicePath \"\"" Feb 9 19:07:15.757609 kubelet[2592]: E0209 19:07:15.757573 2592 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-x4l7h" podUID=decb5516-a569-4398-a0ef-09332dc36be6 Feb 9 19:07:16.237164 kubelet[2592]: I0209 19:07:16.237121 2592 scope.go:115] "RemoveContainer" containerID="70e47ec8770b0de2eb14826e74fa7a9cf865672932d753adfeb441b3cd9b08a3" Feb 9 19:07:16.239524 env[1401]: time="2024-02-09T19:07:16.239479707Z" level=info msg="RemoveContainer for \"70e47ec8770b0de2eb14826e74fa7a9cf865672932d753adfeb441b3cd9b08a3\"" Feb 9 19:07:16.263831 env[1401]: time="2024-02-09T19:07:16.263779337Z" level=info msg="RemoveContainer for \"70e47ec8770b0de2eb14826e74fa7a9cf865672932d753adfeb441b3cd9b08a3\" returns successfully" Feb 9 19:07:16.264384 kubelet[2592]: I0209 19:07:16.264360 2592 scope.go:115] "RemoveContainer" containerID="c13612b5e027d8495ead16784316d2dc0988144f4c56a88e7c4e7a237a52fdd2" Feb 9 19:07:16.267712 env[1401]: time="2024-02-09T19:07:16.267672074Z" level=info msg="RemoveContainer for \"c13612b5e027d8495ead16784316d2dc0988144f4c56a88e7c4e7a237a52fdd2\"" Feb 9 19:07:16.278196 env[1401]: time="2024-02-09T19:07:16.278160273Z" level=info msg="RemoveContainer for \"c13612b5e027d8495ead16784316d2dc0988144f4c56a88e7c4e7a237a52fdd2\" returns successfully" Feb 9 19:07:16.278404 kubelet[2592]: I0209 19:07:16.278387 2592 scope.go:115] "RemoveContainer" containerID="d4e9af725e1f3202098bf4b84288547889950b49a03e6a505bd737ecbac0b8ff" Feb 9 19:07:16.279542 env[1401]: time="2024-02-09T19:07:16.279513486Z" level=info msg="RemoveContainer for \"d4e9af725e1f3202098bf4b84288547889950b49a03e6a505bd737ecbac0b8ff\"" Feb 9 19:07:16.287183 env[1401]: time="2024-02-09T19:07:16.287153859Z" level=info msg="RemoveContainer for \"d4e9af725e1f3202098bf4b84288547889950b49a03e6a505bd737ecbac0b8ff\" returns successfully" Feb 9 19:07:16.300611 kubelet[2592]: I0209 19:07:16.300583 2592 topology_manager.go:210] "Topology Admit Handler" Feb 
9 19:07:16.300812 kubelet[2592]: E0209 19:07:16.300798 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" containerName="apply-sysctl-overwrites" Feb 9 19:07:16.300899 kubelet[2592]: E0209 19:07:16.300890 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" containerName="mount-bpf-fs" Feb 9 19:07:16.300983 kubelet[2592]: E0209 19:07:16.300974 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" containerName="mount-cgroup" Feb 9 19:07:16.301101 kubelet[2592]: I0209 19:07:16.301090 2592 memory_manager.go:346] "RemoveStaleState removing state" podUID="4bfda448-6bf1-4aa2-8d71-a1de8c8233d3" containerName="mount-bpf-fs" Feb 9 19:07:16.470664 kubelet[2592]: I0209 19:07:16.470621 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19df3546-b778-4141-973e-d47b33329933-hostproc\") pod \"cilium-6dh5w\" (UID: \"19df3546-b778-4141-973e-d47b33329933\") " pod="kube-system/cilium-6dh5w" Feb 9 19:07:16.471282 kubelet[2592]: I0209 19:07:16.470763 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19df3546-b778-4141-973e-d47b33329933-host-proc-sys-net\") pod \"cilium-6dh5w\" (UID: \"19df3546-b778-4141-973e-d47b33329933\") " pod="kube-system/cilium-6dh5w" Feb 9 19:07:16.471282 kubelet[2592]: I0209 19:07:16.470818 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19df3546-b778-4141-973e-d47b33329933-cilium-cgroup\") pod \"cilium-6dh5w\" (UID: \"19df3546-b778-4141-973e-d47b33329933\") " pod="kube-system/cilium-6dh5w" Feb 9 19:07:16.471282 kubelet[2592]: I0209 19:07:16.470859 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/19df3546-b778-4141-973e-d47b33329933-cilium-ipsec-secrets\") pod \"cilium-6dh5w\" (UID: \"19df3546-b778-4141-973e-d47b33329933\") " pod="kube-system/cilium-6dh5w" Feb 9 19:07:16.471282 kubelet[2592]: I0209 19:07:16.470900 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19df3546-b778-4141-973e-d47b33329933-hubble-tls\") pod \"cilium-6dh5w\" (UID: \"19df3546-b778-4141-973e-d47b33329933\") " pod="kube-system/cilium-6dh5w" Feb 9 19:07:16.471282 kubelet[2592]: I0209 19:07:16.470935 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19df3546-b778-4141-973e-d47b33329933-xtables-lock\") pod \"cilium-6dh5w\" (UID: \"19df3546-b778-4141-973e-d47b33329933\") " pod="kube-system/cilium-6dh5w" Feb 9 19:07:16.471282 kubelet[2592]: I0209 19:07:16.470977 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19df3546-b778-4141-973e-d47b33329933-cilium-config-path\") pod \"cilium-6dh5w\" (UID: \"19df3546-b778-4141-973e-d47b33329933\") " pod="kube-system/cilium-6dh5w" Feb 9 19:07:16.471672 kubelet[2592]: I0209 19:07:16.471016 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19df3546-b778-4141-973e-d47b33329933-bpf-maps\") pod \"cilium-6dh5w\" (UID: \"19df3546-b778-4141-973e-d47b33329933\") " pod="kube-system/cilium-6dh5w" Feb 9 19:07:16.471672 kubelet[2592]: I0209 19:07:16.471070 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19df3546-b778-4141-973e-d47b33329933-cilium-run\") pod \"cilium-6dh5w\" (UID: \"19df3546-b778-4141-973e-d47b33329933\") " pod="kube-system/cilium-6dh5w" Feb 9 19:07:16.471672 kubelet[2592]: I0209 19:07:16.471126 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19df3546-b778-4141-973e-d47b33329933-cni-path\") pod \"cilium-6dh5w\" (UID: \"19df3546-b778-4141-973e-d47b33329933\") " pod="kube-system/cilium-6dh5w" Feb 9 19:07:16.471672 kubelet[2592]: I0209 19:07:16.471163 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19df3546-b778-4141-973e-d47b33329933-clustermesh-secrets\") pod \"cilium-6dh5w\" (UID: \"19df3546-b778-4141-973e-d47b33329933\") " pod="kube-system/cilium-6dh5w" Feb 9 19:07:16.471672 kubelet[2592]: I0209 19:07:16.471196 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19df3546-b778-4141-973e-d47b33329933-host-proc-sys-kernel\") pod \"cilium-6dh5w\" (UID: \"19df3546-b778-4141-973e-d47b33329933\") " pod="kube-system/cilium-6dh5w" Feb 9 19:07:16.471672 kubelet[2592]: I0209 19:07:16.471230 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19df3546-b778-4141-973e-d47b33329933-lib-modules\") pod \"cilium-6dh5w\" (UID: \"19df3546-b778-4141-973e-d47b33329933\") " pod="kube-system/cilium-6dh5w" Feb 9 19:07:16.471869 kubelet[2592]: I0209 19:07:16.471273 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19df3546-b778-4141-973e-d47b33329933-etc-cni-netd\") pod \"cilium-6dh5w\" (UID: \"19df3546-b778-4141-973e-d47b33329933\") " pod="kube-system/cilium-6dh5w" Feb 9 19:07:16.471869 kubelet[2592]: I0209 19:07:16.471311 2592 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhxql\" (UniqueName: \"kubernetes.io/projected/19df3546-b778-4141-973e-d47b33329933-kube-api-access-jhxql\") pod \"cilium-6dh5w\" (UID: \"19df3546-b778-4141-973e-d47b33329933\") " pod="kube-system/cilium-6dh5w" Feb 9 19:07:16.606357 env[1401]: time="2024-02-09T19:07:16.606246778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6dh5w,Uid:19df3546-b778-4141-973e-d47b33329933,Namespace:kube-system,Attempt:0,}" Feb 9 19:07:16.639380 env[1401]: time="2024-02-09T19:07:16.639091589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:07:16.639380 env[1401]: time="2024-02-09T19:07:16.639161290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:07:16.639380 env[1401]: time="2024-02-09T19:07:16.639183590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:07:16.639633 env[1401]: time="2024-02-09T19:07:16.639512993Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a96eeae61eb4ea8053b1cbfb8e5da2fcaecd8aab5d2e218bfd4cd1ef020d578 pid=4792 runtime=io.containerd.runc.v2 Feb 9 19:07:16.682606 env[1401]: time="2024-02-09T19:07:16.682500900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6dh5w,Uid:19df3546-b778-4141-973e-d47b33329933,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a96eeae61eb4ea8053b1cbfb8e5da2fcaecd8aab5d2e218bfd4cd1ef020d578\"" Feb 9 19:07:16.686129 env[1401]: time="2024-02-09T19:07:16.685568529Z" level=info msg="CreateContainer within sandbox \"4a96eeae61eb4ea8053b1cbfb8e5da2fcaecd8aab5d2e218bfd4cd1ef020d578\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:07:16.712617 env[1401]: time="2024-02-09T19:07:16.712571685Z" level=info msg="CreateContainer within sandbox \"4a96eeae61eb4ea8053b1cbfb8e5da2fcaecd8aab5d2e218bfd4cd1ef020d578\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"208421ba2ff0ca49f8db00c798575b4414961b87641321ab73f8f14eff688337\"" Feb 9 19:07:16.713017 env[1401]: time="2024-02-09T19:07:16.712985689Z" level=info msg="StartContainer for \"208421ba2ff0ca49f8db00c798575b4414961b87641321ab73f8f14eff688337\"" Feb 9 19:07:16.790179 env[1401]: time="2024-02-09T19:07:16.790132219Z" level=info msg="StartContainer for \"208421ba2ff0ca49f8db00c798575b4414961b87641321ab73f8f14eff688337\" returns successfully" Feb 9 19:07:16.833828 env[1401]: time="2024-02-09T19:07:16.833783432Z" level=info msg="shim disconnected" id=208421ba2ff0ca49f8db00c798575b4414961b87641321ab73f8f14eff688337 Feb 9 19:07:16.834133 env[1401]: time="2024-02-09T19:07:16.834105835Z" level=warning msg="cleaning up after shim disconnected" id=208421ba2ff0ca49f8db00c798575b4414961b87641321ab73f8f14eff688337 namespace=k8s.io Feb 9 19:07:16.834133 env[1401]: time="2024-02-09T19:07:16.834124935Z" level=info msg="cleaning up dead shim" Feb 9 19:07:16.842485 env[1401]: time="2024-02-09T19:07:16.842452414Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4872 runtime=io.containerd.runc.v2\n" Feb 9 19:07:17.244856 env[1401]: time="2024-02-09T19:07:17.243213593Z" level=info msg="CreateContainer within sandbox \"4a96eeae61eb4ea8053b1cbfb8e5da2fcaecd8aab5d2e218bfd4cd1ef020d578\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:07:17.276550 env[1401]: time="2024-02-09T19:07:17.276503706Z" level=info msg="CreateContainer within sandbox \"4a96eeae61eb4ea8053b1cbfb8e5da2fcaecd8aab5d2e218bfd4cd1ef020d578\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0e66b182f96a960ffebeafff999cef101500a5bfe504d33849511ad248b97b22\"" Feb 9 19:07:17.277994 env[1401]: time="2024-02-09T19:07:17.277101512Z" level=info msg="StartContainer for \"0e66b182f96a960ffebeafff999cef101500a5bfe504d33849511ad248b97b22\"" Feb 9 19:07:17.329816 env[1401]: time="2024-02-09T19:07:17.329771907Z" level=info msg="StartContainer for \"0e66b182f96a960ffebeafff999cef101500a5bfe504d33849511ad248b97b22\" returns successfully" Feb 9 19:07:17.355479 env[1401]: 
time="2024-02-09T19:07:17.355427748Z" level=info msg="shim disconnected" id=0e66b182f96a960ffebeafff999cef101500a5bfe504d33849511ad248b97b22 Feb 9 19:07:17.355479 env[1401]: time="2024-02-09T19:07:17.355477549Z" level=warning msg="cleaning up after shim disconnected" id=0e66b182f96a960ffebeafff999cef101500a5bfe504d33849511ad248b97b22 namespace=k8s.io Feb 9 19:07:17.355479 env[1401]: time="2024-02-09T19:07:17.355489249Z" level=info msg="cleaning up dead shim" Feb 9 19:07:17.366161 env[1401]: time="2024-02-09T19:07:17.366122649Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4934 runtime=io.containerd.runc.v2\n" Feb 9 19:07:17.676097 kubelet[2592]: I0209 19:07:17.675952 2592 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-c71e69a144" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:07:17.675827362 +0000 UTC m=+204.103218104 LastTransitionTime:2024-02-09 19:07:17.675827362 +0000 UTC m=+204.103218104 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 19:07:17.757224 kubelet[2592]: E0209 19:07:17.757177 2592 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-x4l7h" podUID=decb5516-a569-4398-a0ef-09332dc36be6 Feb 9 19:07:17.761914 kubelet[2592]: I0209 19:07:17.761638 2592 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4bfda448-6bf1-4aa2-8d71-a1de8c8233d3 path="/var/lib/kubelet/pods/4bfda448-6bf1-4aa2-8d71-a1de8c8233d3/volumes" Feb 9 19:07:18.248911 env[1401]: time="2024-02-09T19:07:18.248857638Z" level=info msg="CreateContainer within sandbox \"4a96eeae61eb4ea8053b1cbfb8e5da2fcaecd8aab5d2e218bfd4cd1ef020d578\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:07:18.275511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1897626458.mount: Deactivated successfully. 
Feb 9 19:07:18.291772 env[1401]: time="2024-02-09T19:07:18.291714939Z" level=info msg="CreateContainer within sandbox \"4a96eeae61eb4ea8053b1cbfb8e5da2fcaecd8aab5d2e218bfd4cd1ef020d578\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0026a15c05608934680896b5e1a4cc665620073bea34f394e26cb8969de611bb\"" Feb 9 19:07:18.293172 env[1401]: time="2024-02-09T19:07:18.292272744Z" level=info msg="StartContainer for \"0026a15c05608934680896b5e1a4cc665620073bea34f394e26cb8969de611bb\"" Feb 9 19:07:18.361960 env[1401]: time="2024-02-09T19:07:18.361888495Z" level=info msg="StartContainer for \"0026a15c05608934680896b5e1a4cc665620073bea34f394e26cb8969de611bb\" returns successfully" Feb 9 19:07:18.389762 env[1401]: time="2024-02-09T19:07:18.389714055Z" level=info msg="shim disconnected" id=0026a15c05608934680896b5e1a4cc665620073bea34f394e26cb8969de611bb Feb 9 19:07:18.389762 env[1401]: time="2024-02-09T19:07:18.389760455Z" level=warning msg="cleaning up after shim disconnected" id=0026a15c05608934680896b5e1a4cc665620073bea34f394e26cb8969de611bb namespace=k8s.io Feb 9 19:07:18.390106 env[1401]: time="2024-02-09T19:07:18.389772355Z" level=info msg="cleaning up dead shim" Feb 9 19:07:18.397743 env[1401]: time="2024-02-09T19:07:18.397705230Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4992 runtime=io.containerd.runc.v2\n" Feb 9 19:07:18.584262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0026a15c05608934680896b5e1a4cc665620073bea34f394e26cb8969de611bb-rootfs.mount: Deactivated successfully. Feb 9 19:07:18.847183 kubelet[2592]: E0209 19:07:18.847053 2592 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:07:19.258305 env[1401]: time="2024-02-09T19:07:19.258251260Z" level=info msg="CreateContainer within sandbox \"4a96eeae61eb4ea8053b1cbfb8e5da2fcaecd8aab5d2e218bfd4cd1ef020d578\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:07:19.291341 env[1401]: time="2024-02-09T19:07:19.291296267Z" level=info msg="CreateContainer within sandbox \"4a96eeae61eb4ea8053b1cbfb8e5da2fcaecd8aab5d2e218bfd4cd1ef020d578\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4a5d10e17b31d462ea432ffc20989e742ac56d26b7dcd4158914fa4f74b57745\"" Feb 9 19:07:19.292026 env[1401]: time="2024-02-09T19:07:19.291982874Z" level=info msg="StartContainer for \"4a5d10e17b31d462ea432ffc20989e742ac56d26b7dcd4158914fa4f74b57745\"" Feb 9 19:07:19.353703 env[1401]: time="2024-02-09T19:07:19.353657947Z" level=info msg="StartContainer for \"4a5d10e17b31d462ea432ffc20989e742ac56d26b7dcd4158914fa4f74b57745\" returns successfully" Feb 9 19:07:19.383007 env[1401]: time="2024-02-09T19:07:19.382949919Z" level=info msg="shim disconnected" id=4a5d10e17b31d462ea432ffc20989e742ac56d26b7dcd4158914fa4f74b57745 Feb 9 19:07:19.383007 env[1401]: time="2024-02-09T19:07:19.383006020Z" level=warning msg="cleaning up after shim disconnected" id=4a5d10e17b31d462ea432ffc20989e742ac56d26b7dcd4158914fa4f74b57745 namespace=k8s.io Feb 9 19:07:19.383007 env[1401]: time="2024-02-09T19:07:19.383017320Z" level=info msg="cleaning up dead shim" Feb 9 19:07:19.391093 env[1401]: time="2024-02-09T19:07:19.391021894Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5051 
runtime=io.containerd.runc.v2\n" Feb 9 19:07:19.584297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a5d10e17b31d462ea432ffc20989e742ac56d26b7dcd4158914fa4f74b57745-rootfs.mount: Deactivated successfully. Feb 9 19:07:19.756884 kubelet[2592]: E0209 19:07:19.756843 2592 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-x4l7h" podUID=decb5516-a569-4398-a0ef-09332dc36be6 Feb 9 19:07:20.256062 env[1401]: time="2024-02-09T19:07:20.256005018Z" level=info msg="CreateContainer within sandbox \"4a96eeae61eb4ea8053b1cbfb8e5da2fcaecd8aab5d2e218bfd4cd1ef020d578\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:07:20.292020 env[1401]: time="2024-02-09T19:07:20.291974850Z" level=info msg="CreateContainer within sandbox \"4a96eeae61eb4ea8053b1cbfb8e5da2fcaecd8aab5d2e218bfd4cd1ef020d578\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"680b93e52fc9cdb08b1fb78fbd51515d1f499fbd9f58602a2b948a07bf194c80\"" Feb 9 19:07:20.293971 env[1401]: time="2024-02-09T19:07:20.292618056Z" level=info msg="StartContainer for \"680b93e52fc9cdb08b1fb78fbd51515d1f499fbd9f58602a2b948a07bf194c80\"" Feb 9 19:07:20.356529 env[1401]: time="2024-02-09T19:07:20.356479646Z" level=info msg="StartContainer for \"680b93e52fc9cdb08b1fb78fbd51515d1f499fbd9f58602a2b948a07bf194c80\" returns successfully" Feb 9 19:07:20.584475 systemd[1]: run-containerd-runc-k8s.io-680b93e52fc9cdb08b1fb78fbd51515d1f499fbd9f58602a2b948a07bf194c80-runc.6EtEgM.mount: Deactivated successfully. Feb 9 19:07:20.850057 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 19:07:21.756828 kubelet[2592]: E0209 19:07:21.756782 2592 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-x4l7h" podUID=decb5516-a569-4398-a0ef-09332dc36be6 Feb 9 19:07:23.074175 systemd[1]: run-containerd-runc-k8s.io-680b93e52fc9cdb08b1fb78fbd51515d1f499fbd9f58602a2b948a07bf194c80-runc.OVidDP.mount: Deactivated successfully. 
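By this point sandbox 4a96eeae61eb4ea8053b1cbfb8e5da2fcaecd8aab5d2e218bfd4cd1ef020d578 has run Cilium's init containers in order (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) and started the long-lived cilium-agent. Each "shim disconnected" warning above is the normal teardown of a container that has run to completion, not a crash. The following is a hypothetical Go triage helper that pairs the two messages when a log in this format is piped to stdin; both regexes assume the exact containerd message text shown above.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // "StartContainer for \"<id>\" returns successfully" marks a start;
        // "shim disconnected" id=<id> marks the shim teardown after exit.
        started := regexp.MustCompile(`StartContainer for \\?"([0-9a-f]{64})\\?" returns successfully`)
        gone := regexp.MustCompile(`shim disconnected" id=([0-9a-f]{64})`)
        state := map[string]string{} // container id -> last observed phase

        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // these log lines are long
        for sc.Scan() {
            line := sc.Text()
            if m := started.FindStringSubmatch(line); m != nil {
                state[m[1]] = "started"
            }
            if m := gone.FindStringSubmatch(line); m != nil {
                state[m[1]] = "exited (shim torn down)"
            }
        }
        for id, phase := range state {
            fmt.Printf("%s... %s\n", id[:12], phase)
        }
    }

Run over this excerpt it reports the four init containers as exited and cilium-agent as still started, which is the healthy pattern.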
Feb 9 19:07:23.453501 systemd-networkd[1557]: lxc_health: Link UP Feb 9 19:07:23.466537 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:07:23.466365 systemd-networkd[1557]: lxc_health: Gained carrier Feb 9 19:07:23.757819 kubelet[2592]: E0209 19:07:23.757771 2592 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-x4l7h" podUID=decb5516-a569-4398-a0ef-09332dc36be6 Feb 9 19:07:24.635141 kubelet[2592]: I0209 19:07:24.635096 2592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6dh5w" podStartSLOduration=8.635052212 pod.CreationTimestamp="2024-02-09 19:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:07:21.273989307 +0000 UTC m=+207.701380149" watchObservedRunningTime="2024-02-09 19:07:24.635052212 +0000 UTC m=+211.062443054" Feb 9 19:07:24.638162 systemd-networkd[1557]: lxc_health: Gained IPv6LL Feb 9 19:07:29.801651 sshd[4672]: pam_unix(sshd:session): session closed for user core Feb 9 19:07:29.805017 systemd[1]: sshd@24-10.200.8.38:22-10.200.12.6:47656.service: Deactivated successfully. Feb 9 19:07:29.806181 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 19:07:29.808104 systemd-logind[1372]: Session 27 logged out. Waiting for processes to exit. Feb 9 19:07:29.809365 systemd-logind[1372]: Removed session 27. Feb 9 19:07:45.148631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcea027d155786f35819ef259d7b0430191d879595193bfd8281ee213c23cdb1-rootfs.mount: Deactivated successfully. 
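The podStartSLOduration=8.635052212 above is simply watchObservedRunningTime minus the pod's CreationTimestamp; firstStartedPulling/lastFinishedPulling are the zero time because no image pull contributed. A quick Go check of that arithmetic, with both timestamps copied verbatim from the entry:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // The kubelet logs these in Go's default time.Time string format.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2024-02-09 19:07:16 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2024-02-09 19:07:24.635052212 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // Prints 8.635052212, matching podStartSLOduration in the log.
        fmt.Println(observed.Sub(created).Seconds())
    }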
Feb 9 19:07:45.287514 env[1401]: time="2024-02-09T19:07:45.287456597Z" level=info msg="shim disconnected" id=bcea027d155786f35819ef259d7b0430191d879595193bfd8281ee213c23cdb1 Feb 9 19:07:45.287514 env[1401]: time="2024-02-09T19:07:45.287511798Z" level=warning msg="cleaning up after shim disconnected" id=bcea027d155786f35819ef259d7b0430191d879595193bfd8281ee213c23cdb1 namespace=k8s.io Feb 9 19:07:45.288181 env[1401]: time="2024-02-09T19:07:45.287526698Z" level=info msg="cleaning up dead shim" Feb 9 19:07:45.295662 env[1401]: time="2024-02-09T19:07:45.295621264Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5744 runtime=io.containerd.runc.v2\n" Feb 9 19:07:45.306238 kubelet[2592]: I0209 19:07:45.306138 2592 scope.go:115] "RemoveContainer" containerID="bcea027d155786f35819ef259d7b0430191d879595193bfd8281ee213c23cdb1" Feb 9 19:07:45.312345 env[1401]: time="2024-02-09T19:07:45.312302499Z" level=info msg="CreateContainer within sandbox \"071c01048a38aebbc41295d9cca5092fb1589882ae7bc444661674c7454ba719\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 9 19:07:45.343628 env[1401]: time="2024-02-09T19:07:45.343583152Z" level=info msg="CreateContainer within sandbox \"071c01048a38aebbc41295d9cca5092fb1589882ae7bc444661674c7454ba719\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2fea236831cb2a50927758b556d81d854eeb1e103c689b79711ae75b5aade1e0\"" Feb 9 19:07:45.344077 env[1401]: time="2024-02-09T19:07:45.344048256Z" level=info msg="StartContainer for \"2fea236831cb2a50927758b556d81d854eeb1e103c689b79711ae75b5aade1e0\"" Feb 9 19:07:45.415965 env[1401]: time="2024-02-09T19:07:45.415513435Z" level=info msg="StartContainer for \"2fea236831cb2a50927758b556d81d854eeb1e103c689b79711ae75b5aade1e0\" returns successfully" Feb 9 19:07:48.463863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb1c5e2cd761d5aa6a8eefaf288ac93f0f8c98ed268676baedc6c75919fba32a-rootfs.mount: Deactivated successfully. 
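The kubelet entries that follow show lease renewal failing: first a client-side timeout on the Put against https://10.200.8.38:6443, then etcd read timeouts surfacing through the API server. The kubelet renews a Lease object named after the node in the kube-node-lease namespace, and stalled renewals are what eventually mark the node NotReady. Below is a client-go diagnostic sketch that reports how stale that Lease is; the kubeconfig path is an assumption, and this is an external check, not kubelet code.

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; any kubeconfig with read access works.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Node name taken from the log above.
        lease, err := cs.CoordinationV1().Leases("kube-node-lease").
            Get(context.TODO(), "ci-3510.3.2-a-c71e69a144", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if lease.Spec.RenewTime == nil {
            fmt.Println("lease has never been renewed")
            return
        }
        fmt.Printf("lease last renewed %s ago\n",
            time.Since(lease.Spec.RenewTime.Time).Round(time.Millisecond))
    }

A renew age well beyond the kubelet's ~10s interval corroborates the timeouts logged below.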
Feb 9 19:07:48.465501 kubelet[2592]: E0209 19:07:48.465198 2592 controller.go:189] failed to update lease, error: Put "https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-c71e69a144?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 19:07:48.480620 env[1401]: time="2024-02-09T19:07:48.480571215Z" level=info msg="shim disconnected" id=fb1c5e2cd761d5aa6a8eefaf288ac93f0f8c98ed268676baedc6c75919fba32a Feb 9 19:07:48.480620 env[1401]: time="2024-02-09T19:07:48.480620415Z" level=warning msg="cleaning up after shim disconnected" id=fb1c5e2cd761d5aa6a8eefaf288ac93f0f8c98ed268676baedc6c75919fba32a namespace=k8s.io Feb 9 19:07:48.481271 env[1401]: time="2024-02-09T19:07:48.480631315Z" level=info msg="cleaning up dead shim" Feb 9 19:07:48.489182 env[1401]: time="2024-02-09T19:07:48.489141783Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5805 runtime=io.containerd.runc.v2\n" Feb 9 19:07:49.316696 kubelet[2592]: I0209 19:07:49.316661 2592 scope.go:115] "RemoveContainer" containerID="fb1c5e2cd761d5aa6a8eefaf288ac93f0f8c98ed268676baedc6c75919fba32a" Feb 9 19:07:49.318796 env[1401]: time="2024-02-09T19:07:49.318754706Z" level=info msg="CreateContainer within sandbox \"ea9c715e08a9d1bcd352c16ac71aa09f865d4da67358ed53ec740f824b36aa71\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 9 19:07:49.343948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2662501464.mount: Deactivated successfully. Feb 9 19:07:49.358205 env[1401]: time="2024-02-09T19:07:49.358153220Z" level=info msg="CreateContainer within sandbox \"ea9c715e08a9d1bcd352c16ac71aa09f865d4da67358ed53ec740f824b36aa71\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b1f6f50e7b891a544f9cbf6696650784541c67aa16d16e21c7d29a9b278bdcdb\"" Feb 9 19:07:49.358774 env[1401]: time="2024-02-09T19:07:49.358731024Z" level=info msg="StartContainer for \"b1f6f50e7b891a544f9cbf6696650784541c67aa16d16e21c7d29a9b278bdcdb\"" Feb 9 19:07:49.448239 env[1401]: time="2024-02-09T19:07:49.448186137Z" level=info msg="StartContainer for \"b1f6f50e7b891a544f9cbf6696650784541c67aa16d16e21c7d29a9b278bdcdb\" returns successfully" Feb 9 19:07:50.945162 kubelet[2592]: E0209 19:07:50.945121 2592 controller.go:189] failed to update lease, error: rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.38:59340->10.200.8.27:2379: read: connection timed out Feb 9 19:07:53.696396 env[1401]: time="2024-02-09T19:07:53.696354557Z" level=info msg="StopPodSandbox for \"ce8b2c51ba78af58d1df7e3d8e37375860c0b92280eafd59186bd2baefff1295\"" Feb 9 19:07:53.699017 env[1401]: time="2024-02-09T19:07:53.696450758Z" level=info msg="TearDown network for sandbox \"ce8b2c51ba78af58d1df7e3d8e37375860c0b92280eafd59186bd2baefff1295\" successfully" Feb 9 19:07:53.699017 env[1401]: time="2024-02-09T19:07:53.696541858Z" level=info msg="StopPodSandbox for \"ce8b2c51ba78af58d1df7e3d8e37375860c0b92280eafd59186bd2baefff1295\" returns successfully" Feb 9 19:07:53.699017 env[1401]: time="2024-02-09T19:07:53.697354565Z" level=info msg="RemovePodSandbox for \"ce8b2c51ba78af58d1df7e3d8e37375860c0b92280eafd59186bd2baefff1295\"" Feb 9 19:07:53.699017 env[1401]: time="2024-02-09T19:07:53.697387365Z" level=info msg="Forcibly stopping sandbox \"ce8b2c51ba78af58d1df7e3d8e37375860c0b92280eafd59186bd2baefff1295\"" Feb 9 19:07:53.699017 env[1401]: 
time="2024-02-09T19:07:53.697462966Z" level=info msg="TearDown network for sandbox \"ce8b2c51ba78af58d1df7e3d8e37375860c0b92280eafd59186bd2baefff1295\" successfully" Feb 9 19:07:53.705724 env[1401]: time="2024-02-09T19:07:53.705686830Z" level=info msg="RemovePodSandbox \"ce8b2c51ba78af58d1df7e3d8e37375860c0b92280eafd59186bd2baefff1295\" returns successfully" Feb 9 19:07:53.706149 env[1401]: time="2024-02-09T19:07:53.706121733Z" level=info msg="StopPodSandbox for \"a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2\"" Feb 9 19:07:53.706265 env[1401]: time="2024-02-09T19:07:53.706200734Z" level=info msg="TearDown network for sandbox \"a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2\" successfully" Feb 9 19:07:53.706265 env[1401]: time="2024-02-09T19:07:53.706240834Z" level=info msg="StopPodSandbox for \"a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2\" returns successfully" Feb 9 19:07:53.706544 env[1401]: time="2024-02-09T19:07:53.706519937Z" level=info msg="RemovePodSandbox for \"a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2\"" Feb 9 19:07:53.706651 env[1401]: time="2024-02-09T19:07:53.706549937Z" level=info msg="Forcibly stopping sandbox \"a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2\"" Feb 9 19:07:53.706651 env[1401]: time="2024-02-09T19:07:53.706627637Z" level=info msg="TearDown network for sandbox \"a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2\" successfully" Feb 9 19:07:53.711454 env[1401]: time="2024-02-09T19:07:53.711424675Z" level=info msg="RemovePodSandbox \"a04b879c41c3c5d67ce983622c4b1e45d149e79743736777172ce554a39a2db2\" returns successfully" Feb 9 19:07:53.711734 env[1401]: time="2024-02-09T19:07:53.711710977Z" level=info msg="StopPodSandbox for \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\"" Feb 9 19:07:53.711823 env[1401]: time="2024-02-09T19:07:53.711786978Z" level=info msg="TearDown network for sandbox \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\" successfully" Feb 9 19:07:53.711873 env[1401]: time="2024-02-09T19:07:53.711825878Z" level=info msg="StopPodSandbox for \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\" returns successfully" Feb 9 19:07:53.712209 env[1401]: time="2024-02-09T19:07:53.712185581Z" level=info msg="RemovePodSandbox for \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\"" Feb 9 19:07:53.712296 env[1401]: time="2024-02-09T19:07:53.712212081Z" level=info msg="Forcibly stopping sandbox \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\"" Feb 9 19:07:53.712353 env[1401]: time="2024-02-09T19:07:53.712291082Z" level=info msg="TearDown network for sandbox \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\" successfully" Feb 9 19:07:53.717589 env[1401]: time="2024-02-09T19:07:53.717559923Z" level=info msg="RemovePodSandbox \"d9da176d643c602c305018f38eab6002bc48cf7b9ee6c24435ef64e765496742\" returns successfully" Feb 9 19:07:53.887234 kubelet[2592]: E0209 19:07:53.887104 2592 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-c71e69a144.17b2475985a242f7", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-c71e69a144", UID:"b18b83774a9e432056951571196a0ed3", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-c71e69a144"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 7, 38, 546504439, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 7, 38, 546504439, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.38:59184->10.200.8.27:2379: read: connection timed out' (will not retry!) Feb 9 19:08:00.948042 kubelet[2592]: E0209 19:08:00.946240 2592 controller.go:189] failed to update lease, error: Put "https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-c71e69a144?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 19:08:05.428066 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.441296 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.454128 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.467607 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.481051 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.493679 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.493912 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.504035 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.504251 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.514257 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.514467 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.524269 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.529640 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.529795 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.539171 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.554396 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 
19:08:05.559784 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.559930 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.560082 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.560215 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.569947 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.570170 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.580902 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.581117 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.592106 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.592347 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.603334 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.640735 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.646232 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.646375 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.646507 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.646634 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.646764 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.646892 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.647024 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.647165 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.657121 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.662545 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.662695 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.672881 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.700691 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.700922 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.701075 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: 
scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.701212 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.701341 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:05.701469 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
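All of the hv_storvsc lines above report the same failure shape: a SCSI WRITE(10) command (opcode 0x2a) completing with SCSI status 0x2, SRB status 0x4, and Hyper-V status 0xc0000001. A small Go decoder for those fields follows; the name mappings are assumptions taken from the SCSI command set, the Windows SRB status codes, and NTSTATUS values, not from the hv_storvsc driver itself, so verify against the kernel sources before relying on them.

    package main

    import "fmt"

    func main() {
        // Field meanings assumed from public SCSI/SRB/NTSTATUS definitions.
        opcodes := map[byte]string{0x28: "READ(10)", 0x2a: "WRITE(10)"}
        scsiStatus := map[byte]string{0x00: "GOOD", 0x02: "CHECK CONDITION", 0x08: "BUSY"}
        srbStatus := map[byte]string{0x01: "SRB_STATUS_SUCCESS", 0x04: "SRB_STATUS_ERROR"}
        hvStatus := map[uint32]string{0xc0000001: "STATUS_UNSUCCESSFUL"}

        // Values from the log lines: cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
        fmt.Println(opcodes[0x2a], "/", scsiStatus[0x02], "/",
            srbStatus[0x04], "/", hvStatus[0xc0000001])
    }

Read that way, the burst is the virtual disk rejecting writes at the host side, which lines up with the etcd read timeouts and failed lease renewals logged just before it.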