Feb 9 19:00:13.017594 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024 Feb 9 19:00:13.017630 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:00:13.017645 kernel: BIOS-provided physical RAM map: Feb 9 19:00:13.017656 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 9 19:00:13.017665 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Feb 9 19:00:13.017675 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Feb 9 19:00:13.017691 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Feb 9 19:00:13.017704 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Feb 9 19:00:13.017715 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Feb 9 19:00:13.017726 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Feb 9 19:00:13.017741 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Feb 9 19:00:13.017753 kernel: printk: bootconsole [earlyser0] enabled Feb 9 19:00:13.017764 kernel: NX (Execute Disable) protection: active Feb 9 19:00:13.017776 kernel: efi: EFI v2.70 by Microsoft Feb 9 19:00:13.017793 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018 Feb 9 19:00:13.017806 kernel: random: crng init done Feb 9 19:00:13.017822 kernel: SMBIOS 3.1.0 present. 
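The BIOS-e820 map above is the physical RAM layout the firmware handed to the kernel. As a cross-check, a minimal Python sketch (using the e820 lines verbatim, as dmesg would show them) sums the regions marked usable; the total comes out just under 8 GiB, consistent with the later "Memory: 8081200K/8387460K available" line.

```python
import re

# e820 lines copied verbatim from the boot log above.
E820_LINES = """
BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
"""

PATTERN = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

def usable_bytes(lines: str) -> int:
    """Sum the sizes of all regions marked 'usable' (ranges are inclusive)."""
    total = 0
    for line in lines.strip().splitlines():
        m = PATTERN.search(line)
        if m and m.group(3).strip() == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1
    return total

if __name__ == "__main__":
    total = usable_bytes(E820_LINES)
    print(f"usable RAM: {total} bytes ({total / 2**30:.2f} GiB)")
```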
Feb 9 19:00:13.017832 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023 Feb 9 19:00:13.017842 kernel: Hypervisor detected: Microsoft Hyper-V Feb 9 19:00:13.017852 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Feb 9 19:00:13.017861 kernel: Hyper-V Host Build:20348-10.0-1-0.1544 Feb 9 19:00:13.017871 kernel: Hyper-V: Nested features: 0x1e0101 Feb 9 19:00:13.017884 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Feb 9 19:00:13.017895 kernel: Hyper-V: Using hypercall for remote TLB flush Feb 9 19:00:13.017906 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 9 19:00:13.017918 kernel: tsc: Marking TSC unstable due to running on Hyper-V Feb 9 19:00:13.017930 kernel: tsc: Detected 2593.907 MHz processor Feb 9 19:00:13.017942 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 19:00:13.017955 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 19:00:13.017966 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Feb 9 19:00:13.017978 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 19:00:13.017990 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Feb 9 19:00:13.018004 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Feb 9 19:00:13.018016 kernel: Using GB pages for direct mapping Feb 9 19:00:13.018028 kernel: Secure boot disabled Feb 9 19:00:13.018040 kernel: ACPI: Early table checksum verification disabled Feb 9 19:00:13.018051 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Feb 9 19:00:13.018063 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:00:13.018075 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:00:13.018088 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Feb 9 19:00:13.018107 kernel: ACPI: FACS 0x000000003FFFE000 000040 Feb 9 19:00:13.018118 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:00:13.018129 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:00:13.018140 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:00:13.018150 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:00:13.018161 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:00:13.018176 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:00:13.018188 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:00:13.018201 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Feb 9 19:00:13.018214 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Feb 9 19:00:13.018226 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Feb 9 19:00:13.018239 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Feb 9 19:00:13.018251 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Feb 9 19:00:13.018264 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Feb 9 19:00:13.018278 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Feb 9 19:00:13.018291 kernel: ACPI: 
Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Feb 9 19:00:13.018303 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Feb 9 19:00:13.018315 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Feb 9 19:00:13.018328 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 9 19:00:13.018341 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 9 19:00:13.018354 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Feb 9 19:00:13.018367 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Feb 9 19:00:13.018379 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Feb 9 19:00:13.018395 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Feb 9 19:00:13.018408 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Feb 9 19:00:13.018420 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Feb 9 19:00:13.018433 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Feb 9 19:00:13.018446 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Feb 9 19:00:13.018459 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Feb 9 19:00:13.018472 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Feb 9 19:00:13.018485 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Feb 9 19:00:13.018496 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Feb 9 19:00:13.018511 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Feb 9 19:00:13.018534 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Feb 9 19:00:13.018546 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Feb 9 19:00:13.018557 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Feb 9 19:00:13.018569 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Feb 9 19:00:13.018581 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Feb 9 19:00:13.018594 kernel: Zone ranges: Feb 9 19:00:13.018607 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 19:00:13.018620 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 9 19:00:13.018636 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Feb 9 19:00:13.018649 kernel: Movable zone start for each node Feb 9 19:00:13.018662 kernel: Early memory node ranges Feb 9 19:00:13.018674 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 9 19:00:13.018687 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Feb 9 19:00:13.018700 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Feb 9 19:00:13.018713 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Feb 9 19:00:13.018726 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Feb 9 19:00:13.018739 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 19:00:13.018754 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 9 19:00:13.018767 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Feb 9 19:00:13.018780 kernel: ACPI: PM-Timer IO Port: 0x408 Feb 9 19:00:13.018792 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Feb 9 19:00:13.018805 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Feb 9 19:00:13.018819 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 
9 19:00:13.018832 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 19:00:13.018844 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Feb 9 19:00:13.018857 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 9 19:00:13.018873 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Feb 9 19:00:13.018886 kernel: Booting paravirtualized kernel on Hyper-V Feb 9 19:00:13.018899 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 19:00:13.018912 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 9 19:00:13.018925 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 9 19:00:13.018938 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 9 19:00:13.018950 kernel: pcpu-alloc: [0] 0 1 Feb 9 19:00:13.018963 kernel: Hyper-V: PV spinlocks enabled Feb 9 19:00:13.018976 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 9 19:00:13.018990 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Feb 9 19:00:13.019003 kernel: Policy zone: Normal Feb 9 19:00:13.019018 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:00:13.019031 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 19:00:13.019044 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 9 19:00:13.019057 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 19:00:13.019070 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 19:00:13.019084 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved) Feb 9 19:00:13.019099 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 9 19:00:13.019112 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 19:00:13.019135 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 19:00:13.019151 kernel: rcu: Hierarchical RCU implementation. Feb 9 19:00:13.019165 kernel: rcu: RCU event tracing is enabled. Feb 9 19:00:13.019179 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 9 19:00:13.019193 kernel: Rude variant of Tasks RCU enabled. Feb 9 19:00:13.019206 kernel: Tracing variant of Tasks RCU enabled. Feb 9 19:00:13.019223 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
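The kernel command line echoed above mixes bare flags (flatcar.autologin) with key=value options (flatcar.oem.id=azure, verity.usrhash=...); options the kernel does not recognise, such as BOOT_IMAGE=..., are handed on to user space, as the "Unknown kernel command line parameters" message notes. A rough Python sketch for splitting such a command line (reading /proc/cmdline on a live system is an assumption, and quoting is ignored):

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into a dict; bare words map to True."""
    params = {}
    for token in cmdline.split():
        if "=" in token:
            key, _, value = token.partition("=")
            params[key] = value
        else:
            params[token] = True
    return params

if __name__ == "__main__":
    # On a running system the same string is available in /proc/cmdline.
    with open("/proc/cmdline") as f:
        params = parse_cmdline(f.read())
    print(params.get("flatcar.oem.id"))   # "azure" in the log above
    print(params.get("verity.usrhash"))   # dm-verity root hash for the /usr partition
```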
Feb 9 19:00:13.019237 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 9 19:00:13.019251 kernel: Using NULL legacy PIC Feb 9 19:00:13.019267 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Feb 9 19:00:13.019280 kernel: Console: colour dummy device 80x25 Feb 9 19:00:13.019294 kernel: printk: console [tty1] enabled Feb 9 19:00:13.019308 kernel: printk: console [ttyS0] enabled Feb 9 19:00:13.019321 kernel: printk: bootconsole [earlyser0] disabled Feb 9 19:00:13.019337 kernel: ACPI: Core revision 20210730 Feb 9 19:00:13.019351 kernel: Failed to register legacy timer interrupt Feb 9 19:00:13.019364 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 19:00:13.019378 kernel: Hyper-V: Using IPI hypercalls Feb 9 19:00:13.019392 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Feb 9 19:00:13.019406 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 9 19:00:13.019419 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 9 19:00:13.019433 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 19:00:13.019446 kernel: Spectre V2 : Mitigation: Retpolines Feb 9 19:00:13.019460 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 19:00:13.019476 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 9 19:00:13.019490 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Feb 9 19:00:13.019504 kernel: RETBleed: Vulnerable Feb 9 19:00:13.019534 kernel: Speculative Store Bypass: Vulnerable Feb 9 19:00:13.019548 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Feb 9 19:00:13.019561 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 9 19:00:13.019575 kernel: GDS: Unknown: Dependent on hypervisor status Feb 9 19:00:13.019588 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 9 19:00:13.019601 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 9 19:00:13.019615 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 9 19:00:13.019631 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 9 19:00:13.019645 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 9 19:00:13.019658 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 9 19:00:13.019672 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 9 19:00:13.019685 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Feb 9 19:00:13.019699 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Feb 9 19:00:13.019712 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Feb 9 19:00:13.019726 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Feb 9 19:00:13.019739 kernel: Freeing SMP alternatives memory: 32K Feb 9 19:00:13.019752 kernel: pid_max: default: 32768 minimum: 301 Feb 9 19:00:13.019766 kernel: LSM: Security Framework initializing Feb 9 19:00:13.019779 kernel: SELinux: Initializing. 
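The BogoMIPS figure above is not measured but derived from the timer frequency: loops_per_jiffy matches the 2593.907 MHz TSC in kHz, and an inferred tick rate of HZ=1000 (not stated in the log) reproduces the printed value with the kernel's usual formula:

```python
# Numbers taken from the log: lpj=2593907, "tsc: Detected 2593.907 MHz processor".
# The kernel prints BogoMIPS as lpj / (500000 / HZ); HZ=1000 is inferred from the
# printed value, not stated anywhere in the log.
lpj = 2_593_907          # loops_per_jiffy, here equal to the TSC frequency in kHz
HZ = 1000                # assumed scheduler tick rate

bogomips = lpj / (500_000 / HZ)
print(f"{bogomips:.2f} BogoMIPS")   # -> 5187.81, matching the calibration line above
```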
Feb 9 19:00:13.019795 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 9 19:00:13.019809 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 9 19:00:13.019823 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 9 19:00:13.019836 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 9 19:00:13.019850 kernel: signal: max sigframe size: 3632 Feb 9 19:00:13.019863 kernel: rcu: Hierarchical SRCU implementation. Feb 9 19:00:13.019877 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 9 19:00:13.019891 kernel: smp: Bringing up secondary CPUs ... Feb 9 19:00:13.019904 kernel: x86: Booting SMP configuration: Feb 9 19:00:13.019918 kernel: .... node #0, CPUs: #1 Feb 9 19:00:13.019935 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Feb 9 19:00:13.019949 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 9 19:00:13.019963 kernel: smp: Brought up 1 node, 2 CPUs Feb 9 19:00:13.019976 kernel: smpboot: Max logical packages: 1 Feb 9 19:00:13.019990 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Feb 9 19:00:13.020003 kernel: devtmpfs: initialized Feb 9 19:00:13.020017 kernel: x86/mm: Memory block size: 128MB Feb 9 19:00:13.020030 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Feb 9 19:00:13.020047 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 19:00:13.020060 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 9 19:00:13.020074 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 19:00:13.020088 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 19:00:13.020101 kernel: audit: initializing netlink subsys (disabled) Feb 9 19:00:13.020115 kernel: audit: type=2000 audit(1707505212.023:1): state=initialized audit_enabled=0 res=1 Feb 9 19:00:13.020129 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 19:00:13.020143 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 19:00:13.020157 kernel: cpuidle: using governor menu Feb 9 19:00:13.020173 kernel: ACPI: bus type PCI registered Feb 9 19:00:13.020186 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 19:00:13.020200 kernel: dca service started, version 1.12.1 Feb 9 19:00:13.020213 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 9 19:00:13.020227 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 19:00:13.020241 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 19:00:13.020255 kernel: ACPI: Added _OSI(Module Device) Feb 9 19:00:13.020268 kernel: ACPI: Added _OSI(Processor Device) Feb 9 19:00:13.020282 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 19:00:13.020297 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 19:00:13.020311 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 19:00:13.020324 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 19:00:13.020338 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 19:00:13.020352 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 19:00:13.020365 kernel: ACPI: Interpreter enabled Feb 9 19:00:13.020379 kernel: ACPI: PM: (supports S0 S5) Feb 9 19:00:13.020392 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 19:00:13.020406 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 19:00:13.020422 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Feb 9 19:00:13.020436 kernel: iommu: Default domain type: Translated Feb 9 19:00:13.020450 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 19:00:13.020463 kernel: vgaarb: loaded Feb 9 19:00:13.020477 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 19:00:13.020491 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 19:00:13.020505 kernel: PTP clock support registered Feb 9 19:00:13.020531 kernel: Registered efivars operations Feb 9 19:00:13.020544 kernel: PCI: Using ACPI for IRQ routing Feb 9 19:00:13.020557 kernel: PCI: System does not support PCI Feb 9 19:00:13.020571 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Feb 9 19:00:13.020584 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 19:00:13.020597 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 19:00:13.020609 kernel: pnp: PnP ACPI init Feb 9 19:00:13.020621 kernel: pnp: PnP ACPI: found 3 devices Feb 9 19:00:13.020634 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 19:00:13.020646 kernel: NET: Registered PF_INET protocol family Feb 9 19:00:13.020658 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 9 19:00:13.020673 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 9 19:00:13.020685 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 19:00:13.020698 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 19:00:13.020711 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 9 19:00:13.020723 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 9 19:00:13.020735 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 9 19:00:13.020748 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 9 19:00:13.020760 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 19:00:13.020772 kernel: NET: Registered PF_XDP protocol family Feb 9 19:00:13.020787 kernel: PCI: CLS 0 bytes, default 64 Feb 9 19:00:13.020799 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 9 19:00:13.020812 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB) Feb 9 19:00:13.020823 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed 
counters, 10737418240 ms ovfl timer Feb 9 19:00:13.020836 kernel: Initialise system trusted keyrings Feb 9 19:00:13.020848 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 9 19:00:13.020861 kernel: Key type asymmetric registered Feb 9 19:00:13.020874 kernel: Asymmetric key parser 'x509' registered Feb 9 19:00:13.020889 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 19:00:13.020907 kernel: io scheduler mq-deadline registered Feb 9 19:00:13.020920 kernel: io scheduler kyber registered Feb 9 19:00:13.020932 kernel: io scheduler bfq registered Feb 9 19:00:13.020943 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 19:00:13.020954 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 19:00:13.020966 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 19:00:13.020978 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 9 19:00:13.020990 kernel: i8042: PNP: No PS/2 controller found. Feb 9 19:00:13.021140 kernel: rtc_cmos 00:02: registered as rtc0 Feb 9 19:00:13.021250 kernel: rtc_cmos 00:02: setting system clock to 2024-02-09T19:00:12 UTC (1707505212) Feb 9 19:00:13.021349 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Feb 9 19:00:13.021365 kernel: fail to initialize ptp_kvm Feb 9 19:00:13.021378 kernel: intel_pstate: CPU model not supported Feb 9 19:00:13.021391 kernel: efifb: probing for efifb Feb 9 19:00:13.021405 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 9 19:00:13.021418 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 9 19:00:13.021431 kernel: efifb: scrolling: redraw Feb 9 19:00:13.021447 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 9 19:00:13.021461 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 19:00:13.021474 kernel: fb0: EFI VGA frame buffer device Feb 9 19:00:13.021487 kernel: pstore: Registered efi as persistent store backend Feb 9 19:00:13.021500 kernel: NET: Registered PF_INET6 protocol family Feb 9 19:00:13.021513 kernel: Segment Routing with IPv6 Feb 9 19:00:13.021538 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 19:00:13.021549 kernel: NET: Registered PF_PACKET protocol family Feb 9 19:00:13.021562 kernel: Key type dns_resolver registered Feb 9 19:00:13.021578 kernel: IPI shorthand broadcast: enabled Feb 9 19:00:13.021591 kernel: sched_clock: Marking stable (806612000, 26467400)->(1036910500, -203831100) Feb 9 19:00:13.021604 kernel: registered taskstats version 1 Feb 9 19:00:13.021617 kernel: Loading compiled-in X.509 certificates Feb 9 19:00:13.021630 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 9 19:00:13.021643 kernel: Key type .fscrypt registered Feb 9 19:00:13.021656 kernel: Key type fscrypt-provisioning registered Feb 9 19:00:13.021669 kernel: pstore: Using crash dump compression: deflate Feb 9 19:00:13.021685 kernel: ima: No TPM chip found, activating TPM-bypass! 
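The rtc_cmos line above reports both the wall-clock time and the raw epoch value, and the same epoch shows up in the audit records (audit(1707505212.023:1)). A one-line Python check of the correspondence:

```python
from datetime import datetime, timezone

# Epoch seconds as printed by rtc_cmos and by the audit records above.
epoch = 1707505212
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# -> 2024-02-09T19:00:12+00:00, matching "setting system clock to 2024-02-09T19:00:12 UTC"
```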
Feb 9 19:00:13.021698 kernel: ima: Allocated hash algorithm: sha1 Feb 9 19:00:13.021711 kernel: ima: No architecture policies found Feb 9 19:00:13.021724 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 19:00:13.021737 kernel: Write protecting the kernel read-only data: 28672k Feb 9 19:00:13.021750 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 19:00:13.021763 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 19:00:13.021776 kernel: Run /init as init process Feb 9 19:00:13.021789 kernel: with arguments: Feb 9 19:00:13.021802 kernel: /init Feb 9 19:00:13.021817 kernel: with environment: Feb 9 19:00:13.021830 kernel: HOME=/ Feb 9 19:00:13.021842 kernel: TERM=linux Feb 9 19:00:13.021855 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 19:00:13.021871 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:00:13.021887 systemd[1]: Detected virtualization microsoft. Feb 9 19:00:13.021901 systemd[1]: Detected architecture x86-64. Feb 9 19:00:13.021916 systemd[1]: Running in initrd. Feb 9 19:00:13.021930 systemd[1]: No hostname configured, using default hostname. Feb 9 19:00:13.021943 systemd[1]: Hostname set to . Feb 9 19:00:13.021957 systemd[1]: Initializing machine ID from random generator. Feb 9 19:00:13.021971 systemd[1]: Queued start job for default target initrd.target. Feb 9 19:00:13.021984 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:00:13.021998 systemd[1]: Reached target cryptsetup.target. Feb 9 19:00:13.022011 systemd[1]: Reached target paths.target. Feb 9 19:00:13.022025 systemd[1]: Reached target slices.target. Feb 9 19:00:13.022040 systemd[1]: Reached target swap.target. Feb 9 19:00:13.022054 systemd[1]: Reached target timers.target. Feb 9 19:00:13.022068 systemd[1]: Listening on iscsid.socket. Feb 9 19:00:13.022082 systemd[1]: Listening on iscsiuio.socket. Feb 9 19:00:13.022096 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:00:13.022110 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:00:13.022123 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:00:13.022139 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:00:13.022153 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:00:13.022167 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:00:13.022181 systemd[1]: Reached target sockets.target. Feb 9 19:00:13.022195 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:00:13.022208 systemd[1]: Finished network-cleanup.service. Feb 9 19:00:13.022222 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 19:00:13.022236 systemd[1]: Starting systemd-journald.service... Feb 9 19:00:13.022250 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:00:13.022266 systemd[1]: Starting systemd-resolved.service... Feb 9 19:00:13.022279 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 19:00:13.022297 systemd-journald[183]: Journal started Feb 9 19:00:13.022354 systemd-journald[183]: Runtime Journal (/run/log/journal/7a0f91b901de43bcb39f5ea28dbe9a89) is 8.0M, max 159.0M, 151.0M free. Feb 9 19:00:13.022849 systemd-modules-load[184]: Inserted module 'overlay' Feb 9 19:00:13.034040 systemd[1]: Started systemd-journald.service. 
Feb 9 19:00:13.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.044533 kernel: audit: type=1130 audit(1707505213.033:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.044611 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:00:13.049253 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 19:00:13.053759 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 19:00:13.059436 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 19:00:13.064634 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:00:13.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.096508 kernel: audit: type=1130 audit(1707505213.048:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.096560 kernel: audit: type=1130 audit(1707505213.053:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.107551 kernel: audit: type=1130 audit(1707505213.057:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.101991 systemd-resolved[185]: Positive Trust Anchors: Feb 9 19:00:13.102005 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:00:13.102041 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:00:13.126716 systemd-resolved[185]: Defaulting to hostname 'linux'. Feb 9 19:00:13.143218 kernel: audit: type=1130 audit(1707505213.128:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.127636 systemd[1]: Started systemd-resolved.service. 
Feb 9 19:00:13.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.129212 systemd[1]: Reached target nss-lookup.target. Feb 9 19:00:13.192630 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 19:00:13.192660 kernel: audit: type=1130 audit(1707505213.129:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.192678 kernel: Bridge firewalling registered Feb 9 19:00:13.130127 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:00:13.167720 systemd-modules-load[184]: Inserted module 'br_netfilter' Feb 9 19:00:13.184988 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:00:13.212695 kernel: audit: type=1130 audit(1707505213.200:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.213454 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:00:13.220888 kernel: SCSI subsystem initialized Feb 9 19:00:13.225979 dracut-cmdline[201]: dracut-dracut-053 Feb 9 19:00:13.228603 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:00:13.261831 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 19:00:13.261872 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:00:13.266948 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:00:13.271267 systemd-modules-load[184]: Inserted module 'dm_multipath' Feb 9 19:00:13.274712 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:00:13.291789 kernel: audit: type=1130 audit(1707505213.276:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.292194 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:00:13.305208 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:00:13.320799 kernel: audit: type=1130 audit(1707505213.306:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:13.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.324536 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:00:13.337542 kernel: iscsi: registered transport (tcp) Feb 9 19:00:13.362099 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:00:13.362167 kernel: QLogic iSCSI HBA Driver Feb 9 19:00:13.391385 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:00:13.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.395914 systemd[1]: Starting dracut-pre-udev.service... Feb 9 19:00:13.446540 kernel: raid6: avx512x4 gen() 18354 MB/s Feb 9 19:00:13.465531 kernel: raid6: avx512x4 xor() 8107 MB/s Feb 9 19:00:13.485526 kernel: raid6: avx512x2 gen() 18319 MB/s Feb 9 19:00:13.505532 kernel: raid6: avx512x2 xor() 29846 MB/s Feb 9 19:00:13.525527 kernel: raid6: avx512x1 gen() 18231 MB/s Feb 9 19:00:13.544529 kernel: raid6: avx512x1 xor() 26864 MB/s Feb 9 19:00:13.565531 kernel: raid6: avx2x4 gen() 18275 MB/s Feb 9 19:00:13.585526 kernel: raid6: avx2x4 xor() 7638 MB/s Feb 9 19:00:13.606530 kernel: raid6: avx2x2 gen() 18118 MB/s Feb 9 19:00:13.625531 kernel: raid6: avx2x2 xor() 22283 MB/s Feb 9 19:00:13.644526 kernel: raid6: avx2x1 gen() 14134 MB/s Feb 9 19:00:13.665527 kernel: raid6: avx2x1 xor() 19474 MB/s Feb 9 19:00:13.685528 kernel: raid6: sse2x4 gen() 11739 MB/s Feb 9 19:00:13.704528 kernel: raid6: sse2x4 xor() 7160 MB/s Feb 9 19:00:13.724533 kernel: raid6: sse2x2 gen() 12590 MB/s Feb 9 19:00:13.743533 kernel: raid6: sse2x2 xor() 7506 MB/s Feb 9 19:00:13.762527 kernel: raid6: sse2x1 gen() 11490 MB/s Feb 9 19:00:13.785961 kernel: raid6: sse2x1 xor() 5885 MB/s Feb 9 19:00:13.785984 kernel: raid6: using algorithm avx512x4 gen() 18354 MB/s Feb 9 19:00:13.785998 kernel: raid6: .... xor() 8107 MB/s, rmw enabled Feb 9 19:00:13.789393 kernel: raid6: using avx512x2 recovery algorithm Feb 9 19:00:13.809542 kernel: xor: automatically using best checksumming function avx Feb 9 19:00:13.905546 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:00:13.914093 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:00:13.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.918000 audit: BPF prog-id=7 op=LOAD Feb 9 19:00:13.918000 audit: BPF prog-id=8 op=LOAD Feb 9 19:00:13.919346 systemd[1]: Starting systemd-udevd.service... Feb 9 19:00:13.933101 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 9 19:00:13.937869 systemd[1]: Started systemd-udevd.service. Feb 9 19:00:13.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.946153 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:00:13.961825 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Feb 9 19:00:13.992342 systemd[1]: Finished dracut-pre-trigger.service. 
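The raid6 lines above are the kernel benchmarking each available SIMD implementation and then selecting the fastest gen() variant. A small Python sketch over the same figures reproduces the choice of avx512x4:

```python
import re

# Throughput lines copied from the boot log above (MB/s for the gen() pass).
RAID6_LINES = """
raid6: avx512x4 gen() 18354 MB/s
raid6: avx512x2 gen() 18319 MB/s
raid6: avx512x1 gen() 18231 MB/s
raid6: avx2x4 gen() 18275 MB/s
raid6: avx2x2 gen() 18118 MB/s
raid6: avx2x1 gen() 14134 MB/s
raid6: sse2x4 gen() 11739 MB/s
raid6: sse2x2 gen() 12590 MB/s
raid6: sse2x1 gen() 11490 MB/s
"""

PATTERN = re.compile(r"raid6: (\S+) gen\(\) (\d+) MB/s")

best = max(PATTERN.finditer(RAID6_LINES), key=lambda m: int(m.group(2)))
print(f"fastest gen(): {best.group(1)} at {best.group(2)} MB/s")
# -> avx512x4 at 18354 MB/s, the algorithm the kernel reports choosing
```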
Feb 9 19:00:13.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:13.998211 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:00:14.031198 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:00:14.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:14.090536 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:00:14.116683 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 19:00:14.116731 kernel: AES CTR mode by8 optimization enabled Feb 9 19:00:14.116743 kernel: hv_vmbus: Vmbus version:5.2 Feb 9 19:00:14.130536 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 9 19:00:14.155537 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 9 19:00:14.165540 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 19:00:14.171559 kernel: hv_vmbus: registering driver hid_hyperv Feb 9 19:00:14.179583 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 9 19:00:14.179613 kernel: hv_vmbus: registering driver hv_storvsc Feb 9 19:00:14.179627 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 9 19:00:14.194545 kernel: hv_vmbus: registering driver hv_netvsc Feb 9 19:00:14.194585 kernel: scsi host1: storvsc_host_t Feb 9 19:00:14.194818 kernel: scsi host0: storvsc_host_t Feb 9 19:00:14.199601 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 9 19:00:14.208537 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 9 19:00:14.245780 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 9 19:00:14.245984 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 19:00:14.254421 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 9 19:00:14.254605 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 9 19:00:14.254728 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 9 19:00:14.254845 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 19:00:14.261884 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 9 19:00:14.262071 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 9 19:00:14.267542 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:00:14.271535 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 19:00:14.330488 kernel: hv_netvsc 002248a1-c12c-0022-48a1-c12c002248a1 eth0: VF slot 1 added Feb 9 19:00:14.339535 kernel: hv_vmbus: registering driver hv_pci Feb 9 19:00:14.350536 kernel: hv_pci 641426a6-49a9-4d05-84bd-e43554d62ffb: PCI VMBus probing: Using version 0x10004 Feb 9 19:00:14.361254 kernel: hv_pci 641426a6-49a9-4d05-84bd-e43554d62ffb: PCI host bridge to bus 49a9:00 Feb 9 19:00:14.361420 kernel: pci_bus 49a9:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 9 19:00:14.361567 kernel: pci_bus 49a9:00: No busn resource found for root bus, will use [bus 00-ff] Feb 9 19:00:14.370653 kernel: pci 49a9:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 9 19:00:14.380136 kernel: pci 49a9:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 9 19:00:14.395586 kernel: pci 49a9:00:02.0: enabling Extended Tags Feb 9 
19:00:14.407674 kernel: pci 49a9:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 49a9:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 9 19:00:14.416331 kernel: pci_bus 49a9:00: busn_res: [bus 00-ff] end is updated to 00 Feb 9 19:00:14.416500 kernel: pci 49a9:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 9 19:00:14.508538 kernel: mlx5_core 49a9:00:02.0: firmware version: 14.30.1350 Feb 9 19:00:14.595333 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:00:14.620537 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (438) Feb 9 19:00:14.641859 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:00:14.670535 kernel: mlx5_core 49a9:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 19:00:14.756077 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:00:14.815066 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:00:14.818738 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:00:14.819624 systemd[1]: Starting disk-uuid.service... Feb 9 19:00:14.835539 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:00:14.843556 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:00:14.858540 kernel: mlx5_core 49a9:00:02.0: Supported tc offload range - chains: 1, prios: 1 Feb 9 19:00:14.858763 kernel: mlx5_core 49a9:00:02.0: mlx5e_tc_post_act_init:40:(pid 190): firmware level support is missing Feb 9 19:00:14.874940 kernel: hv_netvsc 002248a1-c12c-0022-48a1-c12c002248a1 eth0: VF registering: eth1 Feb 9 19:00:14.875096 kernel: mlx5_core 49a9:00:02.0 eth1: joined to eth0 Feb 9 19:00:14.887539 kernel: mlx5_core 49a9:00:02.0 enP18857s1: renamed from eth1 Feb 9 19:00:15.852298 disk-uuid[563]: The operation has completed successfully. Feb 9 19:00:15.854987 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:00:15.929893 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:00:15.929993 systemd[1]: Finished disk-uuid.service. Feb 9 19:00:15.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:15.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:15.938632 systemd[1]: Starting verity-setup.service... Feb 9 19:00:15.969534 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 19:00:16.180702 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:00:16.183977 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:00:16.189580 systemd[1]: Finished verity-setup.service. Feb 9 19:00:16.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:16.262374 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:00:16.265888 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:00:16.264251 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:00:16.265015 systemd[1]: Starting ignition-setup.service... 
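The storvsc lines above report sda as 63737856 512-byte logical blocks; the GB/GiB figures in parentheses follow directly from that:

```python
# Figures from the "sd 0:0:0:0: [sda]" line above: 63737856 logical blocks of 512 bytes.
blocks, block_size = 63_737_856, 512
size_bytes = blocks * block_size

print(f"{size_bytes / 10**9:.1f} GB")   # -> 32.6 GB (decimal units)
print(f"{size_bytes / 2**30:.1f} GiB")  # -> 30.4 GiB (binary units), as printed by the kernel
```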
Feb 9 19:00:16.274173 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:00:16.294236 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:00:16.294282 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:00:16.294301 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:00:16.341301 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:00:16.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:16.346000 audit: BPF prog-id=9 op=LOAD Feb 9 19:00:16.347249 systemd[1]: Starting systemd-networkd.service... Feb 9 19:00:16.373786 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:00:16.374193 systemd-networkd[805]: lo: Link UP Feb 9 19:00:16.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:16.374197 systemd-networkd[805]: lo: Gained carrier Feb 9 19:00:16.375093 systemd-networkd[805]: Enumeration completed Feb 9 19:00:16.375725 systemd[1]: Started systemd-networkd.service. Feb 9 19:00:16.378080 systemd-networkd[805]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:00:16.380704 systemd[1]: Reached target network.target. Feb 9 19:00:16.389882 systemd[1]: Starting iscsiuio.service... Feb 9 19:00:16.406319 systemd[1]: Started iscsiuio.service. Feb 9 19:00:16.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:16.409406 systemd[1]: Starting iscsid.service... Feb 9 19:00:16.414002 iscsid[814]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:00:16.414002 iscsid[814]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 19:00:16.414002 iscsid[814]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 19:00:16.414002 iscsid[814]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:00:16.414002 iscsid[814]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:00:16.414002 iscsid[814]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:00:16.414002 iscsid[814]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:00:16.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:16.417393 systemd[1]: Started iscsid.service. Feb 9 19:00:16.447446 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:00:16.460551 kernel: mlx5_core 49a9:00:02.0 enP18857s1: Link up Feb 9 19:00:16.460973 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:00:16.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 19:00:16.465943 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:00:16.470355 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:00:16.474789 systemd[1]: Reached target remote-fs.target. Feb 9 19:00:16.479488 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:00:16.488269 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:00:16.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:16.511550 systemd[1]: Finished ignition-setup.service. Feb 9 19:00:16.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:16.516607 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:00:16.535539 kernel: hv_netvsc 002248a1-c12c-0022-48a1-c12c002248a1 eth0: Data path switched to VF: enP18857s1 Feb 9 19:00:16.540243 systemd-networkd[805]: enP18857s1: Link UP Feb 9 19:00:16.542354 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:00:16.540382 systemd-networkd[805]: eth0: Link UP Feb 9 19:00:16.540558 systemd-networkd[805]: eth0: Gained carrier Feb 9 19:00:16.547692 systemd-networkd[805]: enP18857s1: Gained carrier Feb 9 19:00:16.566591 systemd-networkd[805]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:00:17.596754 systemd-networkd[805]: eth0: Gained IPv6LL Feb 9 19:00:19.108430 ignition[829]: Ignition 2.14.0 Feb 9 19:00:19.108447 ignition[829]: Stage: fetch-offline Feb 9 19:00:19.108566 ignition[829]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:19.108620 ignition[829]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:19.206625 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:19.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.208123 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:00:19.232441 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 9 19:00:19.232464 kernel: audit: type=1130 audit(1707505219.212:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.206827 ignition[829]: parsed url from cmdline: "" Feb 9 19:00:19.214709 systemd[1]: Starting ignition-fetch.service... 
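The iscsid warnings a little earlier complain that /etc/iscsi/initiatorname.iscsi is missing and describe the expected InitiatorName=iqn.yyyy-mm... format. A hedged Python sketch that writes such a file; the domain, identifier, and use of today's date are placeholders (the date normally reflects when the naming authority registered the domain), and writing under /etc requires root:

```python
from datetime import date
import uuid

# Placeholder values for illustration; iscsid only requires the
# "InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]" format
# described in its warning above.
reversed_domain = "com.example"
identifier = uuid.uuid4().hex[:12]
today = date.today()

iqn = f"iqn.{today.year}-{today.month:02d}.{reversed_domain}:{identifier}"
with open("/etc/iscsi/initiatorname.iscsi", "w") as f:
    f.write(f"InitiatorName={iqn}\n")
print(iqn)   # e.g. iqn.2024-02.com.example:0123456789ab
```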
Feb 9 19:00:19.206832 ignition[829]: no config URL provided Feb 9 19:00:19.206838 ignition[829]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:00:19.206846 ignition[829]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:00:19.206857 ignition[829]: failed to fetch config: resource requires networking Feb 9 19:00:19.207135 ignition[829]: Ignition finished successfully Feb 9 19:00:19.223749 ignition[835]: Ignition 2.14.0 Feb 9 19:00:19.223755 ignition[835]: Stage: fetch Feb 9 19:00:19.223859 ignition[835]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:19.223880 ignition[835]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:19.229722 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:19.230628 ignition[835]: parsed url from cmdline: "" Feb 9 19:00:19.230634 ignition[835]: no config URL provided Feb 9 19:00:19.230640 ignition[835]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:00:19.230651 ignition[835]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:00:19.230686 ignition[835]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 9 19:00:19.317553 ignition[835]: GET result: OK Feb 9 19:00:19.317678 ignition[835]: config has been read from IMDS userdata Feb 9 19:00:19.317717 ignition[835]: parsing config with SHA512: c009173b35337f0d3ec2bf2fbf54116d0f983bcdc26827beefe20a4727a3e0224729168ceb6b7f64ef6a7b1814bd94bc55be62e9d8b193759083d99507452f8c Feb 9 19:00:19.349903 unknown[835]: fetched base config from "system" Feb 9 19:00:19.352587 unknown[835]: fetched base config from "system" Feb 9 19:00:19.352596 unknown[835]: fetched user config from "azure" Feb 9 19:00:19.353271 ignition[835]: fetch: fetch complete Feb 9 19:00:19.354682 systemd[1]: Finished ignition-fetch.service. Feb 9 19:00:19.380468 kernel: audit: type=1130 audit(1707505219.360:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.353278 ignition[835]: fetch: fetch passed Feb 9 19:00:19.361423 systemd[1]: Starting ignition-kargs.service... Feb 9 19:00:19.353318 ignition[835]: Ignition finished successfully Feb 9 19:00:19.382096 ignition[841]: Ignition 2.14.0 Feb 9 19:00:19.382103 ignition[841]: Stage: kargs Feb 9 19:00:19.382207 ignition[841]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:19.382233 ignition[841]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:19.394325 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:19.396392 ignition[841]: kargs: kargs passed Feb 9 19:00:19.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.397306 systemd[1]: Finished ignition-kargs.service. 
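The GET above is Ignition reading user data from the Azure IMDS endpoint at 169.254.169.254. A sketch of the same request in Python; the Metadata: true header and the base64 encoding of the payload are standard IMDS behaviour assumed here rather than shown in the log, and the endpoint is only reachable from inside an Azure VM:

```python
import base64
import urllib.request

# Same endpoint Ignition queries in the log above.
URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")

# Azure IMDS requires the "Metadata: true" header (assumed, not shown in the log).
req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    user_data = base64.b64decode(resp.read())   # userData is delivered base64-encoded
print(user_data.decode(errors="replace"))
```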
Feb 9 19:00:19.416116 kernel: audit: type=1130 audit(1707505219.401:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.396437 ignition[841]: Ignition finished successfully Feb 9 19:00:19.413162 systemd[1]: Starting ignition-disks.service... Feb 9 19:00:19.421673 ignition[847]: Ignition 2.14.0 Feb 9 19:00:19.421682 ignition[847]: Stage: disks Feb 9 19:00:19.421805 ignition[847]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:19.421836 ignition[847]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:19.425210 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:19.428698 ignition[847]: disks: disks passed Feb 9 19:00:19.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.433741 systemd[1]: Finished ignition-disks.service. Feb 9 19:00:19.452752 kernel: audit: type=1130 audit(1707505219.435:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.428740 ignition[847]: Ignition finished successfully Feb 9 19:00:19.435745 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:00:19.449077 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:00:19.454586 systemd[1]: Reached target local-fs.target. Feb 9 19:00:19.460139 systemd[1]: Reached target sysinit.target. Feb 9 19:00:19.465239 systemd[1]: Reached target basic.target. Feb 9 19:00:19.469672 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:00:19.523242 systemd-fsck[855]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks Feb 9 19:00:19.531432 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:00:19.551681 kernel: audit: type=1130 audit(1707505219.532:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:19.534537 systemd[1]: Mounting sysroot.mount... Feb 9 19:00:19.568536 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:00:19.568817 systemd[1]: Mounted sysroot.mount. Feb 9 19:00:19.572305 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:00:19.601134 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:00:19.610991 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 19:00:19.618090 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:00:19.618140 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:00:19.626880 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:00:19.652960 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:00:19.666812 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (866) Feb 9 19:00:19.664159 systemd[1]: Starting initrd-setup-root.service... 
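Each Ignition stage above logs the SHA512 digest of /usr/lib/ignition/base.d/base.ign before parsing it. A sketch that recomputes the digest for comparison (that Ignition hashes exactly the raw file bytes is an assumption of this sketch):

```python
import hashlib

# Digest printed repeatedly by Ignition above for /usr/lib/ignition/base.d/base.ign.
LOGGED_SHA512 = ("4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728"
                 "d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63")

with open("/usr/lib/ignition/base.d/base.ign", "rb") as f:
    digest = hashlib.sha512(f.read()).hexdigest()

print("match" if digest == LOGGED_SHA512 else "mismatch")
```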
Feb 9 19:00:19.680999 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:00:19.681046 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:00:19.681058 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:00:19.687978 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:00:19.692787 initrd-setup-root[871]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:00:19.732212 initrd-setup-root[897]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:00:19.739256 initrd-setup-root[905]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:00:19.745877 initrd-setup-root[913]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:00:20.145470 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:00:20.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.161704 kernel: audit: type=1130 audit(1707505220.147:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.160821 systemd[1]: Starting ignition-mount.service... Feb 9 19:00:20.164561 systemd[1]: Starting sysroot-boot.service... Feb 9 19:00:20.175730 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 19:00:20.175856 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 19:00:20.195045 ignition[932]: INFO : Ignition 2.14.0 Feb 9 19:00:20.195045 ignition[932]: INFO : Stage: mount Feb 9 19:00:20.198648 ignition[932]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:20.198648 ignition[932]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:20.213335 systemd[1]: Finished sysroot-boot.service. Feb 9 19:00:20.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.226367 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:20.234931 kernel: audit: type=1130 audit(1707505220.216:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.234958 kernel: audit: type=1130 audit(1707505220.234:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.231709 systemd[1]: Finished ignition-mount.service. 
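[annotation] The initrd-setup-root entries above show cut being run against /sysroot/etc/passwd, /etc/group, /etc/shadow and /etc/gshadow before those files exist, hence the "No such file or directory" messages. Purely to illustrate that failure mode: the log does not show which fields are being extracted, so the choice of field 1 below is an assumption, not the service's actual behavior.

from pathlib import Path

def first_field(path: str) -> list[str]:
    """Rough equivalent of `cut -d: -f1 PATH` (field 1 is an assumed example)."""
    p = Path(path)
    if not p.exists():
        # mirrors the "cut: ...: No such file or directory" entries above
        print(f"cut: {path}: No such file or directory")
        return []
    return [line.split(":", 1)[0] for line in p.read_text().splitlines() if line]

print(first_field("/sysroot/etc/passwd"))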
Feb 9 19:00:20.248778 ignition[932]: INFO : mount: mount passed Feb 9 19:00:20.248778 ignition[932]: INFO : Ignition finished successfully Feb 9 19:00:20.895149 coreos-metadata[865]: Feb 09 19:00:20.894 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 19:00:20.908367 coreos-metadata[865]: Feb 09 19:00:20.908 INFO Fetch successful Feb 9 19:00:20.940833 coreos-metadata[865]: Feb 09 19:00:20.940 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 9 19:00:20.957467 coreos-metadata[865]: Feb 09 19:00:20.957 INFO Fetch successful Feb 9 19:00:20.972866 coreos-metadata[865]: Feb 09 19:00:20.972 INFO wrote hostname ci-3510.3.2-a-2006cf4d94 to /sysroot/etc/hostname Feb 9 19:00:20.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.974941 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 9 19:00:20.996299 kernel: audit: type=1130 audit(1707505220.979:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.980954 systemd[1]: Starting ignition-files.service... Feb 9 19:00:20.999562 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:00:21.012539 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (944) Feb 9 19:00:21.012574 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:00:21.020375 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:00:21.020398 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:00:21.029041 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
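[annotation] flatcar-metadata-hostname (coreos-metadata) above fetches the VM name from IMDS (api-version=2017-08-01, format=text) and writes it to /sysroot/etc/hostname. A hedged sketch of just those two steps; the real tool also does the 168.63.129.16 versions check and retry handling shown in the log, which are omitted here.

import urllib.request

NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
            "?api-version=2017-08-01&format=text")

req = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    hostname = resp.read().decode().strip()

# the log shows the result written into the not-yet-pivoted root
with open("/sysroot/etc/hostname", "w") as f:
    f.write(hostname + "\n")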
Feb 9 19:00:21.041140 ignition[963]: INFO : Ignition 2.14.0 Feb 9 19:00:21.041140 ignition[963]: INFO : Stage: files Feb 9 19:00:21.049144 ignition[963]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:21.049144 ignition[963]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:21.049144 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:21.061274 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:00:21.061274 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:00:21.061274 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:00:21.123595 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:00:21.127599 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:00:21.131094 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:00:21.131094 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:00:21.131094 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 19:00:21.128041 unknown[963]: wrote ssh authorized keys file for user: core Feb 9 19:00:21.879957 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 19:00:22.656168 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 19:00:22.663733 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:00:22.663733 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:00:22.663733 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 19:00:27.258323 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 19:00:27.382264 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:00:27.387290 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:00:27.387290 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 19:00:27.893010 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 19:00:28.521993 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 19:00:28.529685 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:00:28.529685 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:00:28.529685 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 9 19:00:29.048666 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 19:00:51.519690 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 9 19:00:51.527594 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:00:51.527594 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:00:51.527594 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 19:00:51.644901 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 19:01:00.950883 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 19:01:00.960310 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:01:00.960310 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:01:00.960310 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 19:01:01.700679 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 19:01:53.587160 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 19:01:53.587160 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:01:53.600257 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:01:53.600257 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:01:53.600257 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:01:53.600257 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 9 19:01:54.169331 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 9 
19:01:54.292153 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:01:54.299921 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:01:54.299921 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:01:54.299921 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:01:54.299921 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:01:54.299921 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:01:54.299921 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:01:54.299921 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:01:54.299921 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:01:54.299921 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:01:54.299921 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:01:54.299921 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 19:01:54.299921 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:01:54.374574 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (965) Feb 9 19:01:54.374607 kernel: audit: type=1130 audit(1707505314.357:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:01:54.374686 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1124903979" Feb 9 19:01:54.374686 ignition[963]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1124903979": device or resource busy Feb 9 19:01:54.374686 ignition[963]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1124903979", trying btrfs: device or resource busy Feb 9 19:01:54.374686 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1124903979" Feb 9 19:01:54.374686 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1124903979" Feb 9 19:01:54.374686 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem1124903979" Feb 9 19:01:54.374686 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem1124903979" Feb 9 19:01:54.374686 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 19:01:54.374686 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:01:54.374686 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:01:54.374686 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3992006113" Feb 9 19:01:54.374686 ignition[963]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3992006113": device or resource busy Feb 9 19:01:54.374686 ignition[963]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3992006113", trying btrfs: device or resource busy Feb 9 19:01:54.374686 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3992006113" Feb 9 19:01:54.517717 kernel: audit: type=1130 audit(1707505314.397:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.517754 kernel: audit: type=1131 audit(1707505314.398:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.519585 kernel: audit: type=1130 audit(1707505314.443:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.519605 kernel: audit: type=1130 audit(1707505314.486:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:01:54.519624 kernel: audit: type=1131 audit(1707505314.486:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.326323 systemd[1]: mnt-oem1124903979.mount: Deactivated successfully. Feb 9 19:01:54.525176 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3992006113" Feb 9 19:01:54.525176 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem3992006113" Feb 9 19:01:54.525176 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem3992006113" Feb 9 19:01:54.525176 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:01:54.525176 ignition[963]: INFO : files: op(18): [started] processing unit "nvidia.service" Feb 9 19:01:54.525176 ignition[963]: INFO : files: op(18): [finished] processing unit "nvidia.service" Feb 9 19:01:54.525176 ignition[963]: INFO : files: op(19): [started] processing unit "waagent.service" Feb 9 19:01:54.525176 ignition[963]: INFO : files: op(19): [finished] processing unit "waagent.service" Feb 9 19:01:54.525176 ignition[963]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:01:54.525176 ignition[963]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:01:54.525176 ignition[963]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:01:54.525176 ignition[963]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:01:54.525176 ignition[963]: INFO : files: op(1c): [started] processing unit "prepare-critools.service" Feb 9 19:01:54.525176 ignition[963]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:01:54.525176 ignition[963]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:01:54.525176 
ignition[963]: INFO : files: op(1c): [finished] processing unit "prepare-critools.service" Feb 9 19:01:54.525176 ignition[963]: INFO : files: op(1e): [started] processing unit "prepare-helm.service" Feb 9 19:01:54.525176 ignition[963]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:01:54.525176 ignition[963]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:01:54.670409 kernel: audit: type=1130 audit(1707505314.543:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.670446 kernel: audit: type=1131 audit(1707505314.593:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.342868 systemd[1]: mnt-oem3992006113.mount: Deactivated successfully. Feb 9 19:01:54.676697 ignition[963]: INFO : files: op(1e): [finished] processing unit "prepare-helm.service" Feb 9 19:01:54.676697 ignition[963]: INFO : files: op(20): [started] setting preset to enabled for "nvidia.service" Feb 9 19:01:54.676697 ignition[963]: INFO : files: op(20): [finished] setting preset to enabled for "nvidia.service" Feb 9 19:01:54.676697 ignition[963]: INFO : files: op(21): [started] setting preset to enabled for "waagent.service" Feb 9 19:01:54.676697 ignition[963]: INFO : files: op(21): [finished] setting preset to enabled for "waagent.service" Feb 9 19:01:54.676697 ignition[963]: INFO : files: op(22): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:01:54.676697 ignition[963]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:01:54.676697 ignition[963]: INFO : files: op(23): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:01:54.676697 ignition[963]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:01:54.676697 ignition[963]: INFO : files: op(24): [started] setting preset to enabled for "prepare-helm.service" Feb 9 19:01:54.676697 ignition[963]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 19:01:54.676697 ignition[963]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:01:54.676697 ignition[963]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:01:54.676697 ignition[963]: INFO : files: files passed Feb 9 19:01:54.676697 ignition[963]: INFO : Ignition finished successfully Feb 9 19:01:54.772134 kernel: audit: type=1131 audit(1707505314.676:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:01:54.772173 kernel: audit: type=1131 audit(1707505314.700:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.772398 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:01:54.351656 systemd[1]: Finished ignition-files.service. Feb 9 19:01:54.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.780496 iscsid[814]: iscsid shutting down. Feb 9 19:01:54.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.374647 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 19:01:54.376851 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:01:54.381480 systemd[1]: Starting ignition-quench.service... Feb 9 19:01:54.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.390723 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:01:54.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.390818 systemd[1]: Finished ignition-quench.service. Feb 9 19:01:54.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 19:01:54.437426 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:01:54.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.812753 ignition[1002]: INFO : Ignition 2.14.0 Feb 9 19:01:54.812753 ignition[1002]: INFO : Stage: umount Feb 9 19:01:54.812753 ignition[1002]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:01:54.812753 ignition[1002]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:01:54.812753 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:01:54.812753 ignition[1002]: INFO : umount: umount passed Feb 9 19:01:54.812753 ignition[1002]: INFO : Ignition finished successfully Feb 9 19:01:54.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.444142 systemd[1]: Reached target ignition-complete.target. Feb 9 19:01:54.462214 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:01:54.482870 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:01:54.482962 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:01:54.486566 systemd[1]: Reached target initrd-fs.target. Feb 9 19:01:54.517708 systemd[1]: Reached target initrd.target. Feb 9 19:01:54.519640 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:01:54.520661 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:01:54.536735 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:01:54.563641 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:01:54.573833 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:01:54.577818 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:01:54.583070 systemd[1]: Stopped target timers.target. Feb 9 19:01:54.588358 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:01:54.588511 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:01:54.608070 systemd[1]: Stopped target initrd.target. Feb 9 19:01:54.612227 systemd[1]: Stopped target basic.target. Feb 9 19:01:54.619426 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:01:54.624792 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:01:54.630111 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:01:54.637227 systemd[1]: Stopped target remote-fs.target. Feb 9 19:01:54.647733 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:01:54.652708 systemd[1]: Stopped target sysinit.target. Feb 9 19:01:54.657555 systemd[1]: Stopped target local-fs.target. Feb 9 19:01:54.664126 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:01:54.670541 systemd[1]: Stopped target swap.target. Feb 9 19:01:54.672362 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:01:54.672542 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:01:54.689125 systemd[1]: Stopped target cryptsetup.target. 
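[annotation] In the files stage earlier (ops 3, 6, 7 and 8), each downloaded artifact — crictl, kubectl, kubeadm, kubelet — is fetched and then reported as matching an expected SHA512 sum before being written under /sysroot. A minimal sketch of that download-and-verify pattern, using the kubelet URL and digest printed in the log; streaming, retries and archive unpacking are left out, and this is not Ignition's actual implementation.

import hashlib
import urllib.request

URL = "https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet"
EXPECTED_SHA512 = "40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b"

with urllib.request.urlopen(URL) as resp:
    data = resp.read()

digest = hashlib.sha512(data).hexdigest()
if digest != EXPECTED_SHA512:
    raise SystemExit(f"checksum mismatch: got {digest}")

with open("/sysroot/opt/bin/kubelet", "wb") as f:
    f.write(data)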
Feb 9 19:01:54.694703 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:01:54.694849 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:01:54.715165 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:01:54.715413 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:01:54.721942 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:01:54.722076 systemd[1]: Stopped ignition-files.service. Feb 9 19:01:54.727711 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 19:01:54.727835 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 19:01:54.734426 systemd[1]: Stopping ignition-mount.service... Feb 9 19:01:54.739313 systemd[1]: Stopping iscsid.service... Feb 9 19:01:54.760600 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:01:54.768373 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:01:54.768573 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:01:54.778610 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:01:54.778834 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:01:54.788458 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 19:01:54.788563 systemd[1]: Stopped iscsid.service. Feb 9 19:01:54.794290 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:01:54.794386 systemd[1]: Stopped ignition-mount.service. Feb 9 19:01:54.799023 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:01:54.799107 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:01:54.803205 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:01:54.803247 systemd[1]: Stopped ignition-disks.service. Feb 9 19:01:54.806567 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:01:54.806619 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:01:54.808637 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 19:01:54.808678 systemd[1]: Stopped ignition-fetch.service. Feb 9 19:01:54.812985 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:01:54.813031 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:01:54.822546 systemd[1]: Stopped target paths.target. Feb 9 19:01:54.836907 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:01:54.839639 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:01:54.844422 systemd[1]: Stopped target slices.target. Feb 9 19:01:54.846667 systemd[1]: Stopped target sockets.target. Feb 9 19:01:54.849089 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:01:54.849146 systemd[1]: Closed iscsid.socket. Feb 9 19:01:54.853424 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:01:54.855318 systemd[1]: Stopped ignition-setup.service. Feb 9 19:01:54.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:54.982157 systemd[1]: Stopping iscsiuio.service... Feb 9 19:01:54.986998 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:01:54.989750 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 19:01:54.992582 systemd[1]: Stopped iscsiuio.service. Feb 9 19:01:54.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:01:54.998828 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:01:55.002113 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:01:55.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:55.007120 systemd[1]: Stopped target network.target. Feb 9 19:01:55.011662 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:01:55.011718 systemd[1]: Closed iscsiuio.socket. Feb 9 19:01:55.017383 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:01:55.017440 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:01:55.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:55.023981 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:01:55.027968 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:01:55.035615 systemd-networkd[805]: eth0: DHCPv6 lease lost Feb 9 19:01:55.038665 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:01:55.038772 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:01:55.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:55.045291 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:01:55.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:55.045372 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:01:55.050000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:01:55.050000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:01:55.048762 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:01:55.048789 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:01:55.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:55.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:55.053704 systemd[1]: Stopping network-cleanup.service... Feb 9 19:01:55.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:55.059414 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:01:55.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:55.059474 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:01:55.061612 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:01:55.061650 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:01:55.065432 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Feb 9 19:01:55.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:55.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:55.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:55.065480 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:01:55.067897 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:01:55.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:55.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:55.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:55.071662 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:01:55.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:55.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:55.071789 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:01:55.078814 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:01:55.078855 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:01:55.084267 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:01:55.084364 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:01:55.086426 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:01:55.086476 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:01:55.090959 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:01:55.091004 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:01:55.093159 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:01:55.093204 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:01:55.095902 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:01:55.099611 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 19:01:55.099680 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 19:01:55.102169 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:01:55.102221 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:01:55.160679 kernel: hv_netvsc 002248a1-c12c-0022-48a1-c12c002248a1 eth0: Data path switched from VF: enP18857s1 Feb 9 19:01:55.106619 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:01:55.106671 systemd[1]: Stopped systemd-vconsole-setup.service. 
Feb 9 19:01:55.109235 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:01:55.109328 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:01:55.178286 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:01:55.179567 systemd[1]: Stopped network-cleanup.service. Feb 9 19:01:55.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:55.184824 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:01:55.188630 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:01:55.200957 systemd[1]: Switching root. Feb 9 19:01:55.226419 systemd-journald[183]: Journal stopped Feb 9 19:02:05.759714 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Feb 9 19:02:05.759742 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:02:05.759753 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 19:02:05.759765 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:02:05.759775 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:02:05.759784 kernel: SELinux: policy capability open_perms=1 Feb 9 19:02:05.759796 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:02:05.759806 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:02:05.759817 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:02:05.759825 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:02:05.759836 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:02:05.759845 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:02:05.759856 systemd[1]: Successfully loaded SELinux policy in 240.778ms. Feb 9 19:02:05.759867 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.389ms. Feb 9 19:02:05.759883 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:02:05.759895 systemd[1]: Detected virtualization microsoft. Feb 9 19:02:05.759904 systemd[1]: Detected architecture x86-64. Feb 9 19:02:05.759919 systemd[1]: Detected first boot. Feb 9 19:02:05.759936 systemd[1]: Hostname set to . Feb 9 19:02:05.759945 systemd[1]: Initializing machine ID from random generator. Feb 9 19:02:05.759957 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:02:05.759967 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:02:05.759978 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:02:05.759994 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:02:05.760015 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
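[annotation] After switch-root, systemd 252 logs its compile-time feature string ("+PAM +AUDIT +SELINUX -APPARMOR ..."). A trivial sketch splitting that string, as printed above, into enabled and disabled feature sets (the trailing default-hierarchy=unified setting is left out since it is a key=value, not a +/- flag):

flags = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
         "+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
         "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
         "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT")

enabled = {f[1:] for f in flags.split() if f.startswith("+")}
disabled = {f[1:] for f in flags.split() if f.startswith("-")}
print("enabled: ", sorted(enabled))
print("disabled:", sorted(disabled))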
Feb 9 19:02:05.760038 kernel: kauditd_printk_skb: 51 callbacks suppressed Feb 9 19:02:05.760055 kernel: audit: type=1334 audit(1707505325.102:92): prog-id=12 op=LOAD Feb 9 19:02:05.760072 kernel: audit: type=1334 audit(1707505325.102:93): prog-id=3 op=UNLOAD Feb 9 19:02:05.760089 kernel: audit: type=1334 audit(1707505325.107:94): prog-id=13 op=LOAD Feb 9 19:02:05.760104 kernel: audit: type=1334 audit(1707505325.121:95): prog-id=14 op=LOAD Feb 9 19:02:05.760126 kernel: audit: type=1334 audit(1707505325.121:96): prog-id=4 op=UNLOAD Feb 9 19:02:05.760142 kernel: audit: type=1334 audit(1707505325.121:97): prog-id=5 op=UNLOAD Feb 9 19:02:05.760160 kernel: audit: type=1131 audit(1707505325.126:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.760183 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 19:02:05.760200 systemd[1]: Stopped initrd-switch-root.service. Feb 9 19:02:05.760446 kernel: audit: type=1334 audit(1707505325.153:99): prog-id=12 op=UNLOAD Feb 9 19:02:05.760462 kernel: audit: type=1130 audit(1707505325.163:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.760480 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 19:02:05.760498 kernel: audit: type=1131 audit(1707505325.163:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.760535 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:02:05.760558 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:02:05.760580 systemd[1]: Created slice system-getty.slice. Feb 9 19:02:05.760599 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:02:05.760618 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:02:05.760636 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:02:05.760653 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:02:05.760673 systemd[1]: Created slice user.slice. Feb 9 19:02:05.760691 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:02:05.760713 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:02:05.760732 systemd[1]: Set up automount boot.automount. Feb 9 19:02:05.760750 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:02:05.760771 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 19:02:05.760789 systemd[1]: Stopped target initrd-fs.target. Feb 9 19:02:05.760804 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 19:02:05.760817 systemd[1]: Reached target integritysetup.target. Feb 9 19:02:05.760831 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:02:05.760850 systemd[1]: Reached target remote-fs.target. Feb 9 19:02:05.760865 systemd[1]: Reached target slices.target. Feb 9 19:02:05.760877 systemd[1]: Reached target swap.target. Feb 9 19:02:05.760890 systemd[1]: Reached target torcx.target. Feb 9 19:02:05.760903 systemd[1]: Reached target veritysetup.target. Feb 9 19:02:05.760913 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:02:05.760922 systemd[1]: Listening on systemd-initctl.socket. 
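[annotation] The slice names above (system-addon\x2dconfig.slice, system-serial\x2dgetty.slice, ...) use systemd's unit-name escaping: a literal "-" inside a slice name component is written as \x2d so it is not read as another level of the slice hierarchy. A small sketch of the unescaping direction only; the full systemd-escape rules cover more characters than this reverses.

import re

def unescape_unit(name: str) -> str:
    """Reverse systemd's \\xNN escaping in unit names."""
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), name)

print(unescape_unit(r"system-addon\x2dconfig.slice"))    # system-addon-config.slice
print(unescape_unit(r"system-serial\x2dgetty.slice"))    # system-serial-getty.slice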
Feb 9 19:02:05.760932 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:02:05.760944 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:02:05.760954 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:02:05.760964 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:02:05.760973 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:02:05.760983 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:02:05.760992 systemd[1]: Mounting media.mount... Feb 9 19:02:05.761006 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:02:05.761017 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:02:05.761030 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:02:05.761040 systemd[1]: Mounting tmp.mount... Feb 9 19:02:05.761051 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:02:05.761062 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:02:05.761073 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:02:05.761086 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:02:05.761096 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:02:05.761110 systemd[1]: Starting modprobe@drm.service... Feb 9 19:02:05.761120 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:02:05.761134 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:02:05.761144 systemd[1]: Starting modprobe@loop.service... Feb 9 19:02:05.761157 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:02:05.761168 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 19:02:05.761179 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 19:02:05.761189 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 19:02:05.761203 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 19:02:05.761216 systemd[1]: Stopped systemd-journald.service. Feb 9 19:02:05.761226 systemd[1]: Starting systemd-journald.service... Feb 9 19:02:05.761238 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:02:05.761248 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:02:05.761261 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:02:05.761271 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:02:05.761284 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 19:02:05.761294 systemd[1]: Stopped verity-setup.service. Feb 9 19:02:05.761308 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:02:05.761319 kernel: loop: module loaded Feb 9 19:02:05.761330 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:02:05.761342 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:02:05.761352 systemd[1]: Mounted media.mount. Feb 9 19:02:05.761370 systemd-journald[1117]: Journal started Feb 9 19:02:05.761425 systemd-journald[1117]: Runtime Journal (/run/log/journal/383cfa7694b9461c9cce00c8adfb611f) is 8.0M, max 159.0M, 151.0M free. 
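[annotation] journald reports its runtime journal above as "8.0M, max 159.0M, 151.0M free". A quick helper, illustrative only, for turning those human-readable figures back into byte counts, assuming the 1024-based multipliers systemd uses for these labels:

SUFFIXES = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}

def to_bytes(size: str) -> int:
    # "8.0M" -> 8388608, assuming 1024-based units
    num, unit = float(size[:-1]), size[-1]
    return int(num * SUFFIXES[unit])

for s in ("8.0M", "159.0M", "151.0M"):
    print(s, "=", to_bytes(s), "bytes")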
Feb 9 19:01:57.040000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 19:01:57.596000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:01:57.609000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:01:57.609000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:01:57.609000 audit: BPF prog-id=10 op=LOAD Feb 9 19:01:57.609000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:01:57.609000 audit: BPF prog-id=11 op=LOAD Feb 9 19:01:57.609000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:01:58.649000 audit[1035]: AVC avc: denied { associate } for pid=1035 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:01:58.649000 audit[1035]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1018 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:58.649000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:01:58.656000 audit[1035]: AVC avc: denied { associate } for pid=1035 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 19:01:58.656000 audit[1035]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=1018 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:58.656000 audit: CWD cwd="/" Feb 9 19:01:58.656000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:58.656000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:58.656000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:02:05.102000 audit: BPF prog-id=12 op=LOAD Feb 9 19:02:05.102000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:02:05.107000 audit: BPF prog-id=13 op=LOAD Feb 9 19:02:05.121000 audit: BPF prog-id=14 op=LOAD Feb 9 
19:02:05.121000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:02:05.121000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:02:05.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.153000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:02:05.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.633000 audit: BPF prog-id=15 op=LOAD Feb 9 19:02:05.633000 audit: BPF prog-id=16 op=LOAD Feb 9 19:02:05.634000 audit: BPF prog-id=17 op=LOAD Feb 9 19:02:05.634000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:02:05.634000 audit: BPF prog-id=14 op=UNLOAD Feb 9 19:02:05.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.755000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:02:05.755000 audit[1117]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffec23b5110 a2=4000 a3=7ffec23b51ac items=0 ppid=1 pid=1117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:05.755000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:01:58.636146 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:01:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:02:05.101382 systemd[1]: Queued start job for default target multi-user.target. 
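[annotation] The raw audit records above use the audit(EPOCH.MSEC:SERIAL) timestamp format (types 1130/1131 correspond to the SERVICE_START/SERVICE_STOP messages paired with them), and PROCTITLE records carry the process command line hex-encoded with NUL separators. A short sketch decoding both, using the record serial 92 and a prefix of the torcx-generator PROCTITLE from this log (the log's own PROCTITLE is truncated, as is usual for audit):

import datetime
import re

record = "audit(1707505325.102:92)"
epoch, serial = re.match(r"audit\((\d+\.\d+):(\d+)\)", record).groups()
print(datetime.datetime.fromtimestamp(float(epoch), tz=datetime.timezone.utc),
      "serial", serial)

# PROCTITLE payloads are hex-encoded argv strings separated by NUL bytes
proctitle_hex = "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72"
argv = [a.decode() for a in bytes.fromhex(proctitle_hex).split(b"\x00")]
print(argv)
# ['/usr/lib/systemd/system-generators/torcx-generator', '/run/systemd/generator']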
Feb 9 19:01:58.636808 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:01:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:02:05.126790 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 19:01:58.636829 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:01:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:01:58.636868 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:01:58Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 19:01:58.636880 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:01:58Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 19:01:58.636926 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:01:58Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 19:01:58.636941 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:01:58Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 19:01:58.637151 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:01:58Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 19:01:58.637201 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:01:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:01:58.637217 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:01:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:01:58.637626 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:01:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 19:01:58.637663 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:01:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 19:01:58.637685 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:01:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 19:01:58.637701 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:01:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 19:01:58.637720 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:01:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 19:02:05.767506 systemd[1]: Started systemd-journald.service. 
Feb 9 19:01:58.637735 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:01:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 19:02:04.169258 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:02:04Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:02:04.169492 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:02:04Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:02:04.169623 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:02:04Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:02:04.169849 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:02:04Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:02:04.169896 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:02:04Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 19:02:04.169949 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-09T19:02:04Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 19:02:05.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.772710 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:02:05.796346 kernel: fuse: init (API version 7.34) Feb 9 19:02:05.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:05.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.777284 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:02:05.779611 systemd[1]: Mounted tmp.mount. Feb 9 19:02:05.781915 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:02:05.784678 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:02:05.784851 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:02:05.787876 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:02:05.788035 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:02:05.790679 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:02:05.790916 systemd[1]: Finished modprobe@drm.service. Feb 9 19:02:05.793437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:02:05.793605 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:02:05.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.798124 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:02:05.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.800813 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:02:05.800998 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:02:05.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.803422 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:02:05.803570 systemd[1]: Finished modprobe@loop.service. Feb 9 19:02:05.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:05.805747 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:02:05.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.808433 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:02:05.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.811218 systemd[1]: Reached target network-pre.target. Feb 9 19:02:05.814681 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:02:05.818140 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:02:05.822546 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:02:05.834613 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:02:05.837938 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:02:05.840079 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:02:05.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.841501 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:02:05.843947 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:02:05.845664 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:02:05.850567 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:02:05.853337 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:02:05.855969 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:02:05.862754 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:02:05.878287 systemd-journald[1117]: Time spent on flushing to /var/log/journal/383cfa7694b9461c9cce00c8adfb611f is 28.414ms for 1185 entries. Feb 9 19:02:05.878287 systemd-journald[1117]: System Journal (/var/log/journal/383cfa7694b9461c9cce00c8adfb611f) is 8.0M, max 2.6G, 2.6G free. Feb 9 19:02:05.974460 systemd-journald[1117]: Received client request to flush runtime journal. Feb 9 19:02:05.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:05.899954 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:02:05.975745 udevadm[1160]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 19:02:05.902839 systemd[1]: Finished systemd-random-seed.service. 
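systemd-journald reports above that flushing 1185 entries to /var/log/journal took 28.414 ms; per entry that works out to roughly 24 microseconds. A trivial check (illustrative only, using the rounded values printed in the log):

# Per-entry flush cost derived from the journald statistics above.
flush_ms, entries = 28.414, 1185
print(f"{flush_ms / entries * 1000:.1f} microseconds per entry")  # ~24.0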
Feb 9 19:02:05.905330 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:02:05.909247 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:02:05.945835 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:02:05.975564 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:02:05.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:06.355576 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:02:06.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:06.360285 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:02:06.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:06.564260 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:02:06.809484 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:02:06.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:06.812000 audit: BPF prog-id=18 op=LOAD Feb 9 19:02:06.812000 audit: BPF prog-id=19 op=LOAD Feb 9 19:02:06.812000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:02:06.812000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:02:06.813930 systemd[1]: Starting systemd-udevd.service... Feb 9 19:02:06.832690 systemd-udevd[1164]: Using default interface naming scheme 'v252'. Feb 9 19:02:06.996499 systemd[1]: Started systemd-udevd.service. Feb 9 19:02:06.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:07.000000 audit: BPF prog-id=20 op=LOAD Feb 9 19:02:07.001666 systemd[1]: Starting systemd-networkd.service... Feb 9 19:02:07.036294 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 9 19:02:07.091000 audit: BPF prog-id=21 op=LOAD Feb 9 19:02:07.091000 audit: BPF prog-id=22 op=LOAD Feb 9 19:02:07.091000 audit: BPF prog-id=23 op=LOAD Feb 9 19:02:07.093280 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:02:07.118541 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:02:07.129236 kernel: hv_utils: Registering HyperV Utility Driver Feb 9 19:02:07.129365 kernel: hv_vmbus: registering driver hv_utils Feb 9 19:02:07.138545 kernel: hv_vmbus: registering driver hyperv_fb Feb 9 19:02:07.142694 systemd[1]: Started systemd-userdbd.service. Feb 9 19:02:07.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:07.160041 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 9 19:02:07.160136 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 9 19:02:07.160162 kernel: hv_vmbus: registering driver hv_balloon Feb 9 19:02:07.130000 audit[1187]: AVC avc: denied { confidentiality } for pid=1187 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:02:07.165733 kernel: Console: switching to colour dummy device 80x25 Feb 9 19:02:07.172202 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 19:02:07.203535 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 9 19:02:07.203630 kernel: hv_utils: Heartbeat IC version 3.0 Feb 9 19:02:07.208256 kernel: hv_utils: Shutdown IC version 3.2 Feb 9 19:02:07.211534 kernel: hv_utils: TimeSync IC version 4.0 Feb 9 19:02:07.130000 audit[1187]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=561f2a0c1980 a1=f884 a2=7f934e1bdbc5 a3=5 items=12 ppid=1164 pid=1187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:07.130000 audit: CWD cwd="/" Feb 9 19:02:07.130000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:07.130000 audit: PATH item=1 name=(null) inode=16048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:07.130000 audit: PATH item=2 name=(null) inode=16048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:07.130000 audit: PATH item=3 name=(null) inode=16049 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:07.130000 audit: PATH item=4 name=(null) inode=16048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:07.130000 audit: PATH item=5 name=(null) inode=16050 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:07.130000 audit: PATH item=6 name=(null) inode=16048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:07.130000 audit: PATH item=7 name=(null) inode=16051 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:07.130000 audit: PATH item=8 name=(null) inode=16048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:07.130000 audit: PATH item=9 name=(null) inode=16052 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:07.130000 audit: PATH item=10 name=(null) 
inode=16048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:07.130000 audit: PATH item=11 name=(null) inode=16053 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:07.130000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:02:07.884600 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1175) Feb 9 19:02:07.946080 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:02:07.979462 systemd-networkd[1170]: lo: Link UP Feb 9 19:02:07.979848 systemd-networkd[1170]: lo: Gained carrier Feb 9 19:02:07.980584 systemd-networkd[1170]: Enumeration completed Feb 9 19:02:07.980820 systemd[1]: Started systemd-networkd.service. Feb 9 19:02:07.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:07.984440 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:02:07.997327 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Feb 9 19:02:08.018906 systemd-networkd[1170]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:02:08.073334 kernel: mlx5_core 49a9:00:02.0 enP18857s1: Link up Feb 9 19:02:08.114193 systemd-networkd[1170]: enP18857s1: Link UP Feb 9 19:02:08.114383 kernel: hv_netvsc 002248a1-c12c-0022-48a1-c12c002248a1 eth0: Data path switched to VF: enP18857s1 Feb 9 19:02:08.114744 systemd-networkd[1170]: eth0: Link UP Feb 9 19:02:08.114757 systemd-networkd[1170]: eth0: Gained carrier Feb 9 19:02:08.119572 systemd-networkd[1170]: enP18857s1: Gained carrier Feb 9 19:02:08.134423 systemd-networkd[1170]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:02:08.135745 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:02:08.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:08.139645 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:02:08.391586 lvm[1242]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:02:08.418396 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:02:08.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:08.421254 systemd[1]: Reached target cryptsetup.target. Feb 9 19:02:08.424536 systemd[1]: Starting lvm2-activation.service... Feb 9 19:02:08.429291 lvm[1243]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:02:08.449338 systemd[1]: Finished lvm2-activation.service. Feb 9 19:02:08.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:08.451782 systemd[1]: Reached target local-fs-pre.target. 
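systemd-networkd reports above a DHCPv4 address of 10.200.8.39/24 with gateway 10.200.8.1, handed out from 168.63.129.16, the same endpoint the waagent provisioning messages further down call the wire server. A quick sanity check of those values (illustrative only) with Python's standard ipaddress module:

import ipaddress

iface = ipaddress.ip_interface("10.200.8.39/24")    # address reported by systemd-networkd
gateway = ipaddress.ip_address("10.200.8.1")
wireserver = ipaddress.ip_address("168.63.129.16")  # DHCP / wire-server endpoint in the log

print(iface.network)                 # 10.200.8.0/24
print(gateway in iface.network)      # True  -> gateway is on the local subnet
print(wireserver in iface.network)   # False -> reached via the default route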
Feb 9 19:02:08.453947 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:02:08.453986 systemd[1]: Reached target local-fs.target. Feb 9 19:02:08.455938 systemd[1]: Reached target machines.target. Feb 9 19:02:08.459059 systemd[1]: Starting ldconfig.service... Feb 9 19:02:08.461746 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:02:08.461836 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:02:08.462989 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:02:08.466246 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:02:08.469913 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:02:08.472495 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:02:08.472581 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:02:08.473695 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:02:08.490592 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1245 (bootctl) Feb 9 19:02:08.491928 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:02:08.500842 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:02:08.680758 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:02:08.696253 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:02:08.699201 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:02:08.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:08.814766 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:02:08.815660 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:02:08.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:09.311437 systemd-networkd[1170]: eth0: Gained IPv6LL Feb 9 19:02:09.318177 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:02:09.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:09.404296 systemd-fsck[1253]: fsck.fat 4.2 (2021-01-31) Feb 9 19:02:09.404296 systemd-fsck[1253]: /dev/sda1: 789 files, 115339/258078 clusters Feb 9 19:02:09.406556 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:02:09.411645 systemd[1]: Mounting boot.mount... Feb 9 19:02:09.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:02:09.423604 systemd[1]: Mounted boot.mount. Feb 9 19:02:09.437947 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:02:09.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:09.815057 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:02:09.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:09.820123 systemd[1]: Starting audit-rules.service... Feb 9 19:02:09.823645 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:02:09.826947 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:02:09.830000 audit: BPF prog-id=24 op=LOAD Feb 9 19:02:09.831980 systemd[1]: Starting systemd-resolved.service... Feb 9 19:02:09.836000 audit: BPF prog-id=25 op=LOAD Feb 9 19:02:09.837932 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:02:09.841746 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:02:09.865000 audit[1267]: SYSTEM_BOOT pid=1267 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:02:09.870141 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:02:09.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:09.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:09.904061 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:02:09.906776 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:02:09.926438 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:02:09.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:09.931891 systemd[1]: Reached target time-set.target. Feb 9 19:02:09.943108 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:02:09.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:10.004067 systemd-resolved[1263]: Positive Trust Anchors: Feb 9 19:02:10.004083 systemd-resolved[1263]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:02:10.004121 systemd-resolved[1263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:02:10.060961 systemd-timesyncd[1264]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Feb 9 19:02:10.061029 systemd-timesyncd[1264]: Initial clock synchronization to Fri 2024-02-09 19:02:10.063278 UTC. Feb 9 19:02:10.113000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:02:10.113000 audit[1280]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff30b71940 a2=420 a3=0 items=0 ppid=1259 pid=1280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:10.113000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:02:10.114405 augenrules[1280]: No rules Feb 9 19:02:10.114798 systemd[1]: Finished audit-rules.service. Feb 9 19:02:10.123934 systemd-resolved[1263]: Using system hostname 'ci-3510.3.2-a-2006cf4d94'. Feb 9 19:02:10.125657 systemd[1]: Started systemd-resolved.service. Feb 9 19:02:10.128823 systemd[1]: Reached target network.target. Feb 9 19:02:10.131373 systemd[1]: Reached target network-online.target. Feb 9 19:02:10.135728 systemd[1]: Reached target nss-lookup.target. Feb 9 19:02:13.880864 ldconfig[1244]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:02:13.897059 systemd[1]: Finished ldconfig.service. Feb 9 19:02:13.900923 systemd[1]: Starting systemd-update-done.service... Feb 9 19:02:13.908915 systemd[1]: Finished systemd-update-done.service. Feb 9 19:02:13.911294 systemd[1]: Reached target sysinit.target. Feb 9 19:02:13.913348 systemd[1]: Started motdgen.path. Feb 9 19:02:13.915218 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:02:13.918138 systemd[1]: Started logrotate.timer. Feb 9 19:02:13.919980 systemd[1]: Started mdadm.timer. Feb 9 19:02:13.921593 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:02:13.927264 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:02:13.927326 systemd[1]: Reached target paths.target. Feb 9 19:02:13.929431 systemd[1]: Reached target timers.target. Feb 9 19:02:13.932112 systemd[1]: Listening on dbus.socket. Feb 9 19:02:13.935795 systemd[1]: Starting docker.socket... Feb 9 19:02:13.940828 systemd[1]: Listening on sshd.socket. Feb 9 19:02:13.942875 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:02:13.943359 systemd[1]: Listening on docker.socket. Feb 9 19:02:13.945196 systemd[1]: Reached target sockets.target. Feb 9 19:02:13.947111 systemd[1]: Reached target basic.target. 
Feb 9 19:02:13.948902 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:02:13.948937 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:02:13.949934 systemd[1]: Starting containerd.service... Feb 9 19:02:13.953033 systemd[1]: Starting dbus.service... Feb 9 19:02:13.955890 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:02:13.958927 systemd[1]: Starting extend-filesystems.service... Feb 9 19:02:13.960829 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:02:13.962373 systemd[1]: Starting motdgen.service... Feb 9 19:02:13.968246 systemd[1]: Started nvidia.service. Feb 9 19:02:13.971195 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:02:13.974046 systemd[1]: Starting prepare-critools.service... Feb 9 19:02:13.976935 systemd[1]: Starting prepare-helm.service... Feb 9 19:02:13.980056 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:02:13.983526 systemd[1]: Starting sshd-keygen.service... Feb 9 19:02:13.991977 systemd[1]: Starting systemd-logind.service... Feb 9 19:02:13.994432 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:02:13.994508 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:02:13.995008 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 19:02:13.995971 systemd[1]: Starting update-engine.service... Feb 9 19:02:13.999117 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:02:14.015019 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:02:14.015232 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:02:14.041129 jq[1305]: true Feb 9 19:02:14.042183 jq[1290]: false Feb 9 19:02:14.042682 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:02:14.042868 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:02:14.055715 extend-filesystems[1291]: Found sda Feb 9 19:02:14.058435 extend-filesystems[1291]: Found sda1 Feb 9 19:02:14.058435 extend-filesystems[1291]: Found sda2 Feb 9 19:02:14.058435 extend-filesystems[1291]: Found sda3 Feb 9 19:02:14.058435 extend-filesystems[1291]: Found usr Feb 9 19:02:14.058435 extend-filesystems[1291]: Found sda4 Feb 9 19:02:14.058435 extend-filesystems[1291]: Found sda6 Feb 9 19:02:14.058435 extend-filesystems[1291]: Found sda7 Feb 9 19:02:14.058435 extend-filesystems[1291]: Found sda9 Feb 9 19:02:14.058435 extend-filesystems[1291]: Checking size of /dev/sda9 Feb 9 19:02:14.088757 jq[1315]: true Feb 9 19:02:14.089504 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:02:14.089678 systemd[1]: Finished motdgen.service. Feb 9 19:02:14.103808 systemd-logind[1301]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:02:14.113299 systemd-logind[1301]: New seat seat0. Feb 9 19:02:14.132693 extend-filesystems[1291]: Old size kept for /dev/sda9 Feb 9 19:02:14.135347 extend-filesystems[1291]: Found sr0 Feb 9 19:02:14.137965 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Feb 9 19:02:14.138157 systemd[1]: Finished extend-filesystems.service. Feb 9 19:02:14.186498 tar[1308]: ./ Feb 9 19:02:14.186498 tar[1308]: ./macvlan Feb 9 19:02:14.189089 tar[1309]: crictl Feb 9 19:02:14.190495 tar[1310]: linux-amd64/helm Feb 9 19:02:14.236619 bash[1339]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:02:14.233124 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:02:14.269719 dbus-daemon[1289]: [system] SELinux support is enabled Feb 9 19:02:14.269910 systemd[1]: Started dbus.service. Feb 9 19:02:14.274983 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:02:14.275014 systemd[1]: Reached target system-config.target. Feb 9 19:02:14.277627 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:02:14.277651 systemd[1]: Reached target user-config.target. Feb 9 19:02:14.285006 systemd[1]: Started systemd-logind.service. Feb 9 19:02:14.287837 dbus-daemon[1289]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 19:02:14.308882 env[1329]: time="2024-02-09T19:02:14.308817649Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:02:14.352287 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 19:02:14.356212 tar[1308]: ./static Feb 9 19:02:14.450691 tar[1308]: ./vlan Feb 9 19:02:14.477764 env[1329]: time="2024-02-09T19:02:14.477721702Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:02:14.486502 env[1329]: time="2024-02-09T19:02:14.486468297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:02:14.490454 env[1329]: time="2024-02-09T19:02:14.490415691Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:02:14.490568 env[1329]: time="2024-02-09T19:02:14.490553809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:02:14.490894 env[1329]: time="2024-02-09T19:02:14.490867148Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:02:14.491003 env[1329]: time="2024-02-09T19:02:14.490987963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:02:14.491075 env[1329]: time="2024-02-09T19:02:14.491061272Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:02:14.491141 env[1329]: time="2024-02-09T19:02:14.491126280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:02:14.491296 env[1329]: time="2024-02-09T19:02:14.491279199Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 19:02:14.491648 env[1329]: time="2024-02-09T19:02:14.491627143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:02:14.491920 env[1329]: time="2024-02-09T19:02:14.491897077Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:02:14.492003 env[1329]: time="2024-02-09T19:02:14.491987188Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:02:14.492121 env[1329]: time="2024-02-09T19:02:14.492104403Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:02:14.492196 env[1329]: time="2024-02-09T19:02:14.492183813Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:02:14.510449 env[1329]: time="2024-02-09T19:02:14.508212520Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:02:14.510449 env[1329]: time="2024-02-09T19:02:14.508248825Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:02:14.510449 env[1329]: time="2024-02-09T19:02:14.508264127Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:02:14.510449 env[1329]: time="2024-02-09T19:02:14.508346737Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:02:14.510449 env[1329]: time="2024-02-09T19:02:14.508370240Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:02:14.510449 env[1329]: time="2024-02-09T19:02:14.508389842Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:02:14.510449 env[1329]: time="2024-02-09T19:02:14.508409245Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:02:14.510449 env[1329]: time="2024-02-09T19:02:14.508427647Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:02:14.510449 env[1329]: time="2024-02-09T19:02:14.508445649Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:02:14.510449 env[1329]: time="2024-02-09T19:02:14.508462951Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:02:14.510449 env[1329]: time="2024-02-09T19:02:14.508480154Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:02:14.510449 env[1329]: time="2024-02-09T19:02:14.508496656Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:02:14.510449 env[1329]: time="2024-02-09T19:02:14.508603169Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:02:14.510449 env[1329]: time="2024-02-09T19:02:14.508696781Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 9 19:02:14.510979 env[1329]: time="2024-02-09T19:02:14.509072428Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:02:14.510979 env[1329]: time="2024-02-09T19:02:14.509119834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:02:14.510979 env[1329]: time="2024-02-09T19:02:14.509138336Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:02:14.510979 env[1329]: time="2024-02-09T19:02:14.509201244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:02:14.510979 env[1329]: time="2024-02-09T19:02:14.509220646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:02:14.510979 env[1329]: time="2024-02-09T19:02:14.509240849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:02:14.510979 env[1329]: time="2024-02-09T19:02:14.509324259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:02:14.510979 env[1329]: time="2024-02-09T19:02:14.509343662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:02:14.510979 env[1329]: time="2024-02-09T19:02:14.509360564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:02:14.510979 env[1329]: time="2024-02-09T19:02:14.509376766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:02:14.510979 env[1329]: time="2024-02-09T19:02:14.509392468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:02:14.510979 env[1329]: time="2024-02-09T19:02:14.509409970Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:02:14.510979 env[1329]: time="2024-02-09T19:02:14.509563289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:02:14.510979 env[1329]: time="2024-02-09T19:02:14.509585892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:02:14.510979 env[1329]: time="2024-02-09T19:02:14.509603794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:02:14.511509 env[1329]: time="2024-02-09T19:02:14.509622297Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:02:14.511509 env[1329]: time="2024-02-09T19:02:14.509642399Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:02:14.511509 env[1329]: time="2024-02-09T19:02:14.509658101Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:02:14.511509 env[1329]: time="2024-02-09T19:02:14.509681204Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:02:14.511509 env[1329]: time="2024-02-09T19:02:14.509730110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 19:02:14.511740 env[1329]: time="2024-02-09T19:02:14.509984342Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:02:14.511740 env[1329]: time="2024-02-09T19:02:14.510061852Z" level=info msg="Connect containerd service" Feb 9 19:02:14.511740 env[1329]: time="2024-02-09T19:02:14.510098656Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:02:14.534278 env[1329]: time="2024-02-09T19:02:14.512240225Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:02:14.534278 env[1329]: time="2024-02-09T19:02:14.512570466Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:02:14.534278 env[1329]: time="2024-02-09T19:02:14.512619072Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 19:02:14.534278 env[1329]: time="2024-02-09T19:02:14.517365866Z" level=info msg="Start subscribing containerd event" Feb 9 19:02:14.534278 env[1329]: time="2024-02-09T19:02:14.517424274Z" level=info msg="Start recovering state" Feb 9 19:02:14.534278 env[1329]: time="2024-02-09T19:02:14.517497683Z" level=info msg="Start event monitor" Feb 9 19:02:14.534278 env[1329]: time="2024-02-09T19:02:14.517515285Z" level=info msg="Start snapshots syncer" Feb 9 19:02:14.534278 env[1329]: time="2024-02-09T19:02:14.517527787Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:02:14.534278 env[1329]: time="2024-02-09T19:02:14.517537588Z" level=info msg="Start streaming server" Feb 9 19:02:14.512751 systemd[1]: Started containerd.service. Feb 9 19:02:14.535121 env[1329]: time="2024-02-09T19:02:14.535093787Z" level=info msg="containerd successfully booted in 0.253688s" Feb 9 19:02:14.573842 tar[1308]: ./portmap Feb 9 19:02:14.652438 tar[1308]: ./host-local Feb 9 19:02:14.679770 update_engine[1302]: I0209 19:02:14.679409 1302 main.cc:92] Flatcar Update Engine starting Feb 9 19:02:14.720100 tar[1308]: ./vrf Feb 9 19:02:14.725705 systemd[1]: Started update-engine.service. Feb 9 19:02:14.733859 update_engine[1302]: I0209 19:02:14.725766 1302 update_check_scheduler.cc:74] Next update check in 11m16s Feb 9 19:02:14.730881 systemd[1]: Started locksmithd.service. Feb 9 19:02:14.809438 tar[1308]: ./bridge Feb 9 19:02:14.908939 tar[1308]: ./tuning Feb 9 19:02:14.988024 tar[1308]: ./firewall Feb 9 19:02:15.082153 tar[1308]: ./host-device Feb 9 19:02:15.170841 tar[1308]: ./sbr Feb 9 19:02:15.241740 tar[1308]: ./loopback Feb 9 19:02:15.308222 tar[1310]: linux-amd64/LICENSE Feb 9 19:02:15.308735 tar[1310]: linux-amd64/README.md Feb 9 19:02:15.310654 tar[1308]: ./dhcp Feb 9 19:02:15.319559 systemd[1]: Finished prepare-helm.service. Feb 9 19:02:15.328014 systemd[1]: Finished prepare-critools.service. Feb 9 19:02:15.421412 tar[1308]: ./ptp Feb 9 19:02:15.463960 tar[1308]: ./ipvlan Feb 9 19:02:15.506017 tar[1308]: ./bandwidth Feb 9 19:02:15.582538 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:02:15.665079 sshd_keygen[1311]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:02:15.684680 systemd[1]: Finished sshd-keygen.service. Feb 9 19:02:15.688938 systemd[1]: Starting issuegen.service... Feb 9 19:02:15.692436 systemd[1]: Started waagent.service. Feb 9 19:02:15.699440 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:02:15.699575 systemd[1]: Finished issuegen.service. Feb 9 19:02:15.704014 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:02:15.711995 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:02:15.715978 systemd[1]: Started getty@tty1.service. Feb 9 19:02:15.720077 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:02:15.722773 systemd[1]: Reached target getty.target. Feb 9 19:02:15.724773 systemd[1]: Reached target multi-user.target. Feb 9 19:02:15.728380 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:02:15.738092 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:02:15.738260 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:02:15.741387 systemd[1]: Startup finished in 789ms (firmware) + 21.877s (loader) + 969ms (kernel) + 1min 43.904s (initrd) + 18.575s (userspace) = 2min 26.116s. 
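The "Startup finished" line above breaks the boot into firmware, loader, kernel, initrd and userspace stages. Summing the printed per-stage values reproduces the total to within a couple of milliseconds; the small difference is presumably because each stage is rounded to the millisecond for display. A short check (illustrative only, using the values as printed):

# Sum of the boot stages reported in the "Startup finished" message above.
stages_s = {
    "firmware":   0.789,
    "loader":    21.877,
    "kernel":     0.969,
    "initrd":   103.904,   # 1min 43.904s
    "userspace": 18.575,
}
total = sum(stages_s.values())
print(f"{int(total // 60)}min {total % 60:.3f}s")  # 2min 26.114s; the log prints 2min 26.116s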
Feb 9 19:02:16.047629 login[1413]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 19:02:16.049145 login[1414]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 19:02:16.070176 systemd[1]: Created slice user-500.slice. Feb 9 19:02:16.071669 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:02:16.075518 systemd-logind[1301]: New session 2 of user core. Feb 9 19:02:16.081660 systemd-logind[1301]: New session 1 of user core. Feb 9 19:02:16.085606 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:02:16.087272 systemd[1]: Starting user@500.service... Feb 9 19:02:16.102821 (systemd)[1420]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:02:16.244936 systemd[1420]: Queued start job for default target default.target. Feb 9 19:02:16.245698 systemd[1420]: Reached target paths.target. Feb 9 19:02:16.245726 systemd[1420]: Reached target sockets.target. Feb 9 19:02:16.245742 systemd[1420]: Reached target timers.target. Feb 9 19:02:16.245758 systemd[1420]: Reached target basic.target. Feb 9 19:02:16.245810 systemd[1420]: Reached target default.target. Feb 9 19:02:16.245846 systemd[1420]: Startup finished in 137ms. Feb 9 19:02:16.245894 systemd[1]: Started user@500.service. Feb 9 19:02:16.247414 systemd[1]: Started session-1.scope. Feb 9 19:02:16.248232 systemd[1]: Started session-2.scope. Feb 9 19:02:16.571151 locksmithd[1395]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:02:21.051473 waagent[1408]: 2024-02-09T19:02:21.051333Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 9 19:02:21.065302 waagent[1408]: 2024-02-09T19:02:21.053107Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 9 19:02:21.065302 waagent[1408]: 2024-02-09T19:02:21.054351Z INFO Daemon Daemon Python: 3.9.16 Feb 9 19:02:21.065302 waagent[1408]: 2024-02-09T19:02:21.055729Z INFO Daemon Daemon Run daemon Feb 9 19:02:21.065302 waagent[1408]: 2024-02-09T19:02:21.057198Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 9 19:02:21.069151 waagent[1408]: 2024-02-09T19:02:21.069028Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 9 19:02:21.075971 waagent[1408]: 2024-02-09T19:02:21.075857Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 19:02:21.118855 waagent[1408]: 2024-02-09T19:02:21.076408Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 19:02:21.118855 waagent[1408]: 2024-02-09T19:02:21.078422Z INFO Daemon Daemon Using waagent for provisioning Feb 9 19:02:21.118855 waagent[1408]: 2024-02-09T19:02:21.079974Z INFO Daemon Daemon Activate resource disk Feb 9 19:02:21.118855 waagent[1408]: 2024-02-09T19:02:21.080941Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 9 19:02:21.118855 waagent[1408]: 2024-02-09T19:02:21.088668Z INFO Daemon Daemon Found device: None Feb 9 19:02:21.118855 waagent[1408]: 2024-02-09T19:02:21.089774Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 9 19:02:21.118855 waagent[1408]: 2024-02-09T19:02:21.090606Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 9 19:02:21.118855 waagent[1408]: 2024-02-09T19:02:21.092774Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 19:02:21.118855 waagent[1408]: 2024-02-09T19:02:21.093633Z INFO Daemon Daemon Running default provisioning handler Feb 9 19:02:21.118855 waagent[1408]: 2024-02-09T19:02:21.103407Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 19:02:21.118855 waagent[1408]: 2024-02-09T19:02:21.106498Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 19:02:21.118855 waagent[1408]: 2024-02-09T19:02:21.107735Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 19:02:21.118855 waagent[1408]: 2024-02-09T19:02:21.108874Z INFO Daemon Daemon Copying ovf-env.xml Feb 9 19:02:21.185995 waagent[1408]: 2024-02-09T19:02:21.185823Z INFO Daemon Daemon Successfully mounted dvd Feb 9 19:02:21.281148 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 9 19:02:21.297682 waagent[1408]: 2024-02-09T19:02:21.297551Z INFO Daemon Daemon Detect protocol endpoint Feb 9 19:02:21.299858 waagent[1408]: 2024-02-09T19:02:21.299785Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 19:02:21.304854 waagent[1408]: 2024-02-09T19:02:21.304733Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 9 19:02:21.308449 waagent[1408]: 2024-02-09T19:02:21.308356Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 9 19:02:21.311712 waagent[1408]: 2024-02-09T19:02:21.311629Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 9 19:02:21.314509 waagent[1408]: 2024-02-09T19:02:21.314443Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 9 19:02:21.409229 waagent[1408]: 2024-02-09T19:02:21.409150Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 9 19:02:21.419071 waagent[1408]: 2024-02-09T19:02:21.410071Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 9 19:02:21.419071 waagent[1408]: 2024-02-09T19:02:21.412155Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 9 19:02:21.606532 waagent[1408]: 2024-02-09T19:02:21.606300Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 9 19:02:21.618409 waagent[1408]: 2024-02-09T19:02:21.618332Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 9 19:02:21.621245 waagent[1408]: 2024-02-09T19:02:21.621180Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 9 19:02:21.700084 waagent[1408]: 2024-02-09T19:02:21.699955Z INFO Daemon Daemon Found private key matching thumbprint 72599646ED232C05D754C75EB4D54D781DD81FA4 Feb 9 19:02:21.709692 waagent[1408]: 2024-02-09T19:02:21.700552Z INFO Daemon Daemon Certificate with thumbprint 94361C3D0BDA042A39304137532E2F9AE8C36DA0 has no matching private key. Feb 9 19:02:21.709692 waagent[1408]: 2024-02-09T19:02:21.701628Z INFO Daemon Daemon Fetch goal state completed Feb 9 19:02:21.748106 waagent[1408]: 2024-02-09T19:02:21.748010Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 2f53c66f-2ded-4df0-b3f7-4ceae9df23df New eTag: 7329058669547698153] Feb 9 19:02:21.756679 waagent[1408]: 2024-02-09T19:02:21.749981Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 19:02:21.763734 waagent[1408]: 2024-02-09T19:02:21.763644Z INFO Daemon Daemon Starting provisioning Feb 9 19:02:21.766926 waagent[1408]: 2024-02-09T19:02:21.766818Z INFO Daemon Daemon Handle ovf-env.xml. Feb 9 19:02:21.769536 waagent[1408]: 2024-02-09T19:02:21.769396Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-2006cf4d94] Feb 9 19:02:21.787588 waagent[1408]: 2024-02-09T19:02:21.787467Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-2006cf4d94] Feb 9 19:02:21.791110 waagent[1408]: 2024-02-09T19:02:21.791029Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 9 19:02:21.794490 waagent[1408]: 2024-02-09T19:02:21.794431Z INFO Daemon Daemon Primary interface is [eth0] Feb 9 19:02:21.808558 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 9 19:02:21.808808 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 9 19:02:21.808879 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 9 19:02:21.809222 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:02:21.814353 systemd-networkd[1170]: eth0: DHCPv6 lease lost Feb 9 19:02:21.815675 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:02:21.815875 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:02:21.818164 systemd[1]: Starting systemd-networkd.service... 
Feb 9 19:02:21.849507 systemd-networkd[1462]: enP18857s1: Link UP Feb 9 19:02:21.849517 systemd-networkd[1462]: enP18857s1: Gained carrier Feb 9 19:02:21.850860 systemd-networkd[1462]: eth0: Link UP Feb 9 19:02:21.850869 systemd-networkd[1462]: eth0: Gained carrier Feb 9 19:02:21.851303 systemd-networkd[1462]: lo: Link UP Feb 9 19:02:21.851322 systemd-networkd[1462]: lo: Gained carrier Feb 9 19:02:21.851654 systemd-networkd[1462]: eth0: Gained IPv6LL Feb 9 19:02:21.852205 systemd-networkd[1462]: Enumeration completed Feb 9 19:02:21.852339 systemd[1]: Started systemd-networkd.service. Feb 9 19:02:21.854603 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:02:21.860498 waagent[1408]: 2024-02-09T19:02:21.857696Z INFO Daemon Daemon Create user account if not exists Feb 9 19:02:21.861209 waagent[1408]: 2024-02-09T19:02:21.861117Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 19:02:21.862399 systemd-networkd[1462]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:02:21.864070 waagent[1408]: 2024-02-09T19:02:21.863997Z INFO Daemon Daemon Configure sudoer Feb 9 19:02:21.866819 waagent[1408]: 2024-02-09T19:02:21.866758Z INFO Daemon Daemon Configure sshd Feb 9 19:02:21.869392 waagent[1408]: 2024-02-09T19:02:21.868990Z INFO Daemon Daemon Deploy ssh public key. Feb 9 19:02:21.889378 systemd-networkd[1462]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:02:21.892854 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:02:23.133722 waagent[1408]: 2024-02-09T19:02:23.133619Z INFO Daemon Daemon Provisioning complete Feb 9 19:02:23.152338 waagent[1408]: 2024-02-09T19:02:23.152222Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 19:02:23.155831 waagent[1408]: 2024-02-09T19:02:23.155749Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 9 19:02:23.161691 waagent[1408]: 2024-02-09T19:02:23.161622Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 19:02:23.453395 waagent[1471]: 2024-02-09T19:02:23.453273Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 19:02:23.454186 waagent[1471]: 2024-02-09T19:02:23.454114Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:23.454348 waagent[1471]: 2024-02-09T19:02:23.454276Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:23.464840 waagent[1471]: 2024-02-09T19:02:23.464758Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 9 19:02:23.465020 waagent[1471]: 2024-02-09T19:02:23.464963Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 19:02:23.527105 waagent[1471]: 2024-02-09T19:02:23.526974Z INFO ExtHandler ExtHandler Found private key matching thumbprint 72599646ED232C05D754C75EB4D54D781DD81FA4 Feb 9 19:02:23.527359 waagent[1471]: 2024-02-09T19:02:23.527269Z INFO ExtHandler ExtHandler Certificate with thumbprint 94361C3D0BDA042A39304137532E2F9AE8C36DA0 has no matching private key. 
Feb 9 19:02:23.527604 waagent[1471]: 2024-02-09T19:02:23.527553Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 19:02:23.541072 waagent[1471]: 2024-02-09T19:02:23.541014Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: a3b3c713-0a2f-4701-85a0-4f9a9864e5b8 New eTag: 7329058669547698153] Feb 9 19:02:23.541619 waagent[1471]: 2024-02-09T19:02:23.541563Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 19:02:23.622604 waagent[1471]: 2024-02-09T19:02:23.622444Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 19:02:23.643071 waagent[1471]: 2024-02-09T19:02:23.642981Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1471 Feb 9 19:02:23.646512 waagent[1471]: 2024-02-09T19:02:23.646447Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 19:02:23.647754 waagent[1471]: 2024-02-09T19:02:23.647693Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 19:02:23.715759 waagent[1471]: 2024-02-09T19:02:23.715625Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 19:02:23.716140 waagent[1471]: 2024-02-09T19:02:23.716074Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 19:02:23.723933 waagent[1471]: 2024-02-09T19:02:23.723877Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 19:02:23.724413 waagent[1471]: 2024-02-09T19:02:23.724355Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 19:02:23.725457 waagent[1471]: 2024-02-09T19:02:23.725394Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 19:02:23.726704 waagent[1471]: 2024-02-09T19:02:23.726644Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 19:02:23.727036 waagent[1471]: 2024-02-09T19:02:23.726983Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:23.727634 waagent[1471]: 2024-02-09T19:02:23.727581Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 19:02:23.727825 waagent[1471]: 2024-02-09T19:02:23.727774Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:23.728239 waagent[1471]: 2024-02-09T19:02:23.728187Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:23.728739 waagent[1471]: 2024-02-09T19:02:23.728687Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 19:02:23.729363 waagent[1471]: 2024-02-09T19:02:23.729274Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 19:02:23.729568 waagent[1471]: 2024-02-09T19:02:23.729513Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Feb 9 19:02:23.729818 waagent[1471]: 2024-02-09T19:02:23.729754Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:23.730435 waagent[1471]: 2024-02-09T19:02:23.730377Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 19:02:23.730435 waagent[1471]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 19:02:23.730435 waagent[1471]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 19:02:23.730435 waagent[1471]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 19:02:23.730435 waagent[1471]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:23.730435 waagent[1471]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:23.730435 waagent[1471]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:23.731006 waagent[1471]: 2024-02-09T19:02:23.730952Z INFO EnvHandler ExtHandler Configure routes Feb 9 19:02:23.731272 waagent[1471]: 2024-02-09T19:02:23.731215Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 19:02:23.731534 waagent[1471]: 2024-02-09T19:02:23.731480Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 19:02:23.731847 waagent[1471]: 2024-02-09T19:02:23.731796Z INFO EnvHandler ExtHandler Gateway:None Feb 9 19:02:23.734351 waagent[1471]: 2024-02-09T19:02:23.734081Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 19:02:23.736262 waagent[1471]: 2024-02-09T19:02:23.736203Z INFO EnvHandler ExtHandler Routes:None Feb 9 19:02:23.745792 waagent[1471]: 2024-02-09T19:02:23.745725Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 19:02:23.746451 waagent[1471]: 2024-02-09T19:02:23.746411Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 19:02:23.747274 waagent[1471]: 2024-02-09T19:02:23.747230Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 9 19:02:23.762822 waagent[1471]: 2024-02-09T19:02:23.762752Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1462' Feb 9 19:02:23.784745 waagent[1471]: 2024-02-09T19:02:23.784674Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
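The routing-table dump above is the raw /proc/net/route format, in which the Destination, Gateway and Mask columns are little-endian hexadecimal IPv4 addresses. A small illustrative Python sketch for decoding those fields (not part of the agent; the sample values are taken from the table above):

    import socket, struct

    def hex_to_ip(value: str) -> str:
        """Convert a little-endian hex field from /proc/net/route to dotted-quad."""
        return socket.inet_ntoa(struct.pack("<I", int(value, 16)))

    # Values from the default-route and wireserver rows logged above.
    print(hex_to_ip("00000000"))  # 0.0.0.0    (destination of the default route)
    print(hex_to_ip("0108C80A"))  # 10.200.8.1 (gateway, matching the DHCP lease logged earlier)
    print(hex_to_ip("10813FA8"))  # 168.63.129.16 (the Azure wireserver endpoint)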
Feb 9 19:02:23.855608 waagent[1471]: 2024-02-09T19:02:23.855123Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 19:02:23.855608 waagent[1471]: Executing ['ip', '-a', '-o', 'link']: Feb 9 19:02:23.855608 waagent[1471]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 19:02:23.855608 waagent[1471]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a1:c1:2c brd ff:ff:ff:ff:ff:ff Feb 9 19:02:23.855608 waagent[1471]: 3: enP18857s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a1:c1:2c brd ff:ff:ff:ff:ff:ff\ altname enP18857p0s2 Feb 9 19:02:23.855608 waagent[1471]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 19:02:23.855608 waagent[1471]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 19:02:23.855608 waagent[1471]: 2: eth0 inet 10.200.8.39/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 19:02:23.855608 waagent[1471]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 19:02:23.855608 waagent[1471]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 19:02:23.855608 waagent[1471]: 2: eth0 inet6 fe80::222:48ff:fea1:c12c/64 scope link \ valid_lft forever preferred_lft forever Feb 9 19:02:24.078656 waagent[1471]: 2024-02-09T19:02:24.078462Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Feb 9 19:02:24.082165 waagent[1471]: 2024-02-09T19:02:24.082027Z INFO EnvHandler ExtHandler Firewall rules: Feb 9 19:02:24.082165 waagent[1471]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:24.082165 waagent[1471]: pkts bytes target prot opt in out source destination Feb 9 19:02:24.082165 waagent[1471]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:24.082165 waagent[1471]: pkts bytes target prot opt in out source destination Feb 9 19:02:24.082165 waagent[1471]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:24.082165 waagent[1471]: pkts bytes target prot opt in out source destination Feb 9 19:02:24.082165 waagent[1471]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 19:02:24.082165 waagent[1471]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:02:24.084634 waagent[1471]: 2024-02-09T19:02:24.084566Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 19:02:24.110756 waagent[1471]: 2024-02-09T19:02:24.110680Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 19:02:24.165514 waagent[1408]: 2024-02-09T19:02:24.165361Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 19:02:24.171474 waagent[1408]: 2024-02-09T19:02:24.171412Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 19:02:25.183229 waagent[1510]: 2024-02-09T19:02:25.183106Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 19:02:25.186314 waagent[1510]: 2024-02-09T19:02:25.186231Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 19:02:25.186499 waagent[1510]: 2024-02-09T19:02:25.186440Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 19:02:25.195955 waagent[1510]: 2024-02-09T19:02:25.195846Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; 
OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 19:02:25.196367 waagent[1510]: 2024-02-09T19:02:25.196289Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:25.196545 waagent[1510]: 2024-02-09T19:02:25.196494Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:25.208059 waagent[1510]: 2024-02-09T19:02:25.207981Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 19:02:25.216326 waagent[1510]: 2024-02-09T19:02:25.216241Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 9 19:02:25.217263 waagent[1510]: 2024-02-09T19:02:25.217200Z INFO ExtHandler Feb 9 19:02:25.217444 waagent[1510]: 2024-02-09T19:02:25.217390Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6a760377-5e08-4713-913c-215a36f67958 eTag: 7329058669547698153 source: Fabric] Feb 9 19:02:25.218138 waagent[1510]: 2024-02-09T19:02:25.218079Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 9 19:02:25.219211 waagent[1510]: 2024-02-09T19:02:25.219149Z INFO ExtHandler Feb 9 19:02:25.219365 waagent[1510]: 2024-02-09T19:02:25.219294Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 19:02:25.225682 waagent[1510]: 2024-02-09T19:02:25.225630Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 19:02:25.226122 waagent[1510]: 2024-02-09T19:02:25.226074Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 19:02:25.246933 waagent[1510]: 2024-02-09T19:02:25.246857Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 9 19:02:25.309714 waagent[1510]: 2024-02-09T19:02:25.309566Z INFO ExtHandler Downloaded certificate {'thumbprint': '94361C3D0BDA042A39304137532E2F9AE8C36DA0', 'hasPrivateKey': False} Feb 9 19:02:25.310737 waagent[1510]: 2024-02-09T19:02:25.310663Z INFO ExtHandler Downloaded certificate {'thumbprint': '72599646ED232C05D754C75EB4D54D781DD81FA4', 'hasPrivateKey': True} Feb 9 19:02:25.311757 waagent[1510]: 2024-02-09T19:02:25.311692Z INFO ExtHandler Fetch goal state completed Feb 9 19:02:25.335745 waagent[1510]: 2024-02-09T19:02:25.335666Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1510 Feb 9 19:02:25.339000 waagent[1510]: 2024-02-09T19:02:25.338933Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 19:02:25.340448 waagent[1510]: 2024-02-09T19:02:25.340390Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 19:02:25.345349 waagent[1510]: 2024-02-09T19:02:25.345279Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 19:02:25.345718 waagent[1510]: 2024-02-09T19:02:25.345662Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 19:02:25.353606 waagent[1510]: 2024-02-09T19:02:25.353551Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 9 19:02:25.354047 waagent[1510]: 2024-02-09T19:02:25.353991Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 19:02:25.374442 waagent[1510]: 2024-02-09T19:02:25.374337Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. Feb 9 19:02:25.377163 waagent[1510]: 2024-02-09T19:02:25.377059Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Feb 9 19:02:25.381852 waagent[1510]: 2024-02-09T19:02:25.381790Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 19:02:25.383286 waagent[1510]: 2024-02-09T19:02:25.383225Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 19:02:25.383786 waagent[1510]: 2024-02-09T19:02:25.383730Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:25.383947 waagent[1510]: 2024-02-09T19:02:25.383898Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:25.384504 waagent[1510]: 2024-02-09T19:02:25.384445Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 19:02:25.384937 waagent[1510]: 2024-02-09T19:02:25.384882Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 19:02:25.385494 waagent[1510]: 2024-02-09T19:02:25.385431Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 19:02:25.385891 waagent[1510]: 2024-02-09T19:02:25.385837Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 19:02:25.386167 waagent[1510]: 2024-02-09T19:02:25.386114Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 19:02:25.386167 waagent[1510]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 19:02:25.386167 waagent[1510]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 19:02:25.386167 waagent[1510]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 19:02:25.386167 waagent[1510]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:25.386167 waagent[1510]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:25.386167 waagent[1510]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:25.386499 waagent[1510]: 2024-02-09T19:02:25.386184Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:25.386499 waagent[1510]: 2024-02-09T19:02:25.386372Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:25.386817 waagent[1510]: 2024-02-09T19:02:25.386738Z INFO EnvHandler ExtHandler Configure routes Feb 9 19:02:25.387394 waagent[1510]: 2024-02-09T19:02:25.387340Z INFO EnvHandler ExtHandler Gateway:None Feb 9 19:02:25.389146 waagent[1510]: 2024-02-09T19:02:25.389032Z INFO EnvHandler ExtHandler Routes:None Feb 9 19:02:25.390431 waagent[1510]: 2024-02-09T19:02:25.390356Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 19:02:25.391495 waagent[1510]: 2024-02-09T19:02:25.391437Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 9 19:02:25.398296 waagent[1510]: 2024-02-09T19:02:25.398090Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 19:02:25.415695 waagent[1510]: 2024-02-09T19:02:25.415625Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 19:02:25.416257 waagent[1510]: 2024-02-09T19:02:25.416196Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 19:02:25.416257 waagent[1510]: Executing ['ip', '-a', '-o', 'link']: Feb 9 19:02:25.416257 waagent[1510]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 19:02:25.416257 waagent[1510]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a1:c1:2c brd ff:ff:ff:ff:ff:ff Feb 9 19:02:25.416257 waagent[1510]: 3: enP18857s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a1:c1:2c brd ff:ff:ff:ff:ff:ff\ altname enP18857p0s2 Feb 9 19:02:25.416257 waagent[1510]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 19:02:25.416257 waagent[1510]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 19:02:25.416257 waagent[1510]: 2: eth0 inet 10.200.8.39/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 19:02:25.416257 waagent[1510]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 19:02:25.416257 waagent[1510]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 19:02:25.416257 waagent[1510]: 2: eth0 inet6 fe80::222:48ff:fea1:c12c/64 scope link \ valid_lft forever preferred_lft forever Feb 9 19:02:25.416712 waagent[1510]: 2024-02-09T19:02:25.416574Z INFO ExtHandler ExtHandler Downloading manifest Feb 9 19:02:25.479756 waagent[1510]: 2024-02-09T19:02:25.479648Z INFO ExtHandler ExtHandler Feb 9 19:02:25.480254 waagent[1510]: 2024-02-09T19:02:25.480192Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a79b2550-38e4-4706-b1d6-e450ddf42246 correlation 5ddb6321-08e7-42af-867e-52e4c7d81cec created: 2024-02-09T18:59:39.062080Z] Feb 9 19:02:25.481832 waagent[1510]: 2024-02-09T19:02:25.481770Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 19:02:25.481832 waagent[1510]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:25.481832 waagent[1510]: pkts bytes target prot opt in out source destination Feb 9 19:02:25.481832 waagent[1510]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:25.481832 waagent[1510]: pkts bytes target prot opt in out source destination Feb 9 19:02:25.481832 waagent[1510]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:25.481832 waagent[1510]: pkts bytes target prot opt in out source destination Feb 9 19:02:25.481832 waagent[1510]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 19:02:25.481832 waagent[1510]: 102 12305 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 19:02:25.481832 waagent[1510]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:02:25.482833 waagent[1510]: 2024-02-09T19:02:25.482775Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
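The three OUTPUT rules listed above restrict traffic to the Azure wireserver (168.63.129.16): DNS over TCP is accepted for any user, other TCP is accepted only for UID 0, and new or invalid TCP connections from other users are dropped. Approximately equivalent rules could be created by hand as in the following sketch (illustrative only; the agent manages these rules itself, and the exact table/chain options it uses may differ):

    import subprocess

    WIRESERVER = "168.63.129.16"

    # Illustrative recreation of the rules shown in the log (run as root).
    rules = [
        # Allow DNS over TCP to the wireserver for any user.
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
        # Allow all other TCP to the wireserver only for root (UID 0).
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        # Drop new/invalid TCP connections to the wireserver from everyone else.
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]

    for rule in rules:
        subprocess.run(["iptables", "-w"] + rule, check=True)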
Feb 9 19:02:25.484940 waagent[1510]: 2024-02-09T19:02:25.484884Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 5 ms] Feb 9 19:02:25.508788 waagent[1510]: 2024-02-09T19:02:25.508719Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 9 19:02:25.519758 waagent[1510]: 2024-02-09T19:02:25.519679Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D94B2582-D22B-472F-B462-3BA290883855;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 19:02:55.946531 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 9 19:03:00.319568 update_engine[1302]: I0209 19:03:00.319478 1302 update_attempter.cc:509] Updating boot flags... Feb 9 19:03:16.652180 systemd[1]: Created slice system-sshd.slice. Feb 9 19:03:16.654040 systemd[1]: Started sshd@0-10.200.8.39:22-10.200.12.6:42140.service. Feb 9 19:03:17.443199 sshd[1591]: Accepted publickey for core from 10.200.12.6 port 42140 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:17.444862 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:17.449550 systemd-logind[1301]: New session 3 of user core. Feb 9 19:03:17.450415 systemd[1]: Started session-3.scope. Feb 9 19:03:17.979504 systemd[1]: Started sshd@1-10.200.8.39:22-10.200.12.6:53664.service. Feb 9 19:03:18.617356 sshd[1596]: Accepted publickey for core from 10.200.12.6 port 53664 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:18.618945 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:18.623837 systemd[1]: Started session-4.scope. Feb 9 19:03:18.624280 systemd-logind[1301]: New session 4 of user core. Feb 9 19:03:19.057400 sshd[1596]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:19.060606 systemd[1]: sshd@1-10.200.8.39:22-10.200.12.6:53664.service: Deactivated successfully. Feb 9 19:03:19.061630 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:03:19.062352 systemd-logind[1301]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:03:19.063090 systemd-logind[1301]: Removed session 4. Feb 9 19:03:19.160747 systemd[1]: Started sshd@2-10.200.8.39:22-10.200.12.6:53676.service. Feb 9 19:03:19.771032 sshd[1602]: Accepted publickey for core from 10.200.12.6 port 53676 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:19.772706 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:19.777337 systemd[1]: Started session-5.scope. Feb 9 19:03:19.777935 systemd-logind[1301]: New session 5 of user core. Feb 9 19:03:20.203143 sshd[1602]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:20.206014 systemd[1]: sshd@2-10.200.8.39:22-10.200.12.6:53676.service: Deactivated successfully. Feb 9 19:03:20.206849 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:03:20.207470 systemd-logind[1301]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:03:20.208199 systemd-logind[1301]: Removed session 5. Feb 9 19:03:20.306516 systemd[1]: Started sshd@3-10.200.8.39:22-10.200.12.6:53690.service. Feb 9 19:03:20.917396 sshd[1608]: Accepted publickey for core from 10.200.12.6 port 53690 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:20.918970 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:20.924596 systemd[1]: Started session-6.scope. 
Feb 9 19:03:20.925147 systemd-logind[1301]: New session 6 of user core. Feb 9 19:03:21.354586 sshd[1608]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:21.358017 systemd[1]: sshd@3-10.200.8.39:22-10.200.12.6:53690.service: Deactivated successfully. Feb 9 19:03:21.359015 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:03:21.359774 systemd-logind[1301]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:03:21.360510 systemd-logind[1301]: Removed session 6. Feb 9 19:03:21.460519 systemd[1]: Started sshd@4-10.200.8.39:22-10.200.12.6:53698.service. Feb 9 19:03:22.081262 sshd[1614]: Accepted publickey for core from 10.200.12.6 port 53698 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:22.082884 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:22.087372 systemd-logind[1301]: New session 7 of user core. Feb 9 19:03:22.087660 systemd[1]: Started session-7.scope. Feb 9 19:03:22.640640 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:03:22.640980 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:03:23.505418 systemd[1]: Starting docker.service... Feb 9 19:03:23.558811 env[1632]: time="2024-02-09T19:03:23.558750493Z" level=info msg="Starting up" Feb 9 19:03:23.559958 env[1632]: time="2024-02-09T19:03:23.559927923Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:03:23.560084 env[1632]: time="2024-02-09T19:03:23.560073127Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:03:23.560139 env[1632]: time="2024-02-09T19:03:23.560128528Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:03:23.560181 env[1632]: time="2024-02-09T19:03:23.560173829Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:03:23.562117 env[1632]: time="2024-02-09T19:03:23.562096679Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:03:23.562223 env[1632]: time="2024-02-09T19:03:23.562212682Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:03:23.562276 env[1632]: time="2024-02-09T19:03:23.562266284Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:03:23.562333 env[1632]: time="2024-02-09T19:03:23.562324085Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:03:23.691615 env[1632]: time="2024-02-09T19:03:23.691563225Z" level=info msg="Loading containers: start." Feb 9 19:03:23.803338 kernel: Initializing XFRM netlink socket Feb 9 19:03:23.836556 env[1632]: time="2024-02-09T19:03:23.836515471Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 19:03:23.933784 systemd-networkd[1462]: docker0: Link UP Feb 9 19:03:23.953695 env[1632]: time="2024-02-09T19:03:23.953660298Z" level=info msg="Loading containers: done." Feb 9 19:03:23.964609 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1447546221-merged.mount: Deactivated successfully. 
Feb 9 19:03:23.979328 env[1632]: time="2024-02-09T19:03:23.979283860Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:03:23.979533 env[1632]: time="2024-02-09T19:03:23.979514666Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:03:23.979653 env[1632]: time="2024-02-09T19:03:23.979631569Z" level=info msg="Daemon has completed initialization" Feb 9 19:03:24.011929 systemd[1]: Started docker.service. Feb 9 19:03:24.021262 env[1632]: time="2024-02-09T19:03:24.021197030Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:03:24.038131 systemd[1]: Reloading. Feb 9 19:03:24.111599 /usr/lib/systemd/system-generators/torcx-generator[1761]: time="2024-02-09T19:03:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:03:24.123423 /usr/lib/systemd/system-generators/torcx-generator[1761]: time="2024-02-09T19:03:24Z" level=info msg="torcx already run" Feb 9 19:03:24.207406 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:03:24.207426 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:03:24.225756 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:03:24.310806 systemd[1]: Started kubelet.service. Feb 9 19:03:24.376197 kubelet[1822]: E0209 19:03:24.376069 1822 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:03:24.378070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:03:24.378231 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:03:27.707565 env[1329]: time="2024-02-09T19:03:27.707516449Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 19:03:28.313991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1814187897.mount: Deactivated successfully. 
Feb 9 19:03:30.404415 env[1329]: time="2024-02-09T19:03:30.404357850Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:30.416432 env[1329]: time="2024-02-09T19:03:30.416384107Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:30.421498 env[1329]: time="2024-02-09T19:03:30.421449516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:30.427723 env[1329]: time="2024-02-09T19:03:30.427682949Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:30.428398 env[1329]: time="2024-02-09T19:03:30.428364964Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 19:03:30.438329 env[1329]: time="2024-02-09T19:03:30.438286176Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 19:03:32.477998 env[1329]: time="2024-02-09T19:03:32.477934259Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:32.488735 env[1329]: time="2024-02-09T19:03:32.488688478Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:32.495210 env[1329]: time="2024-02-09T19:03:32.495175009Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:32.502017 env[1329]: time="2024-02-09T19:03:32.501986148Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:32.502758 env[1329]: time="2024-02-09T19:03:32.502726263Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 9 19:03:32.512805 env[1329]: time="2024-02-09T19:03:32.512777867Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 19:03:33.800770 env[1329]: time="2024-02-09T19:03:33.800656107Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:33.811772 env[1329]: time="2024-02-09T19:03:33.811718525Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:33.825735 env[1329]: 
time="2024-02-09T19:03:33.825688302Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:33.832362 env[1329]: time="2024-02-09T19:03:33.832321233Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:33.833149 env[1329]: time="2024-02-09T19:03:33.833112849Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 9 19:03:33.843200 env[1329]: time="2024-02-09T19:03:33.843171248Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:03:34.393420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:03:34.393713 systemd[1]: Stopped kubelet.service. Feb 9 19:03:34.395728 systemd[1]: Started kubelet.service. Feb 9 19:03:34.513978 kubelet[1856]: E0209 19:03:34.513920 1856 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:03:34.518736 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:03:34.518899 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:03:34.994866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3850803838.mount: Deactivated successfully. Feb 9 19:03:35.486208 env[1329]: time="2024-02-09T19:03:35.486152570Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:35.492937 env[1329]: time="2024-02-09T19:03:35.492897296Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:35.496230 env[1329]: time="2024-02-09T19:03:35.496194058Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:35.503782 env[1329]: time="2024-02-09T19:03:35.503753200Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:35.504241 env[1329]: time="2024-02-09T19:03:35.504211009Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:03:35.513988 env[1329]: time="2024-02-09T19:03:35.513962192Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:03:36.073695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2266081181.mount: Deactivated successfully. 
Feb 9 19:03:36.098475 env[1329]: time="2024-02-09T19:03:36.098419534Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:36.110895 env[1329]: time="2024-02-09T19:03:36.110843161Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:36.115893 env[1329]: time="2024-02-09T19:03:36.115848953Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:36.121402 env[1329]: time="2024-02-09T19:03:36.121367554Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:36.121882 env[1329]: time="2024-02-09T19:03:36.121848963Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 19:03:36.132975 env[1329]: time="2024-02-09T19:03:36.132933066Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 19:03:36.880603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1402226563.mount: Deactivated successfully. Feb 9 19:03:41.323834 env[1329]: time="2024-02-09T19:03:41.323771486Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:41.331135 env[1329]: time="2024-02-09T19:03:41.331091904Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:41.336169 env[1329]: time="2024-02-09T19:03:41.336131885Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:41.340243 env[1329]: time="2024-02-09T19:03:41.340161651Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:41.341155 env[1329]: time="2024-02-09T19:03:41.341124166Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 9 19:03:41.351804 env[1329]: time="2024-02-09T19:03:41.351766038Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 19:03:41.940860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount552074423.mount: Deactivated successfully. 
Feb 9 19:03:42.557500 env[1329]: time="2024-02-09T19:03:42.557444397Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:42.568837 env[1329]: time="2024-02-09T19:03:42.568796576Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:42.575391 env[1329]: time="2024-02-09T19:03:42.575357680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:42.583120 env[1329]: time="2024-02-09T19:03:42.583087302Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:42.583711 env[1329]: time="2024-02-09T19:03:42.583681511Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 19:03:44.643399 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 19:03:44.643638 systemd[1]: Stopped kubelet.service. Feb 9 19:03:44.650264 systemd[1]: Started kubelet.service. Feb 9 19:03:44.728934 kubelet[1932]: E0209 19:03:44.728878 1932 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:03:44.731450 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:03:44.731605 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:03:45.673232 systemd[1]: Stopped kubelet.service. Feb 9 19:03:45.687523 systemd[1]: Reloading. Feb 9 19:03:45.753793 /usr/lib/systemd/system-generators/torcx-generator[1963]: time="2024-02-09T19:03:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:03:45.754213 /usr/lib/systemd/system-generators/torcx-generator[1963]: time="2024-02-09T19:03:45Z" level=info msg="torcx already run" Feb 9 19:03:45.849103 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:03:45.849122 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:03:45.867050 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:03:45.955916 systemd[1]: Started kubelet.service. Feb 9 19:03:46.008029 kubelet[2026]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 9 19:03:46.008029 kubelet[2026]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:03:46.008474 kubelet[2026]: I0209 19:03:46.008099 2026 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:03:46.009550 kubelet[2026]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:03:46.009550 kubelet[2026]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:03:46.393653 kubelet[2026]: I0209 19:03:46.393621 2026 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:03:46.393653 kubelet[2026]: I0209 19:03:46.393647 2026 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:03:46.393910 kubelet[2026]: I0209 19:03:46.393891 2026 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:03:46.396838 kubelet[2026]: E0209 19:03:46.396816 2026 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:46.397012 kubelet[2026]: I0209 19:03:46.396996 2026 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:03:46.400076 kubelet[2026]: I0209 19:03:46.400048 2026 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:03:46.400294 kubelet[2026]: I0209 19:03:46.400278 2026 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:03:46.400414 kubelet[2026]: I0209 19:03:46.400391 2026 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:03:46.400541 kubelet[2026]: I0209 19:03:46.400427 2026 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:03:46.400541 kubelet[2026]: I0209 19:03:46.400442 2026 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:03:46.400633 kubelet[2026]: I0209 19:03:46.400545 2026 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:03:46.407052 kubelet[2026]: I0209 19:03:46.407035 2026 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:03:46.407165 kubelet[2026]: I0209 19:03:46.407156 2026 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:03:46.407247 kubelet[2026]: I0209 19:03:46.407239 2026 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:03:46.407345 kubelet[2026]: I0209 19:03:46.407334 2026 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:03:46.408079 kubelet[2026]: W0209 19:03:46.408039 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:46.408157 kubelet[2026]: E0209 19:03:46.408091 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:46.408208 kubelet[2026]: I0209 19:03:46.408160 2026 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:03:46.408432 kubelet[2026]: W0209 19:03:46.408415 2026 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 9 19:03:46.408850 kubelet[2026]: I0209 19:03:46.408828 2026 server.go:1186] "Started kubelet" Feb 9 19:03:46.408965 kubelet[2026]: W0209 19:03:46.408929 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-2006cf4d94&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:46.409022 kubelet[2026]: E0209 19:03:46.408979 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-2006cf4d94&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:46.413187 kubelet[2026]: E0209 19:03:46.413170 2026 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:03:46.413282 kubelet[2026]: E0209 19:03:46.413275 2026 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:03:46.413553 kubelet[2026]: E0209 19:03:46.413486 2026 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2006cf4d94.17b2472379261eee", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2006cf4d94", UID:"ci-3510.3.2-a-2006cf4d94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2006cf4d94"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 408808174, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 408808174, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.39:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.39:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:03:46.414679 kubelet[2026]: I0209 19:03:46.414667 2026 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:03:46.415004 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 19:03:46.415136 kubelet[2026]: I0209 19:03:46.415119 2026 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:03:46.415575 kubelet[2026]: I0209 19:03:46.415559 2026 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:03:46.419443 kubelet[2026]: I0209 19:03:46.418623 2026 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:03:46.420130 kubelet[2026]: I0209 19:03:46.420105 2026 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:03:46.420943 kubelet[2026]: E0209 19:03:46.420923 2026 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-2006cf4d94?timeout=10s": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:46.422373 kubelet[2026]: W0209 19:03:46.422335 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:46.422497 kubelet[2026]: E0209 19:03:46.422485 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:46.471353 kubelet[2026]: I0209 19:03:46.471332 2026 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:03:46.471508 kubelet[2026]: I0209 19:03:46.471499 2026 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:03:46.471591 kubelet[2026]: I0209 19:03:46.471581 2026 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:03:46.478476 kubelet[2026]: I0209 19:03:46.478456 2026 policy_none.go:49] "None policy: Start" Feb 9 19:03:46.479169 kubelet[2026]: I0209 19:03:46.479157 2026 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:03:46.479269 kubelet[2026]: I0209 19:03:46.479262 2026 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:03:46.490160 systemd[1]: Created slice kubepods.slice. Feb 9 19:03:46.494833 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 19:03:46.497838 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 19:03:46.502940 kubelet[2026]: I0209 19:03:46.502927 2026 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:03:46.504099 kubelet[2026]: I0209 19:03:46.504083 2026 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:03:46.505114 kubelet[2026]: E0209 19:03:46.505102 2026 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-2006cf4d94\" not found" Feb 9 19:03:46.514873 kubelet[2026]: I0209 19:03:46.514860 2026 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 19:03:46.520137 kubelet[2026]: I0209 19:03:46.520125 2026 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:46.520649 kubelet[2026]: E0209 19:03:46.520636 2026 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:46.547883 kubelet[2026]: I0209 19:03:46.547861 2026 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 19:03:46.547883 kubelet[2026]: I0209 19:03:46.547884 2026 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:03:46.548048 kubelet[2026]: I0209 19:03:46.547903 2026 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:03:46.548048 kubelet[2026]: E0209 19:03:46.547946 2026 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:03:46.548884 kubelet[2026]: W0209 19:03:46.548857 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:46.549615 kubelet[2026]: E0209 19:03:46.549602 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:46.621673 kubelet[2026]: E0209 19:03:46.621633 2026 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-2006cf4d94?timeout=10s": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:46.649343 kubelet[2026]: I0209 19:03:46.648970 2026 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:03:46.651682 kubelet[2026]: I0209 19:03:46.651657 2026 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:03:46.653630 kubelet[2026]: I0209 19:03:46.653600 2026 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:03:46.654586 kubelet[2026]: I0209 19:03:46.653763 2026 status_manager.go:698] "Failed to get status for pod" podUID=8d492f126ff7211c3c848fef2e74060f pod="kube-system/kube-apiserver-ci-3510.3.2-a-2006cf4d94" err="Get \"https://10.200.8.39:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-2006cf4d94\": dial tcp 10.200.8.39:6443: connect: connection refused" Feb 9 19:03:46.658600 kubelet[2026]: I0209 19:03:46.658580 2026 status_manager.go:698] "Failed to get status for pod" podUID=b904945ced4b201ee38a29bd57e671c5 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2006cf4d94" err="Get \"https://10.200.8.39:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-2006cf4d94\": dial tcp 10.200.8.39:6443: connect: connection refused" Feb 9 19:03:46.658924 kubelet[2026]: I0209 19:03:46.658905 2026 status_manager.go:698] "Failed to get status for pod" podUID=7e33389636318b04b3b81e3bebfced18 pod="kube-system/kube-scheduler-ci-3510.3.2-a-2006cf4d94" err="Get \"https://10.200.8.39:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-2006cf4d94\": dial tcp 10.200.8.39:6443: connect: connection refused" Feb 9 19:03:46.660831 systemd[1]: Created slice 
kubepods-burstable-pod8d492f126ff7211c3c848fef2e74060f.slice. Feb 9 19:03:46.671200 systemd[1]: Created slice kubepods-burstable-podb904945ced4b201ee38a29bd57e671c5.slice. Feb 9 19:03:46.674735 systemd[1]: Created slice kubepods-burstable-pod7e33389636318b04b3b81e3bebfced18.slice. Feb 9 19:03:46.721788 kubelet[2026]: I0209 19:03:46.721752 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e33389636318b04b3b81e3bebfced18-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-2006cf4d94\" (UID: \"7e33389636318b04b3b81e3bebfced18\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:46.722014 kubelet[2026]: I0209 19:03:46.721997 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d492f126ff7211c3c848fef2e74060f-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-2006cf4d94\" (UID: \"8d492f126ff7211c3c848fef2e74060f\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:46.722153 kubelet[2026]: I0209 19:03:46.722139 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d492f126ff7211c3c848fef2e74060f-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-2006cf4d94\" (UID: \"8d492f126ff7211c3c848fef2e74060f\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:46.722395 kubelet[2026]: I0209 19:03:46.722376 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b904945ced4b201ee38a29bd57e671c5-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-2006cf4d94\" (UID: \"b904945ced4b201ee38a29bd57e671c5\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:46.722556 kubelet[2026]: I0209 19:03:46.722539 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b904945ced4b201ee38a29bd57e671c5-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-2006cf4d94\" (UID: \"b904945ced4b201ee38a29bd57e671c5\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:46.722688 kubelet[2026]: I0209 19:03:46.722674 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b904945ced4b201ee38a29bd57e671c5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-2006cf4d94\" (UID: \"b904945ced4b201ee38a29bd57e671c5\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:46.722862 kubelet[2026]: I0209 19:03:46.722845 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d492f126ff7211c3c848fef2e74060f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-2006cf4d94\" (UID: \"8d492f126ff7211c3c848fef2e74060f\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:46.722997 kubelet[2026]: I0209 19:03:46.722985 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b904945ced4b201ee38a29bd57e671c5-ca-certs\") pod 
\"kube-controller-manager-ci-3510.3.2-a-2006cf4d94\" (UID: \"b904945ced4b201ee38a29bd57e671c5\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:46.723144 kubelet[2026]: I0209 19:03:46.723128 2026 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b904945ced4b201ee38a29bd57e671c5-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-2006cf4d94\" (UID: \"b904945ced4b201ee38a29bd57e671c5\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:46.723627 kubelet[2026]: I0209 19:03:46.723601 2026 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:46.723981 kubelet[2026]: E0209 19:03:46.723960 2026 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:46.970173 env[1329]: time="2024-02-09T19:03:46.970034805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-2006cf4d94,Uid:8d492f126ff7211c3c848fef2e74060f,Namespace:kube-system,Attempt:0,}" Feb 9 19:03:46.975288 env[1329]: time="2024-02-09T19:03:46.975184678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-2006cf4d94,Uid:b904945ced4b201ee38a29bd57e671c5,Namespace:kube-system,Attempt:0,}" Feb 9 19:03:46.980155 env[1329]: time="2024-02-09T19:03:46.980117749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-2006cf4d94,Uid:7e33389636318b04b3b81e3bebfced18,Namespace:kube-system,Attempt:0,}" Feb 9 19:03:47.022960 kubelet[2026]: E0209 19:03:47.022924 2026 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-2006cf4d94?timeout=10s": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:47.125818 kubelet[2026]: I0209 19:03:47.125786 2026 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:47.126171 kubelet[2026]: E0209 19:03:47.126147 2026 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:47.209664 kubelet[2026]: E0209 19:03:47.209566 2026 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2006cf4d94.17b2472379261eee", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2006cf4d94", UID:"ci-3510.3.2-a-2006cf4d94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2006cf4d94"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 408808174, time.Local), 
LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 408808174, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.39:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.39:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:03:47.613888 kubelet[2026]: W0209 19:03:47.613822 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:47.613888 kubelet[2026]: E0209 19:03:47.613891 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:47.823527 kubelet[2026]: E0209 19:03:47.823471 2026 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-2006cf4d94?timeout=10s": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:47.860500 kubelet[2026]: W0209 19:03:47.860434 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-2006cf4d94&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:47.860500 kubelet[2026]: E0209 19:03:47.860499 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-2006cf4d94&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:47.908501 kubelet[2026]: W0209 19:03:47.908444 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:47.908501 kubelet[2026]: E0209 19:03:47.908507 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:47.919918 kubelet[2026]: W0209 19:03:47.919871 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:47.920026 kubelet[2026]: E0209 19:03:47.919926 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:47.928696 kubelet[2026]: I0209 19:03:47.928444 2026 kubelet_node_status.go:70] "Attempting to register node" 
node="ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:47.928696 kubelet[2026]: E0209 19:03:47.928680 2026 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:48.488757 kubelet[2026]: E0209 19:03:48.488719 2026 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:49.540018 kubelet[2026]: E0209 19:03:49.424636 2026 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-2006cf4d94?timeout=10s": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:49.540018 kubelet[2026]: I0209 19:03:49.531478 2026 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:49.540018 kubelet[2026]: E0209 19:03:49.531763 2026 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:49.766432 kubelet[2026]: W0209 19:03:49.766388 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:49.766608 kubelet[2026]: E0209 19:03:49.766447 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:50.317125 kubelet[2026]: W0209 19:03:50.317082 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:50.317125 kubelet[2026]: E0209 19:03:50.317129 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:50.578590 kubelet[2026]: W0209 19:03:50.578449 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:50.578590 kubelet[2026]: E0209 19:03:50.578502 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:50.785849 kubelet[2026]: W0209 19:03:50.785809 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-2006cf4d94&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:50.785849 kubelet[2026]: E0209 19:03:50.785849 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-2006cf4d94&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:52.625931 kubelet[2026]: E0209 19:03:52.625874 2026 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: Get "https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-2006cf4d94?timeout=10s": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:52.834119 kubelet[2026]: I0209 19:03:52.734543 2026 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:52.834119 kubelet[2026]: E0209 19:03:52.734859 2026 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:52.875543 kubelet[2026]: E0209 19:03:52.875510 2026 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:53.300637 kubelet[2026]: W0209 19:03:53.300598 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:53.300637 kubelet[2026]: E0209 19:03:53.300637 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:53.526195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3194771134.mount: Deactivated successfully. 
Feb 9 19:03:53.620856 env[1329]: time="2024-02-09T19:03:53.620715083Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:53.641724 env[1329]: time="2024-02-09T19:03:53.641676139Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:53.653482 env[1329]: time="2024-02-09T19:03:53.653428582Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:53.656821 env[1329]: time="2024-02-09T19:03:53.656786123Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:53.668210 env[1329]: time="2024-02-09T19:03:53.668173461Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:53.671786 env[1329]: time="2024-02-09T19:03:53.671752505Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:53.674558 env[1329]: time="2024-02-09T19:03:53.674529939Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:53.678936 env[1329]: time="2024-02-09T19:03:53.678903092Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:53.681287 env[1329]: time="2024-02-09T19:03:53.681253720Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:53.689218 env[1329]: time="2024-02-09T19:03:53.689183417Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:53.696953 env[1329]: time="2024-02-09T19:03:53.696919211Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:53.707440 env[1329]: time="2024-02-09T19:03:53.707407539Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:53.739835 env[1329]: time="2024-02-09T19:03:53.739782233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:03:53.739977 env[1329]: time="2024-02-09T19:03:53.739817133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:03:53.739977 env[1329]: time="2024-02-09T19:03:53.739831033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:03:53.740147 env[1329]: time="2024-02-09T19:03:53.740095237Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dec1b84c273b8128058737431606059633d55a1cd7a8228105a6fac39d3827c2 pid=2102 runtime=io.containerd.runc.v2 Feb 9 19:03:53.761354 systemd[1]: Started cri-containerd-dec1b84c273b8128058737431606059633d55a1cd7a8228105a6fac39d3827c2.scope. Feb 9 19:03:53.792863 env[1329]: time="2024-02-09T19:03:53.792795078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:03:53.793088 env[1329]: time="2024-02-09T19:03:53.793060381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:03:53.793189 env[1329]: time="2024-02-09T19:03:53.793169783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:03:53.793444 env[1329]: time="2024-02-09T19:03:53.793412586Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10402755eaf9b2e49532e57c2f8a1811ce6e1ceb7b7a08e576508191d9421cc0 pid=2138 runtime=io.containerd.runc.v2 Feb 9 19:03:53.797734 env[1329]: time="2024-02-09T19:03:53.797670037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:03:53.797935 env[1329]: time="2024-02-09T19:03:53.797904840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:03:53.798082 env[1329]: time="2024-02-09T19:03:53.798049342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:03:53.798467 env[1329]: time="2024-02-09T19:03:53.798424747Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/29ec86780cc548ee99396d90432ba943defef171876dae87406c04ff2b6b8276 pid=2141 runtime=io.containerd.runc.v2 Feb 9 19:03:53.814131 systemd[1]: Started cri-containerd-29ec86780cc548ee99396d90432ba943defef171876dae87406c04ff2b6b8276.scope. Feb 9 19:03:53.837504 systemd[1]: Started cri-containerd-10402755eaf9b2e49532e57c2f8a1811ce6e1ceb7b7a08e576508191d9421cc0.scope. 
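The ImageCreate/ImageUpdate events above concern registry.k8s.io/pause:3.6, the sandbox image every RunPodSandbox needs, after which the runc v2 shims each start a signal loop for their sandbox. A sketch of the kind of pull that emits such events, using the containerd 1.6-era Go client against the CRI-managed k8s.io namespace:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.6", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name())
    }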
Feb 9 19:03:53.852360 env[1329]: time="2024-02-09T19:03:53.852295502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-2006cf4d94,Uid:8d492f126ff7211c3c848fef2e74060f,Namespace:kube-system,Attempt:0,} returns sandbox id \"dec1b84c273b8128058737431606059633d55a1cd7a8228105a6fac39d3827c2\"" Feb 9 19:03:53.863137 env[1329]: time="2024-02-09T19:03:53.863095834Z" level=info msg="CreateContainer within sandbox \"dec1b84c273b8128058737431606059633d55a1cd7a8228105a6fac39d3827c2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 19:03:53.918794 env[1329]: time="2024-02-09T19:03:53.918738911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-2006cf4d94,Uid:b904945ced4b201ee38a29bd57e671c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"10402755eaf9b2e49532e57c2f8a1811ce6e1ceb7b7a08e576508191d9421cc0\"" Feb 9 19:03:53.922192 env[1329]: time="2024-02-09T19:03:53.922165853Z" level=info msg="CreateContainer within sandbox \"10402755eaf9b2e49532e57c2f8a1811ce6e1ceb7b7a08e576508191d9421cc0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 19:03:53.929579 env[1329]: time="2024-02-09T19:03:53.929533343Z" level=info msg="CreateContainer within sandbox \"dec1b84c273b8128058737431606059633d55a1cd7a8228105a6fac39d3827c2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"535f88d7d9e3e1b89f81efda70c14be38faba85f184bb91eaf467139cc47e185\"" Feb 9 19:03:53.943570 env[1329]: time="2024-02-09T19:03:53.943527613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-2006cf4d94,Uid:7e33389636318b04b3b81e3bebfced18,Namespace:kube-system,Attempt:0,} returns sandbox id \"29ec86780cc548ee99396d90432ba943defef171876dae87406c04ff2b6b8276\"" Feb 9 19:03:53.943908 env[1329]: time="2024-02-09T19:03:53.943754116Z" level=info msg="StartContainer for \"535f88d7d9e3e1b89f81efda70c14be38faba85f184bb91eaf467139cc47e185\"" Feb 9 19:03:53.947541 env[1329]: time="2024-02-09T19:03:53.947508761Z" level=info msg="CreateContainer within sandbox \"29ec86780cc548ee99396d90432ba943defef171876dae87406c04ff2b6b8276\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 19:03:53.964477 systemd[1]: Started cri-containerd-535f88d7d9e3e1b89f81efda70c14be38faba85f184bb91eaf467139cc47e185.scope. Feb 9 19:03:53.975226 env[1329]: time="2024-02-09T19:03:53.975184498Z" level=info msg="CreateContainer within sandbox \"10402755eaf9b2e49532e57c2f8a1811ce6e1ceb7b7a08e576508191d9421cc0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8f846fb580d82556dc9fd660de1b9c064f19905a2d9a52109f2c07d1d93549dc\"" Feb 9 19:03:53.975681 env[1329]: time="2024-02-09T19:03:53.975600503Z" level=info msg="StartContainer for \"8f846fb580d82556dc9fd660de1b9c064f19905a2d9a52109f2c07d1d93549dc\"" Feb 9 19:03:53.998123 systemd[1]: Started cri-containerd-8f846fb580d82556dc9fd660de1b9c064f19905a2d9a52109f2c07d1d93549dc.scope. 
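The sandbox ids returned above feed straight into CreateContainer/StartContainer. A trimmed sketch of that CRI sequence over the containerd socket; the sandbox id is the kube-apiserver one from the log, while the image tag and the empty sandbox config are assumptions for illustration:

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // Sandbox id returned by RunPodSandbox in the log above.
        sandboxID := "dec1b84c273b8128058737431606059633d55a1cd7a8228105a6fac39d3827c2"
        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sandboxID,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-apiserver"},
                // Image tag is an assumption; the log does not name it.
                Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-apiserver:v1.26.5"},
            },
            SandboxConfig: &runtimeapi.PodSandboxConfig{}, // trimmed for illustration
        })
        if err != nil {
            log.Fatal(err)
        }
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: created.ContainerId,
        }); err != nil {
            log.Fatal(err)
        }
    }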
Feb 9 19:03:54.019144 env[1329]: time="2024-02-09T19:03:54.019090128Z" level=info msg="CreateContainer within sandbox \"29ec86780cc548ee99396d90432ba943defef171876dae87406c04ff2b6b8276\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"66dea4725a1f264d21428fe38cdbbe4e27b561fa39b951cd1746e49e0839db9e\"" Feb 9 19:03:54.019733 env[1329]: time="2024-02-09T19:03:54.019692435Z" level=info msg="StartContainer for \"66dea4725a1f264d21428fe38cdbbe4e27b561fa39b951cd1746e49e0839db9e\"" Feb 9 19:03:54.049073 systemd[1]: Started cri-containerd-66dea4725a1f264d21428fe38cdbbe4e27b561fa39b951cd1746e49e0839db9e.scope. Feb 9 19:03:54.064046 env[1329]: time="2024-02-09T19:03:54.063995363Z" level=info msg="StartContainer for \"535f88d7d9e3e1b89f81efda70c14be38faba85f184bb91eaf467139cc47e185\" returns successfully" Feb 9 19:03:54.108002 env[1329]: time="2024-02-09T19:03:54.107913885Z" level=info msg="StartContainer for \"8f846fb580d82556dc9fd660de1b9c064f19905a2d9a52109f2c07d1d93549dc\" returns successfully" Feb 9 19:03:54.187897 kubelet[2026]: W0209 19:03:54.187728 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-2006cf4d94&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:54.187897 kubelet[2026]: E0209 19:03:54.187780 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-2006cf4d94&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Feb 9 19:03:54.198117 env[1329]: time="2024-02-09T19:03:54.198064159Z" level=info msg="StartContainer for \"66dea4725a1f264d21428fe38cdbbe4e27b561fa39b951cd1746e49e0839db9e\" returns successfully" Feb 9 19:03:54.532550 systemd[1]: run-containerd-runc-k8s.io-dec1b84c273b8128058737431606059633d55a1cd7a8228105a6fac39d3827c2-runc.BfXYr4.mount: Deactivated successfully. 
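The *v1.Node reflector that keeps failing above is doing nothing more exotic than a LIST filtered to this node's name, exactly as the URL in the error shows. The same call expressed with client-go; the kubeconfig path is an assumption:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed path; the kubelet builds its client from its own config.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Mirrors .../api/v1/nodes?fieldSelector=metadata.name%3D...&limit=500
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{
            FieldSelector: "metadata.name=ci-3510.3.2-a-2006cf4d94",
            Limit:         500,
        })
        if err != nil {
            log.Fatal(err) // "connection refused" until the apiserver container is healthy
        }
        fmt.Println("nodes listed:", len(nodes.Items))
    }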
Feb 9 19:03:56.505913 kubelet[2026]: E0209 19:03:56.505868 2026 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-2006cf4d94\" not found" Feb 9 19:03:56.995523 kubelet[2026]: E0209 19:03:56.995477 2026 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.2-a-2006cf4d94" not found Feb 9 19:03:57.263997 kubelet[2026]: E0209 19:03:57.263809 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2006cf4d94.17b2472379261eee", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2006cf4d94", UID:"ci-3510.3.2-a-2006cf4d94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2006cf4d94"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 408808174, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 408808174, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:03:57.317719 kubelet[2026]: E0209 19:03:57.317608 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2006cf4d94.17b24723796a253d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2006cf4d94", UID:"ci-3510.3.2-a-2006cf4d94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2006cf4d94"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 413266237, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 413266237, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
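Node events like the rejected ones above are written to the "default" namespace, so they bounce with 'namespaces "default" not found' until the freshly started apiserver has created its bootstrap namespaces, and the kubelet marks them non-retryable. The recording side, sketched with client-go (clientset construction as in the previous sketch, kubeconfig path assumed):

    package main

    import (
        "log"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/record"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        b := record.NewBroadcaster()
        defer b.Shutdown()
        b.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: cs.CoreV1().Events("")})
        rec := b.NewRecorder(scheme.Scheme,
            v1.EventSource{Component: "kubelet", Host: "ci-3510.3.2-a-2006cf4d94"})

        // Node references carry no namespace, so the event lands in "default".
        ref := &v1.ObjectReference{
            Kind: "Node",
            Name: "ci-3510.3.2-a-2006cf4d94",
            UID:  types.UID("ci-3510.3.2-a-2006cf4d94"),
        }
        rec.Event(ref, v1.EventTypeNormal, "Starting", "Starting kubelet.")
    }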
Feb 9 19:03:57.372225 kubelet[2026]: E0209 19:03:57.372129 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2006cf4d94.17b247237cd6b5b7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2006cf4d94", UID:"ci-3510.3.2-a-2006cf4d94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-2006cf4d94 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2006cf4d94"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 470712759, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 470712759, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:03:57.425656 kubelet[2026]: E0209 19:03:57.425511 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2006cf4d94.17b247237cd6cbfc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2006cf4d94", UID:"ci-3510.3.2-a-2006cf4d94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-2006cf4d94 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2006cf4d94"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 470718460, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 470718460, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:03:57.480436 kubelet[2026]: E0209 19:03:57.480266 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2006cf4d94.17b247237cd6da0c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2006cf4d94", UID:"ci-3510.3.2-a-2006cf4d94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-2006cf4d94 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2006cf4d94"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 470722060, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 470722060, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:03:57.535811 kubelet[2026]: E0209 19:03:57.535639 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2006cf4d94.17b247237ee6ed23", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2006cf4d94", UID:"ci-3510.3.2-a-2006cf4d94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2006cf4d94"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 505329955, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 505329955, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:03:57.593536 kubelet[2026]: E0209 19:03:57.593442 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2006cf4d94.17b247237cd6b5b7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2006cf4d94", UID:"ci-3510.3.2-a-2006cf4d94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-2006cf4d94 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2006cf4d94"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 470712759, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 520097866, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:03:57.650102 kubelet[2026]: E0209 19:03:57.650004 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2006cf4d94.17b247237cd6cbfc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2006cf4d94", UID:"ci-3510.3.2-a-2006cf4d94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-2006cf4d94 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2006cf4d94"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 470718460, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 520101566, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
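Note the Count:2 and the LastTimestamp that now differs from FirstTimestamp in the two rejections above: repeats of the same event key are aggregated rather than re-emitted as new objects. A toy version of that bookkeeping (the real correlation lives in client-go's record package):

    package main

    import (
        "fmt"
        "time"
    )

    // key approximates how repeated events are correlated: the same involved
    // object, reason, and message collapse into one record with a count.
    type key struct{ object, reason, message string }

    type record struct {
        count       int
        first, last time.Time
    }

    func main() {
        seen := map[key]*record{}
        observe := func(k key, at time.Time) *record {
            r, ok := seen[k]
            if !ok {
                r = &record{first: at}
                seen[k] = r
            }
            r.count++
            r.last = at
            return r
        }

        k := key{"Node/ci-3510.3.2-a-2006cf4d94", "NodeHasSufficientMemory",
            "status is now: NodeHasSufficientMemory"}
        observe(k, time.UnixMilli(0))
        r := observe(k, time.UnixMilli(50))
        fmt.Println(r.count, r.last.After(r.first)) // 2 true
    }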
Feb 9 19:03:57.706137 kubelet[2026]: E0209 19:03:57.706039 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-2006cf4d94.17b247237cd6da0c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-2006cf4d94", UID:"ci-3510.3.2-a-2006cf4d94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-2006cf4d94 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2006cf4d94"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 470722060, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 46, 520105166, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:03:58.126537 kubelet[2026]: E0209 19:03:58.126497 2026 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.2-a-2006cf4d94" not found Feb 9 19:03:59.029939 kubelet[2026]: E0209 19:03:59.029901 2026 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-2006cf4d94\" not found" node="ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:59.137328 kubelet[2026]: I0209 19:03:59.137284 2026 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:59.528323 kubelet[2026]: I0209 19:03:59.528267 2026 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-2006cf4d94" Feb 9 19:03:59.544365 kubelet[2026]: E0209 19:03:59.544335 2026 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2006cf4d94\" not found" Feb 9 19:03:59.645399 kubelet[2026]: E0209 19:03:59.645354 2026 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2006cf4d94\" not found" Feb 9 19:03:59.746058 kubelet[2026]: E0209 19:03:59.746010 2026 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2006cf4d94\" not found" Feb 9 19:03:59.846646 kubelet[2026]: E0209 19:03:59.846524 2026 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2006cf4d94\" not found" Feb 9 19:03:59.947425 kubelet[2026]: E0209 19:03:59.947358 2026 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2006cf4d94\" not found" Feb 9 19:04:00.047823 kubelet[2026]: E0209 19:04:00.047788 2026 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2006cf4d94\" not found" Feb 9 19:04:00.148955 kubelet[2026]: E0209 19:04:00.148921 2026 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2006cf4d94\" not found" Feb 9 19:04:00.249043 kubelet[2026]: 
E0209 19:04:00.249002 2026 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2006cf4d94\" not found" Feb 9 19:04:00.349661 kubelet[2026]: E0209 19:04:00.349621 2026 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2006cf4d94\" not found" Feb 9 19:04:00.450622 kubelet[2026]: E0209 19:04:00.450509 2026 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2006cf4d94\" not found" Feb 9 19:04:00.551421 kubelet[2026]: E0209 19:04:00.551363 2026 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2006cf4d94\" not found" Feb 9 19:04:00.559837 systemd[1]: Reloading. Feb 9 19:04:00.652499 kubelet[2026]: E0209 19:04:00.652433 2026 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2006cf4d94\" not found" Feb 9 19:04:00.656970 /usr/lib/systemd/system-generators/torcx-generator[2348]: time="2024-02-09T19:04:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:04:00.657438 /usr/lib/systemd/system-generators/torcx-generator[2348]: time="2024-02-09T19:04:00Z" level=info msg="torcx already run" Feb 9 19:04:00.749226 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:04:00.749246 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:04:00.753086 kubelet[2026]: E0209 19:04:00.753023 2026 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2006cf4d94\" not found" Feb 9 19:04:00.775865 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:04:00.854909 kubelet[2026]: E0209 19:04:00.854093 2026 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-2006cf4d94\" not found" Feb 9 19:04:00.887220 systemd[1]: Stopping kubelet.service... Feb 9 19:04:00.887689 kubelet[2026]: I0209 19:04:00.887652 2026 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:04:00.902707 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:04:00.902961 systemd[1]: Stopped kubelet.service. Feb 9 19:04:00.905034 systemd[1]: Started kubelet.service. Feb 9 19:04:00.991628 kubelet[2411]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:04:00.991628 kubelet[2411]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
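After the reload, systemd restarts the kubelet, and the new process (pid 2411 below) picks up where 2026 left off: it reports client rotation on and loads /var/lib/kubelet/pki/kubelet-client-current.pem. That file stores the client certificate and its private key concatenated in one PEM, which is why the same path can serve as both arguments in this sketch:

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        // kubelet-client-current.pem holds certificate and key together,
        // so it is passed for both the cert and key arguments.
        const p = "/var/lib/kubelet/pki/kubelet-client-current.pem"
        pair, err := tls.LoadX509KeyPair(p, p)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("certificates in client chain:", len(pair.Certificate))
    }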
Feb 9 19:04:00.991628 kubelet[2411]: I0209 19:04:00.987702 2411 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:04:00.991628 kubelet[2411]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:04:00.991628 kubelet[2411]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:04:00.993787 kubelet[2411]: I0209 19:04:00.993760 2411 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:04:00.993787 kubelet[2411]: I0209 19:04:00.993783 2411 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:04:00.994004 kubelet[2411]: I0209 19:04:00.993986 2411 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:04:00.995232 kubelet[2411]: I0209 19:04:00.995206 2411 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 19:04:00.995925 kubelet[2411]: I0209 19:04:00.995903 2411 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:04:00.999230 kubelet[2411]: I0209 19:04:00.999212 2411 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:04:01.000417 kubelet[2411]: I0209 19:04:00.999448 2411 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:04:01.000417 kubelet[2411]: I0209 19:04:00.999529 2411 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:04:01.000417 kubelet[2411]: I0209 19:04:00.999558 2411 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:04:01.000417 kubelet[2411]: I0209 19:04:00.999571 2411 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:04:01.000417 kubelet[2411]: I0209 19:04:00.999613 2411 state_mem.go:36] "Initialized new in-memory state 
store" Feb 9 19:04:01.002972 kubelet[2411]: I0209 19:04:01.002951 2411 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:04:01.003058 kubelet[2411]: I0209 19:04:01.002979 2411 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:04:01.003058 kubelet[2411]: I0209 19:04:01.003000 2411 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:04:01.003058 kubelet[2411]: I0209 19:04:01.003017 2411 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:04:01.016666 kubelet[2411]: I0209 19:04:01.016649 2411 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:04:01.017943 kubelet[2411]: I0209 19:04:01.017925 2411 server.go:1186] "Started kubelet" Feb 9 19:04:01.020182 kubelet[2411]: I0209 19:04:01.020165 2411 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:04:01.028537 kubelet[2411]: I0209 19:04:01.028511 2411 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:04:01.030281 sudo[2424]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 19:04:01.030651 sudo[2424]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 19:04:01.031931 kubelet[2411]: I0209 19:04:01.031911 2411 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:04:01.034229 kubelet[2411]: E0209 19:04:01.033960 2411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:04:01.034229 kubelet[2411]: E0209 19:04:01.033994 2411 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:04:01.037339 kubelet[2411]: I0209 19:04:01.037323 2411 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:04:01.052731 kubelet[2411]: I0209 19:04:01.052701 2411 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:04:01.125897 kubelet[2411]: I0209 19:04:01.125864 2411 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 19:04:01.133062 kubelet[2411]: I0209 19:04:01.133023 2411 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:04:01.133062 kubelet[2411]: I0209 19:04:01.133048 2411 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:04:01.133062 kubelet[2411]: I0209 19:04:01.133066 2411 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:04:01.133297 kubelet[2411]: I0209 19:04:01.133235 2411 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 19:04:01.133297 kubelet[2411]: I0209 19:04:01.133252 2411 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 19:04:01.133297 kubelet[2411]: I0209 19:04:01.133260 2411 policy_none.go:49] "None policy: Start" Feb 9 19:04:01.133992 kubelet[2411]: I0209 19:04:01.133969 2411 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:04:01.134096 kubelet[2411]: I0209 19:04:01.133997 2411 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:04:01.134167 kubelet[2411]: I0209 19:04:01.134149 2411 state_mem.go:75] "Updated machine memory state" Feb 9 19:04:01.140365 kubelet[2411]: I0209 19:04:01.140345 2411 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-2006cf4d94" Feb 9 19:04:01.153812 kubelet[2411]: I0209 19:04:01.153787 2411 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-2006cf4d94" Feb 9 19:04:01.153921 kubelet[2411]: I0209 19:04:01.153855 2411 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-2006cf4d94" Feb 9 19:04:01.154132 kubelet[2411]: I0209 19:04:01.154113 2411 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:04:01.154372 kubelet[2411]: I0209 19:04:01.154355 2411 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:04:01.216967 kubelet[2411]: I0209 19:04:01.216932 2411 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:04:01.216967 kubelet[2411]: I0209 19:04:01.216965 2411 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:04:01.217185 kubelet[2411]: I0209 19:04:01.216986 2411 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:04:01.217185 kubelet[2411]: E0209 19:04:01.217037 2411 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:04:01.318233 kubelet[2411]: I0209 19:04:01.318124 2411 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:01.318409 kubelet[2411]: I0209 19:04:01.318262 2411 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:01.318409 kubelet[2411]: I0209 19:04:01.318301 2411 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:01.354052 kubelet[2411]: I0209 19:04:01.353997 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b904945ced4b201ee38a29bd57e671c5-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-2006cf4d94\" (UID: \"b904945ced4b201ee38a29bd57e671c5\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2006cf4d94" Feb 9 19:04:01.354052 kubelet[2411]: I0209 19:04:01.354059 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b904945ced4b201ee38a29bd57e671c5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-2006cf4d94\" (UID: \"b904945ced4b201ee38a29bd57e671c5\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2006cf4d94" Feb 9 19:04:01.354277 kubelet[2411]: I0209 19:04:01.354090 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e33389636318b04b3b81e3bebfced18-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-2006cf4d94\" (UID: \"7e33389636318b04b3b81e3bebfced18\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-2006cf4d94" Feb 9 19:04:01.354277 kubelet[2411]: I0209 19:04:01.354113 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d492f126ff7211c3c848fef2e74060f-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-2006cf4d94\" (UID: \"8d492f126ff7211c3c848fef2e74060f\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-2006cf4d94" Feb 9 19:04:01.354277 kubelet[2411]: I0209 19:04:01.354137 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d492f126ff7211c3c848fef2e74060f-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-2006cf4d94\" (UID: \"8d492f126ff7211c3c848fef2e74060f\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-2006cf4d94" Feb 9 19:04:01.354277 kubelet[2411]: I0209 19:04:01.354164 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d492f126ff7211c3c848fef2e74060f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-2006cf4d94\" (UID: \"8d492f126ff7211c3c848fef2e74060f\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-2006cf4d94" Feb 9 19:04:01.354277 kubelet[2411]: I0209 19:04:01.354190 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b904945ced4b201ee38a29bd57e671c5-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-2006cf4d94\" (UID: \"b904945ced4b201ee38a29bd57e671c5\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2006cf4d94" Feb 9 19:04:01.354536 kubelet[2411]: I0209 19:04:01.354215 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b904945ced4b201ee38a29bd57e671c5-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-2006cf4d94\" (UID: \"b904945ced4b201ee38a29bd57e671c5\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2006cf4d94" Feb 9 19:04:01.354536 kubelet[2411]: I0209 19:04:01.354243 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b904945ced4b201ee38a29bd57e671c5-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-2006cf4d94\" (UID: \"b904945ced4b201ee38a29bd57e671c5\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2006cf4d94" Feb 9 19:04:01.633695 sudo[2424]: pam_unix(sudo:session): session closed for user root Feb 9 19:04:02.005890 kubelet[2411]: I0209 19:04:02.005857 2411 apiserver.go:52] "Watching apiserver" Feb 9 19:04:02.052897 kubelet[2411]: I0209 19:04:02.052858 2411 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:04:02.059015 kubelet[2411]: I0209 19:04:02.058961 2411 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:04:02.414999 kubelet[2411]: E0209 19:04:02.414947 2411 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-2006cf4d94\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-2006cf4d94" Feb 9 19:04:02.616488 kubelet[2411]: E0209 19:04:02.616453 2411 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-2006cf4d94\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-2006cf4d94" Feb 9 19:04:02.693479 sudo[1617]: pam_unix(sudo:session): session closed for user root Feb 9 19:04:02.794414 sshd[1614]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:02.797603 systemd[1]: sshd@4-10.200.8.39:22-10.200.12.6:53698.service: Deactivated successfully. Feb 9 19:04:02.798468 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:04:02.798683 systemd[1]: session-7.scope: Consumed 4.194s CPU time. Feb 9 19:04:02.799189 systemd-logind[1301]: Session 7 logged out. Waiting for processes to exit. Feb 9 19:04:02.800073 systemd-logind[1301]: Removed session 7. 
Feb 9 19:04:02.811600 kubelet[2411]: E0209 19:04:02.811576 2411 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-2006cf4d94\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2006cf4d94" Feb 9 19:04:03.409749 kubelet[2411]: I0209 19:04:03.409704 2411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-2006cf4d94" podStartSLOduration=2.409605668 pod.CreationTimestamp="2024-02-09 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:03.014040376 +0000 UTC m=+2.103333860" watchObservedRunningTime="2024-02-09 19:04:03.409605668 +0000 UTC m=+2.498899252" Feb 9 19:04:03.810647 kubelet[2411]: I0209 19:04:03.810504 2411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-2006cf4d94" podStartSLOduration=2.810459311 pod.CreationTimestamp="2024-02-09 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:03.410581877 +0000 UTC m=+2.499875461" watchObservedRunningTime="2024-02-09 19:04:03.810459311 +0000 UTC m=+2.899752795" Feb 9 19:04:07.549189 kubelet[2411]: I0209 19:04:07.549152 2411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-2006cf4d94" podStartSLOduration=6.549110733 pod.CreationTimestamp="2024-02-09 19:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:03.810782914 +0000 UTC m=+2.900076498" watchObservedRunningTime="2024-02-09 19:04:07.549110733 +0000 UTC m=+6.638404317" Feb 9 19:04:12.604683 kubelet[2411]: I0209 19:04:12.604634 2411 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 19:04:12.605383 env[1329]: time="2024-02-09T19:04:12.605340747Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:04:12.605721 kubelet[2411]: I0209 19:04:12.605575 2411 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 19:04:13.296258 kubelet[2411]: I0209 19:04:13.296216 2411 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:13.303173 systemd[1]: Created slice kubepods-besteffort-podfea23c5e_0f00_40d5_af89_d3348f6fb5fb.slice. Feb 9 19:04:13.326105 kubelet[2411]: I0209 19:04:13.326064 2411 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:13.334698 systemd[1]: Created slice kubepods-burstable-podac78abb7_3216_4aa2_8ada_54fe26b03151.slice. 
Feb 9 19:04:13.335552 kubelet[2411]: I0209 19:04:13.335526 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-bpf-maps\") pod \"cilium-n2hgq\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " pod="kube-system/cilium-n2hgq" Feb 9 19:04:13.335668 kubelet[2411]: I0209 19:04:13.335575 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-hostproc\") pod \"cilium-n2hgq\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " pod="kube-system/cilium-n2hgq" Feb 9 19:04:13.335668 kubelet[2411]: I0209 19:04:13.335603 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-host-proc-sys-net\") pod \"cilium-n2hgq\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " pod="kube-system/cilium-n2hgq" Feb 9 19:04:13.335668 kubelet[2411]: I0209 19:04:13.335634 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fea23c5e-0f00-40d5-af89-d3348f6fb5fb-lib-modules\") pod \"kube-proxy-cp5zp\" (UID: \"fea23c5e-0f00-40d5-af89-d3348f6fb5fb\") " pod="kube-system/kube-proxy-cp5zp" Feb 9 19:04:13.335668 kubelet[2411]: I0209 19:04:13.335660 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-etc-cni-netd\") pod \"cilium-n2hgq\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " pod="kube-system/cilium-n2hgq" Feb 9 19:04:13.335853 kubelet[2411]: I0209 19:04:13.335701 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-xtables-lock\") pod \"cilium-n2hgq\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " pod="kube-system/cilium-n2hgq" Feb 9 19:04:13.335853 kubelet[2411]: I0209 19:04:13.335734 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r422v\" (UniqueName: \"kubernetes.io/projected/ac78abb7-3216-4aa2-8ada-54fe26b03151-kube-api-access-r422v\") pod \"cilium-n2hgq\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " pod="kube-system/cilium-n2hgq" Feb 9 19:04:13.335853 kubelet[2411]: I0209 19:04:13.335770 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fea23c5e-0f00-40d5-af89-d3348f6fb5fb-xtables-lock\") pod \"kube-proxy-cp5zp\" (UID: \"fea23c5e-0f00-40d5-af89-d3348f6fb5fb\") " pod="kube-system/kube-proxy-cp5zp" Feb 9 19:04:13.335853 kubelet[2411]: I0209 19:04:13.335801 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gcnb\" (UniqueName: \"kubernetes.io/projected/fea23c5e-0f00-40d5-af89-d3348f6fb5fb-kube-api-access-9gcnb\") pod \"kube-proxy-cp5zp\" (UID: \"fea23c5e-0f00-40d5-af89-d3348f6fb5fb\") " pod="kube-system/kube-proxy-cp5zp" Feb 9 19:04:13.335853 kubelet[2411]: I0209 19:04:13.335829 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-cilium-cgroup\") pod \"cilium-n2hgq\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " pod="kube-system/cilium-n2hgq" Feb 9 19:04:13.336053 kubelet[2411]: I0209 19:04:13.335858 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac78abb7-3216-4aa2-8ada-54fe26b03151-cilium-config-path\") pod \"cilium-n2hgq\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " pod="kube-system/cilium-n2hgq" Feb 9 19:04:13.336053 kubelet[2411]: I0209 19:04:13.335897 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fea23c5e-0f00-40d5-af89-d3348f6fb5fb-kube-proxy\") pod \"kube-proxy-cp5zp\" (UID: \"fea23c5e-0f00-40d5-af89-d3348f6fb5fb\") " pod="kube-system/kube-proxy-cp5zp" Feb 9 19:04:13.336053 kubelet[2411]: I0209 19:04:13.335927 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-cni-path\") pod \"cilium-n2hgq\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " pod="kube-system/cilium-n2hgq" Feb 9 19:04:13.336053 kubelet[2411]: I0209 19:04:13.335955 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-lib-modules\") pod \"cilium-n2hgq\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " pod="kube-system/cilium-n2hgq" Feb 9 19:04:13.336053 kubelet[2411]: I0209 19:04:13.335985 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ac78abb7-3216-4aa2-8ada-54fe26b03151-clustermesh-secrets\") pod \"cilium-n2hgq\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " pod="kube-system/cilium-n2hgq" Feb 9 19:04:13.336053 kubelet[2411]: I0209 19:04:13.336017 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ac78abb7-3216-4aa2-8ada-54fe26b03151-hubble-tls\") pod \"cilium-n2hgq\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " pod="kube-system/cilium-n2hgq" Feb 9 19:04:13.336287 kubelet[2411]: I0209 19:04:13.336053 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-cilium-run\") pod \"cilium-n2hgq\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " pod="kube-system/cilium-n2hgq" Feb 9 19:04:13.336287 kubelet[2411]: I0209 19:04:13.336091 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-host-proc-sys-kernel\") pod \"cilium-n2hgq\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " pod="kube-system/cilium-n2hgq" Feb 9 19:04:13.336770 kubelet[2411]: W0209 19:04:13.336745 2411 reflector.go:424] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-2006cf4d94" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 
'ci-3510.3.2-a-2006cf4d94' and this object Feb 9 19:04:13.336884 kubelet[2411]: E0209 19:04:13.336783 2411 reflector.go:140] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-2006cf4d94" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2006cf4d94' and this object Feb 9 19:04:13.336884 kubelet[2411]: W0209 19:04:13.336826 2411 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-2006cf4d94" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2006cf4d94' and this object Feb 9 19:04:13.336884 kubelet[2411]: E0209 19:04:13.336840 2411 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-2006cf4d94" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2006cf4d94' and this object Feb 9 19:04:13.336884 kubelet[2411]: W0209 19:04:13.336873 2411 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-2006cf4d94" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2006cf4d94' and this object Feb 9 19:04:13.336884 kubelet[2411]: E0209 19:04:13.336882 2411 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-2006cf4d94" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2006cf4d94' and this object Feb 9 19:04:13.396947 kubelet[2411]: I0209 19:04:13.396906 2411 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:13.403445 systemd[1]: Created slice kubepods-besteffort-pod646efd18_613e_44ca_95af_7144d8a6b0a4.slice. Feb 9 19:04:13.437279 kubelet[2411]: I0209 19:04:13.437243 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/646efd18-613e-44ca-95af-7144d8a6b0a4-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-pkkfc\" (UID: \"646efd18-613e-44ca-95af-7144d8a6b0a4\") " pod="kube-system/cilium-operator-f59cbd8c6-pkkfc" Feb 9 19:04:13.437530 kubelet[2411]: I0209 19:04:13.437395 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsd2m\" (UniqueName: \"kubernetes.io/projected/646efd18-613e-44ca-95af-7144d8a6b0a4-kube-api-access-nsd2m\") pod \"cilium-operator-f59cbd8c6-pkkfc\" (UID: \"646efd18-613e-44ca-95af-7144d8a6b0a4\") " pod="kube-system/cilium-operator-f59cbd8c6-pkkfc" Feb 9 19:04:14.214617 env[1329]: time="2024-02-09T19:04:14.214563569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cp5zp,Uid:fea23c5e-0f00-40d5-af89-d3348f6fb5fb,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:14.257245 env[1329]: time="2024-02-09T19:04:14.256938308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:14.257245 env[1329]: time="2024-02-09T19:04:14.256991809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:14.257245 env[1329]: time="2024-02-09T19:04:14.257025809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:14.257640 env[1329]: time="2024-02-09T19:04:14.257575213Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/855f5a7fbb8456a7fec79a4f9579ecb07166638b23d079736290b6e6e091e99a pid=2511 runtime=io.containerd.runc.v2 Feb 9 19:04:14.279058 systemd[1]: Started cri-containerd-855f5a7fbb8456a7fec79a4f9579ecb07166638b23d079736290b6e6e091e99a.scope. Feb 9 19:04:14.313245 env[1329]: time="2024-02-09T19:04:14.313194058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cp5zp,Uid:fea23c5e-0f00-40d5-af89-d3348f6fb5fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"855f5a7fbb8456a7fec79a4f9579ecb07166638b23d079736290b6e6e091e99a\"" Feb 9 19:04:14.316256 env[1329]: time="2024-02-09T19:04:14.316214282Z" level=info msg="CreateContainer within sandbox \"855f5a7fbb8456a7fec79a4f9579ecb07166638b23d079736290b6e6e091e99a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:04:14.363911 env[1329]: time="2024-02-09T19:04:14.363862163Z" level=info msg="CreateContainer within sandbox \"855f5a7fbb8456a7fec79a4f9579ecb07166638b23d079736290b6e6e091e99a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"208472d2eea6d7b0dd4a1f1d5c90d3ab09df012fcb9cc19334cd673cbbe8558e\"" Feb 9 19:04:14.366086 env[1329]: time="2024-02-09T19:04:14.364544768Z" level=info msg="StartContainer for \"208472d2eea6d7b0dd4a1f1d5c90d3ab09df012fcb9cc19334cd673cbbe8558e\"" Feb 9 19:04:14.382575 systemd[1]: Started cri-containerd-208472d2eea6d7b0dd4a1f1d5c90d3ab09df012fcb9cc19334cd673cbbe8558e.scope. Feb 9 19:04:14.421343 env[1329]: time="2024-02-09T19:04:14.420588516Z" level=info msg="StartContainer for \"208472d2eea6d7b0dd4a1f1d5c90d3ab09df012fcb9cc19334cd673cbbe8558e\" returns successfully" Feb 9 19:04:14.439372 kubelet[2411]: E0209 19:04:14.439336 2411 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Feb 9 19:04:14.439768 kubelet[2411]: E0209 19:04:14.439425 2411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac78abb7-3216-4aa2-8ada-54fe26b03151-clustermesh-secrets podName:ac78abb7-3216-4aa2-8ada-54fe26b03151 nodeName:}" failed. No retries permitted until 2024-02-09 19:04:14.939402067 +0000 UTC m=+14.028695551 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/ac78abb7-3216-4aa2-8ada-54fe26b03151-clustermesh-secrets") pod "cilium-n2hgq" (UID: "ac78abb7-3216-4aa2-8ada-54fe26b03151") : failed to sync secret cache: timed out waiting for the condition Feb 9 19:04:14.439768 kubelet[2411]: E0209 19:04:14.439647 2411 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Feb 9 19:04:14.439768 kubelet[2411]: E0209 19:04:14.439661 2411 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-n2hgq: failed to sync secret cache: timed out waiting for the condition Feb 9 19:04:14.439768 kubelet[2411]: E0209 19:04:14.439715 2411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ac78abb7-3216-4aa2-8ada-54fe26b03151-hubble-tls podName:ac78abb7-3216-4aa2-8ada-54fe26b03151 nodeName:}" failed. No retries permitted until 2024-02-09 19:04:14.939698069 +0000 UTC m=+14.028991553 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/ac78abb7-3216-4aa2-8ada-54fe26b03151-hubble-tls") pod "cilium-n2hgq" (UID: "ac78abb7-3216-4aa2-8ada-54fe26b03151") : failed to sync secret cache: timed out waiting for the condition Feb 9 19:04:14.607393 env[1329]: time="2024-02-09T19:04:14.607281909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-pkkfc,Uid:646efd18-613e-44ca-95af-7144d8a6b0a4,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:14.645049 env[1329]: time="2024-02-09T19:04:14.644957110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:14.645049 env[1329]: time="2024-02-09T19:04:14.645017511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:14.645276 env[1329]: time="2024-02-09T19:04:14.645238012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:14.647912 env[1329]: time="2024-02-09T19:04:14.647657632Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a pid=2654 runtime=io.containerd.runc.v2 Feb 9 19:04:14.667521 systemd[1]: Started cri-containerd-c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a.scope. Feb 9 19:04:14.732343 env[1329]: time="2024-02-09T19:04:14.728457178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-pkkfc,Uid:646efd18-613e-44ca-95af-7144d8a6b0a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a\"" Feb 9 19:04:14.732343 env[1329]: time="2024-02-09T19:04:14.731006198Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:04:15.143811 env[1329]: time="2024-02-09T19:04:15.143756679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n2hgq,Uid:ac78abb7-3216-4aa2-8ada-54fe26b03151,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:15.186608 env[1329]: time="2024-02-09T19:04:15.186534415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:15.186608 env[1329]: time="2024-02-09T19:04:15.186570215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:15.186821 env[1329]: time="2024-02-09T19:04:15.186590115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:15.187037 env[1329]: time="2024-02-09T19:04:15.186985918Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc pid=2737 runtime=io.containerd.runc.v2 Feb 9 19:04:15.205090 systemd[1]: Started cri-containerd-d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc.scope. Feb 9 19:04:15.235413 env[1329]: time="2024-02-09T19:04:15.235374799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n2hgq,Uid:ac78abb7-3216-4aa2-8ada-54fe26b03151,Namespace:kube-system,Attempt:0,} returns sandbox id \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\"" Feb 9 19:04:15.265614 kubelet[2411]: I0209 19:04:15.265304 2411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-cp5zp" podStartSLOduration=2.265270133 pod.CreationTimestamp="2024-02-09 19:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:15.265151133 +0000 UTC m=+14.354444717" watchObservedRunningTime="2024-02-09 19:04:15.265270133 +0000 UTC m=+14.354563717" Feb 9 19:04:17.733526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1011045047.mount: Deactivated successfully. 
Feb 9 19:04:20.172833 env[1329]: time="2024-02-09T19:04:20.172726491Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:20.182025 env[1329]: time="2024-02-09T19:04:20.181949958Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:20.187703 env[1329]: time="2024-02-09T19:04:20.187663899Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:20.188205 env[1329]: time="2024-02-09T19:04:20.188172003Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 19:04:20.190529 env[1329]: time="2024-02-09T19:04:20.189384711Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:04:20.191053 env[1329]: time="2024-02-09T19:04:20.191005123Z" level=info msg="CreateContainer within sandbox \"c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:04:20.237252 env[1329]: time="2024-02-09T19:04:20.237201757Z" level=info msg="CreateContainer within sandbox \"c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf\"" Feb 9 19:04:20.237851 env[1329]: time="2024-02-09T19:04:20.237809261Z" level=info msg="StartContainer for \"5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf\"" Feb 9 19:04:20.265801 systemd[1]: Started cri-containerd-5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf.scope. Feb 9 19:04:20.298652 env[1329]: time="2024-02-09T19:04:20.298596801Z" level=info msg="StartContainer for \"5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf\" returns successfully" Feb 9 19:04:25.327435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3823463664.mount: Deactivated successfully. 
Feb 9 19:04:28.134266 env[1329]: time="2024-02-09T19:04:28.134203947Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:28.163203 env[1329]: time="2024-02-09T19:04:28.163138433Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:28.173412 env[1329]: time="2024-02-09T19:04:28.173366599Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:28.174091 env[1329]: time="2024-02-09T19:04:28.174052303Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 19:04:28.177509 env[1329]: time="2024-02-09T19:04:28.177400824Z" level=info msg="CreateContainer within sandbox \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:04:28.237356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2792008650.mount: Deactivated successfully. Feb 9 19:04:28.248263 env[1329]: time="2024-02-09T19:04:28.248219979Z" level=info msg="CreateContainer within sandbox \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc\"" Feb 9 19:04:28.249904 env[1329]: time="2024-02-09T19:04:28.248764482Z" level=info msg="StartContainer for \"037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc\"" Feb 9 19:04:28.277415 systemd[1]: Started cri-containerd-037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc.scope. Feb 9 19:04:28.317042 env[1329]: time="2024-02-09T19:04:28.316997020Z" level=info msg="StartContainer for \"037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc\" returns successfully" Feb 9 19:04:28.321392 systemd[1]: cri-containerd-037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc.scope: Deactivated successfully. Feb 9 19:04:29.232952 systemd[1]: run-containerd-runc-k8s.io-037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc-runc.H1JsKm.mount: Deactivated successfully. Feb 9 19:04:29.233078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc-rootfs.mount: Deactivated successfully. 
Feb 9 19:04:29.313341 kubelet[2411]: I0209 19:04:29.312406 2411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-pkkfc" podStartSLOduration=-9.223372020542429e+09 pod.CreationTimestamp="2024-02-09 19:04:13 +0000 UTC" firstStartedPulling="2024-02-09 19:04:14.730153591 +0000 UTC m=+13.819447075" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:21.288494525 +0000 UTC m=+20.377788009" watchObservedRunningTime="2024-02-09 19:04:29.312346675 +0000 UTC m=+28.401640159" Feb 9 19:04:32.508930 env[1329]: time="2024-02-09T19:04:32.508865313Z" level=info msg="shim disconnected" id=037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc Feb 9 19:04:32.508930 env[1329]: time="2024-02-09T19:04:32.508923713Z" level=warning msg="cleaning up after shim disconnected" id=037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc namespace=k8s.io Feb 9 19:04:32.508930 env[1329]: time="2024-02-09T19:04:32.508938913Z" level=info msg="cleaning up dead shim" Feb 9 19:04:32.520342 env[1329]: time="2024-02-09T19:04:32.520060381Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2863 runtime=io.containerd.runc.v2\n" Feb 9 19:04:33.310660 env[1329]: time="2024-02-09T19:04:33.310608260Z" level=info msg="CreateContainer within sandbox \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:04:33.346206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount216052718.mount: Deactivated successfully. Feb 9 19:04:33.355733 env[1329]: time="2024-02-09T19:04:33.355684131Z" level=info msg="CreateContainer within sandbox \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf\"" Feb 9 19:04:33.356999 env[1329]: time="2024-02-09T19:04:33.356186634Z" level=info msg="StartContainer for \"e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf\"" Feb 9 19:04:33.381618 systemd[1]: Started cri-containerd-e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf.scope. Feb 9 19:04:33.415546 env[1329]: time="2024-02-09T19:04:33.415407789Z" level=info msg="StartContainer for \"e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf\" returns successfully" Feb 9 19:04:33.418112 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:04:33.418435 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:04:33.419510 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:04:33.422228 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:04:33.426605 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:04:33.427451 systemd[1]: cri-containerd-e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf.scope: Deactivated successfully. Feb 9 19:04:33.437662 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 19:04:33.465287 env[1329]: time="2024-02-09T19:04:33.465204087Z" level=info msg="shim disconnected" id=e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf Feb 9 19:04:33.465287 env[1329]: time="2024-02-09T19:04:33.465282588Z" level=warning msg="cleaning up after shim disconnected" id=e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf namespace=k8s.io Feb 9 19:04:33.465625 env[1329]: time="2024-02-09T19:04:33.465303388Z" level=info msg="cleaning up dead shim" Feb 9 19:04:33.474064 env[1329]: time="2024-02-09T19:04:33.474017040Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2930 runtime=io.containerd.runc.v2\n" Feb 9 19:04:34.312402 env[1329]: time="2024-02-09T19:04:34.312351445Z" level=info msg="CreateContainer within sandbox \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:04:34.344023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf-rootfs.mount: Deactivated successfully. Feb 9 19:04:34.356530 env[1329]: time="2024-02-09T19:04:34.356482307Z" level=info msg="CreateContainer within sandbox \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182\"" Feb 9 19:04:34.357167 env[1329]: time="2024-02-09T19:04:34.357132710Z" level=info msg="StartContainer for \"a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182\"" Feb 9 19:04:34.384041 systemd[1]: run-containerd-runc-k8s.io-a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182-runc.UMRaFy.mount: Deactivated successfully. Feb 9 19:04:34.389569 systemd[1]: Started cri-containerd-a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182.scope. Feb 9 19:04:34.423986 systemd[1]: cri-containerd-a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182.scope: Deactivated successfully. Feb 9 19:04:34.432268 env[1329]: time="2024-02-09T19:04:34.432228255Z" level=info msg="StartContainer for \"a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182\" returns successfully" Feb 9 19:04:34.464599 env[1329]: time="2024-02-09T19:04:34.464540847Z" level=info msg="shim disconnected" id=a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182 Feb 9 19:04:34.464599 env[1329]: time="2024-02-09T19:04:34.464588747Z" level=warning msg="cleaning up after shim disconnected" id=a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182 namespace=k8s.io Feb 9 19:04:34.464599 env[1329]: time="2024-02-09T19:04:34.464601447Z" level=info msg="cleaning up dead shim" Feb 9 19:04:34.472485 env[1329]: time="2024-02-09T19:04:34.472440093Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2989 runtime=io.containerd.runc.v2\n" Feb 9 19:04:35.317172 env[1329]: time="2024-02-09T19:04:35.317125273Z" level=info msg="CreateContainer within sandbox \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:04:35.343585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182-rootfs.mount: Deactivated successfully. 
Feb 9 19:04:35.357943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2426000592.mount: Deactivated successfully. Feb 9 19:04:35.364566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1523367727.mount: Deactivated successfully. Feb 9 19:04:35.375261 env[1329]: time="2024-02-09T19:04:35.375221213Z" level=info msg="CreateContainer within sandbox \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db\"" Feb 9 19:04:35.375658 env[1329]: time="2024-02-09T19:04:35.375625016Z" level=info msg="StartContainer for \"81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db\"" Feb 9 19:04:35.393002 systemd[1]: Started cri-containerd-81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db.scope. Feb 9 19:04:35.422672 systemd[1]: cri-containerd-81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db.scope: Deactivated successfully. Feb 9 19:04:35.428014 env[1329]: time="2024-02-09T19:04:35.427971922Z" level=info msg="StartContainer for \"81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db\" returns successfully" Feb 9 19:04:35.460251 env[1329]: time="2024-02-09T19:04:35.460203710Z" level=info msg="shim disconnected" id=81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db Feb 9 19:04:35.460251 env[1329]: time="2024-02-09T19:04:35.460248411Z" level=warning msg="cleaning up after shim disconnected" id=81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db namespace=k8s.io Feb 9 19:04:35.460549 env[1329]: time="2024-02-09T19:04:35.460259311Z" level=info msg="cleaning up dead shim" Feb 9 19:04:35.467974 env[1329]: time="2024-02-09T19:04:35.467939056Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3046 runtime=io.containerd.runc.v2\n" Feb 9 19:04:36.322412 env[1329]: time="2024-02-09T19:04:36.322352731Z" level=info msg="CreateContainer within sandbox \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:04:36.380485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3118596540.mount: Deactivated successfully. Feb 9 19:04:36.391688 env[1329]: time="2024-02-09T19:04:36.391644831Z" level=info msg="CreateContainer within sandbox \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4\"" Feb 9 19:04:36.393982 env[1329]: time="2024-02-09T19:04:36.392214334Z" level=info msg="StartContainer for \"d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4\"" Feb 9 19:04:36.412907 systemd[1]: Started cri-containerd-d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4.scope. 
Feb 9 19:04:36.452702 env[1329]: time="2024-02-09T19:04:36.452652384Z" level=info msg="StartContainer for \"d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4\" returns successfully" Feb 9 19:04:36.590793 kubelet[2411]: I0209 19:04:36.589485 2411 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:04:36.621630 kubelet[2411]: I0209 19:04:36.621587 2411 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:36.621820 kubelet[2411]: I0209 19:04:36.621750 2411 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:36.628152 systemd[1]: Created slice kubepods-burstable-pod4c9c686f_7f6a_4599_8614_fcdba8e2c732.slice. Feb 9 19:04:36.634475 systemd[1]: Created slice kubepods-burstable-pod8de540d8_2586_4b3a_bee6_146df3aa8464.slice. Feb 9 19:04:36.711432 kubelet[2411]: I0209 19:04:36.711300 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfhhw\" (UniqueName: \"kubernetes.io/projected/8de540d8-2586-4b3a-bee6-146df3aa8464-kube-api-access-kfhhw\") pod \"coredns-787d4945fb-6ttph\" (UID: \"8de540d8-2586-4b3a-bee6-146df3aa8464\") " pod="kube-system/coredns-787d4945fb-6ttph" Feb 9 19:04:36.711627 kubelet[2411]: I0209 19:04:36.711507 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c9c686f-7f6a-4599-8614-fcdba8e2c732-config-volume\") pod \"coredns-787d4945fb-w5gxh\" (UID: \"4c9c686f-7f6a-4599-8614-fcdba8e2c732\") " pod="kube-system/coredns-787d4945fb-w5gxh" Feb 9 19:04:36.711627 kubelet[2411]: I0209 19:04:36.711573 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8de540d8-2586-4b3a-bee6-146df3aa8464-config-volume\") pod \"coredns-787d4945fb-6ttph\" (UID: \"8de540d8-2586-4b3a-bee6-146df3aa8464\") " pod="kube-system/coredns-787d4945fb-6ttph" Feb 9 19:04:36.711752 kubelet[2411]: I0209 19:04:36.711603 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w96db\" (UniqueName: \"kubernetes.io/projected/4c9c686f-7f6a-4599-8614-fcdba8e2c732-kube-api-access-w96db\") pod \"coredns-787d4945fb-w5gxh\" (UID: \"4c9c686f-7f6a-4599-8614-fcdba8e2c732\") " pod="kube-system/coredns-787d4945fb-w5gxh" Feb 9 19:04:36.931510 env[1329]: time="2024-02-09T19:04:36.931457651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-w5gxh,Uid:4c9c686f-7f6a-4599-8614-fcdba8e2c732,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:36.938150 env[1329]: time="2024-02-09T19:04:36.938108689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-6ttph,Uid:8de540d8-2586-4b3a-bee6-146df3aa8464,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:38.614491 systemd-networkd[1462]: cilium_host: Link UP Feb 9 19:04:38.627462 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 19:04:38.627637 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:04:38.618776 systemd-networkd[1462]: cilium_net: Link UP Feb 9 19:04:38.622842 systemd-networkd[1462]: cilium_net: Gained carrier Feb 9 19:04:38.629627 systemd-networkd[1462]: cilium_host: Gained carrier Feb 9 19:04:38.786766 systemd-networkd[1462]: cilium_vxlan: Link UP Feb 9 19:04:38.786778 systemd-networkd[1462]: cilium_vxlan: Gained carrier Feb 9 19:04:38.911487 systemd-networkd[1462]: cilium_host: Gained 
IPv6LL Feb 9 19:04:39.011340 kernel: NET: Registered PF_ALG protocol family Feb 9 19:04:39.391510 systemd-networkd[1462]: cilium_net: Gained IPv6LL Feb 9 19:04:39.697841 systemd-networkd[1462]: lxc_health: Link UP Feb 9 19:04:39.719339 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:04:39.719504 systemd-networkd[1462]: lxc_health: Gained carrier Feb 9 19:04:40.026039 systemd-networkd[1462]: lxc7562f5b1083d: Link UP Feb 9 19:04:40.035335 kernel: eth0: renamed from tmp27326 Feb 9 19:04:40.046408 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7562f5b1083d: link becomes ready Feb 9 19:04:40.047475 systemd-networkd[1462]: lxc7562f5b1083d: Gained carrier Feb 9 19:04:40.067330 systemd-networkd[1462]: lxc25f34083dca6: Link UP Feb 9 19:04:40.077114 kernel: eth0: renamed from tmpd2398 Feb 9 19:04:40.085403 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc25f34083dca6: link becomes ready Feb 9 19:04:40.088249 systemd-networkd[1462]: lxc25f34083dca6: Gained carrier Feb 9 19:04:40.735548 systemd-networkd[1462]: cilium_vxlan: Gained IPv6LL Feb 9 19:04:41.177483 kubelet[2411]: I0209 19:04:41.177438 2411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-n2hgq" podStartSLOduration=-9.223372008677385e+09 pod.CreationTimestamp="2024-02-09 19:04:13 +0000 UTC" firstStartedPulling="2024-02-09 19:04:15.236760109 +0000 UTC m=+14.326053593" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:37.33504086 +0000 UTC m=+36.424334344" watchObservedRunningTime="2024-02-09 19:04:41.177389567 +0000 UTC m=+40.266683051" Feb 9 19:04:41.311515 systemd-networkd[1462]: lxc_health: Gained IPv6LL Feb 9 19:04:41.567526 systemd-networkd[1462]: lxc7562f5b1083d: Gained IPv6LL Feb 9 19:04:41.887503 systemd-networkd[1462]: lxc25f34083dca6: Gained IPv6LL Feb 9 19:04:43.941191 env[1329]: time="2024-02-09T19:04:43.937265861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:43.941191 env[1329]: time="2024-02-09T19:04:43.937318361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:43.941191 env[1329]: time="2024-02-09T19:04:43.937342862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:43.941191 env[1329]: time="2024-02-09T19:04:43.937491762Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2398a037d930c48ea089b44e7452717db6226b605003fe5cd2d12a623cb6a58 pid=3593 runtime=io.containerd.runc.v2 Feb 9 19:04:43.950877 env[1329]: time="2024-02-09T19:04:43.947096214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:43.950877 env[1329]: time="2024-02-09T19:04:43.947437916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:43.950877 env[1329]: time="2024-02-09T19:04:43.947478916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:43.950877 env[1329]: time="2024-02-09T19:04:43.947614316Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27326fa496351a81eceafbbf829f16516dc9c073b74fd29038d5898a6b72394e pid=3608 runtime=io.containerd.runc.v2 Feb 9 19:04:43.972876 systemd[1]: Started cri-containerd-d2398a037d930c48ea089b44e7452717db6226b605003fe5cd2d12a623cb6a58.scope. Feb 9 19:04:43.978394 systemd[1]: run-containerd-runc-k8s.io-d2398a037d930c48ea089b44e7452717db6226b605003fe5cd2d12a623cb6a58-runc.foPCPW.mount: Deactivated successfully. Feb 9 19:04:43.995151 systemd[1]: Started cri-containerd-27326fa496351a81eceafbbf829f16516dc9c073b74fd29038d5898a6b72394e.scope. Feb 9 19:04:44.073891 env[1329]: time="2024-02-09T19:04:44.073839587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-6ttph,Uid:8de540d8-2586-4b3a-bee6-146df3aa8464,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2398a037d930c48ea089b44e7452717db6226b605003fe5cd2d12a623cb6a58\"" Feb 9 19:04:44.077486 env[1329]: time="2024-02-09T19:04:44.077392406Z" level=info msg="CreateContainer within sandbox \"d2398a037d930c48ea089b44e7452717db6226b605003fe5cd2d12a623cb6a58\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:04:44.113448 env[1329]: time="2024-02-09T19:04:44.113396296Z" level=info msg="CreateContainer within sandbox \"d2398a037d930c48ea089b44e7452717db6226b605003fe5cd2d12a623cb6a58\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8bc15816ec77c7e8bc292f22d10de9f85bf8c370d67668dbf98790ebe7878bcf\"" Feb 9 19:04:44.114478 env[1329]: time="2024-02-09T19:04:44.114441001Z" level=info msg="StartContainer for \"8bc15816ec77c7e8bc292f22d10de9f85bf8c370d67668dbf98790ebe7878bcf\"" Feb 9 19:04:44.123481 env[1329]: time="2024-02-09T19:04:44.123432649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-w5gxh,Uid:4c9c686f-7f6a-4599-8614-fcdba8e2c732,Namespace:kube-system,Attempt:0,} returns sandbox id \"27326fa496351a81eceafbbf829f16516dc9c073b74fd29038d5898a6b72394e\"" Feb 9 19:04:44.129612 env[1329]: time="2024-02-09T19:04:44.129300080Z" level=info msg="CreateContainer within sandbox \"27326fa496351a81eceafbbf829f16516dc9c073b74fd29038d5898a6b72394e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:04:44.155263 systemd[1]: Started cri-containerd-8bc15816ec77c7e8bc292f22d10de9f85bf8c370d67668dbf98790ebe7878bcf.scope. Feb 9 19:04:44.187026 env[1329]: time="2024-02-09T19:04:44.186975485Z" level=info msg="CreateContainer within sandbox \"27326fa496351a81eceafbbf829f16516dc9c073b74fd29038d5898a6b72394e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a1811bd62378be7fb995a07bc17c4ee9fc9686faab73b7c8df2aa434fcee0ba4\"" Feb 9 19:04:44.188898 env[1329]: time="2024-02-09T19:04:44.187741589Z" level=info msg="StartContainer for \"a1811bd62378be7fb995a07bc17c4ee9fc9686faab73b7c8df2aa434fcee0ba4\"" Feb 9 19:04:44.212303 env[1329]: time="2024-02-09T19:04:44.212145718Z" level=info msg="StartContainer for \"8bc15816ec77c7e8bc292f22d10de9f85bf8c370d67668dbf98790ebe7878bcf\" returns successfully" Feb 9 19:04:44.230225 systemd[1]: Started cri-containerd-a1811bd62378be7fb995a07bc17c4ee9fc9686faab73b7c8df2aa434fcee0ba4.scope. 
Feb 9 19:04:44.308214 env[1329]: time="2024-02-09T19:04:44.308154726Z" level=info msg="StartContainer for \"a1811bd62378be7fb995a07bc17c4ee9fc9686faab73b7c8df2aa434fcee0ba4\" returns successfully" Feb 9 19:04:44.353062 kubelet[2411]: I0209 19:04:44.353032 2411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-6ttph" podStartSLOduration=31.352872962 pod.CreationTimestamp="2024-02-09 19:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:44.352798362 +0000 UTC m=+43.442091846" watchObservedRunningTime="2024-02-09 19:04:44.352872962 +0000 UTC m=+43.442166446" Feb 9 19:04:45.355888 kubelet[2411]: I0209 19:04:45.355848 2411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-w5gxh" podStartSLOduration=32.355791746 pod.CreationTimestamp="2024-02-09 19:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:44.364755125 +0000 UTC m=+43.454048609" watchObservedRunningTime="2024-02-09 19:04:45.355791746 +0000 UTC m=+44.445085330" Feb 9 19:05:38.009255 systemd[1]: Started sshd@5-10.200.8.39:22-10.200.12.6:36378.service. Feb 9 19:05:38.619564 sshd[3855]: Accepted publickey for core from 10.200.12.6 port 36378 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:05:38.621142 sshd[3855]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:05:38.629667 systemd[1]: Started session-8.scope. Feb 9 19:05:38.630223 systemd-logind[1301]: New session 8 of user core. Feb 9 19:05:39.144450 sshd[3855]: pam_unix(sshd:session): session closed for user core Feb 9 19:05:39.148254 systemd[1]: sshd@5-10.200.8.39:22-10.200.12.6:36378.service: Deactivated successfully. Feb 9 19:05:39.149283 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 19:05:39.149810 systemd-logind[1301]: Session 8 logged out. Waiting for processes to exit. Feb 9 19:05:39.150576 systemd-logind[1301]: Removed session 8. Feb 9 19:05:44.251274 systemd[1]: Started sshd@6-10.200.8.39:22-10.200.12.6:36390.service. Feb 9 19:05:44.866906 sshd[3869]: Accepted publickey for core from 10.200.12.6 port 36390 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:05:44.868534 sshd[3869]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:05:44.874565 systemd-logind[1301]: New session 9 of user core. Feb 9 19:05:44.875103 systemd[1]: Started session-9.scope. Feb 9 19:05:45.370942 sshd[3869]: pam_unix(sshd:session): session closed for user core Feb 9 19:05:45.374470 systemd[1]: sshd@6-10.200.8.39:22-10.200.12.6:36390.service: Deactivated successfully. Feb 9 19:05:45.375776 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:05:45.376768 systemd-logind[1301]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:05:45.377573 systemd-logind[1301]: Removed session 9. Feb 9 19:05:50.488097 systemd[1]: Started sshd@7-10.200.8.39:22-10.200.12.6:59590.service. Feb 9 19:05:51.103786 sshd[3885]: Accepted publickey for core from 10.200.12.6 port 59590 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:05:51.105235 sshd[3885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:05:51.109105 systemd-logind[1301]: New session 10 of user core. 
Feb 9 19:05:51.111183 systemd[1]: Started session-10.scope. Feb 9 19:05:51.605299 sshd[3885]: pam_unix(sshd:session): session closed for user core Feb 9 19:05:51.608398 systemd[1]: sshd@7-10.200.8.39:22-10.200.12.6:59590.service: Deactivated successfully. Feb 9 19:05:51.609359 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 19:05:51.610039 systemd-logind[1301]: Session 10 logged out. Waiting for processes to exit. Feb 9 19:05:51.610862 systemd-logind[1301]: Removed session 10. Feb 9 19:05:56.710384 systemd[1]: Started sshd@8-10.200.8.39:22-10.200.12.6:59594.service. Feb 9 19:05:57.321397 sshd[3900]: Accepted publickey for core from 10.200.12.6 port 59594 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:05:57.322821 sshd[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:05:57.329817 systemd-logind[1301]: New session 11 of user core. Feb 9 19:05:57.330674 systemd[1]: Started session-11.scope. Feb 9 19:05:57.821930 sshd[3900]: pam_unix(sshd:session): session closed for user core Feb 9 19:05:57.825424 systemd[1]: sshd@8-10.200.8.39:22-10.200.12.6:59594.service: Deactivated successfully. Feb 9 19:05:57.826534 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 19:05:57.827544 systemd-logind[1301]: Session 11 logged out. Waiting for processes to exit. Feb 9 19:05:57.828601 systemd-logind[1301]: Removed session 11. Feb 9 19:06:02.928165 systemd[1]: Started sshd@9-10.200.8.39:22-10.200.12.6:52558.service. Feb 9 19:06:03.549250 sshd[3916]: Accepted publickey for core from 10.200.12.6 port 52558 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:03.550844 sshd[3916]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:03.555789 systemd[1]: Started session-12.scope. Feb 9 19:06:03.556220 systemd-logind[1301]: New session 12 of user core. Feb 9 19:06:04.053013 sshd[3916]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:04.056412 systemd[1]: sshd@9-10.200.8.39:22-10.200.12.6:52558.service: Deactivated successfully. Feb 9 19:06:04.057524 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 19:06:04.058402 systemd-logind[1301]: Session 12 logged out. Waiting for processes to exit. Feb 9 19:06:04.059344 systemd-logind[1301]: Removed session 12. Feb 9 19:06:04.159533 systemd[1]: Started sshd@10-10.200.8.39:22-10.200.12.6:52568.service. Feb 9 19:06:04.774803 sshd[3929]: Accepted publickey for core from 10.200.12.6 port 52568 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:04.776201 sshd[3929]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:04.781032 systemd-logind[1301]: New session 13 of user core. Feb 9 19:06:04.782017 systemd[1]: Started session-13.scope. Feb 9 19:06:06.054056 sshd[3929]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:06.058030 systemd[1]: sshd@10-10.200.8.39:22-10.200.12.6:52568.service: Deactivated successfully. Feb 9 19:06:06.058897 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 19:06:06.059871 systemd-logind[1301]: Session 13 logged out. Waiting for processes to exit. Feb 9 19:06:06.060860 systemd-logind[1301]: Removed session 13. Feb 9 19:06:06.157548 systemd[1]: Started sshd@11-10.200.8.39:22-10.200.12.6:52578.service. 
Feb 9 19:06:06.786388 sshd[3939]: Accepted publickey for core from 10.200.12.6 port 52578 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:06.787802 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:06.793241 systemd[1]: Started session-14.scope. Feb 9 19:06:06.793707 systemd-logind[1301]: New session 14 of user core. Feb 9 19:06:07.277839 sshd[3939]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:07.281020 systemd[1]: sshd@11-10.200.8.39:22-10.200.12.6:52578.service: Deactivated successfully. Feb 9 19:06:07.281961 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 19:06:07.282623 systemd-logind[1301]: Session 14 logged out. Waiting for processes to exit. Feb 9 19:06:07.283468 systemd-logind[1301]: Removed session 14. Feb 9 19:06:12.385258 systemd[1]: Started sshd@12-10.200.8.39:22-10.200.12.6:47456.service. Feb 9 19:06:13.000552 sshd[3951]: Accepted publickey for core from 10.200.12.6 port 47456 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:13.001980 sshd[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:13.007046 systemd[1]: Started session-15.scope. Feb 9 19:06:13.007512 systemd-logind[1301]: New session 15 of user core. Feb 9 19:06:13.501602 sshd[3951]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:13.505000 systemd[1]: sshd@12-10.200.8.39:22-10.200.12.6:47456.service: Deactivated successfully. Feb 9 19:06:13.506004 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 19:06:13.507188 systemd-logind[1301]: Session 15 logged out. Waiting for processes to exit. Feb 9 19:06:13.508039 systemd-logind[1301]: Removed session 15. Feb 9 19:06:18.607445 systemd[1]: Started sshd@13-10.200.8.39:22-10.200.12.6:50190.service. Feb 9 19:06:19.222670 sshd[3965]: Accepted publickey for core from 10.200.12.6 port 50190 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:19.223393 sshd[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:19.228027 systemd-logind[1301]: New session 16 of user core. Feb 9 19:06:19.229086 systemd[1]: Started session-16.scope. Feb 9 19:06:19.717205 sshd[3965]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:19.719688 systemd[1]: sshd@13-10.200.8.39:22-10.200.12.6:50190.service: Deactivated successfully. Feb 9 19:06:19.720647 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 19:06:19.721422 systemd-logind[1301]: Session 16 logged out. Waiting for processes to exit. Feb 9 19:06:19.722201 systemd-logind[1301]: Removed session 16. Feb 9 19:06:19.819995 systemd[1]: Started sshd@14-10.200.8.39:22-10.200.12.6:50196.service. Feb 9 19:06:20.434379 sshd[3977]: Accepted publickey for core from 10.200.12.6 port 50196 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:20.436066 sshd[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:20.441252 systemd[1]: Started session-17.scope. Feb 9 19:06:20.442225 systemd-logind[1301]: New session 17 of user core. Feb 9 19:06:21.014042 sshd[3977]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:21.016863 systemd[1]: sshd@14-10.200.8.39:22-10.200.12.6:50196.service: Deactivated successfully. Feb 9 19:06:21.018012 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:06:21.018708 systemd-logind[1301]: Session 17 logged out. Waiting for processes to exit. 
Feb 9 19:06:21.019520 systemd-logind[1301]: Removed session 17. Feb 9 19:06:21.119237 systemd[1]: Started sshd@15-10.200.8.39:22-10.200.12.6:50210.service. Feb 9 19:06:21.743575 sshd[3986]: Accepted publickey for core from 10.200.12.6 port 50210 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:21.745035 sshd[3986]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:21.749953 systemd[1]: Started session-18.scope. Feb 9 19:06:21.750600 systemd-logind[1301]: New session 18 of user core. Feb 9 19:06:23.286927 sshd[3986]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:23.289904 systemd[1]: sshd@15-10.200.8.39:22-10.200.12.6:50210.service: Deactivated successfully. Feb 9 19:06:23.291295 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 19:06:23.291362 systemd-logind[1301]: Session 18 logged out. Waiting for processes to exit. Feb 9 19:06:23.292518 systemd-logind[1301]: Removed session 18. Feb 9 19:06:23.392239 systemd[1]: Started sshd@16-10.200.8.39:22-10.200.12.6:50212.service. Feb 9 19:06:24.007104 sshd[4051]: Accepted publickey for core from 10.200.12.6 port 50212 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:24.008429 sshd[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:24.013079 systemd-logind[1301]: New session 19 of user core. Feb 9 19:06:24.013637 systemd[1]: Started session-19.scope. Feb 9 19:06:24.616656 sshd[4051]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:24.619636 systemd[1]: sshd@16-10.200.8.39:22-10.200.12.6:50212.service: Deactivated successfully. Feb 9 19:06:24.620905 systemd-logind[1301]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:06:24.620997 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:06:24.622358 systemd-logind[1301]: Removed session 19. Feb 9 19:06:24.725395 systemd[1]: Started sshd@17-10.200.8.39:22-10.200.12.6:50218.service. Feb 9 19:06:25.343117 sshd[4061]: Accepted publickey for core from 10.200.12.6 port 50218 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:25.344809 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:25.349949 systemd-logind[1301]: New session 20 of user core. Feb 9 19:06:25.351021 systemd[1]: Started session-20.scope. Feb 9 19:06:25.842927 sshd[4061]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:25.846407 systemd[1]: sshd@17-10.200.8.39:22-10.200.12.6:50218.service: Deactivated successfully. Feb 9 19:06:25.847594 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 19:06:25.848484 systemd-logind[1301]: Session 20 logged out. Waiting for processes to exit. Feb 9 19:06:25.849322 systemd-logind[1301]: Removed session 20. Feb 9 19:06:30.948153 systemd[1]: Started sshd@18-10.200.8.39:22-10.200.12.6:58198.service. Feb 9 19:06:31.580073 sshd[4100]: Accepted publickey for core from 10.200.12.6 port 58198 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:31.581469 sshd[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:31.586816 systemd[1]: Started session-21.scope. Feb 9 19:06:31.587456 systemd-logind[1301]: New session 21 of user core. Feb 9 19:06:32.085593 sshd[4100]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:32.088371 systemd[1]: sshd@18-10.200.8.39:22-10.200.12.6:58198.service: Deactivated successfully. 
Feb 9 19:06:32.089525 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 19:06:32.090373 systemd-logind[1301]: Session 21 logged out. Waiting for processes to exit. Feb 9 19:06:32.091710 systemd-logind[1301]: Removed session 21. Feb 9 19:06:37.193321 systemd[1]: Started sshd@19-10.200.8.39:22-10.200.12.6:38192.service. Feb 9 19:06:37.807769 sshd[4112]: Accepted publickey for core from 10.200.12.6 port 38192 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:37.809172 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:37.814296 systemd[1]: Started session-22.scope. Feb 9 19:06:37.815351 systemd-logind[1301]: New session 22 of user core. Feb 9 19:06:38.301220 sshd[4112]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:38.304553 systemd[1]: sshd@19-10.200.8.39:22-10.200.12.6:38192.service: Deactivated successfully. Feb 9 19:06:38.305512 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 19:06:38.306271 systemd-logind[1301]: Session 22 logged out. Waiting for processes to exit. Feb 9 19:06:38.307096 systemd-logind[1301]: Removed session 22. Feb 9 19:06:43.406668 systemd[1]: Started sshd@20-10.200.8.39:22-10.200.12.6:38196.service. Feb 9 19:06:44.026165 sshd[4124]: Accepted publickey for core from 10.200.12.6 port 38196 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:44.027880 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:44.033000 systemd[1]: Started session-23.scope. Feb 9 19:06:44.033515 systemd-logind[1301]: New session 23 of user core. Feb 9 19:06:44.536045 sshd[4124]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:44.539052 systemd[1]: sshd@20-10.200.8.39:22-10.200.12.6:38196.service: Deactivated successfully. Feb 9 19:06:44.540036 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 19:06:44.540748 systemd-logind[1301]: Session 23 logged out. Waiting for processes to exit. Feb 9 19:06:44.541788 systemd-logind[1301]: Removed session 23. Feb 9 19:06:44.639660 systemd[1]: Started sshd@21-10.200.8.39:22-10.200.12.6:38202.service. Feb 9 19:06:45.258211 sshd[4138]: Accepted publickey for core from 10.200.12.6 port 38202 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:45.259903 sshd[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:45.265463 systemd[1]: Started session-24.scope. Feb 9 19:06:45.265899 systemd-logind[1301]: New session 24 of user core. Feb 9 19:06:46.918997 systemd[1]: run-containerd-runc-k8s.io-d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4-runc.vYkBv0.mount: Deactivated successfully. Feb 9 19:06:46.922783 env[1329]: time="2024-02-09T19:06:46.922737597Z" level=info msg="StopContainer for \"5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf\" with timeout 30 (s)" Feb 9 19:06:46.926439 env[1329]: time="2024-02-09T19:06:46.926385036Z" level=info msg="Stop container \"5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf\" with signal terminated" Feb 9 19:06:46.943111 systemd[1]: cri-containerd-5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf.scope: Deactivated successfully. 
Feb 9 19:06:46.949625 env[1329]: time="2024-02-09T19:06:46.949533889Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:06:46.959723 env[1329]: time="2024-02-09T19:06:46.959681100Z" level=info msg="StopContainer for \"d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4\" with timeout 1 (s)" Feb 9 19:06:46.960175 env[1329]: time="2024-02-09T19:06:46.960118404Z" level=info msg="Stop container \"d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4\" with signal terminated" Feb 9 19:06:46.969647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf-rootfs.mount: Deactivated successfully. Feb 9 19:06:46.975340 systemd-networkd[1462]: lxc_health: Link DOWN Feb 9 19:06:46.975348 systemd-networkd[1462]: lxc_health: Lost carrier Feb 9 19:06:46.998640 systemd[1]: cri-containerd-d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4.scope: Deactivated successfully. Feb 9 19:06:46.998926 systemd[1]: cri-containerd-d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4.scope: Consumed 7.409s CPU time. Feb 9 19:06:47.017465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4-rootfs.mount: Deactivated successfully. Feb 9 19:06:47.067343 env[1329]: time="2024-02-09T19:06:47.067262068Z" level=info msg="shim disconnected" id=5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf Feb 9 19:06:47.067584 env[1329]: time="2024-02-09T19:06:47.067346669Z" level=warning msg="cleaning up after shim disconnected" id=5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf namespace=k8s.io Feb 9 19:06:47.067584 env[1329]: time="2024-02-09T19:06:47.067360869Z" level=info msg="cleaning up dead shim" Feb 9 19:06:47.067719 env[1329]: time="2024-02-09T19:06:47.067262068Z" level=info msg="shim disconnected" id=d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4 Feb 9 19:06:47.067800 env[1329]: time="2024-02-09T19:06:47.067721973Z" level=warning msg="cleaning up after shim disconnected" id=d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4 namespace=k8s.io Feb 9 19:06:47.067800 env[1329]: time="2024-02-09T19:06:47.067735273Z" level=info msg="cleaning up dead shim" Feb 9 19:06:47.080998 env[1329]: time="2024-02-09T19:06:47.080940016Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4205 runtime=io.containerd.runc.v2\n" Feb 9 19:06:47.082494 env[1329]: time="2024-02-09T19:06:47.082464932Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4206 runtime=io.containerd.runc.v2\n" Feb 9 19:06:47.087092 env[1329]: time="2024-02-09T19:06:47.087051482Z" level=info msg="StopContainer for \"5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf\" returns successfully" Feb 9 19:06:47.087711 env[1329]: time="2024-02-09T19:06:47.087661489Z" level=info msg="StopPodSandbox for \"c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a\"" Feb 9 19:06:47.087810 env[1329]: time="2024-02-09T19:06:47.087729889Z" level=info msg="Container to stop \"5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf\" must be in 
running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:06:47.088093 env[1329]: time="2024-02-09T19:06:47.088064093Z" level=info msg="StopContainer for \"d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4\" returns successfully" Feb 9 19:06:47.088908 env[1329]: time="2024-02-09T19:06:47.088618599Z" level=info msg="StopPodSandbox for \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\"" Feb 9 19:06:47.089131 env[1329]: time="2024-02-09T19:06:47.089105404Z" level=info msg="Container to stop \"81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:06:47.089243 env[1329]: time="2024-02-09T19:06:47.089217505Z" level=info msg="Container to stop \"a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:06:47.089345 env[1329]: time="2024-02-09T19:06:47.089318407Z" level=info msg="Container to stop \"037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:06:47.089438 env[1329]: time="2024-02-09T19:06:47.089418208Z" level=info msg="Container to stop \"e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:06:47.089560 env[1329]: time="2024-02-09T19:06:47.089541809Z" level=info msg="Container to stop \"d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:06:47.097021 systemd[1]: cri-containerd-d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc.scope: Deactivated successfully. Feb 9 19:06:47.097957 systemd[1]: cri-containerd-c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a.scope: Deactivated successfully. 
Feb 9 19:06:47.149153 env[1329]: time="2024-02-09T19:06:47.149117554Z" level=info msg="shim disconnected" id=d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc Feb 9 19:06:47.149441 env[1329]: time="2024-02-09T19:06:47.149414857Z" level=warning msg="cleaning up after shim disconnected" id=d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc namespace=k8s.io Feb 9 19:06:47.149441 env[1329]: time="2024-02-09T19:06:47.149436857Z" level=info msg="cleaning up dead shim" Feb 9 19:06:47.150200 env[1329]: time="2024-02-09T19:06:47.149100754Z" level=info msg="shim disconnected" id=c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a Feb 9 19:06:47.150200 env[1329]: time="2024-02-09T19:06:47.149979463Z" level=warning msg="cleaning up after shim disconnected" id=c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a namespace=k8s.io Feb 9 19:06:47.150200 env[1329]: time="2024-02-09T19:06:47.149996964Z" level=info msg="cleaning up dead shim" Feb 9 19:06:47.161162 env[1329]: time="2024-02-09T19:06:47.161123984Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4268 runtime=io.containerd.runc.v2\n" Feb 9 19:06:47.161485 env[1329]: time="2024-02-09T19:06:47.161453388Z" level=info msg="TearDown network for sandbox \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\" successfully" Feb 9 19:06:47.161485 env[1329]: time="2024-02-09T19:06:47.161479188Z" level=info msg="StopPodSandbox for \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\" returns successfully" Feb 9 19:06:47.162802 env[1329]: time="2024-02-09T19:06:47.162760502Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4269 runtime=io.containerd.runc.v2\n" Feb 9 19:06:47.163379 env[1329]: time="2024-02-09T19:06:47.163055705Z" level=info msg="TearDown network for sandbox \"c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a\" successfully" Feb 9 19:06:47.163379 env[1329]: time="2024-02-09T19:06:47.163083805Z" level=info msg="StopPodSandbox for \"c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a\" returns successfully" Feb 9 19:06:47.301412 kubelet[2411]: I0209 19:06:47.299089 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac78abb7-3216-4aa2-8ada-54fe26b03151-cilium-config-path\") pod \"ac78abb7-3216-4aa2-8ada-54fe26b03151\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " Feb 9 19:06:47.301412 kubelet[2411]: I0209 19:06:47.299171 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-hostproc\") pod \"ac78abb7-3216-4aa2-8ada-54fe26b03151\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " Feb 9 19:06:47.301412 kubelet[2411]: I0209 19:06:47.299211 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r422v\" (UniqueName: \"kubernetes.io/projected/ac78abb7-3216-4aa2-8ada-54fe26b03151-kube-api-access-r422v\") pod \"ac78abb7-3216-4aa2-8ada-54fe26b03151\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " Feb 9 19:06:47.301412 kubelet[2411]: I0209 19:06:47.299249 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsd2m\" (UniqueName: 
\"kubernetes.io/projected/646efd18-613e-44ca-95af-7144d8a6b0a4-kube-api-access-nsd2m\") pod \"646efd18-613e-44ca-95af-7144d8a6b0a4\" (UID: \"646efd18-613e-44ca-95af-7144d8a6b0a4\") " Feb 9 19:06:47.301412 kubelet[2411]: I0209 19:06:47.299281 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-lib-modules\") pod \"ac78abb7-3216-4aa2-8ada-54fe26b03151\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " Feb 9 19:06:47.301412 kubelet[2411]: I0209 19:06:47.299356 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-cilium-run\") pod \"ac78abb7-3216-4aa2-8ada-54fe26b03151\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " Feb 9 19:06:47.302181 kubelet[2411]: I0209 19:06:47.299392 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-etc-cni-netd\") pod \"ac78abb7-3216-4aa2-8ada-54fe26b03151\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " Feb 9 19:06:47.302181 kubelet[2411]: I0209 19:06:47.299422 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-xtables-lock\") pod \"ac78abb7-3216-4aa2-8ada-54fe26b03151\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " Feb 9 19:06:47.302181 kubelet[2411]: I0209 19:06:47.299451 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-bpf-maps\") pod \"ac78abb7-3216-4aa2-8ada-54fe26b03151\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " Feb 9 19:06:47.302181 kubelet[2411]: W0209 19:06:47.299428 2411 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/ac78abb7-3216-4aa2-8ada-54fe26b03151/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:06:47.302181 kubelet[2411]: I0209 19:06:47.299480 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-cni-path\") pod \"ac78abb7-3216-4aa2-8ada-54fe26b03151\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " Feb 9 19:06:47.302181 kubelet[2411]: I0209 19:06:47.299518 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-cilium-cgroup\") pod \"ac78abb7-3216-4aa2-8ada-54fe26b03151\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " Feb 9 19:06:47.302548 kubelet[2411]: I0209 19:06:47.299555 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ac78abb7-3216-4aa2-8ada-54fe26b03151-clustermesh-secrets\") pod \"ac78abb7-3216-4aa2-8ada-54fe26b03151\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " Feb 9 19:06:47.302548 kubelet[2411]: I0209 19:06:47.299589 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ac78abb7-3216-4aa2-8ada-54fe26b03151-hubble-tls\") pod \"ac78abb7-3216-4aa2-8ada-54fe26b03151\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " Feb 
9 19:06:47.302548 kubelet[2411]: I0209 19:06:47.299625 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/646efd18-613e-44ca-95af-7144d8a6b0a4-cilium-config-path\") pod \"646efd18-613e-44ca-95af-7144d8a6b0a4\" (UID: \"646efd18-613e-44ca-95af-7144d8a6b0a4\") " Feb 9 19:06:47.302548 kubelet[2411]: I0209 19:06:47.299661 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-host-proc-sys-kernel\") pod \"ac78abb7-3216-4aa2-8ada-54fe26b03151\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " Feb 9 19:06:47.302548 kubelet[2411]: I0209 19:06:47.299694 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-host-proc-sys-net\") pod \"ac78abb7-3216-4aa2-8ada-54fe26b03151\" (UID: \"ac78abb7-3216-4aa2-8ada-54fe26b03151\") " Feb 9 19:06:47.302548 kubelet[2411]: I0209 19:06:47.299751 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ac78abb7-3216-4aa2-8ada-54fe26b03151" (UID: "ac78abb7-3216-4aa2-8ada-54fe26b03151"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:47.302873 kubelet[2411]: I0209 19:06:47.299803 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-hostproc" (OuterVolumeSpecName: "hostproc") pod "ac78abb7-3216-4aa2-8ada-54fe26b03151" (UID: "ac78abb7-3216-4aa2-8ada-54fe26b03151"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:47.304606 kubelet[2411]: I0209 19:06:47.304548 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac78abb7-3216-4aa2-8ada-54fe26b03151-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ac78abb7-3216-4aa2-8ada-54fe26b03151" (UID: "ac78abb7-3216-4aa2-8ada-54fe26b03151"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:06:47.304851 kubelet[2411]: I0209 19:06:47.304811 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-cni-path" (OuterVolumeSpecName: "cni-path") pod "ac78abb7-3216-4aa2-8ada-54fe26b03151" (UID: "ac78abb7-3216-4aa2-8ada-54fe26b03151"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:47.305009 kubelet[2411]: I0209 19:06:47.304990 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ac78abb7-3216-4aa2-8ada-54fe26b03151" (UID: "ac78abb7-3216-4aa2-8ada-54fe26b03151"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:47.305155 kubelet[2411]: I0209 19:06:47.305136 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ac78abb7-3216-4aa2-8ada-54fe26b03151" (UID: "ac78abb7-3216-4aa2-8ada-54fe26b03151"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:47.305479 kubelet[2411]: I0209 19:06:47.305451 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ac78abb7-3216-4aa2-8ada-54fe26b03151" (UID: "ac78abb7-3216-4aa2-8ada-54fe26b03151"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:47.305665 kubelet[2411]: I0209 19:06:47.305643 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ac78abb7-3216-4aa2-8ada-54fe26b03151" (UID: "ac78abb7-3216-4aa2-8ada-54fe26b03151"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:47.305807 kubelet[2411]: I0209 19:06:47.305789 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ac78abb7-3216-4aa2-8ada-54fe26b03151" (UID: "ac78abb7-3216-4aa2-8ada-54fe26b03151"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:47.306240 kubelet[2411]: I0209 19:06:47.306201 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ac78abb7-3216-4aa2-8ada-54fe26b03151" (UID: "ac78abb7-3216-4aa2-8ada-54fe26b03151"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:47.306887 kubelet[2411]: W0209 19:06:47.306835 2411 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/646efd18-613e-44ca-95af-7144d8a6b0a4/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:06:47.308155 kubelet[2411]: I0209 19:06:47.308102 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ac78abb7-3216-4aa2-8ada-54fe26b03151" (UID: "ac78abb7-3216-4aa2-8ada-54fe26b03151"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:47.310250 kubelet[2411]: I0209 19:06:47.310214 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/646efd18-613e-44ca-95af-7144d8a6b0a4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "646efd18-613e-44ca-95af-7144d8a6b0a4" (UID: "646efd18-613e-44ca-95af-7144d8a6b0a4"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:06:47.310613 kubelet[2411]: I0209 19:06:47.310579 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac78abb7-3216-4aa2-8ada-54fe26b03151-kube-api-access-r422v" (OuterVolumeSpecName: "kube-api-access-r422v") pod "ac78abb7-3216-4aa2-8ada-54fe26b03151" (UID: "ac78abb7-3216-4aa2-8ada-54fe26b03151"). InnerVolumeSpecName "kube-api-access-r422v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:06:47.311115 kubelet[2411]: I0209 19:06:47.311082 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646efd18-613e-44ca-95af-7144d8a6b0a4-kube-api-access-nsd2m" (OuterVolumeSpecName: "kube-api-access-nsd2m") pod "646efd18-613e-44ca-95af-7144d8a6b0a4" (UID: "646efd18-613e-44ca-95af-7144d8a6b0a4"). InnerVolumeSpecName "kube-api-access-nsd2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:06:47.314106 kubelet[2411]: I0209 19:06:47.314079 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac78abb7-3216-4aa2-8ada-54fe26b03151-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ac78abb7-3216-4aa2-8ada-54fe26b03151" (UID: "ac78abb7-3216-4aa2-8ada-54fe26b03151"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:06:47.314509 kubelet[2411]: I0209 19:06:47.314483 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac78abb7-3216-4aa2-8ada-54fe26b03151-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ac78abb7-3216-4aa2-8ada-54fe26b03151" (UID: "ac78abb7-3216-4aa2-8ada-54fe26b03151"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:06:47.400998 kubelet[2411]: I0209 19:06:47.400942 2411 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-bpf-maps\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:47.400998 kubelet[2411]: I0209 19:06:47.400988 2411 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-etc-cni-netd\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:47.400998 kubelet[2411]: I0209 19:06:47.401007 2411 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-xtables-lock\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:47.401376 kubelet[2411]: I0209 19:06:47.401026 2411 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-cilium-cgroup\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:47.401376 kubelet[2411]: I0209 19:06:47.401042 2411 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-cni-path\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:47.401376 kubelet[2411]: I0209 19:06:47.401062 2411 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:47.401376 kubelet[2411]: I0209 19:06:47.401110 2411 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-host-proc-sys-net\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:47.401376 kubelet[2411]: I0209 19:06:47.401127 2411 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ac78abb7-3216-4aa2-8ada-54fe26b03151-clustermesh-secrets\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:47.401376 kubelet[2411]: I0209 19:06:47.401144 2411 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ac78abb7-3216-4aa2-8ada-54fe26b03151-hubble-tls\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:47.401376 kubelet[2411]: I0209 19:06:47.401161 2411 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/646efd18-613e-44ca-95af-7144d8a6b0a4-cilium-config-path\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:47.401376 kubelet[2411]: I0209 19:06:47.401179 2411 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-hostproc\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:47.401636 kubelet[2411]: I0209 19:06:47.401197 2411 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-r422v\" (UniqueName: \"kubernetes.io/projected/ac78abb7-3216-4aa2-8ada-54fe26b03151-kube-api-access-r422v\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:47.401636 kubelet[2411]: I0209 19:06:47.401214 2411 
reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac78abb7-3216-4aa2-8ada-54fe26b03151-cilium-config-path\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:47.401636 kubelet[2411]: I0209 19:06:47.401230 2411 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-nsd2m\" (UniqueName: \"kubernetes.io/projected/646efd18-613e-44ca-95af-7144d8a6b0a4-kube-api-access-nsd2m\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:47.401636 kubelet[2411]: I0209 19:06:47.401250 2411 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-lib-modules\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:47.401636 kubelet[2411]: I0209 19:06:47.401272 2411 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ac78abb7-3216-4aa2-8ada-54fe26b03151-cilium-run\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:47.571992 kubelet[2411]: I0209 19:06:47.570942 2411 scope.go:115] "RemoveContainer" containerID="d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4" Feb 9 19:06:47.573888 env[1329]: time="2024-02-09T19:06:47.573826553Z" level=info msg="RemoveContainer for \"d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4\"" Feb 9 19:06:47.580582 systemd[1]: Removed slice kubepods-burstable-podac78abb7_3216_4aa2_8ada_54fe26b03151.slice. Feb 9 19:06:47.580730 systemd[1]: kubepods-burstable-podac78abb7_3216_4aa2_8ada_54fe26b03151.slice: Consumed 7.507s CPU time. Feb 9 19:06:47.583681 systemd[1]: Removed slice kubepods-besteffort-pod646efd18_613e_44ca_95af_7144d8a6b0a4.slice. 
Feb 9 19:06:47.591909 env[1329]: time="2024-02-09T19:06:47.591758647Z" level=info msg="RemoveContainer for \"d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4\" returns successfully" Feb 9 19:06:47.592056 kubelet[2411]: I0209 19:06:47.592032 2411 scope.go:115] "RemoveContainer" containerID="81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db" Feb 9 19:06:47.593878 env[1329]: time="2024-02-09T19:06:47.593844870Z" level=info msg="RemoveContainer for \"81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db\"" Feb 9 19:06:47.605456 env[1329]: time="2024-02-09T19:06:47.605365594Z" level=info msg="RemoveContainer for \"81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db\" returns successfully" Feb 9 19:06:47.605675 kubelet[2411]: I0209 19:06:47.605656 2411 scope.go:115] "RemoveContainer" containerID="a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182" Feb 9 19:06:47.608960 env[1329]: time="2024-02-09T19:06:47.608695930Z" level=info msg="RemoveContainer for \"a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182\"" Feb 9 19:06:47.626403 env[1329]: time="2024-02-09T19:06:47.626250620Z" level=info msg="RemoveContainer for \"a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182\" returns successfully" Feb 9 19:06:47.626541 kubelet[2411]: I0209 19:06:47.626507 2411 scope.go:115] "RemoveContainer" containerID="e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf" Feb 9 19:06:47.627925 env[1329]: time="2024-02-09T19:06:47.627881838Z" level=info msg="RemoveContainer for \"e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf\"" Feb 9 19:06:47.641008 env[1329]: time="2024-02-09T19:06:47.640968080Z" level=info msg="RemoveContainer for \"e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf\" returns successfully" Feb 9 19:06:47.641227 kubelet[2411]: I0209 19:06:47.641162 2411 scope.go:115] "RemoveContainer" containerID="037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc" Feb 9 19:06:47.642194 env[1329]: time="2024-02-09T19:06:47.642164993Z" level=info msg="RemoveContainer for \"037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc\"" Feb 9 19:06:47.653728 env[1329]: time="2024-02-09T19:06:47.653691818Z" level=info msg="RemoveContainer for \"037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc\" returns successfully" Feb 9 19:06:47.653953 kubelet[2411]: I0209 19:06:47.653924 2411 scope.go:115] "RemoveContainer" containerID="d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4" Feb 9 19:06:47.654236 env[1329]: time="2024-02-09T19:06:47.654156323Z" level=error msg="ContainerStatus for \"d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4\": not found" Feb 9 19:06:47.654431 kubelet[2411]: E0209 19:06:47.654414 2411 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4\": not found" containerID="d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4" Feb 9 19:06:47.654553 kubelet[2411]: I0209 19:06:47.654533 2411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4} err="failed to get container 
status \"d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4\": rpc error: code = NotFound desc = an error occurred when try to find container \"d002ea37bd663ca01eae84c8c168033870d56cf1ea565cfbb59a003495f111e4\": not found" Feb 9 19:06:47.654630 kubelet[2411]: I0209 19:06:47.654556 2411 scope.go:115] "RemoveContainer" containerID="81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db" Feb 9 19:06:47.654783 env[1329]: time="2024-02-09T19:06:47.654731529Z" level=error msg="ContainerStatus for \"81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db\": not found" Feb 9 19:06:47.654915 kubelet[2411]: E0209 19:06:47.654897 2411 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db\": not found" containerID="81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db" Feb 9 19:06:47.654988 kubelet[2411]: I0209 19:06:47.654928 2411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db} err="failed to get container status \"81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db\": rpc error: code = NotFound desc = an error occurred when try to find container \"81d2028ba49bd2ffd739f1568a8843bc50484faf39fd915d29e533ba2cd328db\": not found" Feb 9 19:06:47.654988 kubelet[2411]: I0209 19:06:47.654943 2411 scope.go:115] "RemoveContainer" containerID="a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182" Feb 9 19:06:47.655178 env[1329]: time="2024-02-09T19:06:47.655121933Z" level=error msg="ContainerStatus for \"a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182\": not found" Feb 9 19:06:47.655285 kubelet[2411]: E0209 19:06:47.655268 2411 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182\": not found" containerID="a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182" Feb 9 19:06:47.655373 kubelet[2411]: I0209 19:06:47.655299 2411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182} err="failed to get container status \"a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182\": rpc error: code = NotFound desc = an error occurred when try to find container \"a75293de3a241569627191ad4ae7388a6da4be3a5bf01c7f92f1449e6a4c0182\": not found" Feb 9 19:06:47.655373 kubelet[2411]: I0209 19:06:47.655337 2411 scope.go:115] "RemoveContainer" containerID="e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf" Feb 9 19:06:47.655547 env[1329]: time="2024-02-09T19:06:47.655500237Z" level=error msg="ContainerStatus for \"e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf\": not found" 
Feb 9 19:06:47.655667 kubelet[2411]: E0209 19:06:47.655650 2411 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf\": not found" containerID="e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf" Feb 9 19:06:47.655735 kubelet[2411]: I0209 19:06:47.655679 2411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf} err="failed to get container status \"e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf\": rpc error: code = NotFound desc = an error occurred when try to find container \"e09d84add8840ac5f75e69ffbf10dc8cc897206555163c4aad0bc8f4ab553fcf\": not found" Feb 9 19:06:47.655735 kubelet[2411]: I0209 19:06:47.655692 2411 scope.go:115] "RemoveContainer" containerID="037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc" Feb 9 19:06:47.655898 env[1329]: time="2024-02-09T19:06:47.655852841Z" level=error msg="ContainerStatus for \"037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc\": not found" Feb 9 19:06:47.656012 kubelet[2411]: E0209 19:06:47.655998 2411 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc\": not found" containerID="037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc" Feb 9 19:06:47.656082 kubelet[2411]: I0209 19:06:47.656028 2411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc} err="failed to get container status \"037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc\": rpc error: code = NotFound desc = an error occurred when try to find container \"037d09d21c988a5b592ed29334c3da96d9f0317be25d5a2f7bc472f73d3b3bdc\": not found" Feb 9 19:06:47.656082 kubelet[2411]: I0209 19:06:47.656043 2411 scope.go:115] "RemoveContainer" containerID="5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf" Feb 9 19:06:47.657008 env[1329]: time="2024-02-09T19:06:47.656980953Z" level=info msg="RemoveContainer for \"5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf\"" Feb 9 19:06:47.665780 env[1329]: time="2024-02-09T19:06:47.665750048Z" level=info msg="RemoveContainer for \"5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf\" returns successfully" Feb 9 19:06:47.665940 kubelet[2411]: I0209 19:06:47.665921 2411 scope.go:115] "RemoveContainer" containerID="5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf" Feb 9 19:06:47.666270 env[1329]: time="2024-02-09T19:06:47.666181753Z" level=error msg="ContainerStatus for \"5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf\": not found" Feb 9 19:06:47.666372 kubelet[2411]: E0209 19:06:47.666348 2411 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf\": not found" containerID="5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf" Feb 9 19:06:47.666422 kubelet[2411]: I0209 19:06:47.666376 2411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf} err="failed to get container status \"5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a4e0d1a0d31002316e72ed580df106cff716859ecfd5014fc09a01f0bf05acf\": not found" Feb 9 19:06:47.911684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc-rootfs.mount: Deactivated successfully. Feb 9 19:06:47.911816 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc-shm.mount: Deactivated successfully. Feb 9 19:06:47.911892 systemd[1]: var-lib-kubelet-pods-ac78abb7\x2d3216\x2d4aa2\x2d8ada\x2d54fe26b03151-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:06:47.911975 systemd[1]: var-lib-kubelet-pods-ac78abb7\x2d3216\x2d4aa2\x2d8ada\x2d54fe26b03151-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:06:47.912052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a-rootfs.mount: Deactivated successfully. Feb 9 19:06:47.912124 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a-shm.mount: Deactivated successfully. Feb 9 19:06:47.912206 systemd[1]: var-lib-kubelet-pods-646efd18\x2d613e\x2d44ca\x2d95af\x2d7144d8a6b0a4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnsd2m.mount: Deactivated successfully. Feb 9 19:06:47.912282 systemd[1]: var-lib-kubelet-pods-ac78abb7\x2d3216\x2d4aa2\x2d8ada\x2d54fe26b03151-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr422v.mount: Deactivated successfully. Feb 9 19:06:48.951738 sshd[4138]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:48.954993 systemd[1]: sshd@21-10.200.8.39:22-10.200.12.6:38202.service: Deactivated successfully. Feb 9 19:06:48.955945 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 19:06:48.956614 systemd-logind[1301]: Session 24 logged out. Waiting for processes to exit. Feb 9 19:06:48.957461 systemd-logind[1301]: Removed session 24. Feb 9 19:06:49.080544 systemd[1]: Started sshd@22-10.200.8.39:22-10.200.12.6:42690.service. Feb 9 19:06:49.221247 kubelet[2411]: I0209 19:06:49.221131 2411 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=646efd18-613e-44ca-95af-7144d8a6b0a4 path="/var/lib/kubelet/pods/646efd18-613e-44ca-95af-7144d8a6b0a4/volumes" Feb 9 19:06:49.222094 kubelet[2411]: I0209 19:06:49.222061 2411 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ac78abb7-3216-4aa2-8ada-54fe26b03151 path="/var/lib/kubelet/pods/ac78abb7-3216-4aa2-8ada-54fe26b03151/volumes" Feb 9 19:06:49.824717 sshd[4301]: Accepted publickey for core from 10.200.12.6 port 42690 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:49.826242 sshd[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:49.831692 systemd-logind[1301]: New session 25 of user core. 
Feb 9 19:06:49.832217 systemd[1]: Started session-25.scope. Feb 9 19:06:50.768128 kubelet[2411]: I0209 19:06:50.768076 2411 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:06:50.768623 kubelet[2411]: E0209 19:06:50.768158 2411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac78abb7-3216-4aa2-8ada-54fe26b03151" containerName="apply-sysctl-overwrites" Feb 9 19:06:50.768623 kubelet[2411]: E0209 19:06:50.768173 2411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac78abb7-3216-4aa2-8ada-54fe26b03151" containerName="cilium-agent" Feb 9 19:06:50.768623 kubelet[2411]: E0209 19:06:50.768184 2411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="646efd18-613e-44ca-95af-7144d8a6b0a4" containerName="cilium-operator" Feb 9 19:06:50.768623 kubelet[2411]: E0209 19:06:50.768192 2411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac78abb7-3216-4aa2-8ada-54fe26b03151" containerName="mount-cgroup" Feb 9 19:06:50.768623 kubelet[2411]: E0209 19:06:50.768201 2411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac78abb7-3216-4aa2-8ada-54fe26b03151" containerName="mount-bpf-fs" Feb 9 19:06:50.768623 kubelet[2411]: E0209 19:06:50.768209 2411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac78abb7-3216-4aa2-8ada-54fe26b03151" containerName="clean-cilium-state" Feb 9 19:06:50.768623 kubelet[2411]: I0209 19:06:50.768244 2411 memory_manager.go:346] "RemoveStaleState removing state" podUID="646efd18-613e-44ca-95af-7144d8a6b0a4" containerName="cilium-operator" Feb 9 19:06:50.768623 kubelet[2411]: I0209 19:06:50.768255 2411 memory_manager.go:346] "RemoveStaleState removing state" podUID="ac78abb7-3216-4aa2-8ada-54fe26b03151" containerName="cilium-agent" Feb 9 19:06:50.774912 systemd[1]: Created slice kubepods-burstable-pod9d4064a8_504e_4859_8aac_1c9267ee63d0.slice. 
Feb 9 19:06:50.782393 kubelet[2411]: W0209 19:06:50.782364 2411 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-2006cf4d94" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2006cf4d94' and this object Feb 9 19:06:50.782534 kubelet[2411]: E0209 19:06:50.782399 2411 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-2006cf4d94" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2006cf4d94' and this object Feb 9 19:06:50.782534 kubelet[2411]: W0209 19:06:50.782481 2411 reflector.go:424] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-2006cf4d94" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2006cf4d94' and this object Feb 9 19:06:50.782534 kubelet[2411]: E0209 19:06:50.782511 2411 reflector.go:140] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-2006cf4d94" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2006cf4d94' and this object Feb 9 19:06:50.782690 kubelet[2411]: W0209 19:06:50.782553 2411 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-2006cf4d94" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2006cf4d94' and this object Feb 9 19:06:50.782690 kubelet[2411]: E0209 19:06:50.782564 2411 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-2006cf4d94" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2006cf4d94' and this object Feb 9 19:06:50.782690 kubelet[2411]: W0209 19:06:50.782628 2411 reflector.go:424] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.2-a-2006cf4d94" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2006cf4d94' and this object Feb 9 19:06:50.782690 kubelet[2411]: E0209 19:06:50.782641 2411 reflector.go:140] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.2-a-2006cf4d94" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-2006cf4d94' and this object Feb 9 19:06:50.820929 kubelet[2411]: I0209 19:06:50.820872 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-cni-path\") pod 
\"cilium-xh5dd\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " pod="kube-system/cilium-xh5dd" Feb 9 19:06:50.821136 kubelet[2411]: I0209 19:06:50.820985 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9d4064a8-504e-4859-8aac-1c9267ee63d0-cilium-ipsec-secrets\") pod \"cilium-xh5dd\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " pod="kube-system/cilium-xh5dd" Feb 9 19:06:50.821136 kubelet[2411]: I0209 19:06:50.821021 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-xtables-lock\") pod \"cilium-xh5dd\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " pod="kube-system/cilium-xh5dd" Feb 9 19:06:50.821136 kubelet[2411]: I0209 19:06:50.821136 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d4064a8-504e-4859-8aac-1c9267ee63d0-clustermesh-secrets\") pod \"cilium-xh5dd\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " pod="kube-system/cilium-xh5dd" Feb 9 19:06:50.821293 kubelet[2411]: I0209 19:06:50.821168 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d4064a8-504e-4859-8aac-1c9267ee63d0-cilium-config-path\") pod \"cilium-xh5dd\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " pod="kube-system/cilium-xh5dd" Feb 9 19:06:50.821293 kubelet[2411]: I0209 19:06:50.821245 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-host-proc-sys-kernel\") pod \"cilium-xh5dd\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " pod="kube-system/cilium-xh5dd" Feb 9 19:06:50.821414 kubelet[2411]: I0209 19:06:50.821346 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-bpf-maps\") pod \"cilium-xh5dd\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " pod="kube-system/cilium-xh5dd" Feb 9 19:06:50.821414 kubelet[2411]: I0209 19:06:50.821382 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-lib-modules\") pod \"cilium-xh5dd\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " pod="kube-system/cilium-xh5dd" Feb 9 19:06:50.821506 kubelet[2411]: I0209 19:06:50.821431 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-hostproc\") pod \"cilium-xh5dd\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " pod="kube-system/cilium-xh5dd" Feb 9 19:06:50.821506 kubelet[2411]: I0209 19:06:50.821462 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-cilium-run\") pod \"cilium-xh5dd\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " pod="kube-system/cilium-xh5dd" Feb 9 19:06:50.821590 kubelet[2411]: I0209 19:06:50.821571 2411 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-cilium-cgroup\") pod \"cilium-xh5dd\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " pod="kube-system/cilium-xh5dd" Feb 9 19:06:50.821651 kubelet[2411]: I0209 19:06:50.821603 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d4064a8-504e-4859-8aac-1c9267ee63d0-hubble-tls\") pod \"cilium-xh5dd\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " pod="kube-system/cilium-xh5dd" Feb 9 19:06:50.821708 kubelet[2411]: I0209 19:06:50.821670 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-etc-cni-netd\") pod \"cilium-xh5dd\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " pod="kube-system/cilium-xh5dd" Feb 9 19:06:50.821754 kubelet[2411]: I0209 19:06:50.821727 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72t4t\" (UniqueName: \"kubernetes.io/projected/9d4064a8-504e-4859-8aac-1c9267ee63d0-kube-api-access-72t4t\") pod \"cilium-xh5dd\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " pod="kube-system/cilium-xh5dd" Feb 9 19:06:50.821799 kubelet[2411]: I0209 19:06:50.821759 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-host-proc-sys-net\") pod \"cilium-xh5dd\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " pod="kube-system/cilium-xh5dd" Feb 9 19:06:50.835922 sshd[4301]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:50.840451 systemd[1]: sshd@22-10.200.8.39:22-10.200.12.6:42690.service: Deactivated successfully. Feb 9 19:06:50.841880 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 19:06:50.843231 systemd-logind[1301]: Session 25 logged out. Waiting for processes to exit. Feb 9 19:06:50.844729 systemd-logind[1301]: Removed session 25. Feb 9 19:06:51.096687 systemd[1]: Started sshd@23-10.200.8.39:22-10.200.12.6:42704.service. Feb 9 19:06:51.197703 kubelet[2411]: E0209 19:06:51.197669 2411 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:06:51.782542 sshd[4312]: Accepted publickey for core from 10.200.12.6 port 42704 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:51.783959 sshd[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:51.788960 systemd[1]: Started session-26.scope. Feb 9 19:06:51.789741 systemd-logind[1301]: New session 26 of user core. 
Feb 9 19:06:51.923422 kubelet[2411]: E0209 19:06:51.923356 2411 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Feb 9 19:06:51.923422 kubelet[2411]: E0209 19:06:51.923404 2411 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-xh5dd: failed to sync secret cache: timed out waiting for the condition Feb 9 19:06:51.924249 kubelet[2411]: E0209 19:06:51.923487 2411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d4064a8-504e-4859-8aac-1c9267ee63d0-hubble-tls podName:9d4064a8-504e-4859-8aac-1c9267ee63d0 nodeName:}" failed. No retries permitted until 2024-02-09 19:06:52.423459317 +0000 UTC m=+171.512752801 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/9d4064a8-504e-4859-8aac-1c9267ee63d0-hubble-tls") pod "cilium-xh5dd" (UID: "9d4064a8-504e-4859-8aac-1c9267ee63d0") : failed to sync secret cache: timed out waiting for the condition Feb 9 19:06:51.924249 kubelet[2411]: E0209 19:06:51.923354 2411 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 9 19:06:51.924249 kubelet[2411]: E0209 19:06:51.924041 2411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9d4064a8-504e-4859-8aac-1c9267ee63d0-cilium-config-path podName:9d4064a8-504e-4859-8aac-1c9267ee63d0 nodeName:}" failed. No retries permitted until 2024-02-09 19:06:52.424014823 +0000 UTC m=+171.513308407 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/9d4064a8-504e-4859-8aac-1c9267ee63d0-cilium-config-path") pod "cilium-xh5dd" (UID: "9d4064a8-504e-4859-8aac-1c9267ee63d0") : failed to sync configmap cache: timed out waiting for the condition Feb 9 19:06:52.358103 sshd[4312]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:52.360964 systemd[1]: sshd@23-10.200.8.39:22-10.200.12.6:42704.service: Deactivated successfully. Feb 9 19:06:52.362301 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 19:06:52.362355 systemd-logind[1301]: Session 26 logged out. Waiting for processes to exit. Feb 9 19:06:52.363701 systemd-logind[1301]: Removed session 26. Feb 9 19:06:52.475392 systemd[1]: Started sshd@24-10.200.8.39:22-10.200.12.6:42712.service. Feb 9 19:06:52.579554 env[1329]: time="2024-02-09T19:06:52.579491582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xh5dd,Uid:9d4064a8-504e-4859-8aac-1c9267ee63d0,Namespace:kube-system,Attempt:0,}" Feb 9 19:06:52.627413 env[1329]: time="2024-02-09T19:06:52.626789676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:06:52.627413 env[1329]: time="2024-02-09T19:06:52.626843277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:06:52.627413 env[1329]: time="2024-02-09T19:06:52.626859677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:06:52.627815 env[1329]: time="2024-02-09T19:06:52.627761287Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc pid=4336 runtime=io.containerd.runc.v2 Feb 9 19:06:52.653845 systemd[1]: Started cri-containerd-b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc.scope. Feb 9 19:06:52.678620 env[1329]: time="2024-02-09T19:06:52.678550618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xh5dd,Uid:9d4064a8-504e-4859-8aac-1c9267ee63d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc\"" Feb 9 19:06:52.681955 env[1329]: time="2024-02-09T19:06:52.681926953Z" level=info msg="CreateContainer within sandbox \"b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:06:52.718260 env[1329]: time="2024-02-09T19:06:52.718216832Z" level=info msg="CreateContainer within sandbox \"b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"13a2a3c38d738ae99ba315a19bee01b12aaeb2a7212dd81d26ebf38f8873bc1a\"" Feb 9 19:06:52.719093 env[1329]: time="2024-02-09T19:06:52.719060441Z" level=info msg="StartContainer for \"13a2a3c38d738ae99ba315a19bee01b12aaeb2a7212dd81d26ebf38f8873bc1a\"" Feb 9 19:06:52.736947 systemd[1]: Started cri-containerd-13a2a3c38d738ae99ba315a19bee01b12aaeb2a7212dd81d26ebf38f8873bc1a.scope. Feb 9 19:06:52.758948 systemd[1]: cri-containerd-13a2a3c38d738ae99ba315a19bee01b12aaeb2a7212dd81d26ebf38f8873bc1a.scope: Deactivated successfully. 
Feb 9 19:06:52.836181 env[1329]: time="2024-02-09T19:06:52.836092165Z" level=info msg="shim disconnected" id=13a2a3c38d738ae99ba315a19bee01b12aaeb2a7212dd81d26ebf38f8873bc1a Feb 9 19:06:52.836181 env[1329]: time="2024-02-09T19:06:52.836171466Z" level=warning msg="cleaning up after shim disconnected" id=13a2a3c38d738ae99ba315a19bee01b12aaeb2a7212dd81d26ebf38f8873bc1a namespace=k8s.io Feb 9 19:06:52.836181 env[1329]: time="2024-02-09T19:06:52.836186266Z" level=info msg="cleaning up dead shim" Feb 9 19:06:52.845860 env[1329]: time="2024-02-09T19:06:52.845814666Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4396 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:06:52Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/13a2a3c38d738ae99ba315a19bee01b12aaeb2a7212dd81d26ebf38f8873bc1a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:06:52.846192 env[1329]: time="2024-02-09T19:06:52.846086069Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Feb 9 19:06:52.846561 env[1329]: time="2024-02-09T19:06:52.846508574Z" level=error msg="Failed to pipe stdout of container \"13a2a3c38d738ae99ba315a19bee01b12aaeb2a7212dd81d26ebf38f8873bc1a\"" error="reading from a closed fifo" Feb 9 19:06:52.846736 env[1329]: time="2024-02-09T19:06:52.846698776Z" level=error msg="Failed to pipe stderr of container \"13a2a3c38d738ae99ba315a19bee01b12aaeb2a7212dd81d26ebf38f8873bc1a\"" error="reading from a closed fifo" Feb 9 19:06:52.850983 env[1329]: time="2024-02-09T19:06:52.850911520Z" level=error msg="StartContainer for \"13a2a3c38d738ae99ba315a19bee01b12aaeb2a7212dd81d26ebf38f8873bc1a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:06:52.851254 kubelet[2411]: E0209 19:06:52.851220 2411 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="13a2a3c38d738ae99ba315a19bee01b12aaeb2a7212dd81d26ebf38f8873bc1a" Feb 9 19:06:52.851436 kubelet[2411]: E0209 19:06:52.851416 2411 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:06:52.851436 kubelet[2411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:06:52.851436 kubelet[2411]: rm /hostbin/cilium-mount Feb 9 19:06:52.851436 kubelet[2411]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-72t4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-xh5dd_kube-system(9d4064a8-504e-4859-8aac-1c9267ee63d0): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:06:52.852887 kubelet[2411]: E0209 19:06:52.851469 2411 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xh5dd" podUID=9d4064a8-504e-4859-8aac-1c9267ee63d0 Feb 9 19:06:53.173482 sshd[4328]: Accepted publickey for core from 10.200.12.6 port 42712 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:53.175268 sshd[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:53.183034 systemd[1]: Started session-27.scope. Feb 9 19:06:53.183904 systemd-logind[1301]: New session 27 of user core. Feb 9 19:06:53.445837 systemd[1]: run-containerd-runc-k8s.io-b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc-runc.qgOcbh.mount: Deactivated successfully. Feb 9 19:06:53.591585 env[1329]: time="2024-02-09T19:06:53.590641012Z" level=info msg="StopPodSandbox for \"b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc\"" Feb 9 19:06:53.591585 env[1329]: time="2024-02-09T19:06:53.590721813Z" level=info msg="Container to stop \"13a2a3c38d738ae99ba315a19bee01b12aaeb2a7212dd81d26ebf38f8873bc1a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:06:53.593249 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc-shm.mount: Deactivated successfully. Feb 9 19:06:53.612874 systemd[1]: cri-containerd-b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc.scope: Deactivated successfully. 
Feb 9 19:06:53.641399 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc-rootfs.mount: Deactivated successfully. Feb 9 19:06:53.660091 env[1329]: time="2024-02-09T19:06:53.660035233Z" level=info msg="shim disconnected" id=b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc Feb 9 19:06:53.660091 env[1329]: time="2024-02-09T19:06:53.660098133Z" level=warning msg="cleaning up after shim disconnected" id=b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc namespace=k8s.io Feb 9 19:06:53.660446 env[1329]: time="2024-02-09T19:06:53.660111333Z" level=info msg="cleaning up dead shim" Feb 9 19:06:53.670527 env[1329]: time="2024-02-09T19:06:53.670486241Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4435 runtime=io.containerd.runc.v2\n" Feb 9 19:06:53.670801 env[1329]: time="2024-02-09T19:06:53.670771344Z" level=info msg="TearDown network for sandbox \"b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc\" successfully" Feb 9 19:06:53.670801 env[1329]: time="2024-02-09T19:06:53.670797744Z" level=info msg="StopPodSandbox for \"b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc\" returns successfully" Feb 9 19:06:53.748930 kubelet[2411]: I0209 19:06:53.748294 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-host-proc-sys-net\") pod \"9d4064a8-504e-4859-8aac-1c9267ee63d0\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " Feb 9 19:06:53.748930 kubelet[2411]: I0209 19:06:53.748369 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-cni-path\") pod \"9d4064a8-504e-4859-8aac-1c9267ee63d0\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " Feb 9 19:06:53.748930 kubelet[2411]: I0209 19:06:53.748399 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-host-proc-sys-kernel\") pod \"9d4064a8-504e-4859-8aac-1c9267ee63d0\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " Feb 9 19:06:53.748930 kubelet[2411]: I0209 19:06:53.748427 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-cilium-run\") pod \"9d4064a8-504e-4859-8aac-1c9267ee63d0\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " Feb 9 19:06:53.748930 kubelet[2411]: I0209 19:06:53.748453 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-bpf-maps\") pod \"9d4064a8-504e-4859-8aac-1c9267ee63d0\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " Feb 9 19:06:53.748930 kubelet[2411]: I0209 19:06:53.748478 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-lib-modules\") pod \"9d4064a8-504e-4859-8aac-1c9267ee63d0\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " Feb 9 19:06:53.749725 kubelet[2411]: I0209 19:06:53.748512 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d4064a8-504e-4859-8aac-1c9267ee63d0-hubble-tls\") pod \"9d4064a8-504e-4859-8aac-1c9267ee63d0\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " Feb 9 19:06:53.749725 kubelet[2411]: I0209 19:06:53.748553 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-xtables-lock\") pod \"9d4064a8-504e-4859-8aac-1c9267ee63d0\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " Feb 9 19:06:53.749725 kubelet[2411]: I0209 19:06:53.748588 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-cilium-cgroup\") pod \"9d4064a8-504e-4859-8aac-1c9267ee63d0\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " Feb 9 19:06:53.749725 kubelet[2411]: I0209 19:06:53.748618 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-etc-cni-netd\") pod \"9d4064a8-504e-4859-8aac-1c9267ee63d0\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " Feb 9 19:06:53.749725 kubelet[2411]: I0209 19:06:53.748653 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72t4t\" (UniqueName: \"kubernetes.io/projected/9d4064a8-504e-4859-8aac-1c9267ee63d0-kube-api-access-72t4t\") pod \"9d4064a8-504e-4859-8aac-1c9267ee63d0\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " Feb 9 19:06:53.749725 kubelet[2411]: I0209 19:06:53.748878 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d4064a8-504e-4859-8aac-1c9267ee63d0-clustermesh-secrets\") pod \"9d4064a8-504e-4859-8aac-1c9267ee63d0\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " Feb 9 19:06:53.750097 kubelet[2411]: I0209 19:06:53.748998 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d4064a8-504e-4859-8aac-1c9267ee63d0-cilium-config-path\") pod \"9d4064a8-504e-4859-8aac-1c9267ee63d0\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " Feb 9 19:06:53.750097 kubelet[2411]: I0209 19:06:53.749060 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9d4064a8-504e-4859-8aac-1c9267ee63d0-cilium-ipsec-secrets\") pod \"9d4064a8-504e-4859-8aac-1c9267ee63d0\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " Feb 9 19:06:53.750097 kubelet[2411]: I0209 19:06:53.749100 2411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-hostproc\") pod \"9d4064a8-504e-4859-8aac-1c9267ee63d0\" (UID: \"9d4064a8-504e-4859-8aac-1c9267ee63d0\") " Feb 9 19:06:53.750097 kubelet[2411]: I0209 19:06:53.749172 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-hostproc" (OuterVolumeSpecName: "hostproc") pod "9d4064a8-504e-4859-8aac-1c9267ee63d0" (UID: "9d4064a8-504e-4859-8aac-1c9267ee63d0"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:53.750097 kubelet[2411]: I0209 19:06:53.749211 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9d4064a8-504e-4859-8aac-1c9267ee63d0" (UID: "9d4064a8-504e-4859-8aac-1c9267ee63d0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:53.752101 kubelet[2411]: I0209 19:06:53.749237 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-cni-path" (OuterVolumeSpecName: "cni-path") pod "9d4064a8-504e-4859-8aac-1c9267ee63d0" (UID: "9d4064a8-504e-4859-8aac-1c9267ee63d0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:53.752101 kubelet[2411]: I0209 19:06:53.749259 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9d4064a8-504e-4859-8aac-1c9267ee63d0" (UID: "9d4064a8-504e-4859-8aac-1c9267ee63d0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:53.752101 kubelet[2411]: I0209 19:06:53.749281 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9d4064a8-504e-4859-8aac-1c9267ee63d0" (UID: "9d4064a8-504e-4859-8aac-1c9267ee63d0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:53.752101 kubelet[2411]: I0209 19:06:53.749331 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9d4064a8-504e-4859-8aac-1c9267ee63d0" (UID: "9d4064a8-504e-4859-8aac-1c9267ee63d0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:53.752101 kubelet[2411]: I0209 19:06:53.749361 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9d4064a8-504e-4859-8aac-1c9267ee63d0" (UID: "9d4064a8-504e-4859-8aac-1c9267ee63d0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:53.752357 kubelet[2411]: W0209 19:06:53.750812 2411 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/9d4064a8-504e-4859-8aac-1c9267ee63d0/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:06:53.752357 kubelet[2411]: I0209 19:06:53.751115 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9d4064a8-504e-4859-8aac-1c9267ee63d0" (UID: "9d4064a8-504e-4859-8aac-1c9267ee63d0"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:53.752357 kubelet[2411]: I0209 19:06:53.751173 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9d4064a8-504e-4859-8aac-1c9267ee63d0" (UID: "9d4064a8-504e-4859-8aac-1c9267ee63d0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:53.752357 kubelet[2411]: I0209 19:06:53.751201 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9d4064a8-504e-4859-8aac-1c9267ee63d0" (UID: "9d4064a8-504e-4859-8aac-1c9267ee63d0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:53.753548 kubelet[2411]: I0209 19:06:53.753517 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4064a8-504e-4859-8aac-1c9267ee63d0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9d4064a8-504e-4859-8aac-1c9267ee63d0" (UID: "9d4064a8-504e-4859-8aac-1c9267ee63d0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:06:53.757241 systemd[1]: var-lib-kubelet-pods-9d4064a8\x2d504e\x2d4859\x2d8aac\x2d1c9267ee63d0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d72t4t.mount: Deactivated successfully. Feb 9 19:06:53.758347 kubelet[2411]: I0209 19:06:53.758287 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4064a8-504e-4859-8aac-1c9267ee63d0-kube-api-access-72t4t" (OuterVolumeSpecName: "kube-api-access-72t4t") pod "9d4064a8-504e-4859-8aac-1c9267ee63d0" (UID: "9d4064a8-504e-4859-8aac-1c9267ee63d0"). InnerVolumeSpecName "kube-api-access-72t4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:06:53.761039 systemd[1]: var-lib-kubelet-pods-9d4064a8\x2d504e\x2d4859\x2d8aac\x2d1c9267ee63d0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:06:53.762447 kubelet[2411]: I0209 19:06:53.762415 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4064a8-504e-4859-8aac-1c9267ee63d0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9d4064a8-504e-4859-8aac-1c9267ee63d0" (UID: "9d4064a8-504e-4859-8aac-1c9267ee63d0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:06:53.764656 kubelet[2411]: I0209 19:06:53.764626 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4064a8-504e-4859-8aac-1c9267ee63d0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9d4064a8-504e-4859-8aac-1c9267ee63d0" (UID: "9d4064a8-504e-4859-8aac-1c9267ee63d0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:06:53.765174 kubelet[2411]: I0209 19:06:53.765148 2411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4064a8-504e-4859-8aac-1c9267ee63d0-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "9d4064a8-504e-4859-8aac-1c9267ee63d0" (UID: "9d4064a8-504e-4859-8aac-1c9267ee63d0"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:06:53.849533 kubelet[2411]: I0209 19:06:53.849483 2411 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-cilium-cgroup\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:53.849533 kubelet[2411]: I0209 19:06:53.849527 2411 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-72t4t\" (UniqueName: \"kubernetes.io/projected/9d4064a8-504e-4859-8aac-1c9267ee63d0-kube-api-access-72t4t\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:53.849533 kubelet[2411]: I0209 19:06:53.849546 2411 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-etc-cni-netd\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:53.849800 kubelet[2411]: I0209 19:06:53.849561 2411 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d4064a8-504e-4859-8aac-1c9267ee63d0-cilium-config-path\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:53.849800 kubelet[2411]: I0209 19:06:53.849576 2411 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d4064a8-504e-4859-8aac-1c9267ee63d0-clustermesh-secrets\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:53.849800 kubelet[2411]: I0209 19:06:53.849588 2411 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9d4064a8-504e-4859-8aac-1c9267ee63d0-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:53.849800 kubelet[2411]: I0209 19:06:53.849600 2411 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-hostproc\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:53.849800 kubelet[2411]: I0209 19:06:53.849612 2411 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-host-proc-sys-net\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:53.849800 kubelet[2411]: I0209 19:06:53.849627 2411 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:53.849800 kubelet[2411]: I0209 19:06:53.849643 2411 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-cilium-run\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:53.849800 kubelet[2411]: I0209 19:06:53.849654 2411 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-cni-path\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:53.849998 kubelet[2411]: I0209 19:06:53.849666 2411 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-xtables-lock\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:53.849998 kubelet[2411]: I0209 19:06:53.849677 2411 
reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-bpf-maps\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:53.849998 kubelet[2411]: I0209 19:06:53.849690 2411 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d4064a8-504e-4859-8aac-1c9267ee63d0-lib-modules\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:53.849998 kubelet[2411]: I0209 19:06:53.849701 2411 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d4064a8-504e-4859-8aac-1c9267ee63d0-hubble-tls\") on node \"ci-3510.3.2-a-2006cf4d94\" DevicePath \"\"" Feb 9 19:06:53.983130 kubelet[2411]: I0209 19:06:53.983096 2411 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-2006cf4d94" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:06:53.983032586 +0000 UTC m=+173.072326170 LastTransitionTime:2024-02-09 19:06:53.983032586 +0000 UTC m=+173.072326170 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 19:06:54.446861 systemd[1]: var-lib-kubelet-pods-9d4064a8\x2d504e\x2d4859\x2d8aac\x2d1c9267ee63d0-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 19:06:54.447014 systemd[1]: var-lib-kubelet-pods-9d4064a8\x2d504e\x2d4859\x2d8aac\x2d1c9267ee63d0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:06:54.594433 kubelet[2411]: I0209 19:06:54.594396 2411 scope.go:115] "RemoveContainer" containerID="13a2a3c38d738ae99ba315a19bee01b12aaeb2a7212dd81d26ebf38f8873bc1a" Feb 9 19:06:54.595609 env[1329]: time="2024-02-09T19:06:54.595565005Z" level=info msg="RemoveContainer for \"13a2a3c38d738ae99ba315a19bee01b12aaeb2a7212dd81d26ebf38f8873bc1a\"" Feb 9 19:06:54.601134 systemd[1]: Removed slice kubepods-burstable-pod9d4064a8_504e_4859_8aac_1c9267ee63d0.slice. Feb 9 19:06:54.611796 env[1329]: time="2024-02-09T19:06:54.611751872Z" level=info msg="RemoveContainer for \"13a2a3c38d738ae99ba315a19bee01b12aaeb2a7212dd81d26ebf38f8873bc1a\" returns successfully" Feb 9 19:06:54.650030 kubelet[2411]: I0209 19:06:54.649992 2411 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:06:54.650339 kubelet[2411]: E0209 19:06:54.650319 2411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d4064a8-504e-4859-8aac-1c9267ee63d0" containerName="mount-cgroup" Feb 9 19:06:54.650505 kubelet[2411]: I0209 19:06:54.650489 2411 memory_manager.go:346] "RemoveStaleState removing state" podUID="9d4064a8-504e-4859-8aac-1c9267ee63d0" containerName="mount-cgroup" Feb 9 19:06:54.657058 systemd[1]: Created slice kubepods-burstable-pod18b4aef9_3806_4ee6_a8f7_29a29b43a3a1.slice. 
Feb 9 19:06:54.754063 kubelet[2411]: I0209 19:06:54.753939 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/18b4aef9-3806-4ee6-a8f7-29a29b43a3a1-cilium-run\") pod \"cilium-shl9p\" (UID: \"18b4aef9-3806-4ee6-a8f7-29a29b43a3a1\") " pod="kube-system/cilium-shl9p" Feb 9 19:06:54.754063 kubelet[2411]: I0209 19:06:54.754006 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xk64\" (UniqueName: \"kubernetes.io/projected/18b4aef9-3806-4ee6-a8f7-29a29b43a3a1-kube-api-access-2xk64\") pod \"cilium-shl9p\" (UID: \"18b4aef9-3806-4ee6-a8f7-29a29b43a3a1\") " pod="kube-system/cilium-shl9p" Feb 9 19:06:54.754607 kubelet[2411]: I0209 19:06:54.754039 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18b4aef9-3806-4ee6-a8f7-29a29b43a3a1-lib-modules\") pod \"cilium-shl9p\" (UID: \"18b4aef9-3806-4ee6-a8f7-29a29b43a3a1\") " pod="kube-system/cilium-shl9p" Feb 9 19:06:54.754607 kubelet[2411]: I0209 19:06:54.754099 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/18b4aef9-3806-4ee6-a8f7-29a29b43a3a1-host-proc-sys-kernel\") pod \"cilium-shl9p\" (UID: \"18b4aef9-3806-4ee6-a8f7-29a29b43a3a1\") " pod="kube-system/cilium-shl9p" Feb 9 19:06:54.754607 kubelet[2411]: I0209 19:06:54.754165 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/18b4aef9-3806-4ee6-a8f7-29a29b43a3a1-cilium-cgroup\") pod \"cilium-shl9p\" (UID: \"18b4aef9-3806-4ee6-a8f7-29a29b43a3a1\") " pod="kube-system/cilium-shl9p" Feb 9 19:06:54.754607 kubelet[2411]: I0209 19:06:54.754198 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/18b4aef9-3806-4ee6-a8f7-29a29b43a3a1-bpf-maps\") pod \"cilium-shl9p\" (UID: \"18b4aef9-3806-4ee6-a8f7-29a29b43a3a1\") " pod="kube-system/cilium-shl9p" Feb 9 19:06:54.754607 kubelet[2411]: I0209 19:06:54.754256 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/18b4aef9-3806-4ee6-a8f7-29a29b43a3a1-clustermesh-secrets\") pod \"cilium-shl9p\" (UID: \"18b4aef9-3806-4ee6-a8f7-29a29b43a3a1\") " pod="kube-system/cilium-shl9p" Feb 9 19:06:54.754607 kubelet[2411]: I0209 19:06:54.754342 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/18b4aef9-3806-4ee6-a8f7-29a29b43a3a1-hostproc\") pod \"cilium-shl9p\" (UID: \"18b4aef9-3806-4ee6-a8f7-29a29b43a3a1\") " pod="kube-system/cilium-shl9p" Feb 9 19:06:54.754861 kubelet[2411]: I0209 19:06:54.754379 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18b4aef9-3806-4ee6-a8f7-29a29b43a3a1-etc-cni-netd\") pod \"cilium-shl9p\" (UID: \"18b4aef9-3806-4ee6-a8f7-29a29b43a3a1\") " pod="kube-system/cilium-shl9p" Feb 9 19:06:54.754861 kubelet[2411]: I0209 19:06:54.754455 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/18b4aef9-3806-4ee6-a8f7-29a29b43a3a1-host-proc-sys-net\") pod \"cilium-shl9p\" (UID: \"18b4aef9-3806-4ee6-a8f7-29a29b43a3a1\") " pod="kube-system/cilium-shl9p" Feb 9 19:06:54.754861 kubelet[2411]: I0209 19:06:54.754483 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18b4aef9-3806-4ee6-a8f7-29a29b43a3a1-cilium-config-path\") pod \"cilium-shl9p\" (UID: \"18b4aef9-3806-4ee6-a8f7-29a29b43a3a1\") " pod="kube-system/cilium-shl9p" Feb 9 19:06:54.754861 kubelet[2411]: I0209 19:06:54.754555 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/18b4aef9-3806-4ee6-a8f7-29a29b43a3a1-cni-path\") pod \"cilium-shl9p\" (UID: \"18b4aef9-3806-4ee6-a8f7-29a29b43a3a1\") " pod="kube-system/cilium-shl9p" Feb 9 19:06:54.754861 kubelet[2411]: I0209 19:06:54.754621 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18b4aef9-3806-4ee6-a8f7-29a29b43a3a1-xtables-lock\") pod \"cilium-shl9p\" (UID: \"18b4aef9-3806-4ee6-a8f7-29a29b43a3a1\") " pod="kube-system/cilium-shl9p" Feb 9 19:06:54.754861 kubelet[2411]: I0209 19:06:54.754684 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/18b4aef9-3806-4ee6-a8f7-29a29b43a3a1-cilium-ipsec-secrets\") pod \"cilium-shl9p\" (UID: \"18b4aef9-3806-4ee6-a8f7-29a29b43a3a1\") " pod="kube-system/cilium-shl9p" Feb 9 19:06:54.755023 kubelet[2411]: I0209 19:06:54.754715 2411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/18b4aef9-3806-4ee6-a8f7-29a29b43a3a1-hubble-tls\") pod \"cilium-shl9p\" (UID: \"18b4aef9-3806-4ee6-a8f7-29a29b43a3a1\") " pod="kube-system/cilium-shl9p" Feb 9 19:06:54.961627 env[1329]: time="2024-02-09T19:06:54.961564680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-shl9p,Uid:18b4aef9-3806-4ee6-a8f7-29a29b43a3a1,Namespace:kube-system,Attempt:0,}" Feb 9 19:06:55.022532 env[1329]: time="2024-02-09T19:06:55.022349505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:06:55.022753 env[1329]: time="2024-02-09T19:06:55.022384406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:06:55.022753 env[1329]: time="2024-02-09T19:06:55.022398106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:06:55.023089 env[1329]: time="2024-02-09T19:06:55.022732109Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d29d6fa4a55af7884df644ce7634af97eb51110e70055b8b21e3f8d5e7993c0b pid=4465 runtime=io.containerd.runc.v2 Feb 9 19:06:55.037210 systemd[1]: Started cri-containerd-d29d6fa4a55af7884df644ce7634af97eb51110e70055b8b21e3f8d5e7993c0b.scope. 
Feb 9 19:06:55.065997 env[1329]: time="2024-02-09T19:06:55.065955552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-shl9p,Uid:18b4aef9-3806-4ee6-a8f7-29a29b43a3a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d29d6fa4a55af7884df644ce7634af97eb51110e70055b8b21e3f8d5e7993c0b\"" Feb 9 19:06:55.069743 env[1329]: time="2024-02-09T19:06:55.069582389Z" level=info msg="CreateContainer within sandbox \"d29d6fa4a55af7884df644ce7634af97eb51110e70055b8b21e3f8d5e7993c0b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:06:55.113966 env[1329]: time="2024-02-09T19:06:55.113901343Z" level=info msg="CreateContainer within sandbox \"d29d6fa4a55af7884df644ce7634af97eb51110e70055b8b21e3f8d5e7993c0b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ac9b040877386c7e64eba147507d253743f75e826fcccdb32fc3b9a66e0cc5c2\"" Feb 9 19:06:55.114546 env[1329]: time="2024-02-09T19:06:55.114513450Z" level=info msg="StartContainer for \"ac9b040877386c7e64eba147507d253743f75e826fcccdb32fc3b9a66e0cc5c2\"" Feb 9 19:06:55.132922 systemd[1]: Started cri-containerd-ac9b040877386c7e64eba147507d253743f75e826fcccdb32fc3b9a66e0cc5c2.scope. Feb 9 19:06:55.168576 systemd[1]: cri-containerd-ac9b040877386c7e64eba147507d253743f75e826fcccdb32fc3b9a66e0cc5c2.scope: Deactivated successfully. Feb 9 19:06:55.169681 env[1329]: time="2024-02-09T19:06:55.169637014Z" level=info msg="StartContainer for \"ac9b040877386c7e64eba147507d253743f75e826fcccdb32fc3b9a66e0cc5c2\" returns successfully" Feb 9 19:06:55.219321 env[1329]: time="2024-02-09T19:06:55.219261423Z" level=info msg="StopPodSandbox for \"b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc\"" Feb 9 19:06:55.219561 env[1329]: time="2024-02-09T19:06:55.219375524Z" level=info msg="TearDown network for sandbox \"b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc\" successfully" Feb 9 19:06:55.219561 env[1329]: time="2024-02-09T19:06:55.219421524Z" level=info msg="StopPodSandbox for \"b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc\" returns successfully" Feb 9 19:06:55.222096 kubelet[2411]: I0209 19:06:55.221625 2411 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=9d4064a8-504e-4859-8aac-1c9267ee63d0 path="/var/lib/kubelet/pods/9d4064a8-504e-4859-8aac-1c9267ee63d0/volumes" Feb 9 19:06:55.234152 env[1329]: time="2024-02-09T19:06:55.234110975Z" level=info msg="shim disconnected" id=ac9b040877386c7e64eba147507d253743f75e826fcccdb32fc3b9a66e0cc5c2 Feb 9 19:06:55.234271 env[1329]: time="2024-02-09T19:06:55.234152575Z" level=warning msg="cleaning up after shim disconnected" id=ac9b040877386c7e64eba147507d253743f75e826fcccdb32fc3b9a66e0cc5c2 namespace=k8s.io Feb 9 19:06:55.234271 env[1329]: time="2024-02-09T19:06:55.234164575Z" level=info msg="cleaning up dead shim" Feb 9 19:06:55.242195 env[1329]: time="2024-02-09T19:06:55.242162557Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4549 runtime=io.containerd.runc.v2\n" Feb 9 19:06:55.600429 env[1329]: time="2024-02-09T19:06:55.600383427Z" level=info msg="CreateContainer within sandbox \"d29d6fa4a55af7884df644ce7634af97eb51110e70055b8b21e3f8d5e7993c0b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:06:55.653726 env[1329]: time="2024-02-09T19:06:55.653674373Z" level=info msg="CreateContainer within sandbox \"d29d6fa4a55af7884df644ce7634af97eb51110e70055b8b21e3f8d5e7993c0b\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"191d680fb46dcb7976488d953a46e3ae0a77b6261c72d3f171f84e10e97768ed\"" Feb 9 19:06:55.654444 env[1329]: time="2024-02-09T19:06:55.654405681Z" level=info msg="StartContainer for \"191d680fb46dcb7976488d953a46e3ae0a77b6261c72d3f171f84e10e97768ed\"" Feb 9 19:06:55.677717 systemd[1]: Started cri-containerd-191d680fb46dcb7976488d953a46e3ae0a77b6261c72d3f171f84e10e97768ed.scope. Feb 9 19:06:55.731367 env[1329]: time="2024-02-09T19:06:55.731315369Z" level=info msg="StartContainer for \"191d680fb46dcb7976488d953a46e3ae0a77b6261c72d3f171f84e10e97768ed\" returns successfully" Feb 9 19:06:55.733618 systemd[1]: cri-containerd-191d680fb46dcb7976488d953a46e3ae0a77b6261c72d3f171f84e10e97768ed.scope: Deactivated successfully. Feb 9 19:06:55.765278 env[1329]: time="2024-02-09T19:06:55.765219716Z" level=info msg="shim disconnected" id=191d680fb46dcb7976488d953a46e3ae0a77b6261c72d3f171f84e10e97768ed Feb 9 19:06:55.765278 env[1329]: time="2024-02-09T19:06:55.765274617Z" level=warning msg="cleaning up after shim disconnected" id=191d680fb46dcb7976488d953a46e3ae0a77b6261c72d3f171f84e10e97768ed namespace=k8s.io Feb 9 19:06:55.765590 env[1329]: time="2024-02-09T19:06:55.765286717Z" level=info msg="cleaning up dead shim" Feb 9 19:06:55.773253 env[1329]: time="2024-02-09T19:06:55.773217398Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4611 runtime=io.containerd.runc.v2\n" Feb 9 19:06:55.941834 kubelet[2411]: W0209 19:06:55.941782 2411 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d4064a8_504e_4859_8aac_1c9267ee63d0.slice/cri-containerd-13a2a3c38d738ae99ba315a19bee01b12aaeb2a7212dd81d26ebf38f8873bc1a.scope WatchSource:0}: container "13a2a3c38d738ae99ba315a19bee01b12aaeb2a7212dd81d26ebf38f8873bc1a" in namespace "k8s.io": not found Feb 9 19:06:56.199069 kubelet[2411]: E0209 19:06:56.198934 2411 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:06:56.446979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-191d680fb46dcb7976488d953a46e3ae0a77b6261c72d3f171f84e10e97768ed-rootfs.mount: Deactivated successfully. Feb 9 19:06:56.609462 env[1329]: time="2024-02-09T19:06:56.607830307Z" level=info msg="CreateContainer within sandbox \"d29d6fa4a55af7884df644ce7634af97eb51110e70055b8b21e3f8d5e7993c0b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:06:56.657031 env[1329]: time="2024-02-09T19:06:56.656978307Z" level=info msg="CreateContainer within sandbox \"d29d6fa4a55af7884df644ce7634af97eb51110e70055b8b21e3f8d5e7993c0b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"59a953919e0faf34261a694f964b0d53f3422dc637ccfce712e46c05e77c5d5c\"" Feb 9 19:06:56.657590 env[1329]: time="2024-02-09T19:06:56.657551213Z" level=info msg="StartContainer for \"59a953919e0faf34261a694f964b0d53f3422dc637ccfce712e46c05e77c5d5c\"" Feb 9 19:06:56.685688 systemd[1]: Started cri-containerd-59a953919e0faf34261a694f964b0d53f3422dc637ccfce712e46c05e77c5d5c.scope. Feb 9 19:06:56.717383 systemd[1]: cri-containerd-59a953919e0faf34261a694f964b0d53f3422dc637ccfce712e46c05e77c5d5c.scope: Deactivated successfully. 
Feb 9 19:06:56.721754 env[1329]: time="2024-02-09T19:06:56.721710766Z" level=info msg="StartContainer for \"59a953919e0faf34261a694f964b0d53f3422dc637ccfce712e46c05e77c5d5c\" returns successfully" Feb 9 19:06:56.753696 env[1329]: time="2024-02-09T19:06:56.753641191Z" level=info msg="shim disconnected" id=59a953919e0faf34261a694f964b0d53f3422dc637ccfce712e46c05e77c5d5c Feb 9 19:06:56.753696 env[1329]: time="2024-02-09T19:06:56.753692592Z" level=warning msg="cleaning up after shim disconnected" id=59a953919e0faf34261a694f964b0d53f3422dc637ccfce712e46c05e77c5d5c namespace=k8s.io Feb 9 19:06:56.753696 env[1329]: time="2024-02-09T19:06:56.753705292Z" level=info msg="cleaning up dead shim" Feb 9 19:06:56.761453 env[1329]: time="2024-02-09T19:06:56.761412870Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4670 runtime=io.containerd.runc.v2\n" Feb 9 19:06:57.447065 systemd[1]: run-containerd-runc-k8s.io-59a953919e0faf34261a694f964b0d53f3422dc637ccfce712e46c05e77c5d5c-runc.51lZxE.mount: Deactivated successfully. Feb 9 19:06:57.447547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59a953919e0faf34261a694f964b0d53f3422dc637ccfce712e46c05e77c5d5c-rootfs.mount: Deactivated successfully. Feb 9 19:06:57.614838 env[1329]: time="2024-02-09T19:06:57.614783914Z" level=info msg="CreateContainer within sandbox \"d29d6fa4a55af7884df644ce7634af97eb51110e70055b8b21e3f8d5e7993c0b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:06:57.662854 env[1329]: time="2024-02-09T19:06:57.662793800Z" level=info msg="CreateContainer within sandbox \"d29d6fa4a55af7884df644ce7634af97eb51110e70055b8b21e3f8d5e7993c0b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"59f6935206b23de9cc90d16bfee05537e22e5fdd6788f8975cda6e65ab75a913\"" Feb 9 19:06:57.663600 env[1329]: time="2024-02-09T19:06:57.663560007Z" level=info msg="StartContainer for \"59f6935206b23de9cc90d16bfee05537e22e5fdd6788f8975cda6e65ab75a913\"" Feb 9 19:06:57.702949 systemd[1]: Started cri-containerd-59f6935206b23de9cc90d16bfee05537e22e5fdd6788f8975cda6e65ab75a913.scope. Feb 9 19:06:57.744580 systemd[1]: cri-containerd-59f6935206b23de9cc90d16bfee05537e22e5fdd6788f8975cda6e65ab75a913.scope: Deactivated successfully. Feb 9 19:06:57.749467 env[1329]: time="2024-02-09T19:06:57.747566057Z" level=info msg="StartContainer for \"59f6935206b23de9cc90d16bfee05537e22e5fdd6788f8975cda6e65ab75a913\" returns successfully" Feb 9 19:06:57.783614 env[1329]: time="2024-02-09T19:06:57.783556920Z" level=info msg="shim disconnected" id=59f6935206b23de9cc90d16bfee05537e22e5fdd6788f8975cda6e65ab75a913 Feb 9 19:06:57.783614 env[1329]: time="2024-02-09T19:06:57.783613221Z" level=warning msg="cleaning up after shim disconnected" id=59f6935206b23de9cc90d16bfee05537e22e5fdd6788f8975cda6e65ab75a913 namespace=k8s.io Feb 9 19:06:57.783880 env[1329]: time="2024-02-09T19:06:57.783624621Z" level=info msg="cleaning up dead shim" Feb 9 19:06:57.791934 env[1329]: time="2024-02-09T19:06:57.791891405Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4726 runtime=io.containerd.runc.v2\n" Feb 9 19:06:58.446542 systemd[1]: run-containerd-runc-k8s.io-59f6935206b23de9cc90d16bfee05537e22e5fdd6788f8975cda6e65ab75a913-runc.CH91kx.mount: Deactivated successfully. 
Feb 9 19:06:58.446675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59f6935206b23de9cc90d16bfee05537e22e5fdd6788f8975cda6e65ab75a913-rootfs.mount: Deactivated successfully. Feb 9 19:06:58.617599 env[1329]: time="2024-02-09T19:06:58.617553212Z" level=info msg="CreateContainer within sandbox \"d29d6fa4a55af7884df644ce7634af97eb51110e70055b8b21e3f8d5e7993c0b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:06:58.661705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585416716.mount: Deactivated successfully. Feb 9 19:06:58.680478 env[1329]: time="2024-02-09T19:06:58.680423043Z" level=info msg="CreateContainer within sandbox \"d29d6fa4a55af7884df644ce7634af97eb51110e70055b8b21e3f8d5e7993c0b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"347119423cf8508012fe5ebd3da2ca08e0f609598e7fd8399a2f4bf4e63ad7e2\"" Feb 9 19:06:58.681413 env[1329]: time="2024-02-09T19:06:58.681371653Z" level=info msg="StartContainer for \"347119423cf8508012fe5ebd3da2ca08e0f609598e7fd8399a2f4bf4e63ad7e2\"" Feb 9 19:06:58.701454 systemd[1]: Started cri-containerd-347119423cf8508012fe5ebd3da2ca08e0f609598e7fd8399a2f4bf4e63ad7e2.scope. Feb 9 19:06:58.747338 env[1329]: time="2024-02-09T19:06:58.747269414Z" level=info msg="StartContainer for \"347119423cf8508012fe5ebd3da2ca08e0f609598e7fd8399a2f4bf4e63ad7e2\" returns successfully" Feb 9 19:06:59.053375 kubelet[2411]: W0209 19:06:59.053220 2411 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18b4aef9_3806_4ee6_a8f7_29a29b43a3a1.slice/cri-containerd-ac9b040877386c7e64eba147507d253743f75e826fcccdb32fc3b9a66e0cc5c2.scope WatchSource:0}: task ac9b040877386c7e64eba147507d253743f75e826fcccdb32fc3b9a66e0cc5c2 not found: not found Feb 9 19:06:59.186341 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 19:06:59.825711 systemd[1]: run-containerd-runc-k8s.io-347119423cf8508012fe5ebd3da2ca08e0f609598e7fd8399a2f4bf4e63ad7e2-runc.FFyL5B.mount: Deactivated successfully. 
Feb 9 19:07:01.044751 env[1329]: time="2024-02-09T19:07:01.044696689Z" level=info msg="StopPodSandbox for \"c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a\""
Feb 9 19:07:01.045202 env[1329]: time="2024-02-09T19:07:01.044806790Z" level=info msg="TearDown network for sandbox \"c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a\" successfully"
Feb 9 19:07:01.045202 env[1329]: time="2024-02-09T19:07:01.044851490Z" level=info msg="StopPodSandbox for \"c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a\" returns successfully"
Feb 9 19:07:01.045335 env[1329]: time="2024-02-09T19:07:01.045255094Z" level=info msg="RemovePodSandbox for \"c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a\""
Feb 9 19:07:01.045393 env[1329]: time="2024-02-09T19:07:01.045292595Z" level=info msg="Forcibly stopping sandbox \"c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a\""
Feb 9 19:07:01.045438 env[1329]: time="2024-02-09T19:07:01.045398996Z" level=info msg="TearDown network for sandbox \"c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a\" successfully"
Feb 9 19:07:01.070494 env[1329]: time="2024-02-09T19:07:01.070442442Z" level=info msg="RemovePodSandbox \"c30c78b6347934b1d27785a485c742d39ce177f33a81f073c1b2ab4bc5c7f19a\" returns successfully"
Feb 9 19:07:01.070981 env[1329]: time="2024-02-09T19:07:01.070947847Z" level=info msg="StopPodSandbox for \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\""
Feb 9 19:07:01.071116 env[1329]: time="2024-02-09T19:07:01.071044648Z" level=info msg="TearDown network for sandbox \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\" successfully"
Feb 9 19:07:01.071116 env[1329]: time="2024-02-09T19:07:01.071086949Z" level=info msg="StopPodSandbox for \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\" returns successfully"
Feb 9 19:07:01.071417 env[1329]: time="2024-02-09T19:07:01.071388652Z" level=info msg="RemovePodSandbox for \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\""
Feb 9 19:07:01.071505 env[1329]: time="2024-02-09T19:07:01.071422452Z" level=info msg="Forcibly stopping sandbox \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\""
Feb 9 19:07:01.071552 env[1329]: time="2024-02-09T19:07:01.071500253Z" level=info msg="TearDown network for sandbox \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\" successfully"
Feb 9 19:07:01.081630 env[1329]: time="2024-02-09T19:07:01.081594252Z" level=info msg="RemovePodSandbox \"d04c537795675d8b0c58ab265203137f1067e872b2808a19058055276effd4dc\" returns successfully"
Feb 9 19:07:01.082201 env[1329]: time="2024-02-09T19:07:01.082160958Z" level=info msg="StopPodSandbox for \"b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc\""
Feb 9 19:07:01.082296 env[1329]: time="2024-02-09T19:07:01.082258459Z" level=info msg="TearDown network for sandbox \"b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc\" successfully"
Feb 9 19:07:01.082390 env[1329]: time="2024-02-09T19:07:01.082298159Z" level=info msg="StopPodSandbox for \"b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc\" returns successfully"
Feb 9 19:07:01.082782 env[1329]: time="2024-02-09T19:07:01.082748364Z" level=info msg="RemovePodSandbox for \"b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc\""
Feb 9 19:07:01.082865 env[1329]: time="2024-02-09T19:07:01.082785864Z" level=info msg="Forcibly stopping sandbox \"b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc\""
Feb 9 19:07:01.082914 env[1329]: time="2024-02-09T19:07:01.082864065Z" level=info msg="TearDown network for sandbox \"b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc\" successfully"
Feb 9 19:07:01.095133 env[1329]: time="2024-02-09T19:07:01.095100685Z" level=info msg="RemovePodSandbox \"b9c74ba25388595f0cbfef4a14d3edb3b6b791fd86d59074fb5e6e7bd334f9cc\" returns successfully"
Feb 9 19:07:01.712492 systemd-networkd[1462]: lxc_health: Link UP
Feb 9 19:07:01.726468 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:07:01.726133 systemd-networkd[1462]: lxc_health: Gained carrier
Feb 9 19:07:02.024722 systemd[1]: run-containerd-runc-k8s.io-347119423cf8508012fe5ebd3da2ca08e0f609598e7fd8399a2f4bf4e63ad7e2-runc.luRamU.mount: Deactivated successfully.
Feb 9 19:07:02.160899 kubelet[2411]: W0209 19:07:02.160848 2411 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18b4aef9_3806_4ee6_a8f7_29a29b43a3a1.slice/cri-containerd-191d680fb46dcb7976488d953a46e3ae0a77b6261c72d3f171f84e10e97768ed.scope WatchSource:0}: task 191d680fb46dcb7976488d953a46e3ae0a77b6261c72d3f171f84e10e97768ed not found: not found
Feb 9 19:07:02.986784 kubelet[2411]: I0209 19:07:02.986741 2411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-shl9p" podStartSLOduration=8.986699162 pod.CreationTimestamp="2024-02-09 19:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:06:59.644639186 +0000 UTC m=+178.733932770" watchObservedRunningTime="2024-02-09 19:07:02.986699162 +0000 UTC m=+182.075992646"
Feb 9 19:07:03.071590 systemd-networkd[1462]: lxc_health: Gained IPv6LL
Feb 9 19:07:04.326742 systemd[1]: run-containerd-runc-k8s.io-347119423cf8508012fe5ebd3da2ca08e0f609598e7fd8399a2f4bf4e63ad7e2-runc.pnW8tS.mount: Deactivated successfully.
Feb 9 19:07:05.278666 kubelet[2411]: W0209 19:07:05.278617 2411 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18b4aef9_3806_4ee6_a8f7_29a29b43a3a1.slice/cri-containerd-59a953919e0faf34261a694f964b0d53f3422dc637ccfce712e46c05e77c5d5c.scope WatchSource:0}: task 59a953919e0faf34261a694f964b0d53f3422dc637ccfce712e46c05e77c5d5c not found: not found
Feb 9 19:07:06.496850 systemd[1]: run-containerd-runc-k8s.io-347119423cf8508012fe5ebd3da2ca08e0f609598e7fd8399a2f4bf4e63ad7e2-runc.Ye1C4F.mount: Deactivated successfully.
Feb 9 19:07:08.387872 kubelet[2411]: W0209 19:07:08.387827 2411 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18b4aef9_3806_4ee6_a8f7_29a29b43a3a1.slice/cri-containerd-59f6935206b23de9cc90d16bfee05537e22e5fdd6788f8975cda6e65ab75a913.scope WatchSource:0}: task 59f6935206b23de9cc90d16bfee05537e22e5fdd6788f8975cda6e65ab75a913 not found: not found
Feb 9 19:07:08.631777 systemd[1]: run-containerd-runc-k8s.io-347119423cf8508012fe5ebd3da2ca08e0f609598e7fd8399a2f4bf4e63ad7e2-runc.xUroNX.mount: Deactivated successfully.
Feb 9 19:07:08.788401 sshd[4328]: pam_unix(sshd:session): session closed for user core
Feb 9 19:07:08.792030 systemd[1]: sshd@24-10.200.8.39:22-10.200.12.6:42712.service: Deactivated successfully.
Feb 9 19:07:08.793118 systemd[1]: session-27.scope: Deactivated successfully.
Feb 9 19:07:08.793854 systemd-logind[1301]: Session 27 logged out. Waiting for processes to exit.
Feb 9 19:07:08.794674 systemd-logind[1301]: Removed session 27.
Feb 9 19:07:24.172947 systemd[1]: cri-containerd-8f846fb580d82556dc9fd660de1b9c064f19905a2d9a52109f2c07d1d93549dc.scope: Deactivated successfully.
Feb 9 19:07:24.173261 systemd[1]: cri-containerd-8f846fb580d82556dc9fd660de1b9c064f19905a2d9a52109f2c07d1d93549dc.scope: Consumed 3.481s CPU time.
Feb 9 19:07:24.195058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f846fb580d82556dc9fd660de1b9c064f19905a2d9a52109f2c07d1d93549dc-rootfs.mount: Deactivated successfully.
Feb 9 19:07:24.253378 env[1329]: time="2024-02-09T19:07:24.253282916Z" level=info msg="shim disconnected" id=8f846fb580d82556dc9fd660de1b9c064f19905a2d9a52109f2c07d1d93549dc
Feb 9 19:07:24.253378 env[1329]: time="2024-02-09T19:07:24.253367117Z" level=warning msg="cleaning up after shim disconnected" id=8f846fb580d82556dc9fd660de1b9c064f19905a2d9a52109f2c07d1d93549dc namespace=k8s.io
Feb 9 19:07:24.253378 env[1329]: time="2024-02-09T19:07:24.253386717Z" level=info msg="cleaning up dead shim"
Feb 9 19:07:24.261498 env[1329]: time="2024-02-09T19:07:24.261457987Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5444 runtime=io.containerd.runc.v2\n"
Feb 9 19:07:24.673366 kubelet[2411]: I0209 19:07:24.673335 2411 scope.go:115] "RemoveContainer" containerID="8f846fb580d82556dc9fd660de1b9c064f19905a2d9a52109f2c07d1d93549dc"
Feb 9 19:07:24.675601 env[1329]: time="2024-02-09T19:07:24.675553670Z" level=info msg="CreateContainer within sandbox \"10402755eaf9b2e49532e57c2f8a1811ce6e1ceb7b7a08e576508191d9421cc0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 9 19:07:24.717930 env[1329]: time="2024-02-09T19:07:24.717874136Z" level=info msg="CreateContainer within sandbox \"10402755eaf9b2e49532e57c2f8a1811ce6e1ceb7b7a08e576508191d9421cc0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"da264ae4fba0b2f8b84d384cfe1077dda98f364797a0a274fe8047dd3e1d44e5\""
Feb 9 19:07:24.718458 env[1329]: time="2024-02-09T19:07:24.718424941Z" level=info msg="StartContainer for \"da264ae4fba0b2f8b84d384cfe1077dda98f364797a0a274fe8047dd3e1d44e5\""
Feb 9 19:07:24.737874 systemd[1]: Started cri-containerd-da264ae4fba0b2f8b84d384cfe1077dda98f364797a0a274fe8047dd3e1d44e5.scope.
Feb 9 19:07:24.798105 env[1329]: time="2024-02-09T19:07:24.797938229Z" level=info msg="StartContainer for \"da264ae4fba0b2f8b84d384cfe1077dda98f364797a0a274fe8047dd3e1d44e5\" returns successfully"
Feb 9 19:07:25.613818 kubelet[2411]: E0209 19:07:25.613674 2411 controller.go:189] failed to update lease, error: Put "https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-2006cf4d94?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 9 19:07:28.516267 systemd[1]: cri-containerd-66dea4725a1f264d21428fe38cdbbe4e27b561fa39b951cd1746e49e0839db9e.scope: Deactivated successfully.
Feb 9 19:07:28.516661 systemd[1]: cri-containerd-66dea4725a1f264d21428fe38cdbbe4e27b561fa39b951cd1746e49e0839db9e.scope: Consumed 1.616s CPU time.
Feb 9 19:07:28.537903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66dea4725a1f264d21428fe38cdbbe4e27b561fa39b951cd1746e49e0839db9e-rootfs.mount: Deactivated successfully.
Feb 9 19:07:28.572931 env[1329]: time="2024-02-09T19:07:28.572882337Z" level=info msg="shim disconnected" id=66dea4725a1f264d21428fe38cdbbe4e27b561fa39b951cd1746e49e0839db9e
Feb 9 19:07:28.572931 env[1329]: time="2024-02-09T19:07:28.572930337Z" level=warning msg="cleaning up after shim disconnected" id=66dea4725a1f264d21428fe38cdbbe4e27b561fa39b951cd1746e49e0839db9e namespace=k8s.io
Feb 9 19:07:28.573504 env[1329]: time="2024-02-09T19:07:28.572942337Z" level=info msg="cleaning up dead shim"
Feb 9 19:07:28.580961 env[1329]: time="2024-02-09T19:07:28.580923905Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5509 runtime=io.containerd.runc.v2\n"
Feb 9 19:07:28.683389 kubelet[2411]: I0209 19:07:28.683359 2411 scope.go:115] "RemoveContainer" containerID="66dea4725a1f264d21428fe38cdbbe4e27b561fa39b951cd1746e49e0839db9e"
Feb 9 19:07:28.685213 env[1329]: time="2024-02-09T19:07:28.685169490Z" level=info msg="CreateContainer within sandbox \"29ec86780cc548ee99396d90432ba943defef171876dae87406c04ff2b6b8276\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 9 19:07:28.712831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4062309419.mount: Deactivated successfully.
Feb 9 19:07:28.733758 env[1329]: time="2024-02-09T19:07:28.733711201Z" level=info msg="CreateContainer within sandbox \"29ec86780cc548ee99396d90432ba943defef171876dae87406c04ff2b6b8276\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"49db26f9cc3b6afdab2e02c095bef712c6145c85bef737656d4b80693619e881\""
Feb 9 19:07:28.734421 env[1329]: time="2024-02-09T19:07:28.734390307Z" level=info msg="StartContainer for \"49db26f9cc3b6afdab2e02c095bef712c6145c85bef737656d4b80693619e881\""
Feb 9 19:07:28.753282 systemd[1]: Started cri-containerd-49db26f9cc3b6afdab2e02c095bef712c6145c85bef737656d4b80693619e881.scope.
Feb 9 19:07:28.808034 env[1329]: time="2024-02-09T19:07:28.807913431Z" level=info msg="StartContainer for \"49db26f9cc3b6afdab2e02c095bef712c6145c85bef737656d4b80693619e881\" returns successfully"
Feb 9 19:07:29.445304 kubelet[2411]: E0209 19:07:29.445272 2411 controller.go:189] failed to update lease, error: rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:60316->10.200.8.22:2379: read: connection timed out
Feb 9 19:07:33.984851 kubelet[2411]: E0209 19:07:33.984740 2411 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-2006cf4d94.17b24754664ff40a", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-2006cf4d94", UID:"8d492f126ff7211c3c848fef2e74060f", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-2006cf4d94"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 7, 16, 546180106, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 7, 16, 546180106, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:60090->10.200.8.22:2379: read: connection timed out' (will not retry!)
Feb 9 19:07:39.446265 kubelet[2411]: E0209 19:07:39.446211 2411 controller.go:189] failed to update lease, error: Put "https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-2006cf4d94?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)