Feb 8 23:17:20.008939 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 8 23:17:20.008972 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:17:20.008988 kernel: BIOS-provided physical RAM map:
Feb 8 23:17:20.008999 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 8 23:17:20.009010 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 8 23:17:20.009021 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 8 23:17:20.009038 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 8 23:17:20.009051 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 8 23:17:20.009062 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 8 23:17:20.009074 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 8 23:17:20.009085 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 8 23:17:20.009097 kernel: printk: bootconsole [earlyser0] enabled
Feb 8 23:17:20.009108 kernel: NX (Execute Disable) protection: active
Feb 8 23:17:20.009121 kernel: efi: EFI v2.70 by Microsoft
Feb 8 23:17:20.009140 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018
Feb 8 23:17:20.009155 kernel: random: crng init done
Feb 8 23:17:20.009167 kernel: SMBIOS 3.1.0 present.
Feb 8 23:17:20.009180 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 8 23:17:20.009192 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 8 23:17:20.009204 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 8 23:17:20.009216 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 8 23:17:20.009227 kernel: Hyper-V: Nested features: 0x1e0101
Feb 8 23:17:20.009245 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 8 23:17:20.009257 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 8 23:17:20.009269 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 8 23:17:20.009279 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 8 23:17:20.009290 kernel: tsc: Detected 2593.905 MHz processor
Feb 8 23:17:20.020483 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 8 23:17:20.020507 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 8 23:17:20.020520 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 8 23:17:20.020532 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 8 23:17:20.020543 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 8 23:17:20.020559 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 8 23:17:20.020571 kernel: Using GB pages for direct mapping
Feb 8 23:17:20.020583 kernel: Secure boot disabled
Feb 8 23:17:20.020594 kernel: ACPI: Early table checksum verification disabled
Feb 8 23:17:20.020605 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 8 23:17:20.020617 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:17:20.020628 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:17:20.020640 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 8 23:17:20.020659 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 8 23:17:20.020671 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:17:20.020684 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:17:20.020696 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:17:20.020708 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:17:20.020720 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:17:20.020735 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:17:20.020748 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:17:20.020760 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 8 23:17:20.020772 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 8 23:17:20.020785 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 8 23:17:20.020797 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 8 23:17:20.020809 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 8 23:17:20.020822 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 8 23:17:20.020836 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 8 23:17:20.020848 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 8 23:17:20.020861 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 8 23:17:20.020873 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 8 23:17:20.020885 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 8 23:17:20.020898 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 8 23:17:20.020910 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 8 23:17:20.020923 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 8 23:17:20.020935 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 8 23:17:20.020950 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 8 23:17:20.020962 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 8 23:17:20.020974 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 8 23:17:20.020986 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 8 23:17:20.020999 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 8 23:17:20.021011 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 8 23:17:20.021023 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 8 23:17:20.021036 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 8 23:17:20.021048 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 8 23:17:20.021062 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 8 23:17:20.021075 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 8 23:17:20.021087 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 8 23:17:20.021099 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 8 23:17:20.021112 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 8 23:17:20.021124 kernel: NODE_DATA(0) allocated [mem 0x2bfff9000-0x2bfffefff]
Feb 8 23:17:20.021137 kernel: Zone ranges:
Feb 8 23:17:20.021149 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 8 23:17:20.021161 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 8 23:17:20.021176 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 8 23:17:20.021188 kernel: Movable zone start for each node
Feb 8 23:17:20.021201 kernel: Early memory node ranges
Feb 8 23:17:20.021213 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 8 23:17:20.021225 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 8 23:17:20.021237 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 8 23:17:20.021249 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 8 23:17:20.021261 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 8 23:17:20.021274 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 8 23:17:20.021288 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 8 23:17:20.021300 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 8 23:17:20.021313 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 8 23:17:20.021325 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 8 23:17:20.021338 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 8 23:17:20.021350 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 8 23:17:20.021362 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 8 23:17:20.021373 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 8 23:17:20.021385 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 8 23:17:20.021400 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 8 23:17:20.021412 kernel: Booting paravirtualized kernel on Hyper-V
Feb 8 23:17:20.021424 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 8 23:17:20.021436 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 8 23:17:20.021461 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 8 23:17:20.021472 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 8 23:17:20.021495 kernel: pcpu-alloc: [0] 0 1
Feb 8 23:17:20.021515 kernel: Hyper-V: PV spinlocks enabled
Feb 8 23:17:20.021526 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 8 23:17:20.021542 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 8 23:17:20.021554 kernel: Policy zone: Normal
Feb 8 23:17:20.021567 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:17:20.021580 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 8 23:17:20.021591 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 8 23:17:20.021601 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 8 23:17:20.021613 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 8 23:17:20.021625 kernel: Memory: 8081196K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306004K reserved, 0K cma-reserved)
Feb 8 23:17:20.021640 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 8 23:17:20.021652 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 8 23:17:20.021673 kernel: ftrace: allocated 135 pages with 4 groups
Feb 8 23:17:20.021691 kernel: rcu: Hierarchical RCU implementation.
Feb 8 23:17:20.021709 kernel: rcu: RCU event tracing is enabled.
Feb 8 23:17:20.021720 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 8 23:17:20.021733 kernel: Rude variant of Tasks RCU enabled.
Feb 8 23:17:20.021746 kernel: Tracing variant of Tasks RCU enabled.
Feb 8 23:17:20.021757 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 8 23:17:20.021769 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 8 23:17:20.021782 kernel: Using NULL legacy PIC
Feb 8 23:17:20.021799 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 8 23:17:20.021811 kernel: Console: colour dummy device 80x25
Feb 8 23:17:20.021824 kernel: printk: console [tty1] enabled
Feb 8 23:17:20.021837 kernel: printk: console [ttyS0] enabled
Feb 8 23:17:20.021850 kernel: printk: bootconsole [earlyser0] disabled
Feb 8 23:17:20.021866 kernel: ACPI: Core revision 20210730
Feb 8 23:17:20.021879 kernel: Failed to register legacy timer interrupt
Feb 8 23:17:20.021893 kernel: APIC: Switch to symmetric I/O mode setup
Feb 8 23:17:20.021906 kernel: Hyper-V: Using IPI hypercalls
Feb 8 23:17:20.021919 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Feb 8 23:17:20.021933 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 8 23:17:20.021946 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 8 23:17:20.021959 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 8 23:17:20.021972 kernel: Spectre V2 : Mitigation: Retpolines
Feb 8 23:17:20.021985 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 8 23:17:20.022001 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 8 23:17:20.022014 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 8 23:17:20.022027 kernel: RETBleed: Vulnerable
Feb 8 23:17:20.022040 kernel: Speculative Store Bypass: Vulnerable
Feb 8 23:17:20.022053 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 8 23:17:20.022066 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 8 23:17:20.022079 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 8 23:17:20.022092 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 8 23:17:20.022104 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 8 23:17:20.022118 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 8 23:17:20.022134 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 8 23:17:20.022147 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 8 23:17:20.022161 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 8 23:17:20.022174 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 8 23:17:20.022187 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 8 23:17:20.022200 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 8 23:17:20.022213 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 8 23:17:20.022226 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 8 23:17:20.022239 kernel: Freeing SMP alternatives memory: 32K
Feb 8 23:17:20.022252 kernel: pid_max: default: 32768 minimum: 301
Feb 8 23:17:20.022265 kernel: LSM: Security Framework initializing
Feb 8 23:17:20.022277 kernel: SELinux: Initializing.
Feb 8 23:17:20.022293 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 8 23:17:20.022306 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 8 23:17:20.022319 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 8 23:17:20.022332 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 8 23:17:20.022345 kernel: signal: max sigframe size: 3632
Feb 8 23:17:20.022358 kernel: rcu: Hierarchical SRCU implementation.
Feb 8 23:17:20.022371 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 8 23:17:20.022384 kernel: smp: Bringing up secondary CPUs ...
Feb 8 23:17:20.022397 kernel: x86: Booting SMP configuration:
Feb 8 23:17:20.022410 kernel: .... node #0, CPUs: #1
Feb 8 23:17:20.022426 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 8 23:17:20.022457 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 8 23:17:20.022470 kernel: smp: Brought up 1 node, 2 CPUs
Feb 8 23:17:20.022482 kernel: smpboot: Max logical packages: 1
Feb 8 23:17:20.022494 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 8 23:17:20.022507 kernel: devtmpfs: initialized
Feb 8 23:17:20.022520 kernel: x86/mm: Memory block size: 128MB
Feb 8 23:17:20.022533 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 8 23:17:20.022548 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 8 23:17:20.022563 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 8 23:17:20.022576 kernel: pinctrl core: initialized pinctrl subsystem
Feb 8 23:17:20.022589 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 8 23:17:20.022602 kernel: audit: initializing netlink subsys (disabled)
Feb 8 23:17:20.022615 kernel: audit: type=2000 audit(1707434238.025:1): state=initialized audit_enabled=0 res=1
Feb 8 23:17:20.022629 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 8 23:17:20.022642 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 8 23:17:20.022655 kernel: cpuidle: using governor menu
Feb 8 23:17:20.022672 kernel: ACPI: bus type PCI registered
Feb 8 23:17:20.022685 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 8 23:17:20.022697 kernel: dca service started, version 1.12.1
Feb 8 23:17:20.022710 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 8 23:17:20.022723 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 8 23:17:20.022735 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 8 23:17:20.022748 kernel: ACPI: Added _OSI(Module Device)
Feb 8 23:17:20.022760 kernel: ACPI: Added _OSI(Processor Device)
Feb 8 23:17:20.022776 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 8 23:17:20.022793 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 8 23:17:20.022805 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 8 23:17:20.022820 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 8 23:17:20.022840 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 8 23:17:20.022852 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 8 23:17:20.022864 kernel: ACPI: Interpreter enabled
Feb 8 23:17:20.022876 kernel: ACPI: PM: (supports S0 S5)
Feb 8 23:17:20.022889 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 8 23:17:20.022902 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 8 23:17:20.022917 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 8 23:17:20.022931 kernel: iommu: Default domain type: Translated
Feb 8 23:17:20.022944 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 8 23:17:20.022958 kernel: vgaarb: loaded
Feb 8 23:17:20.022971 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 8 23:17:20.022983 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 8 23:17:20.022995 kernel: PTP clock support registered
Feb 8 23:17:20.023008 kernel: Registered efivars operations
Feb 8 23:17:20.023021 kernel: PCI: Using ACPI for IRQ routing
Feb 8 23:17:20.023034 kernel: PCI: System does not support PCI
Feb 8 23:17:20.023048 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 8 23:17:20.023060 kernel: VFS: Disk quotas dquot_6.6.0
Feb 8 23:17:20.023073 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 8 23:17:20.023086 kernel: pnp: PnP ACPI init
Feb 8 23:17:20.023098 kernel: pnp: PnP ACPI: found 3 devices
Feb 8 23:17:20.023111 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 8 23:17:20.023123 kernel: NET: Registered PF_INET protocol family
Feb 8 23:17:20.023136 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 8 23:17:20.023151 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 8 23:17:20.023164 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 8 23:17:20.023177 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 8 23:17:20.023190 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 8 23:17:20.023204 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 8 23:17:20.023218 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 8 23:17:20.023233 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 8 23:17:20.023247 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 8 23:17:20.023262 kernel: NET: Registered PF_XDP protocol family
Feb 8 23:17:20.023279 kernel: PCI: CLS 0 bytes, default 64
Feb 8 23:17:20.023292 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 8 23:17:20.023306 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 8 23:17:20.023319 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 8 23:17:20.023332 kernel: Initialise system trusted keyrings
Feb 8 23:17:20.023346 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 8 23:17:20.023360 kernel: Key type asymmetric registered
Feb 8 23:17:20.023373 kernel: Asymmetric key parser 'x509' registered
Feb 8 23:17:20.023386 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 8 23:17:20.023402 kernel: io scheduler mq-deadline registered
Feb 8 23:17:20.023416 kernel: io scheduler kyber registered
Feb 8 23:17:20.023430 kernel: io scheduler bfq registered
Feb 8 23:17:20.023460 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 8 23:17:20.023474 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 8 23:17:20.023487 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 8 23:17:20.023501 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 8 23:17:20.023515 kernel: i8042: PNP: No PS/2 controller found.
Feb 8 23:17:20.023685 kernel: rtc_cmos 00:02: registered as rtc0
Feb 8 23:17:20.023809 kernel: rtc_cmos 00:02: setting system clock to 2024-02-08T23:17:19 UTC (1707434239)
Feb 8 23:17:20.023922 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 8 23:17:20.023939 kernel: fail to initialize ptp_kvm
Feb 8 23:17:20.023953 kernel: intel_pstate: CPU model not supported
Feb 8 23:17:20.023966 kernel: efifb: probing for efifb
Feb 8 23:17:20.023980 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 8 23:17:20.023994 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 8 23:17:20.024007 kernel: efifb: scrolling: redraw
Feb 8 23:17:20.024025 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 8 23:17:20.024038 kernel: Console: switching to colour frame buffer device 128x48
Feb 8 23:17:20.024052 kernel: fb0: EFI VGA frame buffer device
Feb 8 23:17:20.024065 kernel: pstore: Registered efi as persistent store backend
Feb 8 23:17:20.024078 kernel: NET: Registered PF_INET6 protocol family
Feb 8 23:17:20.024092 kernel: Segment Routing with IPv6
Feb 8 23:17:20.024106 kernel: In-situ OAM (IOAM) with IPv6
Feb 8 23:17:20.024119 kernel: NET: Registered PF_PACKET protocol family
Feb 8 23:17:20.024133 kernel: Key type dns_resolver registered
Feb 8 23:17:20.024149 kernel: IPI shorthand broadcast: enabled
Feb 8 23:17:20.024163 kernel: sched_clock: Marking stable (802583200, 20127600)->(1033677400, -210966600)
Feb 8 23:17:20.024177 kernel: registered taskstats version 1
Feb 8 23:17:20.024191 kernel: Loading compiled-in X.509 certificates
Feb 8 23:17:20.024204 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 8 23:17:20.024217 kernel: Key type .fscrypt registered
Feb 8 23:17:20.024231 kernel: Key type fscrypt-provisioning registered
Feb 8 23:17:20.024244 kernel: pstore: Using crash dump compression: deflate
Feb 8 23:17:20.024261 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 8 23:17:20.024274 kernel: ima: Allocated hash algorithm: sha1
Feb 8 23:17:20.024288 kernel: ima: No architecture policies found
Feb 8 23:17:20.024301 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 8 23:17:20.024314 kernel: Write protecting the kernel read-only data: 28672k
Feb 8 23:17:20.024328 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 8 23:17:20.024341 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 8 23:17:20.024354 kernel: Run /init as init process
Feb 8 23:17:20.024367 kernel: with arguments:
Feb 8 23:17:20.024381 kernel: /init
Feb 8 23:17:20.024396 kernel: with environment:
Feb 8 23:17:20.024408 kernel: HOME=/
Feb 8 23:17:20.024420 kernel: TERM=linux
Feb 8 23:17:20.024432 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 8 23:17:20.024464 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 8 23:17:20.024480 systemd[1]: Detected virtualization microsoft.
Feb 8 23:17:20.024494 systemd[1]: Detected architecture x86-64.
Feb 8 23:17:20.024510 systemd[1]: Running in initrd.
Feb 8 23:17:20.024523 systemd[1]: No hostname configured, using default hostname.
Feb 8 23:17:20.024536 systemd[1]: Hostname set to .
Feb 8 23:17:20.024550 systemd[1]: Initializing machine ID from random generator.
Feb 8 23:17:20.024564 systemd[1]: Queued start job for default target initrd.target.
Feb 8 23:17:20.024579 systemd[1]: Started systemd-ask-password-console.path.
Feb 8 23:17:20.024594 systemd[1]: Reached target cryptsetup.target.
Feb 8 23:17:20.024608 systemd[1]: Reached target paths.target.
Feb 8 23:17:20.024622 systemd[1]: Reached target slices.target.
Feb 8 23:17:20.024640 systemd[1]: Reached target swap.target.
Feb 8 23:17:20.024655 systemd[1]: Reached target timers.target.
Feb 8 23:17:20.024670 systemd[1]: Listening on iscsid.socket.
Feb 8 23:17:20.024685 systemd[1]: Listening on iscsiuio.socket.
Feb 8 23:17:20.024700 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 8 23:17:20.024715 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 8 23:17:20.024731 systemd[1]: Listening on systemd-journald.socket.
Feb 8 23:17:20.024749 systemd[1]: Listening on systemd-networkd.socket.
Feb 8 23:17:20.024763 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 8 23:17:20.024776 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 8 23:17:20.024790 systemd[1]: Reached target sockets.target.
Feb 8 23:17:20.024804 systemd[1]: Starting kmod-static-nodes.service...
Feb 8 23:17:20.024817 systemd[1]: Finished network-cleanup.service.
Feb 8 23:17:20.024830 systemd[1]: Starting systemd-fsck-usr.service...
Feb 8 23:17:20.024843 systemd[1]: Starting systemd-journald.service...
Feb 8 23:17:20.024856 systemd[1]: Starting systemd-modules-load.service...
Feb 8 23:17:20.024871 systemd[1]: Starting systemd-resolved.service...
Feb 8 23:17:20.024885 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 8 23:17:20.024903 systemd-journald[183]: Journal started
Feb 8 23:17:20.024967 systemd-journald[183]: Runtime Journal (/run/log/journal/092c79a9507b4ec39ae550ccc0b0baf8) is 8.0M, max 159.0M, 151.0M free.
Feb 8 23:17:20.021715 systemd-modules-load[184]: Inserted module 'overlay'
Feb 8 23:17:20.036454 systemd[1]: Started systemd-journald.service.
Feb 8 23:17:20.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:17:20.053285 systemd[1]: Finished kmod-static-nodes.service.
Feb 8 23:17:20.061178 kernel: audit: type=1130 audit(1707434240.042:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:17:20.057783 systemd[1]: Finished systemd-fsck-usr.service.
Feb 8 23:17:20.061306 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 8 23:17:20.090091 kernel: audit: type=1130 audit(1707434240.057:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:17:20.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:17:20.085685 systemd-resolved[185]: Positive Trust Anchors:
Feb 8 23:17:20.085696 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 8 23:17:20.085746 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 8 23:17:20.114695 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 8 23:17:20.114724 kernel: Bridge firewalling registered
Feb 8 23:17:20.088564 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 8 23:17:20.100989 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 8 23:17:20.116940 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 8 23:17:20.122074 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 8 23:17:20.125754 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 8 23:17:20.143629 kernel: audit: type=1130 audit(1707434240.060:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:17:20.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:17:20.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:17:20.165455 kernel: audit: type=1130 audit(1707434240.086:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:17:20.158712 systemd[1]: Started systemd-resolved.service.
Feb 8 23:17:20.160877 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 8 23:17:20.162974 systemd[1]: Reached target nss-lookup.target.
Feb 8 23:17:20.171094 systemd[1]: Starting dracut-cmdline.service...
Feb 8 23:17:20.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:17:20.199040 kernel: audit: type=1130 audit(1707434240.158:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:17:20.199094 kernel: SCSI subsystem initialized
Feb 8 23:17:20.199111 kernel: audit: type=1130 audit(1707434240.160:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:17:20.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:17:20.199180 dracut-cmdline[201]: dracut-dracut-053
Feb 8 23:17:20.199180 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:17:20.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:17:20.232152 kernel: audit: type=1130 audit(1707434240.162:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:17:20.232193 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 8 23:17:20.235658 kernel: device-mapper: uevent: version 1.0.3 Feb 8 23:17:20.240995 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 8 23:17:20.245067 systemd-modules-load[184]: Inserted module 'dm_multipath' Feb 8 23:17:20.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:20.245897 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:17:20.264516 kernel: audit: type=1130 audit(1707434240.249:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:20.250494 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:17:20.274541 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:17:20.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:20.289462 kernel: audit: type=1130 audit(1707434240.277:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:20.289501 kernel: Loading iSCSI transport class v2.0-870. Feb 8 23:17:20.305457 kernel: iscsi: registered transport (tcp) Feb 8 23:17:20.329655 kernel: iscsi: registered transport (qla4xxx) Feb 8 23:17:20.329703 kernel: QLogic iSCSI HBA Driver Feb 8 23:17:20.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:20.358072 systemd[1]: Finished dracut-cmdline.service. Feb 8 23:17:20.361474 systemd[1]: Starting dracut-pre-udev.service... 
Feb 8 23:17:20.411461 kernel: raid6: avx512x4 gen() 18541 MB/s Feb 8 23:17:20.430454 kernel: raid6: avx512x4 xor() 8245 MB/s Feb 8 23:17:20.449452 kernel: raid6: avx512x2 gen() 18367 MB/s Feb 8 23:17:20.469459 kernel: raid6: avx512x2 xor() 29941 MB/s Feb 8 23:17:20.489452 kernel: raid6: avx512x1 gen() 18426 MB/s Feb 8 23:17:20.508453 kernel: raid6: avx512x1 xor() 27076 MB/s Feb 8 23:17:20.528454 kernel: raid6: avx2x4 gen() 18366 MB/s Feb 8 23:17:20.547454 kernel: raid6: avx2x4 xor() 8064 MB/s Feb 8 23:17:20.568453 kernel: raid6: avx2x2 gen() 18363 MB/s Feb 8 23:17:20.588458 kernel: raid6: avx2x2 xor() 22338 MB/s Feb 8 23:17:20.607452 kernel: raid6: avx2x1 gen() 14061 MB/s Feb 8 23:17:20.627451 kernel: raid6: avx2x1 xor() 19519 MB/s Feb 8 23:17:20.647454 kernel: raid6: sse2x4 gen() 11780 MB/s Feb 8 23:17:20.666457 kernel: raid6: sse2x4 xor() 7322 MB/s Feb 8 23:17:20.686451 kernel: raid6: sse2x2 gen() 12979 MB/s Feb 8 23:17:20.706455 kernel: raid6: sse2x2 xor() 7480 MB/s Feb 8 23:17:20.725451 kernel: raid6: sse2x1 gen() 11736 MB/s Feb 8 23:17:20.747435 kernel: raid6: sse2x1 xor() 5945 MB/s Feb 8 23:17:20.747459 kernel: raid6: using algorithm avx512x4 gen() 18541 MB/s Feb 8 23:17:20.747481 kernel: raid6: .... xor() 8245 MB/s, rmw enabled Feb 8 23:17:20.750269 kernel: raid6: using avx512x2 recovery algorithm Feb 8 23:17:20.768463 kernel: xor: automatically using best checksumming function avx Feb 8 23:17:20.863465 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 8 23:17:20.871498 systemd[1]: Finished dracut-pre-udev.service. Feb 8 23:17:20.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:20.874000 audit: BPF prog-id=7 op=LOAD Feb 8 23:17:20.874000 audit: BPF prog-id=8 op=LOAD Feb 8 23:17:20.875846 systemd[1]: Starting systemd-udevd.service... 
Feb 8 23:17:20.889510 systemd-udevd[385]: Using default interface naming scheme 'v252'. Feb 8 23:17:20.894130 systemd[1]: Started systemd-udevd.service. Feb 8 23:17:20.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:20.902707 systemd[1]: Starting dracut-pre-trigger.service... Feb 8 23:17:20.917587 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Feb 8 23:17:20.946789 systemd[1]: Finished dracut-pre-trigger.service. Feb 8 23:17:20.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:20.949410 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:17:20.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:20.984613 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:17:21.053013 kernel: cryptd: max_cpu_qlen set to 1000 Feb 8 23:17:21.053067 kernel: hv_vmbus: Vmbus version:5.2 Feb 8 23:17:21.078457 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 8 23:17:21.089469 kernel: hv_vmbus: registering driver hv_netvsc Feb 8 23:17:21.098677 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 8 23:17:21.104453 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 8 23:17:21.104482 kernel: AES CTR mode by8 optimization enabled Feb 8 23:17:21.114308 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 8 23:17:21.114341 kernel: hv_vmbus: registering driver hv_storvsc Feb 8 23:17:21.121265 kernel: scsi host1: storvsc_host_t Feb 8 23:17:21.121436 kernel: hv_vmbus: registering driver hid_hyperv Feb 8 23:17:21.121461 kernel: scsi host0: storvsc_host_t Feb 8 23:17:21.128110 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 8 23:17:21.128459 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 8 23:17:21.137468 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 8 23:17:21.137515 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 8 23:17:21.170876 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Feb 8 23:17:21.171179 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 8 23:17:21.171206 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 8 23:17:21.176663 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Feb 8 23:17:21.176830 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Feb 8 23:17:21.186521 kernel: sd 1:0:0:0: [sda] Write Protect is off Feb 8 23:17:21.186775 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 8 23:17:21.186958 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 8 23:17:21.191456 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 8 23:17:21.197486 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Feb 8 23:17:21.271935 kernel: hv_netvsc 000d3ab6-e6e2-000d-3ab6-e6e2000d3ab6 eth0: VF slot 1 added Feb 8 23:17:21.280453 kernel: hv_vmbus: registering driver hv_pci Feb 8 23:17:21.288117 kernel: hv_pci 6a50d9e8-56b2-4393-b4e5-ecadd0f6129b: PCI VMBus probing: Using version 0x10004 Feb 8 23:17:21.288293 kernel: hv_pci 6a50d9e8-56b2-4393-b4e5-ecadd0f6129b: PCI host bridge to bus 56b2:00 Feb 8 
23:17:21.305766 kernel: pci_bus 56b2:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 8 23:17:21.305932 kernel: pci_bus 56b2:00: No busn resource found for root bus, will use [bus 00-ff] Feb 8 23:17:21.314456 kernel: pci 56b2:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 8 23:17:21.323098 kernel: pci 56b2:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 8 23:17:21.338530 kernel: pci 56b2:00:02.0: enabling Extended Tags Feb 8 23:17:21.353475 kernel: pci 56b2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 56b2:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 8 23:17:21.353653 kernel: pci_bus 56b2:00: busn_res: [bus 00-ff] end is updated to 00 Feb 8 23:17:21.353773 kernel: pci 56b2:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 8 23:17:21.449460 kernel: mlx5_core 56b2:00:02.0: firmware version: 14.30.1224 Feb 8 23:17:21.611464 kernel: mlx5_core 56b2:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 8 23:17:21.616822 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 8 23:17:21.656463 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (439) Feb 8 23:17:21.669495 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:17:21.776629 kernel: mlx5_core 56b2:00:02.0: Supported tc offload range - chains: 1, prios: 1 Feb 8 23:17:21.776832 kernel: mlx5_core 56b2:00:02.0: mlx5e_tc_post_act_init:40:(pid 7): firmware level support is missing Feb 8 23:17:21.787507 kernel: hv_netvsc 000d3ab6-e6e2-000d-3ab6-e6e2000d3ab6 eth0: VF registering: eth1 Feb 8 23:17:21.787673 kernel: mlx5_core 56b2:00:02.0 eth1: joined to eth0 Feb 8 23:17:21.799508 kernel: mlx5_core 56b2:00:02.0 enP22194s1: renamed from eth1 Feb 8 23:17:21.842233 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 8 23:17:21.861952 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Feb 8 23:17:21.866755 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 8 23:17:21.877213 systemd[1]: Starting disk-uuid.service... Feb 8 23:17:21.889456 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 8 23:17:22.910625 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 8 23:17:22.911045 disk-uuid[565]: The operation has completed successfully. Feb 8 23:17:22.992302 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 8 23:17:22.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:22.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:22.992417 systemd[1]: Finished disk-uuid.service. Feb 8 23:17:22.997183 systemd[1]: Starting verity-setup.service... Feb 8 23:17:23.041465 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 8 23:17:23.338044 systemd[1]: Found device dev-mapper-usr.device. Feb 8 23:17:23.343026 systemd[1]: Mounting sysusr-usr.mount... Feb 8 23:17:23.346696 systemd[1]: Finished verity-setup.service. Feb 8 23:17:23.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:23.417467 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 8 23:17:23.416872 systemd[1]: Mounted sysusr-usr.mount. Feb 8 23:17:23.420522 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 8 23:17:23.424184 systemd[1]: Starting ignition-setup.service... Feb 8 23:17:23.428762 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 8 23:17:23.445452 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:17:23.445494 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:17:23.445505 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:17:23.493201 systemd[1]: Finished parse-ip-for-networkd.service. Feb 8 23:17:23.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:23.496000 audit: BPF prog-id=9 op=LOAD Feb 8 23:17:23.497580 systemd[1]: Starting systemd-networkd.service... Feb 8 23:17:23.524916 systemd-networkd[829]: lo: Link UP Feb 8 23:17:23.526313 systemd-networkd[829]: lo: Gained carrier Feb 8 23:17:23.527995 systemd-networkd[829]: Enumeration completed Feb 8 23:17:23.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:23.528341 systemd[1]: Started systemd-networkd.service. Feb 8 23:17:23.530362 systemd-networkd[829]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:17:23.531357 systemd[1]: Reached target network.target. Feb 8 23:17:23.536547 systemd[1]: Starting iscsiuio.service... Feb 8 23:17:23.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:23.547842 systemd[1]: Started iscsiuio.service. Feb 8 23:17:23.550954 systemd[1]: Starting iscsid.service... Feb 8 23:17:23.557046 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Feb 8 23:17:23.559515 iscsid[840]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:17:23.559515 iscsid[840]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 8 23:17:23.559515 iscsid[840]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 8 23:17:23.559515 iscsid[840]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 8 23:17:23.559515 iscsid[840]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:17:23.559515 iscsid[840]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 8 23:17:23.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:23.562386 systemd[1]: Started iscsid.service. Feb 8 23:17:23.591091 systemd[1]: Starting dracut-initqueue.service... Feb 8 23:17:23.602465 kernel: mlx5_core 56b2:00:02.0 enP22194s1: Link up Feb 8 23:17:23.604252 systemd[1]: Finished dracut-initqueue.service. Feb 8 23:17:23.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:23.606305 systemd[1]: Reached target remote-fs-pre.target. Feb 8 23:17:23.609468 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:17:23.611331 systemd[1]: Reached target remote-fs.target. Feb 8 23:17:23.614812 systemd[1]: Starting dracut-pre-mount.service...
Feb 8 23:17:23.624993 systemd[1]: Finished dracut-pre-mount.service. Feb 8 23:17:23.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:23.687414 kernel: hv_netvsc 000d3ab6-e6e2-000d-3ab6-e6e2000d3ab6 eth0: Data path switched to VF: enP22194s1 Feb 8 23:17:23.687627 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:17:23.688099 systemd-networkd[829]: enP22194s1: Link UP Feb 8 23:17:23.690921 systemd-networkd[829]: eth0: Link UP Feb 8 23:17:23.691720 systemd-networkd[829]: eth0: Gained carrier Feb 8 23:17:23.694815 systemd-networkd[829]: enP22194s1: Gained carrier Feb 8 23:17:23.728615 systemd-networkd[829]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:17:23.757941 systemd[1]: Finished ignition-setup.service. Feb 8 23:17:23.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:23.761079 systemd[1]: Starting ignition-fetch-offline.service... Feb 8 23:17:25.555586 systemd-networkd[829]: eth0: Gained IPv6LL Feb 8 23:17:27.305271 ignition[856]: Ignition 2.14.0 Feb 8 23:17:27.305287 ignition[856]: Stage: fetch-offline Feb 8 23:17:27.305370 ignition[856]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:17:27.305420 ignition[856]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:17:27.438280 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:17:27.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 8 23:17:27.439728 systemd[1]: Finished ignition-fetch-offline.service. Feb 8 23:17:27.463363 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 8 23:17:27.463392 kernel: audit: type=1130 audit(1707434247.443:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:27.438493 ignition[856]: parsed url from cmdline: "" Feb 8 23:17:27.445127 systemd[1]: Starting ignition-fetch.service... Feb 8 23:17:27.438499 ignition[856]: no config URL provided Feb 8 23:17:27.438506 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:17:27.438515 ignition[856]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:17:27.438521 ignition[856]: failed to fetch config: resource requires networking Feb 8 23:17:27.438755 ignition[856]: Ignition finished successfully Feb 8 23:17:27.453940 ignition[862]: Ignition 2.14.0 Feb 8 23:17:27.453946 ignition[862]: Stage: fetch Feb 8 23:17:27.454059 ignition[862]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:17:27.454094 ignition[862]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:17:27.476700 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:17:27.476853 ignition[862]: parsed url from cmdline: "" Feb 8 23:17:27.476858 ignition[862]: no config URL provided Feb 8 23:17:27.476864 ignition[862]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:17:27.476871 ignition[862]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:17:27.476902 ignition[862]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 8 23:17:27.549783 ignition[862]: GET result: OK Feb 8 23:17:27.549877 ignition[862]: config has been read 
from IMDS userdata Feb 8 23:17:27.549912 ignition[862]: parsing config with SHA512: 219fc06629aee3b523c617fd2248c39bcc9ceeedf9c5eb005738bf3b172585772c166dae53588dc8a250068d9f69f3894a31345d07771918dbf4ffaceb117d23 Feb 8 23:17:27.566246 unknown[862]: fetched base config from "system" Feb 8 23:17:27.566263 unknown[862]: fetched base config from "system" Feb 8 23:17:27.566782 ignition[862]: fetch: fetch complete Feb 8 23:17:27.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:27.566271 unknown[862]: fetched user config from "azure" Feb 8 23:17:27.590174 kernel: audit: type=1130 audit(1707434247.572:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:27.566787 ignition[862]: fetch: fetch passed Feb 8 23:17:27.570716 systemd[1]: Finished ignition-fetch.service. Feb 8 23:17:27.566823 ignition[862]: Ignition finished successfully Feb 8 23:17:27.573614 systemd[1]: Starting ignition-kargs.service... Feb 8 23:17:27.595810 ignition[868]: Ignition 2.14.0 Feb 8 23:17:27.595817 ignition[868]: Stage: kargs Feb 8 23:17:27.595927 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:17:27.595953 ignition[868]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:17:27.599390 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:17:27.601931 ignition[868]: kargs: kargs passed Feb 8 23:17:27.630591 kernel: audit: type=1130 audit(1707434247.610:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:17:27.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:27.605069 systemd[1]: Finished ignition-kargs.service. Feb 8 23:17:27.601987 ignition[868]: Ignition finished successfully Feb 8 23:17:27.627535 systemd[1]: Starting ignition-disks.service... Feb 8 23:17:27.642259 ignition[874]: Ignition 2.14.0 Feb 8 23:17:27.642264 ignition[874]: Stage: disks Feb 8 23:17:27.642354 ignition[874]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:17:27.642373 ignition[874]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:17:27.650021 systemd[1]: Finished ignition-disks.service. Feb 8 23:17:27.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:27.645722 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:17:27.668692 kernel: audit: type=1130 audit(1707434247.651:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:27.652076 systemd[1]: Reached target initrd-root-device.target. Feb 8 23:17:27.648449 ignition[874]: disks: disks passed Feb 8 23:17:27.664795 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:17:27.648490 ignition[874]: Ignition finished successfully Feb 8 23:17:27.668688 systemd[1]: Reached target local-fs.target. Feb 8 23:17:27.670494 systemd[1]: Reached target sysinit.target. Feb 8 23:17:27.674037 systemd[1]: Reached target basic.target. Feb 8 23:17:27.676491 systemd[1]: Starting systemd-fsck-root.service... 
Feb 8 23:17:27.749139 systemd-fsck[882]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks Feb 8 23:17:27.759318 systemd[1]: Finished systemd-fsck-root.service. Feb 8 23:17:27.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:27.776502 kernel: audit: type=1130 audit(1707434247.763:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:27.775649 systemd[1]: Mounting sysroot.mount... Feb 8 23:17:27.789459 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 8 23:17:27.790058 systemd[1]: Mounted sysroot.mount. Feb 8 23:17:27.793087 systemd[1]: Reached target initrd-root-fs.target. Feb 8 23:17:27.828925 systemd[1]: Mounting sysroot-usr.mount... Feb 8 23:17:27.832193 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 8 23:17:27.834128 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 8 23:17:27.834181 systemd[1]: Reached target ignition-diskful.target. Feb 8 23:17:27.839372 systemd[1]: Mounted sysroot-usr.mount. Feb 8 23:17:27.891823 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:17:27.899838 systemd[1]: Starting initrd-setup-root.service... 
Feb 8 23:17:27.908458 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (893) Feb 8 23:17:27.921881 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:17:27.922307 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:17:27.922484 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:17:27.929184 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory Feb 8 23:17:27.933253 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:17:27.959181 initrd-setup-root[924]: cut: /sysroot/etc/group: No such file or directory Feb 8 23:17:27.964290 initrd-setup-root[932]: cut: /sysroot/etc/shadow: No such file or directory Feb 8 23:17:27.991644 initrd-setup-root[940]: cut: /sysroot/etc/gshadow: No such file or directory Feb 8 23:17:28.453957 systemd[1]: Finished initrd-setup-root.service. Feb 8 23:17:28.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:28.460911 systemd[1]: Starting ignition-mount.service... Feb 8 23:17:28.474180 kernel: audit: type=1130 audit(1707434248.459:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:28.472736 systemd[1]: Starting sysroot-boot.service... Feb 8 23:17:28.479223 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 8 23:17:28.479347 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 8 23:17:28.498799 systemd[1]: Finished sysroot-boot.service. 
Feb 8 23:17:28.502195 ignition[960]: INFO : Ignition 2.14.0 Feb 8 23:17:28.502195 ignition[960]: INFO : Stage: mount Feb 8 23:17:28.502195 ignition[960]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:17:28.502195 ignition[960]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:17:28.502195 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:17:28.522809 kernel: audit: type=1130 audit(1707434248.504:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:28.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:28.517153 systemd[1]: Finished ignition-mount.service. Feb 8 23:17:28.523172 ignition[960]: INFO : mount: mount passed Feb 8 23:17:28.523172 ignition[960]: INFO : Ignition finished successfully Feb 8 23:17:28.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:28.543454 kernel: audit: type=1130 audit(1707434248.527:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:17:29.436062 coreos-metadata[892]: Feb 08 23:17:29.435 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 8 23:17:29.468739 coreos-metadata[892]: Feb 08 23:17:29.468 INFO Fetch successful Feb 8 23:17:29.503412 coreos-metadata[892]: Feb 08 23:17:29.503 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 8 23:17:29.519402 coreos-metadata[892]: Feb 08 23:17:29.519 INFO Fetch successful Feb 8 23:17:29.538753 coreos-metadata[892]: Feb 08 23:17:29.538 INFO wrote hostname ci-3510.3.2-a-eeebf457fd to /sysroot/etc/hostname Feb 8 23:17:29.544191 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 8 23:17:29.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:29.548934 systemd[1]: Starting ignition-files.service... Feb 8 23:17:29.560857 kernel: audit: type=1130 audit(1707434249.547:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:17:29.567587 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:17:29.580453 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (972) Feb 8 23:17:29.588249 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:17:29.588286 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:17:29.588300 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:17:29.595960 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 8 23:17:29.609141 ignition[991]: INFO : Ignition 2.14.0
Feb 8 23:17:29.609141 ignition[991]: INFO : Stage: files
Feb 8 23:17:29.612279 ignition[991]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:17:29.612279 ignition[991]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:17:29.625468 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:17:29.641453 ignition[991]: DEBUG : files: compiled without relabeling support, skipping
Feb 8 23:17:29.644344 ignition[991]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 8 23:17:29.644344 ignition[991]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 8 23:17:29.733471 ignition[991]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 8 23:17:29.738994 ignition[991]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 8 23:17:29.754922 unknown[991]: wrote ssh authorized keys file for user: core
Feb 8 23:17:29.757820 ignition[991]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 8 23:17:29.773681 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 8 23:17:29.779852 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 8 23:17:35.166371 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 8 23:17:35.378170 ignition[991]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 8 23:17:35.386033 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 8 23:17:35.386033 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 8 23:17:35.386033 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 8 23:17:35.867417 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 8 23:17:35.974044 ignition[991]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 8 23:17:35.981609 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 8 23:17:35.981609 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 8 23:17:35.981609 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1
Feb 8 23:17:36.822969 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 8 23:18:00.842757 ignition[991]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836
Feb 8 23:18:00.851184 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 8 23:18:00.851184 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 8 23:18:00.851184 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1
Feb 8 23:18:01.553044 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 8 23:18:51.988008 ignition[991]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560
Feb 8 23:18:51.988008 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 8 23:18:52.006299 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Feb 8 23:18:52.006299 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Feb 8 23:18:52.006299 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 8 23:18:52.006299 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 8 23:18:52.006299 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 8 23:18:52.006299 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 8 23:18:52.006299 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 8 23:18:52.006299 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition
Feb 8 23:18:52.042291 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (996)
Feb 8 23:18:52.029490 systemd[1]: mnt-oem3450919728.mount: Deactivated successfully.
Feb 8 23:18:52.044627 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3450919728"
Feb 8 23:18:52.044627 ignition[991]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3450919728": device or resource busy
Feb 8 23:18:52.044627 ignition[991]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3450919728", trying btrfs: device or resource busy
Feb 8 23:18:52.044627 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3450919728"
Feb 8 23:18:52.044627 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3450919728"
Feb 8 23:18:52.044627 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem3450919728"
Feb 8 23:18:52.044627 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem3450919728"
Feb 8 23:18:52.044627 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 8 23:18:52.044627 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 8 23:18:52.044627 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(e): oem config not found in "/usr/share/oem", looking on oem partition
Feb 8 23:18:52.044627 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem404812479"
Feb 8 23:18:52.044627 ignition[991]: CRITICAL : files: createFilesystemsFiles: createFiles: op(e): op(f): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem404812479": device or resource busy
Feb 8 23:18:52.044627 ignition[991]: ERROR : files: createFilesystemsFiles: createFiles: op(e): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem404812479", trying btrfs: device or resource busy
Feb 8 23:18:52.044627 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem404812479"
Feb 8 23:18:52.043749 systemd[1]: mnt-oem404812479.mount: Deactivated successfully.
Feb 8 23:18:52.055042 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem404812479"
Feb 8 23:18:52.055042 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [started] unmounting "/mnt/oem404812479"
Feb 8 23:18:52.055042 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [finished] unmounting "/mnt/oem404812479"
Feb 8 23:18:52.055042 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 8 23:18:52.055042 ignition[991]: INFO : files: op(12): [started] processing unit "waagent.service"
Feb 8 23:18:52.055042 ignition[991]: INFO : files: op(12): [finished] processing unit "waagent.service"
Feb 8 23:18:52.055042 ignition[991]: INFO : files: op(13): [started] processing unit "nvidia.service"
Feb 8 23:18:52.055042 ignition[991]: INFO : files: op(13): [finished] processing unit "nvidia.service"
Feb 8 23:18:52.055042 ignition[991]: INFO : files: op(14): [started] processing unit "prepare-cni-plugins.service"
Feb 8 23:18:52.055042 ignition[991]: INFO : files: op(14): op(15): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 8 23:18:52.055042 ignition[991]: INFO : files: op(14): op(15): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 8 23:18:52.055042 ignition[991]: INFO : files: op(14): [finished] processing unit "prepare-cni-plugins.service"
Feb 8 23:18:52.055042 ignition[991]: INFO : files: op(16): [started] processing unit "prepare-critools.service"
Feb 8 23:18:52.055042 ignition[991]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 8 23:18:52.055042 ignition[991]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 8 23:18:52.055042 ignition[991]: INFO : files: op(16): [finished] processing unit "prepare-critools.service"
Feb 8 23:18:52.055042 ignition[991]: INFO : files: op(18): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 8 23:18:52.055042 ignition[991]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 8 23:18:52.055042 ignition[991]: INFO : files: op(19): [started] setting preset to enabled for "prepare-critools.service"
Feb 8 23:18:52.197575 kernel: audit: type=1130 audit(1707434332.166:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.046058 systemd[1]: Finished ignition-files.service.
Feb 8 23:18:52.201116 ignition[991]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-critools.service"
Feb 8 23:18:52.201116 ignition[991]: INFO : files: op(1a): [started] setting preset to enabled for "waagent.service"
Feb 8 23:18:52.201116 ignition[991]: INFO : files: op(1a): [finished] setting preset to enabled for "waagent.service"
Feb 8 23:18:52.201116 ignition[991]: INFO : files: op(1b): [started] setting preset to enabled for "nvidia.service"
Feb 8 23:18:52.201116 ignition[991]: INFO : files: op(1b): [finished] setting preset to enabled for "nvidia.service"
Feb 8 23:18:52.201116 ignition[991]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 8 23:18:52.201116 ignition[991]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 8 23:18:52.201116 ignition[991]: INFO : files: files passed
Feb 8 23:18:52.201116 ignition[991]: INFO : Ignition finished successfully
Feb 8 23:18:52.182358 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 8 23:18:52.201082 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 8 23:18:52.234740 systemd[1]: Starting ignition-quench.service...
Feb 8 23:18:52.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.240727 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 8 23:18:52.268383 kernel: audit: type=1130 audit(1707434332.248:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.268410 kernel: audit: type=1131 audit(1707434332.263:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.268497 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 8 23:18:52.240801 systemd[1]: Finished ignition-quench.service.
Feb 8 23:18:52.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.268913 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 8 23:18:52.302533 kernel: audit: type=1130 audit(1707434332.284:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.284312 systemd[1]: Reached target ignition-complete.target.
Feb 8 23:18:52.298404 systemd[1]: Starting initrd-parse-etc.service...
Feb 8 23:18:52.312498 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 8 23:18:52.314571 systemd[1]: Finished initrd-parse-etc.service.
Feb 8 23:18:52.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.318231 systemd[1]: Reached target initrd-fs.target.
Feb 8 23:18:52.343972 kernel: audit: type=1130 audit(1707434332.318:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.344000 kernel: audit: type=1131 audit(1707434332.318:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.340086 systemd[1]: Reached target initrd.target.
Feb 8 23:18:52.344025 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 8 23:18:52.349071 systemd[1]: Starting dracut-pre-pivot.service...
Feb 8 23:18:52.359381 systemd[1]: Finished dracut-pre-pivot.service.
Feb 8 23:18:52.374736 kernel: audit: type=1130 audit(1707434332.362:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.374428 systemd[1]: Starting initrd-cleanup.service...
Feb 8 23:18:52.386690 systemd[1]: Stopped target nss-lookup.target.
Feb 8 23:18:52.390143 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 8 23:18:52.393975 systemd[1]: Stopped target timers.target.
Feb 8 23:18:52.397488 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 8 23:18:52.397627 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 8 23:18:52.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.411809 systemd[1]: Stopped target initrd.target.
Feb 8 23:18:52.416581 kernel: audit: type=1131 audit(1707434332.401:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.416691 systemd[1]: Stopped target basic.target.
Feb 8 23:18:52.420259 systemd[1]: Stopped target ignition-complete.target.
Feb 8 23:18:52.426491 systemd[1]: Stopped target ignition-diskful.target.
Feb 8 23:18:52.430010 systemd[1]: Stopped target initrd-root-device.target.
Feb 8 23:18:52.433434 systemd[1]: Stopped target remote-fs.target.
Feb 8 23:18:52.436927 systemd[1]: Stopped target remote-fs-pre.target.
Feb 8 23:18:52.440416 systemd[1]: Stopped target sysinit.target.
Feb 8 23:18:52.443773 systemd[1]: Stopped target local-fs.target.
Feb 8 23:18:52.447062 systemd[1]: Stopped target local-fs-pre.target.
Feb 8 23:18:52.450463 systemd[1]: Stopped target swap.target.
Feb 8 23:18:52.453711 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 8 23:18:52.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.453837 systemd[1]: Stopped dracut-pre-mount.service.
Feb 8 23:18:52.472198 kernel: audit: type=1131 audit(1707434332.456:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.467679 systemd[1]: Stopped target cryptsetup.target.
Feb 8 23:18:52.472139 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 8 23:18:52.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.472301 systemd[1]: Stopped dracut-initqueue.service.
Feb 8 23:18:52.493997 kernel: audit: type=1131 audit(1707434332.475:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.475778 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 8 23:18:52.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.475910 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 8 23:18:52.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.488838 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 8 23:18:52.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.488925 systemd[1]: Stopped ignition-files.service.
Feb 8 23:18:52.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.519587 ignition[1029]: INFO : Ignition 2.14.0
Feb 8 23:18:52.519587 ignition[1029]: INFO : Stage: umount
Feb 8 23:18:52.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.494157 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 8 23:18:52.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.531391 ignition[1029]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:18:52.531391 ignition[1029]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:18:52.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.494290 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 8 23:18:52.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.552387 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:18:52.552387 ignition[1029]: INFO : umount: umount passed
Feb 8 23:18:52.552387 ignition[1029]: INFO : Ignition finished successfully
Feb 8 23:18:52.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.500470 systemd[1]: Stopping ignition-mount.service...
Feb 8 23:18:52.503306 systemd[1]: Stopping iscsiuio.service...
Feb 8 23:18:52.504817 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 8 23:18:52.504988 systemd[1]: Stopped kmod-static-nodes.service.
Feb 8 23:18:52.508299 systemd[1]: Stopping sysroot-boot.service...
Feb 8 23:18:52.510993 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 8 23:18:52.511162 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 8 23:18:52.517333 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 8 23:18:52.517502 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 8 23:18:52.523580 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 8 23:18:52.523683 systemd[1]: Stopped iscsiuio.service.
Feb 8 23:18:52.529553 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 8 23:18:52.529646 systemd[1]: Finished initrd-cleanup.service.
Feb 8 23:18:52.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.537476 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 8 23:18:52.537552 systemd[1]: Stopped ignition-mount.service.
Feb 8 23:18:52.540721 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 8 23:18:52.540772 systemd[1]: Stopped ignition-disks.service.
Feb 8 23:18:52.542525 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 8 23:18:52.542562 systemd[1]: Stopped ignition-kargs.service.
Feb 8 23:18:52.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.545789 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 8 23:18:52.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.545830 systemd[1]: Stopped ignition-fetch.service.
Feb 8 23:18:52.547524 systemd[1]: Stopped target network.target.
Feb 8 23:18:52.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.628000 audit: BPF prog-id=6 op=UNLOAD
Feb 8 23:18:52.549702 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 8 23:18:52.549774 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 8 23:18:52.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.552719 systemd[1]: Stopped target paths.target.
Feb 8 23:18:52.558538 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 8 23:18:52.559141 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 8 23:18:52.565471 systemd[1]: Stopped target slices.target.
Feb 8 23:18:52.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.570792 systemd[1]: Stopped target sockets.target.
Feb 8 23:18:52.574071 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 8 23:18:52.579125 systemd[1]: Closed iscsid.socket.
Feb 8 23:18:52.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.582802 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 8 23:18:52.584456 systemd[1]: Closed iscsiuio.socket.
Feb 8 23:18:52.591370 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 8 23:18:52.592715 systemd[1]: Stopped ignition-setup.service.
Feb 8 23:18:52.600527 systemd[1]: Stopping systemd-networkd.service...
Feb 8 23:18:52.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.610612 systemd[1]: Stopping systemd-resolved.service...
Feb 8 23:18:52.612491 systemd-networkd[829]: eth0: DHCPv6 lease lost
Feb 8 23:18:52.666000 audit: BPF prog-id=9 op=UNLOAD
Feb 8 23:18:52.614742 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 8 23:18:52.615271 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 8 23:18:52.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.615368 systemd[1]: Stopped systemd-networkd.service.
Feb 8 23:18:52.619377 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 8 23:18:52.619481 systemd[1]: Stopped systemd-resolved.service.
Feb 8 23:18:52.625329 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 8 23:18:52.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.625416 systemd[1]: Stopped sysroot-boot.service.
Feb 8 23:18:52.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.628471 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 8 23:18:52.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.628515 systemd[1]: Closed systemd-networkd.socket.
Feb 8 23:18:52.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.632417 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 8 23:18:52.632843 systemd[1]: Stopped initrd-setup-root.service.
Feb 8 23:18:52.636772 systemd[1]: Stopping network-cleanup.service...
Feb 8 23:18:52.639148 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 8 23:18:52.639207 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 8 23:18:52.646639 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 8 23:18:52.646691 systemd[1]: Stopped systemd-sysctl.service.
Feb 8 23:18:52.656189 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 8 23:18:52.658068 systemd[1]: Stopped systemd-modules-load.service.
Feb 8 23:18:52.664484 systemd[1]: Stopping systemd-udevd.service...
Feb 8 23:18:52.671305 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 8 23:18:52.671456 systemd[1]: Stopped systemd-udevd.service.
Feb 8 23:18:52.675263 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 8 23:18:52.675301 systemd[1]: Closed systemd-udevd-control.socket.
Feb 8 23:18:52.678504 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 8 23:18:52.678541 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 8 23:18:52.682363 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 8 23:18:52.682425 systemd[1]: Stopped dracut-pre-udev.service.
Feb 8 23:18:52.748070 kernel: hv_netvsc 000d3ab6-e6e2-000d-3ab6-e6e2000d3ab6 eth0: Data path switched from VF: enP22194s1
Feb 8 23:18:52.686240 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 8 23:18:52.686852 systemd[1]: Stopped dracut-cmdline.service.
Feb 8 23:18:52.691075 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 8 23:18:52.691126 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 8 23:18:52.693698 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 8 23:18:52.696475 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 8 23:18:52.696535 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 8 23:18:52.701860 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 8 23:18:52.701945 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 8 23:18:52.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:18:52.768878 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 8 23:18:52.768961 systemd[1]: Stopped network-cleanup.service.
Feb 8 23:18:52.770970 systemd[1]: Reached target initrd-switch-root.target.
Feb 8 23:18:52.779073 systemd[1]: Starting initrd-switch-root.service...
Feb 8 23:18:52.789931 systemd[1]: Switching root.
Feb 8 23:18:52.816560 iscsid[840]: iscsid shutting down.
Feb 8 23:18:52.818198 systemd-journald[183]: Received SIGTERM from PID 1 (n/a).
Feb 8 23:18:52.818261 systemd-journald[183]: Journal stopped
Feb 8 23:19:07.792587 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 8 23:19:07.792619 kernel: SELinux: Class anon_inode not defined in policy.
Feb 8 23:19:07.792632 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 8 23:19:07.792641 kernel: SELinux: policy capability network_peer_controls=1 Feb 8 23:19:07.792652 kernel: SELinux: policy capability open_perms=1 Feb 8 23:19:07.792660 kernel: SELinux: policy capability extended_socket_class=1 Feb 8 23:19:07.792673 kernel: SELinux: policy capability always_check_network=0 Feb 8 23:19:07.792686 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 8 23:19:07.792694 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 8 23:19:07.792705 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 8 23:19:07.792714 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 8 23:19:07.792725 systemd[1]: Successfully loaded SELinux policy in 307.775ms. Feb 8 23:19:07.792738 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.742ms. Feb 8 23:19:07.792751 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:19:07.792764 systemd[1]: Detected virtualization microsoft. Feb 8 23:19:07.792776 systemd[1]: Detected architecture x86-64. Feb 8 23:19:07.792788 systemd[1]: Detected first boot. Feb 8 23:19:07.792797 systemd[1]: Hostname set to . Feb 8 23:19:07.792810 systemd[1]: Initializing machine ID from random generator. Feb 8 23:19:07.792824 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Feb 8 23:19:07.792832 kernel: kauditd_printk_skb: 40 callbacks suppressed Feb 8 23:19:07.792846 kernel: audit: type=1400 audit(1707434337.520:88): avc: denied { associate } for pid=1062 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:19:07.792859 kernel: audit: type=1300 audit(1707434337.520:88): arch=c000003e syscall=188 success=yes exit=0 a0=c0001078d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1045 pid=1062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:19:07.792870 kernel: audit: type=1327 audit(1707434337.520:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:19:07.792883 kernel: audit: type=1400 audit(1707434337.529:89): avc: denied { associate } for pid=1062 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:19:07.792895 kernel: audit: type=1300 audit(1707434337.529:89): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079a9 a2=1ed a3=0 items=2 ppid=1045 pid=1062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:19:07.792904 kernel: audit: type=1307 audit(1707434337.529:89): cwd="/" Feb 8 23:19:07.792915 kernel: audit: type=1302 audit(1707434337.529:89): 
item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:19:07.792927 kernel: audit: type=1302 audit(1707434337.529:89): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:19:07.792937 kernel: audit: type=1327 audit(1707434337.529:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:19:07.792950 systemd[1]: Populated /etc with preset unit settings. Feb 8 23:19:07.792960 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:19:07.792969 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:19:07.792983 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 8 23:19:07.792992 kernel: audit: type=1334 audit(1707434347.300:90): prog-id=12 op=LOAD Feb 8 23:19:07.793004 kernel: audit: type=1334 audit(1707434347.300:91): prog-id=3 op=UNLOAD Feb 8 23:19:07.793016 kernel: audit: type=1334 audit(1707434347.306:92): prog-id=13 op=LOAD Feb 8 23:19:07.793025 kernel: audit: type=1334 audit(1707434347.312:93): prog-id=14 op=LOAD Feb 8 23:19:07.793038 kernel: audit: type=1334 audit(1707434347.312:94): prog-id=4 op=UNLOAD Feb 8 23:19:07.793048 kernel: audit: type=1334 audit(1707434347.312:95): prog-id=5 op=UNLOAD Feb 8 23:19:07.793061 kernel: audit: type=1334 audit(1707434347.320:96): prog-id=15 op=LOAD Feb 8 23:19:07.793072 kernel: audit: type=1334 audit(1707434347.320:97): prog-id=12 op=UNLOAD Feb 8 23:19:07.793086 kernel: audit: type=1334 audit(1707434347.342:98): prog-id=16 op=LOAD Feb 8 23:19:07.793095 kernel: audit: type=1334 audit(1707434347.346:99): prog-id=17 op=LOAD Feb 8 23:19:07.793107 systemd[1]: iscsid.service: Deactivated successfully. Feb 8 23:19:07.793119 systemd[1]: Stopped iscsid.service. Feb 8 23:19:07.793132 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 8 23:19:07.793144 systemd[1]: Stopped initrd-switch-root.service. Feb 8 23:19:07.793157 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 8 23:19:07.793168 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 8 23:19:07.793180 systemd[1]: Created slice system-addon\x2drun.slice. Feb 8 23:19:07.793192 systemd[1]: Created slice system-getty.slice. Feb 8 23:19:07.793202 systemd[1]: Created slice system-modprobe.slice. Feb 8 23:19:07.793215 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 8 23:19:07.793228 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 8 23:19:07.793241 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 8 23:19:07.793252 systemd[1]: Created slice user.slice. Feb 8 23:19:07.793264 systemd[1]: Started systemd-ask-password-console.path. 
Feb 8 23:19:07.793274 systemd[1]: Started systemd-ask-password-wall.path. Feb 8 23:19:07.793286 systemd[1]: Set up automount boot.automount. Feb 8 23:19:07.793299 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 8 23:19:07.793309 systemd[1]: Stopped target initrd-switch-root.target. Feb 8 23:19:07.793321 systemd[1]: Stopped target initrd-fs.target. Feb 8 23:19:07.793337 systemd[1]: Stopped target initrd-root-fs.target. Feb 8 23:19:07.793349 systemd[1]: Reached target integritysetup.target. Feb 8 23:19:07.793361 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:19:07.793372 systemd[1]: Reached target remote-fs.target. Feb 8 23:19:07.793384 systemd[1]: Reached target slices.target. Feb 8 23:19:07.793396 systemd[1]: Reached target swap.target. Feb 8 23:19:07.793406 systemd[1]: Reached target torcx.target. Feb 8 23:19:07.793419 systemd[1]: Reached target veritysetup.target. Feb 8 23:19:07.793433 systemd[1]: Listening on systemd-coredump.socket. Feb 8 23:19:07.793454 systemd[1]: Listening on systemd-initctl.socket. Feb 8 23:19:07.793467 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:19:07.793477 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:19:07.793490 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:19:07.793504 systemd[1]: Listening on systemd-userdbd.socket. Feb 8 23:19:07.793515 systemd[1]: Mounting dev-hugepages.mount... Feb 8 23:19:07.793527 systemd[1]: Mounting dev-mqueue.mount... Feb 8 23:19:07.793539 systemd[1]: Mounting media.mount... Feb 8 23:19:07.793550 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:19:07.793561 systemd[1]: Mounting sys-kernel-debug.mount... Feb 8 23:19:07.793573 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 8 23:19:07.793584 systemd[1]: Mounting tmp.mount... Feb 8 23:19:07.793598 systemd[1]: Starting flatcar-tmpfiles.service... 
Feb 8 23:19:07.793611 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 8 23:19:07.793623 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:19:07.793636 systemd[1]: Starting modprobe@configfs.service... Feb 8 23:19:07.793646 systemd[1]: Starting modprobe@dm_mod.service... Feb 8 23:19:07.793658 systemd[1]: Starting modprobe@drm.service... Feb 8 23:19:07.793671 systemd[1]: Starting modprobe@efi_pstore.service... Feb 8 23:19:07.793681 systemd[1]: Starting modprobe@fuse.service... Feb 8 23:19:07.793693 systemd[1]: Starting modprobe@loop.service... Feb 8 23:19:07.793707 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 8 23:19:07.793720 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 8 23:19:07.793731 systemd[1]: Stopped systemd-fsck-root.service. Feb 8 23:19:07.793743 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 8 23:19:07.793754 systemd[1]: Stopped systemd-fsck-usr.service. Feb 8 23:19:07.793766 systemd[1]: Stopped systemd-journald.service. Feb 8 23:19:07.793778 systemd[1]: Starting systemd-journald.service... Feb 8 23:19:07.793789 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:19:07.793801 systemd[1]: Starting systemd-network-generator.service... Feb 8 23:19:07.793814 kernel: loop: module loaded Feb 8 23:19:07.793826 systemd[1]: Starting systemd-remount-fs.service... Feb 8 23:19:07.793838 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:19:07.793849 systemd[1]: verity-setup.service: Deactivated successfully. Feb 8 23:19:07.793862 systemd[1]: Stopped verity-setup.service. Feb 8 23:19:07.793875 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:19:07.793885 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:19:07.793898 systemd[1]: Mounted dev-mqueue.mount. 
Feb 8 23:19:07.793911 systemd[1]: Mounted media.mount. Feb 8 23:19:07.793924 systemd[1]: Mounted sys-kernel-debug.mount. Feb 8 23:19:07.793935 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 8 23:19:07.793947 systemd[1]: Mounted tmp.mount. Feb 8 23:19:07.793959 systemd[1]: Finished flatcar-tmpfiles.service. Feb 8 23:19:07.793970 kernel: fuse: init (API version 7.34) Feb 8 23:19:07.793981 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:19:07.793996 systemd-journald[1167]: Journal started Feb 8 23:19:07.794050 systemd-journald[1167]: Runtime Journal (/run/log/journal/1c923fc918184535a1d02f45cc4dcada) is 8.0M, max 159.0M, 151.0M free. Feb 8 23:18:55.448000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 8 23:18:56.117000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:18:56.137000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:18:56.137000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:18:56.137000 audit: BPF prog-id=10 op=LOAD Feb 8 23:18:56.137000 audit: BPF prog-id=10 op=UNLOAD Feb 8 23:18:56.137000 audit: BPF prog-id=11 op=LOAD Feb 8 23:18:56.137000 audit: BPF prog-id=11 op=UNLOAD Feb 8 23:18:57.520000 audit[1062]: AVC avc: denied { associate } for pid=1062 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:18:57.520000 audit[1062]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 
a0=c0001078d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1045 pid=1062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:18:57.520000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:18:57.529000 audit[1062]: AVC avc: denied { associate } for pid=1062 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:18:57.529000 audit[1062]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079a9 a2=1ed a3=0 items=2 ppid=1045 pid=1062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:18:57.529000 audit: CWD cwd="/" Feb 8 23:18:57.529000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:18:57.529000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:18:57.529000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 
23:19:07.300000 audit: BPF prog-id=12 op=LOAD Feb 8 23:19:07.300000 audit: BPF prog-id=3 op=UNLOAD Feb 8 23:19:07.306000 audit: BPF prog-id=13 op=LOAD Feb 8 23:19:07.312000 audit: BPF prog-id=14 op=LOAD Feb 8 23:19:07.312000 audit: BPF prog-id=4 op=UNLOAD Feb 8 23:19:07.312000 audit: BPF prog-id=5 op=UNLOAD Feb 8 23:19:07.320000 audit: BPF prog-id=15 op=LOAD Feb 8 23:19:07.320000 audit: BPF prog-id=12 op=UNLOAD Feb 8 23:19:07.342000 audit: BPF prog-id=16 op=LOAD Feb 8 23:19:07.346000 audit: BPF prog-id=17 op=LOAD Feb 8 23:19:07.346000 audit: BPF prog-id=13 op=UNLOAD Feb 8 23:19:07.346000 audit: BPF prog-id=14 op=UNLOAD Feb 8 23:19:07.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.358000 audit: BPF prog-id=15 op=UNLOAD Feb 8 23:19:07.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:19:07.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.693000 audit: BPF prog-id=18 op=LOAD Feb 8 23:19:07.695000 audit: BPF prog-id=19 op=LOAD Feb 8 23:19:07.695000 audit: BPF prog-id=20 op=LOAD Feb 8 23:19:07.695000 audit: BPF prog-id=16 op=UNLOAD Feb 8 23:19:07.695000 audit: BPF prog-id=17 op=UNLOAD Feb 8 23:19:07.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:19:07.787000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:19:07.787000 audit[1167]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe02d36930 a2=4000 a3=7ffe02d369cc items=0 ppid=1 pid=1167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:19:07.787000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:19:07.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:18:57.455893 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:18:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:19:07.299719 systemd[1]: Queued start job for default target multi-user.target. Feb 8 23:18:57.475226 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:18:57Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:19:07.348498 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 8 23:18:57.475251 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:18:57Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:18:57.475296 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:18:57Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 8 23:18:57.475309 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:18:57Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 8 23:18:57.475359 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:18:57Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 8 23:18:57.475383 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:18:57Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 8 23:18:57.475655 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:18:57Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 8 23:18:57.475700 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:18:57Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:18:57.475721 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:18:57Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:18:57.491658 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:18:57Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 8 23:18:57.491718 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:18:57Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 8 23:18:57.491738 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:18:57Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 8 23:18:57.491752 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:18:57Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 8 23:18:57.491776 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:18:57Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 8 23:18:57.491790 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:18:57Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 8 23:19:06.151753 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:19:06Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:19:06.151981 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:19:06Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:19:06.152072 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:19:06Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 
23:19:07.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:06.152228 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:19:06Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:19:06.152272 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:19:06Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 8 23:19:06.152329 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-02-08T23:19:06Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 8 23:19:07.802206 systemd[1]: Started systemd-journald.service. Feb 8 23:19:07.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.803064 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:19:07.803204 systemd[1]: Finished modprobe@configfs.service. Feb 8 23:19:07.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:19:07.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.805905 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:19:07.806057 systemd[1]: Finished modprobe@dm_mod.service. Feb 8 23:19:07.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.808810 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 8 23:19:07.808945 systemd[1]: Finished modprobe@drm.service. Feb 8 23:19:07.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.811286 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:19:07.811420 systemd[1]: Finished modprobe@efi_pstore.service. Feb 8 23:19:07.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:19:07.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.814144 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 8 23:19:07.814281 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:19:07.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.816274 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 8 23:19:07.816407 systemd[1]: Finished modprobe@loop.service. Feb 8 23:19:07.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.818325 systemd[1]: Finished systemd-network-generator.service. Feb 8 23:19:07.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:19:07.820494 systemd[1]: Finished systemd-remount-fs.service. 
Feb 8 23:19:07.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:07.822746 systemd[1]: Reached target network-pre.target.
Feb 8 23:19:07.826999 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 8 23:19:07.830371 systemd[1]: Mounting sys-kernel-config.mount...
Feb 8 23:19:07.832577 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 8 23:19:07.834639 systemd[1]: Starting systemd-hwdb-update.service...
Feb 8 23:19:07.837623 systemd[1]: Starting systemd-journal-flush.service...
Feb 8 23:19:07.839512 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 8 23:19:07.840573 systemd[1]: Starting systemd-random-seed.service...
Feb 8 23:19:07.842318 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 8 23:19:07.843597 systemd[1]: Starting systemd-sysusers.service...
Feb 8 23:19:07.848044 systemd[1]: Finished systemd-modules-load.service.
Feb 8 23:19:07.850523 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 8 23:19:07.852658 systemd[1]: Mounted sys-kernel-config.mount.
Feb 8 23:19:07.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:07.855865 systemd[1]: Starting systemd-sysctl.service...
Feb 8 23:19:07.873924 systemd[1]: Finished systemd-random-seed.service.
Feb 8 23:19:07.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:07.876016 systemd[1]: Reached target first-boot-complete.target.
Feb 8 23:19:07.891578 systemd-journald[1167]: Time spent on flushing to /var/log/journal/1c923fc918184535a1d02f45cc4dcada is 31.643ms for 1172 entries.
Feb 8 23:19:07.891578 systemd-journald[1167]: System Journal (/var/log/journal/1c923fc918184535a1d02f45cc4dcada) is 8.0M, max 2.6G, 2.6G free.
Feb 8 23:19:07.970161 systemd-journald[1167]: Received client request to flush runtime journal.
Feb 8 23:19:07.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:07.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:07.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:07.920303 systemd[1]: Finished systemd-sysctl.service.
Feb 8 23:19:07.934829 systemd[1]: Finished systemd-udev-trigger.service.
Feb 8 23:19:07.974299 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 8 23:19:07.938715 systemd[1]: Starting systemd-udev-settle.service...
Feb 8 23:19:07.971056 systemd[1]: Finished systemd-journal-flush.service.
Feb 8 23:19:08.406780 systemd[1]: Finished systemd-sysusers.service.
Feb 8 23:19:08.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:09.109886 systemd[1]: Finished systemd-hwdb-update.service.
Feb 8 23:19:09.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:09.112000 audit: BPF prog-id=21 op=LOAD
Feb 8 23:19:09.112000 audit: BPF prog-id=22 op=LOAD
Feb 8 23:19:09.112000 audit: BPF prog-id=7 op=UNLOAD
Feb 8 23:19:09.112000 audit: BPF prog-id=8 op=UNLOAD
Feb 8 23:19:09.113240 systemd[1]: Starting systemd-udevd.service...
Feb 8 23:19:09.131133 systemd-udevd[1188]: Using default interface naming scheme 'v252'.
Feb 8 23:19:09.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:09.489000 audit: BPF prog-id=23 op=LOAD
Feb 8 23:19:09.485368 systemd[1]: Started systemd-udevd.service.
Feb 8 23:19:09.490584 systemd[1]: Starting systemd-networkd.service...
Feb 8 23:19:09.522611 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 8 23:19:09.607000 audit: BPF prog-id=24 op=LOAD
Feb 8 23:19:09.607000 audit: BPF prog-id=25 op=LOAD
Feb 8 23:19:09.607000 audit: BPF prog-id=26 op=LOAD
Feb 8 23:19:09.608881 systemd[1]: Starting systemd-userdbd.service...
Feb 8 23:19:09.635462 kernel: mousedev: PS/2 mouse device common for all mice
Feb 8 23:19:09.619000 audit[1198]: AVC avc: denied { confidentiality } for pid=1198 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 8 23:19:09.659465 kernel: hv_vmbus: registering driver hv_balloon
Feb 8 23:19:09.659526 kernel: hv_utils: Registering HyperV Utility Driver
Feb 8 23:19:09.659552 kernel: hv_vmbus: registering driver hv_utils
Feb 8 23:19:09.689285 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 8 23:19:09.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:09.685081 systemd[1]: Started systemd-userdbd.service.
Feb 8 23:19:09.619000 audit[1198]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55fcf3a0ef20 a1=f884 a2=7fdeccee5bc5 a3=5 items=12 ppid=1188 pid=1198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:19:09.619000 audit: CWD cwd="/"
Feb 8 23:19:09.701452 kernel: hv_vmbus: registering driver hyperv_fb
Feb 8 23:19:09.619000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:19:09.619000 audit: PATH item=1 name=(null) inode=15593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:19:09.619000 audit: PATH item=2 name=(null) inode=15593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:19:09.619000 audit: PATH item=3 name=(null) inode=15594 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:19:09.619000 audit: PATH item=4 name=(null) inode=15593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:19:09.619000 audit: PATH item=5 name=(null) inode=15595 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:19:09.619000 audit: PATH item=6 name=(null) inode=15593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:19:09.619000 audit: PATH item=7 name=(null) inode=15596 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:19:09.619000 audit: PATH item=8 name=(null) inode=15593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:19:09.619000 audit: PATH item=9 name=(null) inode=15597 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:19:09.619000 audit: PATH item=10 name=(null) inode=15593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:19:09.619000 audit: PATH item=11 name=(null) inode=15598 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:19:09.619000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 8 23:19:09.716479 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 8 23:19:09.716536 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 8 23:19:09.723155 kernel: Console: switching to colour dummy device 80x25
Feb 8 23:19:09.731469 kernel: hv_utils: Heartbeat IC version 3.0
Feb 8 23:19:09.739929 kernel: Console: switching to colour frame buffer device 128x48
Feb 8 23:19:09.739991 kernel: hv_utils: Shutdown IC version 3.2
Feb 8 23:19:09.740035 kernel: hv_utils: TimeSync IC version 4.0
Feb 8 23:19:10.079346 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1198)
Feb 8 23:19:10.136420 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Feb 8 23:19:10.151849 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 8 23:19:10.170649 systemd[1]: Finished systemd-udev-settle.service.
Feb 8 23:19:10.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:10.174291 systemd[1]: Starting lvm2-activation-early.service...
Feb 8 23:19:10.234921 systemd-networkd[1200]: lo: Link UP
Feb 8 23:19:10.234935 systemd-networkd[1200]: lo: Gained carrier
Feb 8 23:19:10.235634 systemd-networkd[1200]: Enumeration completed
Feb 8 23:19:10.235736 systemd[1]: Started systemd-networkd.service.
Feb 8 23:19:10.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:10.239641 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 8 23:19:10.269467 systemd-networkd[1200]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 8 23:19:10.328357 kernel: mlx5_core 56b2:00:02.0 enP22194s1: Link up
Feb 8 23:19:10.366365 kernel: hv_netvsc 000d3ab6-e6e2-000d-3ab6-e6e2000d3ab6 eth0: Data path switched to VF: enP22194s1
Feb 8 23:19:10.367468 systemd-networkd[1200]: enP22194s1: Link UP
Feb 8 23:19:10.367793 systemd-networkd[1200]: eth0: Link UP
Feb 8 23:19:10.367890 systemd-networkd[1200]: eth0: Gained carrier
Feb 8 23:19:10.373058 systemd-networkd[1200]: enP22194s1: Gained carrier
Feb 8 23:19:10.402450 systemd-networkd[1200]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 8 23:19:10.608471 lvm[1264]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 8 23:19:10.633424 systemd[1]: Finished lvm2-activation-early.service.
Feb 8 23:19:10.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:10.636620 systemd[1]: Reached target cryptsetup.target.
Feb 8 23:19:10.639980 systemd[1]: Starting lvm2-activation.service...
Feb 8 23:19:10.644589 lvm[1266]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 8 23:19:10.674286 systemd[1]: Finished lvm2-activation.service.
Feb 8 23:19:10.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:10.676801 systemd[1]: Reached target local-fs-pre.target.
Feb 8 23:19:10.679118 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 8 23:19:10.679147 systemd[1]: Reached target local-fs.target.
Feb 8 23:19:10.681062 systemd[1]: Reached target machines.target.
Feb 8 23:19:10.684182 systemd[1]: Starting ldconfig.service...
Feb 8 23:19:10.686226 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 8 23:19:10.686353 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 8 23:19:10.687483 systemd[1]: Starting systemd-boot-update.service...
Feb 8 23:19:10.690562 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 8 23:19:10.694106 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 8 23:19:10.696436 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 8 23:19:10.696528 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 8 23:19:10.697567 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 8 23:19:10.729514 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1268 (bootctl)
Feb 8 23:19:10.731036 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 8 23:19:10.810774 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 8 23:19:10.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:11.059076 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 8 23:19:11.059727 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 8 23:19:11.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:11.138314 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 8 23:19:11.194176 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 8 23:19:11.297114 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 8 23:19:11.797453 systemd-networkd[1200]: eth0: Gained IPv6LL
Feb 8 23:19:11.803706 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 8 23:19:11.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:11.856048 systemd-fsck[1276]: fsck.fat 4.2 (2021-01-31)
Feb 8 23:19:11.856048 systemd-fsck[1276]: /dev/sda1: 789 files, 115332/258078 clusters
Feb 8 23:19:11.858341 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 8 23:19:11.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:11.863908 systemd[1]: Mounting boot.mount...
Feb 8 23:19:11.877794 systemd[1]: Mounted boot.mount.
Feb 8 23:19:11.892228 systemd[1]: Finished systemd-boot-update.service.
Feb 8 23:19:11.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:14.004849 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 8 23:19:14.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:14.009001 systemd[1]: Starting audit-rules.service...
Feb 8 23:19:14.009791 kernel: kauditd_printk_skb: 79 callbacks suppressed
Feb 8 23:19:14.009856 kernel: audit: type=1130 audit(1707434354.006:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:14.021625 systemd[1]: Starting clean-ca-certificates.service...
Feb 8 23:19:14.024777 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 8 23:19:14.026000 audit: BPF prog-id=27 op=LOAD
Feb 8 23:19:14.032015 systemd[1]: Starting systemd-resolved.service...
Feb 8 23:19:14.032346 kernel: audit: type=1334 audit(1707434354.026:163): prog-id=27 op=LOAD
Feb 8 23:19:14.040507 kernel: audit: type=1334 audit(1707434354.033:164): prog-id=28 op=LOAD
Feb 8 23:19:14.033000 audit: BPF prog-id=28 op=LOAD
Feb 8 23:19:14.039054 systemd[1]: Starting systemd-timesyncd.service...
Feb 8 23:19:14.041884 systemd[1]: Starting systemd-update-utmp.service...
Feb 8 23:19:14.076000 audit[1288]: SYSTEM_BOOT pid=1288 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:14.089727 kernel: audit: type=1127 audit(1707434354.076:165): pid=1288 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:14.090115 systemd[1]: Finished systemd-update-utmp.service.
Feb 8 23:19:14.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:14.105468 kernel: audit: type=1130 audit(1707434354.090:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:14.131429 systemd[1]: Finished clean-ca-certificates.service.
Feb 8 23:19:14.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:14.133953 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 8 23:19:14.144652 kernel: audit: type=1130 audit(1707434354.132:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:14.151533 systemd[1]: Started systemd-timesyncd.service.
Feb 8 23:19:14.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:14.153618 systemd[1]: Reached target time-set.target.
Feb 8 23:19:14.165079 kernel: audit: type=1130 audit(1707434354.152:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:14.250544 systemd-resolved[1286]: Positive Trust Anchors:
Feb 8 23:19:14.250560 systemd-resolved[1286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 8 23:19:14.250601 systemd-resolved[1286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 8 23:19:14.288869 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 8 23:19:14.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:14.305372 kernel: audit: type=1130 audit(1707434354.287:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:14.348438 systemd-timesyncd[1287]: Contacted time server 193.1.12.167:123 (0.flatcar.pool.ntp.org).
Feb 8 23:19:14.348550 systemd-timesyncd[1287]: Initial clock synchronization to Thu 2024-02-08 23:19:14.349163 UTC.
Feb 8 23:19:14.378352 systemd-resolved[1286]: Using system hostname 'ci-3510.3.2-a-eeebf457fd'.
Feb 8 23:19:14.379837 systemd[1]: Started systemd-resolved.service.
Feb 8 23:19:14.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:14.382472 systemd[1]: Reached target network.target.
Feb 8 23:19:14.396421 kernel: audit: type=1130 audit(1707434354.381:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:19:14.397561 systemd[1]: Reached target network-online.target.
Feb 8 23:19:14.399573 systemd[1]: Reached target nss-lookup.target.
Feb 8 23:19:14.481000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 8 23:19:14.482889 augenrules[1303]: No rules
Feb 8 23:19:14.483960 systemd[1]: Finished audit-rules.service.
Feb 8 23:19:14.481000 audit[1303]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd253c0ad0 a2=420 a3=0 items=0 ppid=1282 pid=1303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:19:14.481000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 8 23:19:14.493977 kernel: audit: type=1305 audit(1707434354.481:171): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 8 23:19:18.912263 ldconfig[1267]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 8 23:19:18.921975 systemd[1]: Finished ldconfig.service.
Feb 8 23:19:18.927091 systemd[1]: Starting systemd-update-done.service...
Feb 8 23:19:18.951212 systemd[1]: Finished systemd-update-done.service.
Feb 8 23:19:18.954665 systemd[1]: Reached target sysinit.target.
Feb 8 23:19:18.958091 systemd[1]: Started motdgen.path.
Feb 8 23:19:18.961316 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 8 23:19:18.964900 systemd[1]: Started logrotate.timer.
Feb 8 23:19:18.967102 systemd[1]: Started mdadm.timer.
Feb 8 23:19:18.974438 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 8 23:19:18.976695 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 8 23:19:18.976732 systemd[1]: Reached target paths.target.
Feb 8 23:19:18.978483 systemd[1]: Reached target timers.target.
Feb 8 23:19:18.980563 systemd[1]: Listening on dbus.socket.
Feb 8 23:19:18.983217 systemd[1]: Starting docker.socket...
Feb 8 23:19:18.986995 systemd[1]: Listening on sshd.socket.
Feb 8 23:19:18.989088 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 8 23:19:18.989528 systemd[1]: Listening on docker.socket.
Feb 8 23:19:18.991405 systemd[1]: Reached target sockets.target.
Feb 8 23:19:18.993283 systemd[1]: Reached target basic.target.
Feb 8 23:19:18.995060 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 8 23:19:18.995092 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 8 23:19:18.996037 systemd[1]: Starting containerd.service...
Feb 8 23:19:18.998511 systemd[1]: Starting dbus.service...
Feb 8 23:19:19.000885 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 8 23:19:19.003697 systemd[1]: Starting extend-filesystems.service...
Feb 8 23:19:19.005558 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 8 23:19:19.006828 systemd[1]: Starting motdgen.service...
Feb 8 23:19:19.010857 systemd[1]: Started nvidia.service.
Feb 8 23:19:19.013808 systemd[1]: Starting prepare-cni-plugins.service...
Feb 8 23:19:19.016782 systemd[1]: Starting prepare-critools.service...
Feb 8 23:19:19.019924 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 8 23:19:19.022895 systemd[1]: Starting sshd-keygen.service...
Feb 8 23:19:19.028665 systemd[1]: Starting systemd-logind.service...
Feb 8 23:19:19.030684 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 8 23:19:19.030756 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 8 23:19:19.031248 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 8 23:19:19.032013 systemd[1]: Starting update-engine.service...
Feb 8 23:19:19.035643 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 8 23:19:19.047183 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 8 23:19:19.047414 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 8 23:19:19.060723 jq[1313]: false
Feb 8 23:19:19.061169 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 8 23:19:19.061359 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 8 23:19:19.061805 jq[1328]: true
Feb 8 23:19:19.096949 systemd[1]: motdgen.service: Deactivated successfully.
Feb 8 23:19:19.097149 systemd[1]: Finished motdgen.service.
Feb 8 23:19:19.137629 systemd-logind[1326]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 8 23:19:19.145694 systemd-logind[1326]: New seat seat0.
Feb 8 23:19:19.149467 jq[1335]: true
Feb 8 23:19:19.182084 extend-filesystems[1314]: Found sda
Feb 8 23:19:19.184299 extend-filesystems[1314]: Found sda1
Feb 8 23:19:19.186016 extend-filesystems[1314]: Found sda2
Feb 8 23:19:19.186016 extend-filesystems[1314]: Found sda3
Feb 8 23:19:19.186016 extend-filesystems[1314]: Found usr
Feb 8 23:19:19.186016 extend-filesystems[1314]: Found sda4
Feb 8 23:19:19.186016 extend-filesystems[1314]: Found sda6
Feb 8 23:19:19.186016 extend-filesystems[1314]: Found sda7
Feb 8 23:19:19.186016 extend-filesystems[1314]: Found sda9
Feb 8 23:19:19.186016 extend-filesystems[1314]: Checking size of /dev/sda9
Feb 8 23:19:19.218136 tar[1332]: ./
Feb 8 23:19:19.218136 tar[1332]: ./loopback
Feb 8 23:19:19.219085 env[1337]: time="2024-02-08T23:19:19.205005596Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 8 23:19:19.219281 tar[1333]: crictl
Feb 8 23:19:19.288857 tar[1332]: ./bandwidth
Feb 8 23:19:19.301999 env[1337]: time="2024-02-08T23:19:19.300839808Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 8 23:19:19.301999 env[1337]: time="2024-02-08T23:19:19.300999614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 8 23:19:19.302856 env[1337]: time="2024-02-08T23:19:19.302814585Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 8 23:19:19.302856 env[1337]: time="2024-02-08T23:19:19.302855386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 8 23:19:19.303128 env[1337]: time="2024-02-08T23:19:19.303101296Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 8 23:19:19.303181 env[1337]: time="2024-02-08T23:19:19.303130797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 8 23:19:19.303181 env[1337]: time="2024-02-08T23:19:19.303149098Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 8 23:19:19.303181 env[1337]: time="2024-02-08T23:19:19.303163498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 8 23:19:19.303295 env[1337]: time="2024-02-08T23:19:19.303262702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 8 23:19:19.303571 env[1337]: time="2024-02-08T23:19:19.303544113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 8 23:19:19.303747 env[1337]: time="2024-02-08T23:19:19.303720020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 8 23:19:19.303801 env[1337]: time="2024-02-08T23:19:19.303749421Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 8 23:19:19.303845 env[1337]: time="2024-02-08T23:19:19.303816824Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 8 23:19:19.303845 env[1337]: time="2024-02-08T23:19:19.303833924Z" level=info msg="metadata content store policy set" policy=shared
Feb 8 23:19:19.322400 extend-filesystems[1314]: Old size kept for /dev/sda9
Feb 8 23:19:19.346193 extend-filesystems[1314]: Found sr0
Feb 8 23:19:19.357831 env[1337]: time="2024-02-08T23:19:19.344175387Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 8 23:19:19.357831 env[1337]: time="2024-02-08T23:19:19.344222389Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 8 23:19:19.357831 env[1337]: time="2024-02-08T23:19:19.344239189Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 8 23:19:19.357831 env[1337]: time="2024-02-08T23:19:19.344277491Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 8 23:19:19.357831 env[1337]: time="2024-02-08T23:19:19.344296591Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 8 23:19:19.357831 env[1337]: time="2024-02-08T23:19:19.344315792Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 8 23:19:19.357831 env[1337]: time="2024-02-08T23:19:19.344355794Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 8 23:19:19.357831 env[1337]: time="2024-02-08T23:19:19.344375795Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 8 23:19:19.357831 env[1337]: time="2024-02-08T23:19:19.344392495Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 8 23:19:19.357831 env[1337]: time="2024-02-08T23:19:19.344410596Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 8 23:19:19.357831 env[1337]: time="2024-02-08T23:19:19.344428197Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 8 23:19:19.357831 env[1337]: time="2024-02-08T23:19:19.344443597Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 8 23:19:19.357831 env[1337]: time="2024-02-08T23:19:19.347361110Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 8 23:19:19.357831 env[1337]: time="2024-02-08T23:19:19.347464814Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 8 23:19:19.325800 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 8 23:19:19.326267 dbus-daemon[1312]: [system] SELinux support is enabled
Feb 8 23:19:19.358711 env[1337]: time="2024-02-08T23:19:19.347780326Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 8 23:19:19.358711 env[1337]: time="2024-02-08T23:19:19.347825428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 8 23:19:19.358711 env[1337]: time="2024-02-08T23:19:19.347848029Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 8 23:19:19.358711 env[1337]: time="2024-02-08T23:19:19.347913132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 8 23:19:19.358711 env[1337]: time="2024-02-08T23:19:19.347931632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 8 23:19:19.358711 env[1337]: time="2024-02-08T23:19:19.347948733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 8 23:19:19.358711 env[1337]: time="2024-02-08T23:19:19.347964734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 8 23:19:19.358711 env[1337]: time="2024-02-08T23:19:19.347981234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 8 23:19:19.358711 env[1337]: time="2024-02-08T23:19:19.348002135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 8 23:19:19.358711 env[1337]: time="2024-02-08T23:19:19.348017736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 8 23:19:19.358711 env[1337]: time="2024-02-08T23:19:19.348034136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 8 23:19:19.358711 env[1337]: time="2024-02-08T23:19:19.348051737Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 8 23:19:19.358711 env[1337]: time="2024-02-08T23:19:19.348184042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 8 23:19:19.358711 env[1337]: time="2024-02-08T23:19:19.348201243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 8 23:19:19.358711 env[1337]: time="2024-02-08T23:19:19.348218143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 8 23:19:19.326020 systemd[1]: Finished extend-filesystems.service.
Feb 8 23:19:19.340156 dbus-daemon[1312]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 8 23:19:19.366381 bash[1376]: Updated "/home/core/.ssh/authorized_keys"
Feb 8 23:19:19.366509 env[1337]: time="2024-02-08T23:19:19.348232844Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 8 23:19:19.366509 env[1337]: time="2024-02-08T23:19:19.348271445Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 8 23:19:19.366509 env[1337]: time="2024-02-08T23:19:19.348290746Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 8 23:19:19.366509 env[1337]: time="2024-02-08T23:19:19.348314347Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 8 23:19:19.366509 env[1337]: time="2024-02-08T23:19:19.348370549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 8 23:19:19.328208 systemd[1]: Started dbus.service.
Feb 8 23:19:19.332652 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 8 23:19:19.368261 env[1337]: time="2024-02-08T23:19:19.348642860Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 8 23:19:19.368261 env[1337]: time="2024-02-08T23:19:19.348716263Z" level=info msg="Connect containerd service"
Feb 8 23:19:19.368261 env[1337]: time="2024-02-08T23:19:19.348761064Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 8 23:19:19.368261 env[1337]: time="2024-02-08T23:19:19.350163319Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 8 23:19:19.368261 env[1337]: time="2024-02-08T23:19:19.350364426Z" level=info msg="Start subscribing containerd event"
Feb 8 23:19:19.368261 env[1337]: time="2024-02-08T23:19:19.350472631Z" level=info msg="Start recovering state"
Feb 8 23:19:19.368261 env[1337]: time="2024-02-08T23:19:19.350552734Z" level=info msg="Start event monitor"
Feb 8 23:19:19.368261 env[1337]: time="2024-02-08T23:19:19.350571034Z" level=info msg="Start snapshots syncer"
Feb 8 23:19:19.368261 env[1337]: time="2024-02-08T23:19:19.350581935Z" level=info msg="Start cni network conf syncer for default"
Feb 8 23:19:19.368261 env[1337]: time="2024-02-08T23:19:19.350592735Z" level=info msg="Start streaming server"
Feb 8 23:19:19.368261 env[1337]: time="2024-02-08T23:19:19.350938149Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 8 23:19:19.368261 env[1337]: time="2024-02-08T23:19:19.350995951Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 8 23:19:19.368261 env[1337]: time="2024-02-08T23:19:19.351058253Z" level=info msg="containerd successfully booted in 0.172392s"
Feb 8 23:19:19.332682 systemd[1]: Reached target system-config.target.
Feb 8 23:19:19.335389 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 8 23:19:19.335409 systemd[1]: Reached target user-config.target.
Feb 8 23:19:19.339076 systemd[1]: Started systemd-logind.service.
Feb 8 23:19:19.351107 systemd[1]: Started containerd.service.
Feb 8 23:19:19.362246 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 8 23:19:19.447358 systemd[1]: nvidia.service: Deactivated successfully.
Feb 8 23:19:19.448145 tar[1332]: ./ptp
Feb 8 23:19:19.561346 tar[1332]: ./vlan
Feb 8 23:19:19.662447 tar[1332]: ./host-device
Feb 8 23:19:19.749939 tar[1332]: ./tuning
Feb 8 23:19:19.821192 tar[1332]: ./vrf
Feb 8 23:19:19.890813 tar[1332]: ./sbr
Feb 8 23:19:19.966627 tar[1332]: ./tap
Feb 8 23:19:20.027909 systemd[1]: Finished prepare-critools.service.
Feb 8 23:19:20.045924 tar[1332]: ./dhcp
Feb 8 23:19:20.074466 update_engine[1327]: I0208 23:19:20.073959 1327 main.cc:92] Flatcar Update Engine starting
Feb 8 23:19:20.148072 systemd[1]: Started update-engine.service.
Feb 8 23:19:20.156298 update_engine[1327]: I0208 23:19:20.148125 1327 update_check_scheduler.cc:74] Next update check in 9m41s
Feb 8 23:19:20.153560 systemd[1]: Started locksmithd.service.
Feb 8 23:19:20.163833 tar[1332]: ./static
Feb 8 23:19:20.196669 tar[1332]: ./firewall
Feb 8 23:19:20.245163 tar[1332]: ./macvlan
Feb 8 23:19:20.290544 tar[1332]: ./dummy
Feb 8 23:19:20.334199 tar[1332]: ./bridge
Feb 8 23:19:20.382672 tar[1332]: ./ipvlan
Feb 8 23:19:20.427708 tar[1332]: ./portmap
Feb 8 23:19:20.470525 tar[1332]: ./host-local
Feb 8 23:19:20.551921 systemd[1]: Finished prepare-cni-plugins.service.
Feb 8 23:19:21.000581 sshd_keygen[1334]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 8 23:19:21.020860 systemd[1]: Finished sshd-keygen.service.
Feb 8 23:19:21.024904 systemd[1]: Starting issuegen.service...
Feb 8 23:19:21.028004 systemd[1]: Started waagent.service.
Feb 8 23:19:21.032202 systemd[1]: issuegen.service: Deactivated successfully.
Feb 8 23:19:21.032421 systemd[1]: Finished issuegen.service.
Feb 8 23:19:21.035691 systemd[1]: Starting systemd-user-sessions.service...
Feb 8 23:19:21.043104 systemd[1]: Finished systemd-user-sessions.service.
Feb 8 23:19:21.047032 systemd[1]: Started getty@tty1.service.
Feb 8 23:19:21.050763 systemd[1]: Started serial-getty@ttyS0.service.
Feb 8 23:19:21.053243 systemd[1]: Reached target getty.target.
Feb 8 23:19:21.055294 systemd[1]: Reached target multi-user.target.
Feb 8 23:19:21.058790 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 8 23:19:21.065933 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 8 23:19:21.066103 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 8 23:19:21.068902 systemd[1]: Startup finished in 919ms (firmware) + 30.167s (loader) + 968ms (kernel) + 1min 35.091s (initrd) + 26.070s (userspace) = 2min 33.217s.
Feb 8 23:19:21.542082 login[1434]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 8 23:19:21.544026 login[1435]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 8 23:19:21.569091 systemd[1]: Created slice user-500.slice.
Feb 8 23:19:21.570470 systemd[1]: Starting user-runtime-dir@500.service...
Feb 8 23:19:21.575442 systemd-logind[1326]: New session 1 of user core.
Feb 8 23:19:21.578961 systemd-logind[1326]: New session 2 of user core.
Feb 8 23:19:21.582984 systemd[1]: Finished user-runtime-dir@500.service.
Feb 8 23:19:21.584439 systemd[1]: Starting user@500.service...
Feb 8 23:19:21.588175 (systemd)[1438]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:19:21.782470 systemd[1438]: Queued start job for default target default.target.
Feb 8 23:19:21.783183 systemd[1438]: Reached target paths.target.
Feb 8 23:19:21.783220 systemd[1438]: Reached target sockets.target.
Feb 8 23:19:21.783241 systemd[1438]: Reached target timers.target.
Feb 8 23:19:21.783260 systemd[1438]: Reached target basic.target.
Feb 8 23:19:21.783346 systemd[1438]: Reached target default.target.
Feb 8 23:19:21.783397 systemd[1438]: Startup finished in 189ms.
Feb 8 23:19:21.783434 systemd[1]: Started user@500.service.
Feb 8 23:19:21.785072 systemd[1]: Started session-1.scope.
Feb 8 23:19:21.786102 systemd[1]: Started session-2.scope.
Feb 8 23:19:22.217480 locksmithd[1414]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 8 23:19:27.506948 waagent[1429]: 2024-02-08T23:19:27.506839Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Feb 8 23:19:27.510799 waagent[1429]: 2024-02-08T23:19:27.510717Z INFO Daemon Daemon OS: flatcar 3510.3.2
Feb 8 23:19:27.513204 waagent[1429]: 2024-02-08T23:19:27.513137Z INFO Daemon Daemon Python: 3.9.16
Feb 8 23:19:27.515686 waagent[1429]: 2024-02-08T23:19:27.515613Z INFO Daemon Daemon Run daemon
Feb 8 23:19:27.518114 waagent[1429]: 2024-02-08T23:19:27.518053Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2'
Feb 8 23:19:27.530138 waagent[1429]: 2024-02-08T23:19:27.530020Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 8 23:19:27.538082 waagent[1429]: 2024-02-08T23:19:27.537966Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 8 23:19:27.543606 waagent[1429]: 2024-02-08T23:19:27.543535Z INFO Daemon Daemon cloud-init is enabled: False
Feb 8 23:19:27.546409 waagent[1429]: 2024-02-08T23:19:27.546344Z INFO Daemon Daemon Using waagent for provisioning
Feb 8 23:19:27.549892 waagent[1429]: 2024-02-08T23:19:27.549829Z INFO Daemon Daemon Activate resource disk
Feb 8 23:19:27.552970 waagent[1429]: 2024-02-08T23:19:27.552904Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Feb 8 23:19:27.564280 waagent[1429]: 2024-02-08T23:19:27.564198Z INFO Daemon Daemon Found device: None
Feb 8 23:19:27.566591 waagent[1429]: 2024-02-08T23:19:27.566515Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Feb 8 23:19:27.570229 waagent[1429]: 2024-02-08T23:19:27.570171Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Feb 8 23:19:27.575579 waagent[1429]: 2024-02-08T23:19:27.575518Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 8 23:19:27.578203 waagent[1429]: 2024-02-08T23:19:27.578143Z INFO Daemon Daemon Running default provisioning handler
Feb 8 23:19:27.588374 waagent[1429]: 2024-02-08T23:19:27.588207Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 8 23:19:27.595729 waagent[1429]: 2024-02-08T23:19:27.595628Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 8 23:19:27.601297 waagent[1429]: 2024-02-08T23:19:27.601224Z INFO Daemon Daemon cloud-init is enabled: False
Feb 8 23:19:27.604171 waagent[1429]: 2024-02-08T23:19:27.604101Z INFO Daemon Daemon Copying ovf-env.xml
Feb 8 23:19:27.688751 waagent[1429]: 2024-02-08T23:19:27.685629Z INFO Daemon Daemon Successfully mounted dvd
Feb 8 23:19:27.807224 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Feb 8 23:19:27.848182 waagent[1429]: 2024-02-08T23:19:27.848028Z INFO Daemon Daemon Detect protocol endpoint
Feb 8 23:19:27.861810 waagent[1429]: 2024-02-08T23:19:27.849652Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 8 23:19:27.861810 waagent[1429]: 2024-02-08T23:19:27.850661Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Feb 8 23:19:27.861810 waagent[1429]: 2024-02-08T23:19:27.851462Z INFO Daemon Daemon Test for route to 168.63.129.16
Feb 8 23:19:27.861810 waagent[1429]: 2024-02-08T23:19:27.852566Z INFO Daemon Daemon Route to 168.63.129.16 exists
Feb 8 23:19:27.861810 waagent[1429]: 2024-02-08T23:19:27.853151Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Feb 8 23:19:28.284055 waagent[1429]: 2024-02-08T23:19:28.283981Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Feb 8 23:19:28.291432 waagent[1429]: 2024-02-08T23:19:28.285795Z INFO Daemon Daemon Wire protocol version:2012-11-30
Feb 8 23:19:28.291432 waagent[1429]: 2024-02-08T23:19:28.286574Z INFO Daemon Daemon Server preferred version:2015-04-05
Feb 8 23:19:28.533150 waagent[1429]: 2024-02-08T23:19:28.532994Z INFO Daemon Daemon Initializing goal state during protocol detection
Feb 8 23:19:28.543109 waagent[1429]: 2024-02-08T23:19:28.542993Z INFO Daemon Daemon Forcing an update of the goal state..
Feb 8 23:19:28.547872 waagent[1429]: 2024-02-08T23:19:28.544268Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Feb 8 23:19:28.636868 waagent[1429]: 2024-02-08T23:19:28.636742Z INFO Daemon Daemon Found private key matching thumbprint 1AD5E00B7CCD24E7C9BF151227154A76B1AB0599
Feb 8 23:19:28.647737 waagent[1429]: 2024-02-08T23:19:28.638209Z INFO Daemon Daemon Certificate with thumbprint C3B1D037FBE6C8CAF50457526159CC4D05F7E1EF has no matching private key.
Feb 8 23:19:28.647737 waagent[1429]: 2024-02-08T23:19:28.639256Z INFO Daemon Daemon Fetch goal state completed
Feb 8 23:19:28.690538 waagent[1429]: 2024-02-08T23:19:28.690444Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: e46f3d62-b196-4826-b76f-312aa1f58bed New eTag: 18440965677611601529]
Feb 8 23:19:28.697836 waagent[1429]: 2024-02-08T23:19:28.692828Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Feb 8 23:19:28.703955 waagent[1429]: 2024-02-08T23:19:28.703895Z INFO Daemon Daemon Starting provisioning
Feb 8 23:19:28.720805 waagent[1429]: 2024-02-08T23:19:28.705223Z INFO Daemon Daemon Handle ovf-env.xml.
Feb 8 23:19:28.720805 waagent[1429]: 2024-02-08T23:19:28.705695Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-eeebf457fd]
Feb 8 23:19:28.727481 waagent[1429]: 2024-02-08T23:19:28.727374Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-eeebf457fd]
Feb 8 23:19:28.731512 waagent[1429]: 2024-02-08T23:19:28.731444Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Feb 8 23:19:28.735208 waagent[1429]: 2024-02-08T23:19:28.735145Z INFO Daemon Daemon Primary interface is [eth0]
Feb 8 23:19:28.750969 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Feb 8 23:19:28.751219 systemd[1]: Stopped systemd-networkd-wait-online.service.
Feb 8 23:19:28.751289 systemd[1]: Stopping systemd-networkd-wait-online.service...
Feb 8 23:19:28.751680 systemd[1]: Stopping systemd-networkd.service...
Feb 8 23:19:28.756378 systemd-networkd[1200]: eth0: DHCPv6 lease lost
Feb 8 23:19:28.757645 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 8 23:19:28.757821 systemd[1]: Stopped systemd-networkd.service.
Feb 8 23:19:28.760027 systemd[1]: Starting systemd-networkd.service...
Feb 8 23:19:28.794172 systemd-networkd[1480]: enP22194s1: Link UP
Feb 8 23:19:28.794182 systemd-networkd[1480]: enP22194s1: Gained carrier
Feb 8 23:19:28.795703 systemd-networkd[1480]: eth0: Link UP
Feb 8 23:19:28.795712 systemd-networkd[1480]: eth0: Gained carrier
Feb 8 23:19:28.796134 systemd-networkd[1480]: lo: Link UP
Feb 8 23:19:28.796143 systemd-networkd[1480]: lo: Gained carrier
Feb 8 23:19:28.796471 systemd-networkd[1480]: eth0: Gained IPv6LL
Feb 8 23:19:28.796948 systemd-networkd[1480]: Enumeration completed
Feb 8 23:19:28.798427 waagent[1429]: 2024-02-08T23:19:28.798005Z INFO Daemon Daemon Create user account if not exists
Feb 8 23:19:28.797039 systemd[1]: Started systemd-networkd.service.
Feb 8 23:19:28.801148 waagent[1429]: 2024-02-08T23:19:28.801085Z INFO Daemon Daemon User core already exists, skip useradd
Feb 8 23:19:28.803158 waagent[1429]: 2024-02-08T23:19:28.803079Z INFO Daemon Daemon Configure sudoer
Feb 8 23:19:28.804545 waagent[1429]: 2024-02-08T23:19:28.804475Z INFO Daemon Daemon Configure sshd
Feb 8 23:19:28.805398 waagent[1429]: 2024-02-08T23:19:28.805319Z INFO Daemon Daemon Deploy ssh public key.
Feb 8 23:19:28.811873 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 8 23:19:28.817554 systemd-networkd[1480]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 8 23:19:28.855410 systemd-networkd[1480]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 8 23:19:28.858941 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 8 23:19:30.068375 waagent[1429]: 2024-02-08T23:19:30.068251Z INFO Daemon Daemon Provisioning complete
Feb 8 23:19:30.084553 waagent[1429]: 2024-02-08T23:19:30.084479Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Feb 8 23:19:30.087376 waagent[1429]: 2024-02-08T23:19:30.087295Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Feb 8 23:19:30.093168 waagent[1429]: 2024-02-08T23:19:30.093090Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Feb 8 23:19:30.359839 waagent[1489]: 2024-02-08T23:19:30.359674Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Feb 8 23:19:30.360580 waagent[1489]: 2024-02-08T23:19:30.360509Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 8 23:19:30.360725 waagent[1489]: 2024-02-08T23:19:30.360670Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 8 23:19:30.371255 waagent[1489]: 2024-02-08T23:19:30.371179Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Feb 8 23:19:30.371423 waagent[1489]: 2024-02-08T23:19:30.371368Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Feb 8 23:19:30.429361 waagent[1489]: 2024-02-08T23:19:30.429227Z INFO ExtHandler ExtHandler Found private key matching thumbprint 1AD5E00B7CCD24E7C9BF151227154A76B1AB0599
Feb 8 23:19:30.429580 waagent[1489]: 2024-02-08T23:19:30.429518Z INFO ExtHandler ExtHandler Certificate with thumbprint C3B1D037FBE6C8CAF50457526159CC4D05F7E1EF has no matching private key.
Feb 8 23:19:30.429816 waagent[1489]: 2024-02-08T23:19:30.429765Z INFO ExtHandler ExtHandler Fetch goal state completed
Feb 8 23:19:30.443412 waagent[1489]: 2024-02-08T23:19:30.443349Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 521261c3-f4b6-4c95-8277-6b89075e4345 New eTag: 18440965677611601529]
Feb 8 23:19:30.443989 waagent[1489]: 2024-02-08T23:19:30.443922Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Feb 8 23:19:30.533360 waagent[1489]: 2024-02-08T23:19:30.533183Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 8 23:19:30.542902 waagent[1489]: 2024-02-08T23:19:30.542821Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1489
Feb 8 23:19:30.546378 waagent[1489]: 2024-02-08T23:19:30.546296Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 8 23:19:30.547610 waagent[1489]: 2024-02-08T23:19:30.547548Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 8 23:19:30.628319 waagent[1489]: 2024-02-08T23:19:30.628256Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 8 23:19:30.628757 waagent[1489]: 2024-02-08T23:19:30.628692Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 8 23:19:30.638253 waagent[1489]: 2024-02-08T23:19:30.638198Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 8 23:19:30.638735 waagent[1489]: 2024-02-08T23:19:30.638672Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 8 23:19:30.639782 waagent[1489]: 2024-02-08T23:19:30.639716Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Feb 8 23:19:30.641037 waagent[1489]: 2024-02-08T23:19:30.640978Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 8 23:19:30.641451 waagent[1489]: 2024-02-08T23:19:30.641394Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 8 23:19:30.642114 waagent[1489]: 2024-02-08T23:19:30.642062Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 8 23:19:30.642269 waagent[1489]: 2024-02-08T23:19:30.642220Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 8 23:19:30.642457 waagent[1489]: 2024-02-08T23:19:30.642388Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 8 23:19:30.643010 waagent[1489]: 2024-02-08T23:19:30.642959Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 8 23:19:30.643559 waagent[1489]: 2024-02-08T23:19:30.643498Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 8 23:19:30.643899 waagent[1489]: 2024-02-08T23:19:30.643844Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 8 23:19:30.644107 waagent[1489]: 2024-02-08T23:19:30.644038Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 8 23:19:30.644107 waagent[1489]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 8 23:19:30.644107 waagent[1489]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Feb 8 23:19:30.644107 waagent[1489]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 8 23:19:30.644107 waagent[1489]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 8 23:19:30.644107 waagent[1489]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 8 23:19:30.644107 waagent[1489]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 8 23:19:30.644519 waagent[1489]: 2024-02-08T23:19:30.644462Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 8 23:19:30.645354 waagent[1489]: 2024-02-08T23:19:30.645282Z INFO EnvHandler ExtHandler Configure routes
Feb 8 23:19:30.648706 waagent[1489]: 2024-02-08T23:19:30.648541Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 8 23:19:30.648958 waagent[1489]: 2024-02-08T23:19:30.648903Z INFO EnvHandler ExtHandler Gateway:None
Feb 8 23:19:30.649121 waagent[1489]: 2024-02-08T23:19:30.649070Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 8 23:19:30.649375 waagent[1489]: 2024-02-08T23:19:30.649287Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb 8 23:19:30.649547 waagent[1489]: 2024-02-08T23:19:30.649479Z INFO EnvHandler ExtHandler Routes:None
Feb 8 23:19:30.660769 waagent[1489]: 2024-02-08T23:19:30.660716Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Feb 8 23:19:30.661580 waagent[1489]: 2024-02-08T23:19:30.661526Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb 8 23:19:30.663829 waagent[1489]: 2024-02-08T23:19:30.663770Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Feb 8 23:19:30.697170 waagent[1489]: 2024-02-08T23:19:30.697054Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1480'
Feb 8 23:19:30.709551 waagent[1489]: 2024-02-08T23:19:30.709487Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Feb 8 23:19:30.826694 waagent[1489]: 2024-02-08T23:19:30.826561Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 8 23:19:30.826694 waagent[1489]: Executing ['ip', '-a', '-o', 'link']:
Feb 8 23:19:30.826694 waagent[1489]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 8 23:19:30.826694 waagent[1489]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b6:e6:e2 brd ff:ff:ff:ff:ff:ff
Feb 8 23:19:30.826694 waagent[1489]: 3: enP22194s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b6:e6:e2 brd ff:ff:ff:ff:ff:ff\ altname enP22194p0s2
Feb 8 23:19:30.826694 waagent[1489]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 8 23:19:30.826694 waagent[1489]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 8 23:19:30.826694 waagent[1489]: 2: eth0 inet 10.200.8.10/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 8 23:19:30.826694 waagent[1489]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 8 23:19:30.826694 waagent[1489]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Feb 8 23:19:30.826694 waagent[1489]: 2: eth0 inet6 fe80::20d:3aff:feb6:e6e2/64 scope link \ valid_lft forever preferred_lft forever
Feb 8 23:19:31.031484 waagent[1489]: 2024-02-08T23:19:31.031361Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting
Feb 8 23:19:31.097228 waagent[1429]: 2024-02-08T23:19:31.097069Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Feb 8 23:19:31.104607 waagent[1429]: 2024-02-08T23:19:31.104538Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent
Feb 8 23:19:32.122260 waagent[1526]: 2024-02-08T23:19:32.122146Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Feb 8 23:19:32.123004 waagent[1526]: 2024-02-08T23:19:32.122936Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2
Feb 8 23:19:32.123154 waagent[1526]: 2024-02-08T23:19:32.123099Z INFO ExtHandler ExtHandler Python: 3.9.16
Feb 8 23:19:32.132485 waagent[1526]: 2024-02-08T23:19:32.132384Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 8 23:19:32.132858 waagent[1526]: 2024-02-08T23:19:32.132801Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 8 23:19:32.133019 waagent[1526]: 2024-02-08T23:19:32.132968Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 8 23:19:32.144270 waagent[1526]: 2024-02-08T23:19:32.144190Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Feb 8 23:19:32.152937 waagent[1526]: 2024-02-08T23:19:32.152874Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143
Feb 8 23:19:32.153842 waagent[1526]: 2024-02-08T23:19:32.153778Z INFO ExtHandler
Feb 8 23:19:32.153987 waagent[1526]: 2024-02-08T23:19:32.153936Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f3f9fb6e-9167-49e7-adcf-aaa5c96929bf eTag: 18440965677611601529 source: Fabric]
Feb 8 23:19:32.154685 waagent[1526]: 2024-02-08T23:19:32.154628Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Feb 8 23:19:32.155760 waagent[1526]: 2024-02-08T23:19:32.155699Z INFO ExtHandler
Feb 8 23:19:32.155895 waagent[1526]: 2024-02-08T23:19:32.155841Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Feb 8 23:19:32.163064 waagent[1526]: 2024-02-08T23:19:32.163012Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Feb 8 23:19:32.163488 waagent[1526]: 2024-02-08T23:19:32.163438Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb 8 23:19:32.181419 waagent[1526]: 2024-02-08T23:19:32.181317Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Feb 8 23:19:32.245521 waagent[1526]: 2024-02-08T23:19:32.245387Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C3B1D037FBE6C8CAF50457526159CC4D05F7E1EF', 'hasPrivateKey': False}
Feb 8 23:19:32.246490 waagent[1526]: 2024-02-08T23:19:32.246423Z INFO ExtHandler Downloaded certificate {'thumbprint': '1AD5E00B7CCD24E7C9BF151227154A76B1AB0599', 'hasPrivateKey': True}
Feb 8 23:19:32.247439 waagent[1526]: 2024-02-08T23:19:32.247380Z INFO ExtHandler Fetch goal state completed
Feb 8 23:19:32.269601 waagent[1526]: 2024-02-08T23:19:32.269523Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1526
Feb 8 23:19:32.273216 waagent[1526]: 2024-02-08T23:19:32.273107Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 8 23:19:32.274865 waagent[1526]: 2024-02-08T23:19:32.274804Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 8 23:19:32.279711 waagent[1526]: 2024-02-08T23:19:32.279657Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 8 23:19:32.280059 waagent[1526]: 2024-02-08T23:19:32.280003Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 8 23:19:32.287842 waagent[1526]: 2024-02-08T23:19:32.287786Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 8 23:19:32.288282 waagent[1526]: 2024-02-08T23:19:32.288225Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 8 23:19:32.294215 waagent[1526]: 2024-02-08T23:19:32.294114Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Feb 8 23:19:32.299436 waagent[1526]: 2024-02-08T23:19:32.299379Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Feb 8 23:19:32.300771 waagent[1526]: 2024-02-08T23:19:32.300712Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 8 23:19:32.301102 waagent[1526]: 2024-02-08T23:19:32.301042Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 8 23:19:32.301841 waagent[1526]: 2024-02-08T23:19:32.301785Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 8 23:19:32.301970 waagent[1526]: 2024-02-08T23:19:32.301913Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 8 23:19:32.302266 waagent[1526]: 2024-02-08T23:19:32.302212Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 8 23:19:32.302826 waagent[1526]: 2024-02-08T23:19:32.302769Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 8 23:19:32.303569 waagent[1526]: 2024-02-08T23:19:32.303508Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 8 23:19:32.303826 waagent[1526]: 2024-02-08T23:19:32.303770Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 8 23:19:32.304059 waagent[1526]: 2024-02-08T23:19:32.304008Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 8 23:19:32.304059 waagent[1526]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 8 23:19:32.304059 waagent[1526]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 8 23:19:32.304059 waagent[1526]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 8 23:19:32.304059 waagent[1526]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:19:32.304059 waagent[1526]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:19:32.304059 waagent[1526]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:19:32.304321 waagent[1526]: 2024-02-08T23:19:32.304268Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:19:32.305105 waagent[1526]: 2024-02-08T23:19:32.305047Z INFO EnvHandler ExtHandler Configure routes Feb 8 23:19:32.308145 waagent[1526]: 2024-02-08T23:19:32.308045Z INFO EnvHandler ExtHandler Gateway:None Feb 8 23:19:32.308539 waagent[1526]: 2024-02-08T23:19:32.308475Z INFO EnvHandler ExtHandler Routes:None Feb 8 23:19:32.309294 waagent[1526]: 2024-02-08T23:19:32.309230Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 8 23:19:32.311133 waagent[1526]: 2024-02-08T23:19:32.311068Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 8 23:19:32.325219 waagent[1526]: 2024-02-08T23:19:32.325149Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 8 23:19:32.336488 waagent[1526]: 2024-02-08T23:19:32.336367Z INFO MonitorHandler ExtHandler Network interfaces: Feb 8 23:19:32.336488 waagent[1526]: Executing ['ip', '-a', '-o', 'link']: Feb 8 23:19:32.336488 waagent[1526]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 8 23:19:32.336488 waagent[1526]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b6:e6:e2 brd ff:ff:ff:ff:ff:ff Feb 8 23:19:32.336488 waagent[1526]: 3: enP22194s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b6:e6:e2 brd ff:ff:ff:ff:ff:ff\ altname enP22194p0s2 Feb 8 23:19:32.336488 waagent[1526]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 8 23:19:32.336488 waagent[1526]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 8 23:19:32.336488 waagent[1526]: 2: eth0 inet 10.200.8.10/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 8 23:19:32.336488 waagent[1526]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 8 23:19:32.336488 waagent[1526]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 8 23:19:32.336488 waagent[1526]: 2: eth0 inet6 fe80::20d:3aff:feb6:e6e2/64 scope link \ valid_lft forever preferred_lft forever Feb 8 23:19:32.338318 waagent[1526]: 2024-02-08T23:19:32.338266Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 8 23:19:32.339568 waagent[1526]: 2024-02-08T23:19:32.339518Z INFO ExtHandler ExtHandler Downloading manifest Feb 8 23:19:32.385617 waagent[1526]: 2024-02-08T23:19:32.385514Z INFO ExtHandler ExtHandler Feb 8 23:19:32.387242 waagent[1526]: 
2024-02-08T23:19:32.387188Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: c390f82b-46f2-4868-8438-5a20197a1b48 correlation 573503be-5913-4929-b032-8163778f9723 created: 2024-02-08T23:16:38.959511Z] Feb 8 23:19:32.391298 waagent[1526]: 2024-02-08T23:19:32.391252Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 8 23:19:32.393957 waagent[1526]: 2024-02-08T23:19:32.393900Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 8 ms] Feb 8 23:19:32.420424 waagent[1526]: 2024-02-08T23:19:32.420363Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 8 23:19:32.429626 waagent[1526]: 2024-02-08T23:19:32.429552Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C01FFC1A-5059-4876-9D6C-90A8606D46C8;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 8 23:19:32.479205 waagent[1526]: 2024-02-08T23:19:32.479079Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 8 23:19:32.479205 waagent[1526]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:19:32.479205 waagent[1526]: pkts bytes target prot opt in out source destination Feb 8 23:19:32.479205 waagent[1526]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:19:32.479205 waagent[1526]: pkts bytes target prot opt in out source destination Feb 8 23:19:32.479205 waagent[1526]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:19:32.479205 waagent[1526]: pkts bytes target prot opt in out source destination Feb 8 23:19:32.479205 waagent[1526]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 8 23:19:32.479205 waagent[1526]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 8 23:19:32.479205 waagent[1526]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 8 23:19:32.486242 waagent[1526]: 2024-02-08T23:19:32.486131Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 8 23:19:32.486242 waagent[1526]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:19:32.486242 waagent[1526]: pkts bytes target prot opt in out source destination Feb 8 23:19:32.486242 waagent[1526]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:19:32.486242 waagent[1526]: pkts bytes target prot opt in out source destination Feb 8 23:19:32.486242 waagent[1526]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:19:32.486242 waagent[1526]: pkts bytes target prot opt in out source destination Feb 8 23:19:32.486242 waagent[1526]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 8 23:19:32.486242 waagent[1526]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 8 23:19:32.486242 waagent[1526]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 8 23:19:32.486821 waagent[1526]: 2024-02-08T23:19:32.486767Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 8 23:19:57.985793 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Feb 8 23:20:04.975586 update_engine[1327]: I0208 23:20:04.975500 1327 update_attempter.cc:509] Updating boot flags... Feb 8 23:20:17.067261 systemd[1]: Created slice system-sshd.slice. Feb 8 23:20:17.068949 systemd[1]: Started sshd@0-10.200.8.10:22-10.200.12.6:56636.service. Feb 8 23:20:18.406596 sshd[1613]: Accepted publickey for core from 10.200.12.6 port 56636 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:20:18.408307 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:20:18.413583 systemd-logind[1326]: New session 3 of user core. Feb 8 23:20:18.414642 systemd[1]: Started session-3.scope. Feb 8 23:20:19.356689 systemd[1]: Started sshd@1-10.200.8.10:22-10.200.12.6:56728.service. Feb 8 23:20:20.355255 sshd[1618]: Accepted publickey for core from 10.200.12.6 port 56728 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:20:20.356903 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:20:20.362482 systemd[1]: Started session-4.scope. Feb 8 23:20:20.363058 systemd-logind[1326]: New session 4 of user core. Feb 8 23:20:20.794745 sshd[1618]: pam_unix(sshd:session): session closed for user core Feb 8 23:20:20.797496 systemd[1]: sshd@1-10.200.8.10:22-10.200.12.6:56728.service: Deactivated successfully. Feb 8 23:20:20.798355 systemd[1]: session-4.scope: Deactivated successfully. Feb 8 23:20:20.798987 systemd-logind[1326]: Session 4 logged out. Waiting for processes to exit. Feb 8 23:20:20.799738 systemd-logind[1326]: Removed session 4. Feb 8 23:20:20.900170 systemd[1]: Started sshd@2-10.200.8.10:22-10.200.12.6:56734.service. 
Feb 8 23:20:21.531561 sshd[1624]: Accepted publickey for core from 10.200.12.6 port 56734 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:20:21.533190 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:20:21.537906 systemd[1]: Started session-5.scope. Feb 8 23:20:21.538348 systemd-logind[1326]: New session 5 of user core. Feb 8 23:20:21.987038 sshd[1624]: pam_unix(sshd:session): session closed for user core Feb 8 23:20:21.990281 systemd[1]: sshd@2-10.200.8.10:22-10.200.12.6:56734.service: Deactivated successfully. Feb 8 23:20:21.991202 systemd[1]: session-5.scope: Deactivated successfully. Feb 8 23:20:21.991831 systemd-logind[1326]: Session 5 logged out. Waiting for processes to exit. Feb 8 23:20:21.992614 systemd-logind[1326]: Removed session 5. Feb 8 23:20:22.091408 systemd[1]: Started sshd@3-10.200.8.10:22-10.200.12.6:56742.service. Feb 8 23:20:22.716412 sshd[1630]: Accepted publickey for core from 10.200.12.6 port 56742 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:20:22.718013 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:20:22.723418 systemd-logind[1326]: New session 6 of user core. Feb 8 23:20:22.723741 systemd[1]: Started session-6.scope. Feb 8 23:20:23.156858 sshd[1630]: pam_unix(sshd:session): session closed for user core Feb 8 23:20:23.160085 systemd[1]: sshd@3-10.200.8.10:22-10.200.12.6:56742.service: Deactivated successfully. Feb 8 23:20:23.161021 systemd[1]: session-6.scope: Deactivated successfully. Feb 8 23:20:23.161802 systemd-logind[1326]: Session 6 logged out. Waiting for processes to exit. Feb 8 23:20:23.162746 systemd-logind[1326]: Removed session 6. Feb 8 23:20:23.259922 systemd[1]: Started sshd@4-10.200.8.10:22-10.200.12.6:56746.service. 
Feb 8 23:20:23.880873 sshd[1636]: Accepted publickey for core from 10.200.12.6 port 56746 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:20:23.882358 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:20:23.887351 systemd[1]: Started session-7.scope. Feb 8 23:20:23.887830 systemd-logind[1326]: New session 7 of user core. Feb 8 23:20:24.554533 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 8 23:20:24.554878 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:20:25.412166 systemd[1]: Reloading. Feb 8 23:20:25.494728 /usr/lib/systemd/system-generators/torcx-generator[1669]: time="2024-02-08T23:20:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:20:25.494765 /usr/lib/systemd/system-generators/torcx-generator[1669]: time="2024-02-08T23:20:25Z" level=info msg="torcx already run" Feb 8 23:20:25.574157 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:20:25.574176 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:20:25.590243 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:20:25.676029 systemd[1]: Started kubelet.service. Feb 8 23:20:25.705803 systemd[1]: Starting coreos-metadata.service... 
Feb 8 23:20:25.756502 kubelet[1731]: E0208 23:20:25.756452 1731 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 8 23:20:25.758113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:20:25.758274 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:20:25.763663 coreos-metadata[1739]: Feb 08 23:20:25.763 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 8 23:20:25.766210 coreos-metadata[1739]: Feb 08 23:20:25.766 INFO Fetch successful Feb 8 23:20:25.766483 coreos-metadata[1739]: Feb 08 23:20:25.766 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 8 23:20:25.768567 coreos-metadata[1739]: Feb 08 23:20:25.768 INFO Fetch successful Feb 8 23:20:25.768909 coreos-metadata[1739]: Feb 08 23:20:25.768 INFO Fetching http://168.63.129.16/machine/267f2e95-931d-411a-84ec-f37546da1766/6b5af026%2D5adc%2D41ec%2D8eae%2D0919481b27c1.%5Fci%2D3510.3.2%2Da%2Deeebf457fd?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 8 23:20:25.771019 coreos-metadata[1739]: Feb 08 23:20:25.770 INFO Fetch successful Feb 8 23:20:25.803118 coreos-metadata[1739]: Feb 08 23:20:25.803 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 8 23:20:25.815219 coreos-metadata[1739]: Feb 08 23:20:25.815 INFO Fetch successful Feb 8 23:20:25.824944 systemd[1]: Finished coreos-metadata.service. Feb 8 23:20:29.565766 systemd[1]: Stopped kubelet.service. Feb 8 23:20:29.579607 systemd[1]: Reloading. 
Feb 8 23:20:29.641741 /usr/lib/systemd/system-generators/torcx-generator[1795]: time="2024-02-08T23:20:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:20:29.649409 /usr/lib/systemd/system-generators/torcx-generator[1795]: time="2024-02-08T23:20:29Z" level=info msg="torcx already run" Feb 8 23:20:29.744073 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:20:29.744092 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:20:29.760057 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:20:29.851850 systemd[1]: Started kubelet.service. Feb 8 23:20:29.898134 kubelet[1858]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:20:29.898134 kubelet[1858]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 8 23:20:29.898134 kubelet[1858]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 8 23:20:29.898573 kubelet[1858]: I0208 23:20:29.898193 1858 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:20:30.235050 kubelet[1858]: I0208 23:20:30.235014 1858 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 8 23:20:30.235050 kubelet[1858]: I0208 23:20:30.235042 1858 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:20:30.235321 kubelet[1858]: I0208 23:20:30.235300 1858 server.go:837] "Client rotation is on, will bootstrap in background" Feb 8 23:20:30.237911 kubelet[1858]: I0208 23:20:30.237886 1858 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:20:30.240626 kubelet[1858]: I0208 23:20:30.240605 1858 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 8 23:20:30.240875 kubelet[1858]: I0208 23:20:30.240857 1858 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:20:30.240987 kubelet[1858]: I0208 23:20:30.240964 1858 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} 
{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:20:30.241115 kubelet[1858]: I0208 23:20:30.240999 1858 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:20:30.241115 kubelet[1858]: I0208 23:20:30.241023 1858 container_manager_linux.go:302] "Creating device plugin manager" Feb 8 23:20:30.241205 kubelet[1858]: I0208 23:20:30.241133 1858 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:20:30.248368 kubelet[1858]: I0208 23:20:30.248348 1858 kubelet.go:405] "Attempting to sync node with API server" Feb 8 23:20:30.248368 kubelet[1858]: I0208 23:20:30.248371 1858 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:20:30.248520 kubelet[1858]: I0208 23:20:30.248396 1858 kubelet.go:309] "Adding apiserver pod source" Feb 8 23:20:30.248520 kubelet[1858]: I0208 23:20:30.248410 1858 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:20:30.248678 kubelet[1858]: E0208 23:20:30.248664 1858 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:30.248776 kubelet[1858]: E0208 23:20:30.248765 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:30.249296 kubelet[1858]: I0208 23:20:30.249278 1858 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:20:30.249704 kubelet[1858]: W0208 
23:20:30.249689 1858 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 8 23:20:30.250173 kubelet[1858]: I0208 23:20:30.250157 1858 server.go:1168] "Started kubelet" Feb 8 23:20:30.250649 kubelet[1858]: I0208 23:20:30.250628 1858 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:20:30.250862 kubelet[1858]: I0208 23:20:30.250843 1858 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 8 23:20:30.251849 kubelet[1858]: I0208 23:20:30.251830 1858 server.go:461] "Adding debug handlers to kubelet server" Feb 8 23:20:30.256342 kubelet[1858]: E0208 23:20:30.253405 1858 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:20:30.256342 kubelet[1858]: E0208 23:20:30.253425 1858 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:20:30.257693 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 8 23:20:30.257878 kubelet[1858]: I0208 23:20:30.257860 1858 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:20:30.263603 kubelet[1858]: I0208 23:20:30.263580 1858 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 8 23:20:30.266042 kubelet[1858]: E0208 23:20:30.265940 1858 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b206916454acde", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 250134750, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 250134750, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:20:30.266550 kubelet[1858]: W0208 23:20:30.266527 1858 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:20:30.266630 kubelet[1858]: E0208 23:20:30.266562 1858 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:20:30.266676 kubelet[1858]: W0208 23:20:30.266659 1858 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:20:30.266676 kubelet[1858]: E0208 23:20:30.266674 1858 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:20:30.266762 kubelet[1858]: E0208 23:20:30.266732 1858 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.8.10\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 8 23:20:30.271342 kubelet[1858]: E0208 23:20:30.267691 1858 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b206916486c602", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 253417986, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 253417986, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:20:30.271342 kubelet[1858]: I0208 23:20:30.267798 1858 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 8 23:20:30.288827 kubelet[1858]: W0208 23:20:30.288807 1858 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 8 23:20:30.289159 kubelet[1858]: E0208 23:20:30.289141 1858 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 8 23:20:30.293344 kubelet[1858]: I0208 23:20:30.293310 1858 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:20:30.293344 kubelet[1858]: I0208 23:20:30.293336 1858 cpu_manager.go:215] "Reconciling" 
reconcilePeriod="10s" Feb 8 23:20:30.293470 kubelet[1858]: I0208 23:20:30.293354 1858 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:20:30.293769 kubelet[1858]: E0208 23:20:30.293672 1858 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2069166de3d6e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 292704622, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 292704622, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:20:30.294425 kubelet[1858]: E0208 23:20:30.294368 1858 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2069166de679e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 292715422, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 292715422, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:20:30.294979 kubelet[1858]: E0208 23:20:30.294928 1858 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2069166de77a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 292719523, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 292719523, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:20:30.297583 kubelet[1858]: I0208 23:20:30.297565 1858 policy_none.go:49] "None policy: Start" Feb 8 23:20:30.298108 kubelet[1858]: I0208 23:20:30.298091 1858 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:20:30.298187 kubelet[1858]: I0208 23:20:30.298114 1858 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:20:30.305214 systemd[1]: Created slice kubepods.slice. Feb 8 23:20:30.312474 systemd[1]: Created slice kubepods-burstable.slice. 
Feb 8 23:20:30.314654 kubelet[1858]: I0208 23:20:30.314639 1858 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 8 23:20:30.315444 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 8 23:20:30.319173 kubelet[1858]: I0208 23:20:30.319155 1858 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 8 23:20:30.319501 kubelet[1858]: I0208 23:20:30.319485 1858 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 8 23:20:30.321272 kubelet[1858]: E0208 23:20:30.321261 1858 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.10\" not found"
Feb 8 23:20:30.321924 kubelet[1858]: I0208 23:20:30.321912 1858 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 8 23:20:30.322320 kubelet[1858]: I0208 23:20:30.322309 1858 status_manager.go:207] "Starting to sync pod status with apiserver"
Feb 8 23:20:30.322415 kubelet[1858]: I0208 23:20:30.322407 1858 kubelet.go:2257] "Starting kubelet main sync loop"
Feb 8 23:20:30.322502 kubelet[1858]: E0208 23:20:30.322496 1858 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 8 23:20:30.324275 kubelet[1858]: E0208 23:20:30.324210 1858 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b20691689a0109", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 321787145, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 321787145, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:20:30.324876 kubelet[1858]: W0208 23:20:30.324617 1858 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 8 23:20:30.325013 kubelet[1858]: E0208 23:20:30.325002 1858 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 8 23:20:30.364925 kubelet[1858]: I0208 23:20:30.364901 1858 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.10"
Feb 8 23:20:30.366193 kubelet[1858]: E0208 23:20:30.366163 1858 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.10"
Feb 8 23:20:30.366288 kubelet[1858]: E0208 23:20:30.366122 1858 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2069166de3d6e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 292704622, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 364856823, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.10.17b2069166de3d6e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:20:30.367186 kubelet[1858]: E0208 23:20:30.367115 1858 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2069166de679e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 292715422, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 364869024, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.10.17b2069166de679e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:20:30.367962 kubelet[1858]: E0208 23:20:30.367891 1858 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2069166de77a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 292719523, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 364873724, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.10.17b2069166de77a3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:20:30.468477 kubelet[1858]: E0208 23:20:30.468437 1858 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.8.10\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms"
Feb 8 23:20:30.569739 kubelet[1858]: I0208 23:20:30.567508 1858 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.10"
Feb 8 23:20:30.569739 kubelet[1858]: E0208 23:20:30.568757 1858 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2069166de3d6e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 292704622, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 567456473, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.10.17b2069166de3d6e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:20:30.569739 kubelet[1858]: E0208 23:20:30.569088 1858 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.10"
Feb 8 23:20:30.570776 kubelet[1858]: E0208 23:20:30.570696 1858 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2069166de679e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 292715422, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 567471673, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.10.17b2069166de679e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:20:30.571561 kubelet[1858]: E0208 23:20:30.571490 1858 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2069166de77a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 292719523, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 567476073, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.10.17b2069166de77a3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:20:30.870107 kubelet[1858]: E0208 23:20:30.870009 1858 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.8.10\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms"
Feb 8 23:20:30.970036 kubelet[1858]: I0208 23:20:30.969999 1858 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.10"
Feb 8 23:20:30.971316 kubelet[1858]: E0208 23:20:30.971276 1858 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.10"
Feb 8 23:20:30.971474 kubelet[1858]: E0208 23:20:30.971278 1858 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2069166de3d6e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 292704622, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 969943142, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.10.17b2069166de3d6e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:20:30.972279 kubelet[1858]: E0208 23:20:30.972213 1858 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2069166de679e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 292715422, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 969955942, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.10.17b2069166de679e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:20:30.973182 kubelet[1858]: E0208 23:20:30.973125 1858 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2069166de77a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 292719523, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 20, 30, 969965342, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.10.17b2069166de77a3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:20:31.237455 kubelet[1858]: I0208 23:20:31.237407 1858 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 8 23:20:31.249158 kubelet[1858]: E0208 23:20:31.249123 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:20:31.623649 kubelet[1858]: E0208 23:20:31.623609 1858 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.8.10" not found
Feb 8 23:20:31.673380 kubelet[1858]: E0208 23:20:31.673352 1858 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.10\" not found" node="10.200.8.10"
Feb 8 23:20:31.772780 kubelet[1858]: I0208 23:20:31.772750 1858 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.10"
Feb 8 23:20:31.776918 kubelet[1858]: I0208 23:20:31.776843 1858 kubelet_node_status.go:73] "Successfully registered node" node="10.200.8.10"
Feb 8 23:20:31.866937 sudo[1639]: pam_unix(sudo:session): session closed for user root
Feb 8 23:20:31.889483 kubelet[1858]: I0208 23:20:31.889380 1858 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 8 23:20:31.890092 env[1337]: time="2024-02-08T23:20:31.890048100Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 8 23:20:31.890521 kubelet[1858]: I0208 23:20:31.890305 1858 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 8 23:20:31.986114 sshd[1636]: pam_unix(sshd:session): session closed for user core
Feb 8 23:20:31.989623 systemd[1]: sshd@4-10.200.8.10:22-10.200.12.6:56746.service: Deactivated successfully.
Feb 8 23:20:31.990765 systemd[1]: session-7.scope: Deactivated successfully.
Feb 8 23:20:31.991708 systemd-logind[1326]: Session 7 logged out. Waiting for processes to exit.
Feb 8 23:20:31.992956 systemd-logind[1326]: Removed session 7.
Feb 8 23:20:32.249984 kubelet[1858]: E0208 23:20:32.249647 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:20:32.249984 kubelet[1858]: I0208 23:20:32.249650 1858 apiserver.go:52] "Watching apiserver"
Feb 8 23:20:32.252223 kubelet[1858]: I0208 23:20:32.252194 1858 topology_manager.go:212] "Topology Admit Handler"
Feb 8 23:20:32.252392 kubelet[1858]: I0208 23:20:32.252372 1858 topology_manager.go:212] "Topology Admit Handler"
Feb 8 23:20:32.258390 systemd[1]: Created slice kubepods-besteffort-pod26225a3f_aeb7_45bc_8503_88b637ded516.slice.
Feb 8 23:20:32.266837 systemd[1]: Created slice kubepods-burstable-pod554bb687_e224_4cfe_8c5e_d03b29408c01.slice.
Feb 8 23:20:32.268923 kubelet[1858]: I0208 23:20:32.268896 1858 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Feb 8 23:20:32.278998 kubelet[1858]: I0208 23:20:32.278977 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-host-proc-sys-net\") pod \"cilium-9nr9m\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") " pod="kube-system/cilium-9nr9m"
Feb 8 23:20:32.279097 kubelet[1858]: I0208 23:20:32.279016 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/26225a3f-aeb7-45bc-8503-88b637ded516-kube-proxy\") pod \"kube-proxy-9s24r\" (UID: \"26225a3f-aeb7-45bc-8503-88b637ded516\") " pod="kube-system/kube-proxy-9s24r"
Feb 8 23:20:32.279097 kubelet[1858]: I0208 23:20:32.279042 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-hostproc\") pod \"cilium-9nr9m\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") " pod="kube-system/cilium-9nr9m"
Feb 8 23:20:32.279097 kubelet[1858]: I0208 23:20:32.279071 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-cilium-cgroup\") pod \"cilium-9nr9m\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") " pod="kube-system/cilium-9nr9m"
Feb 8 23:20:32.279097 kubelet[1858]: I0208 23:20:32.279097 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-cni-path\") pod \"cilium-9nr9m\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") " pod="kube-system/cilium-9nr9m"
Feb 8 23:20:32.279269 kubelet[1858]: I0208 23:20:32.279122 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/554bb687-e224-4cfe-8c5e-d03b29408c01-hubble-tls\") pod \"cilium-9nr9m\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") " pod="kube-system/cilium-9nr9m"
Feb 8 23:20:32.279269 kubelet[1858]: I0208 23:20:32.279150 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26225a3f-aeb7-45bc-8503-88b637ded516-lib-modules\") pod \"kube-proxy-9s24r\" (UID: \"26225a3f-aeb7-45bc-8503-88b637ded516\") " pod="kube-system/kube-proxy-9s24r"
Feb 8 23:20:32.279269 kubelet[1858]: I0208 23:20:32.279176 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-cilium-run\") pod \"cilium-9nr9m\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") " pod="kube-system/cilium-9nr9m"
Feb 8 23:20:32.279269 kubelet[1858]: I0208 23:20:32.279204 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-lib-modules\") pod \"cilium-9nr9m\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") " pod="kube-system/cilium-9nr9m"
Feb 8 23:20:32.279269 kubelet[1858]: I0208 23:20:32.279232 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-xtables-lock\") pod \"cilium-9nr9m\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") " pod="kube-system/cilium-9nr9m"
Feb 8 23:20:32.279490 kubelet[1858]: I0208 23:20:32.279278 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-host-proc-sys-kernel\") pod \"cilium-9nr9m\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") " pod="kube-system/cilium-9nr9m"
Feb 8 23:20:32.279490 kubelet[1858]: I0208 23:20:32.279306 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-bpf-maps\") pod \"cilium-9nr9m\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") " pod="kube-system/cilium-9nr9m"
Feb 8 23:20:32.279490 kubelet[1858]: I0208 23:20:32.279367 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-etc-cni-netd\") pod \"cilium-9nr9m\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") " pod="kube-system/cilium-9nr9m"
Feb 8 23:20:32.279490 kubelet[1858]: I0208 23:20:32.279408 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/554bb687-e224-4cfe-8c5e-d03b29408c01-clustermesh-secrets\") pod \"cilium-9nr9m\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") " pod="kube-system/cilium-9nr9m"
Feb 8 23:20:32.279490 kubelet[1858]: I0208 23:20:32.279438 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/554bb687-e224-4cfe-8c5e-d03b29408c01-cilium-config-path\") pod \"cilium-9nr9m\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") " pod="kube-system/cilium-9nr9m"
Feb 8 23:20:32.279689 kubelet[1858]: I0208 23:20:32.279467 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsxz2\" (UniqueName: \"kubernetes.io/projected/554bb687-e224-4cfe-8c5e-d03b29408c01-kube-api-access-jsxz2\") pod \"cilium-9nr9m\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") " pod="kube-system/cilium-9nr9m"
Feb 8 23:20:32.279689 kubelet[1858]: I0208 23:20:32.279494 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26225a3f-aeb7-45bc-8503-88b637ded516-xtables-lock\") pod \"kube-proxy-9s24r\" (UID: \"26225a3f-aeb7-45bc-8503-88b637ded516\") " pod="kube-system/kube-proxy-9s24r"
Feb 8 23:20:32.279689 kubelet[1858]: I0208 23:20:32.279526 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8st9f\" (UniqueName: \"kubernetes.io/projected/26225a3f-aeb7-45bc-8503-88b637ded516-kube-api-access-8st9f\") pod \"kube-proxy-9s24r\" (UID: \"26225a3f-aeb7-45bc-8503-88b637ded516\") " pod="kube-system/kube-proxy-9s24r"
Feb 8 23:20:32.279689 kubelet[1858]: I0208 23:20:32.279536 1858 reconciler.go:41] "Reconciler: start to sync state"
Feb 8 23:20:32.567232 env[1337]: time="2024-02-08T23:20:32.567093956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9s24r,Uid:26225a3f-aeb7-45bc-8503-88b637ded516,Namespace:kube-system,Attempt:0,}"
Feb 8 23:20:32.574843 env[1337]: time="2024-02-08T23:20:32.574807137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9nr9m,Uid:554bb687-e224-4cfe-8c5e-d03b29408c01,Namespace:kube-system,Attempt:0,}"
Feb 8 23:20:33.250271 kubelet[1858]: E0208 23:20:33.250234 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:20:33.514341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4205584666.mount: Deactivated successfully.
Feb 8 23:20:33.538301 env[1337]: time="2024-02-08T23:20:33.538252327Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:20:33.541975 env[1337]: time="2024-02-08T23:20:33.541933464Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:20:33.549880 env[1337]: time="2024-02-08T23:20:33.549843245Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:20:33.551994 env[1337]: time="2024-02-08T23:20:33.551958967Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:20:33.554224 env[1337]: time="2024-02-08T23:20:33.554192290Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:20:33.556297 env[1337]: time="2024-02-08T23:20:33.556264211Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:20:33.558175 env[1337]: time="2024-02-08T23:20:33.558141830Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:20:33.561020 env[1337]: time="2024-02-08T23:20:33.560987260Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:20:33.642264 env[1337]: time="2024-02-08T23:20:33.642195792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:20:33.642436 env[1337]: time="2024-02-08T23:20:33.642272993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:20:33.642436 env[1337]: time="2024-02-08T23:20:33.642300993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:20:33.642537 env[1337]: time="2024-02-08T23:20:33.642459194Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6 pid=1903 runtime=io.containerd.runc.v2
Feb 8 23:20:33.647575 env[1337]: time="2024-02-08T23:20:33.647501046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:20:33.647732 env[1337]: time="2024-02-08T23:20:33.647543347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:20:33.647863 env[1337]: time="2024-02-08T23:20:33.647725048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:20:33.648208 env[1337]: time="2024-02-08T23:20:33.648148353Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b0767bd0a5f989b056327905bb6695e832c3af09d3728a50b6bd97b37fda381 pid=1914 runtime=io.containerd.runc.v2
Feb 8 23:20:33.664822 systemd[1]: Started cri-containerd-6b0767bd0a5f989b056327905bb6695e832c3af09d3728a50b6bd97b37fda381.scope.
Feb 8 23:20:33.679019 systemd[1]: Started cri-containerd-7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6.scope.
Feb 8 23:20:33.714797 env[1337]: time="2024-02-08T23:20:33.713830926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9s24r,Uid:26225a3f-aeb7-45bc-8503-88b637ded516,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b0767bd0a5f989b056327905bb6695e832c3af09d3728a50b6bd97b37fda381\""
Feb 8 23:20:33.716531 env[1337]: time="2024-02-08T23:20:33.716490153Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\""
Feb 8 23:20:33.721534 env[1337]: time="2024-02-08T23:20:33.721483304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9nr9m,Uid:554bb687-e224-4cfe-8c5e-d03b29408c01,Namespace:kube-system,Attempt:0,} returns sandbox id \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\""
Feb 8 23:20:34.250824 kubelet[1858]: E0208 23:20:34.250768 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:20:34.765458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4280443738.mount: Deactivated successfully.
Feb 8 23:20:35.251443 kubelet[1858]: E0208 23:20:35.251383 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:35.310571 env[1337]: time="2024-02-08T23:20:35.310519455Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:35.316461 env[1337]: time="2024-02-08T23:20:35.316418112Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:35.320863 env[1337]: time="2024-02-08T23:20:35.320838055Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:35.323470 env[1337]: time="2024-02-08T23:20:35.323442681Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:35.323861 env[1337]: time="2024-02-08T23:20:35.323832185Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 8 23:20:35.325832 env[1337]: time="2024-02-08T23:20:35.325211398Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 8 23:20:35.326090 env[1337]: time="2024-02-08T23:20:35.326059506Z" level=info msg="CreateContainer within sandbox \"6b0767bd0a5f989b056327905bb6695e832c3af09d3728a50b6bd97b37fda381\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 8 23:20:35.351648 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1042369005.mount: Deactivated successfully. Feb 8 23:20:35.368122 env[1337]: time="2024-02-08T23:20:35.368081915Z" level=info msg="CreateContainer within sandbox \"6b0767bd0a5f989b056327905bb6695e832c3af09d3728a50b6bd97b37fda381\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"30f616d1d1bec157d3e79ce8594308d13f0aae76518aff08d57c41a79c9367e4\"" Feb 8 23:20:35.368859 env[1337]: time="2024-02-08T23:20:35.368821022Z" level=info msg="StartContainer for \"30f616d1d1bec157d3e79ce8594308d13f0aae76518aff08d57c41a79c9367e4\"" Feb 8 23:20:35.385711 systemd[1]: Started cri-containerd-30f616d1d1bec157d3e79ce8594308d13f0aae76518aff08d57c41a79c9367e4.scope. Feb 8 23:20:35.420923 env[1337]: time="2024-02-08T23:20:35.420882728Z" level=info msg="StartContainer for \"30f616d1d1bec157d3e79ce8594308d13f0aae76518aff08d57c41a79c9367e4\" returns successfully" Feb 8 23:20:35.506213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1657882341.mount: Deactivated successfully. 
Feb 8 23:20:36.252398 kubelet[1858]: E0208 23:20:36.252357 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:36.354510 kubelet[1858]: I0208 23:20:36.354479 1858 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9s24r" podStartSLOduration=3.745890168 podCreationTimestamp="2024-02-08 23:20:31 +0000 UTC" firstStartedPulling="2024-02-08 23:20:33.715608744 +0000 UTC m=+3.858931208" lastFinishedPulling="2024-02-08 23:20:35.324165988 +0000 UTC m=+5.467488552" observedRunningTime="2024-02-08 23:20:36.35419021 +0000 UTC m=+6.497512674" watchObservedRunningTime="2024-02-08 23:20:36.354447512 +0000 UTC m=+6.497769976" Feb 8 23:20:37.253392 kubelet[1858]: E0208 23:20:37.253315 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:38.253767 kubelet[1858]: E0208 23:20:38.253665 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:39.253815 kubelet[1858]: E0208 23:20:39.253776 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:40.254547 kubelet[1858]: E0208 23:20:40.254485 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:40.830945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3620668054.mount: Deactivated successfully. 
Feb 8 23:20:41.255184 kubelet[1858]: E0208 23:20:41.255124 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:42.256010 kubelet[1858]: E0208 23:20:42.255971 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:43.257502 kubelet[1858]: E0208 23:20:43.257450 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:43.470079 env[1337]: time="2024-02-08T23:20:43.470027293Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:43.475751 env[1337]: time="2024-02-08T23:20:43.475700638Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:43.479361 env[1337]: time="2024-02-08T23:20:43.479310766Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:20:43.479933 env[1337]: time="2024-02-08T23:20:43.479882871Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 8 23:20:43.482217 env[1337]: time="2024-02-08T23:20:43.482185389Z" level=info msg="CreateContainer within sandbox \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:20:43.508276 
env[1337]: time="2024-02-08T23:20:43.508191395Z" level=info msg="CreateContainer within sandbox \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17\"" Feb 8 23:20:43.509064 env[1337]: time="2024-02-08T23:20:43.509020701Z" level=info msg="StartContainer for \"4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17\"" Feb 8 23:20:43.535055 systemd[1]: Started cri-containerd-4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17.scope. Feb 8 23:20:43.563984 env[1337]: time="2024-02-08T23:20:43.563937536Z" level=info msg="StartContainer for \"4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17\" returns successfully" Feb 8 23:20:43.567705 systemd[1]: cri-containerd-4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17.scope: Deactivated successfully. Feb 8 23:20:44.257731 kubelet[1858]: E0208 23:20:44.257666 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:44.496924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17-rootfs.mount: Deactivated successfully. 
Feb 8 23:20:45.258856 kubelet[1858]: E0208 23:20:45.258813 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:46.259833 kubelet[1858]: E0208 23:20:46.259777 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:47.260164 kubelet[1858]: E0208 23:20:47.260121 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:47.893161 env[1337]: time="2024-02-08T23:20:47.893073290Z" level=info msg="shim disconnected" id=4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17 Feb 8 23:20:47.893161 env[1337]: time="2024-02-08T23:20:47.893147790Z" level=warning msg="cleaning up after shim disconnected" id=4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17 namespace=k8s.io Feb 8 23:20:47.893690 env[1337]: time="2024-02-08T23:20:47.893173190Z" level=info msg="cleaning up dead shim" Feb 8 23:20:47.900695 env[1337]: time="2024-02-08T23:20:47.900657644Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:20:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2189 runtime=io.containerd.runc.v2\n" Feb 8 23:20:48.261085 kubelet[1858]: E0208 23:20:48.260951 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:48.368682 env[1337]: time="2024-02-08T23:20:48.368639141Z" level=info msg="CreateContainer within sandbox \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:20:48.398513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3896373641.mount: Deactivated successfully. Feb 8 23:20:48.404830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1634071523.mount: Deactivated successfully. 
Feb 8 23:20:48.418260 env[1337]: time="2024-02-08T23:20:48.418218488Z" level=info msg="CreateContainer within sandbox \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2\"" Feb 8 23:20:48.418615 env[1337]: time="2024-02-08T23:20:48.418583090Z" level=info msg="StartContainer for \"0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2\"" Feb 8 23:20:48.438099 systemd[1]: Started cri-containerd-0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2.scope. Feb 8 23:20:48.467489 env[1337]: time="2024-02-08T23:20:48.467446732Z" level=info msg="StartContainer for \"0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2\" returns successfully" Feb 8 23:20:48.474536 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:20:48.474849 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:20:48.475008 systemd[1]: Stopping systemd-sysctl.service... Feb 8 23:20:48.476919 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:20:48.479300 systemd[1]: cri-containerd-0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2.scope: Deactivated successfully. Feb 8 23:20:48.490551 systemd[1]: Finished systemd-sysctl.service. 
Feb 8 23:20:48.514206 env[1337]: time="2024-02-08T23:20:48.514099359Z" level=info msg="shim disconnected" id=0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2 Feb 8 23:20:48.514206 env[1337]: time="2024-02-08T23:20:48.514144359Z" level=warning msg="cleaning up after shim disconnected" id=0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2 namespace=k8s.io Feb 8 23:20:48.514206 env[1337]: time="2024-02-08T23:20:48.514158859Z" level=info msg="cleaning up dead shim" Feb 8 23:20:48.521590 env[1337]: time="2024-02-08T23:20:48.521557111Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:20:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2254 runtime=io.containerd.runc.v2\n" Feb 8 23:20:49.262068 kubelet[1858]: E0208 23:20:49.261973 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:49.372113 env[1337]: time="2024-02-08T23:20:49.372065101Z" level=info msg="CreateContainer within sandbox \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:20:49.395546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2-rootfs.mount: Deactivated successfully. Feb 8 23:20:49.413383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount75816500.mount: Deactivated successfully. 
Feb 8 23:20:49.425702 env[1337]: time="2024-02-08T23:20:49.425659967Z" level=info msg="CreateContainer within sandbox \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33\"" Feb 8 23:20:49.426250 env[1337]: time="2024-02-08T23:20:49.426222171Z" level=info msg="StartContainer for \"a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33\"" Feb 8 23:20:49.443213 systemd[1]: Started cri-containerd-a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33.scope. Feb 8 23:20:49.480556 systemd[1]: cri-containerd-a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33.scope: Deactivated successfully. Feb 8 23:20:49.485478 env[1337]: time="2024-02-08T23:20:49.485397975Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod554bb687_e224_4cfe_8c5e_d03b29408c01.slice/cri-containerd-a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33.scope/memory.events\": no such file or directory" Feb 8 23:20:49.487430 env[1337]: time="2024-02-08T23:20:49.487395389Z" level=info msg="StartContainer for \"a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33\" returns successfully" Feb 8 23:20:49.517906 env[1337]: time="2024-02-08T23:20:49.517014891Z" level=info msg="shim disconnected" id=a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33 Feb 8 23:20:49.517906 env[1337]: time="2024-02-08T23:20:49.517058291Z" level=warning msg="cleaning up after shim disconnected" id=a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33 namespace=k8s.io Feb 8 23:20:49.517906 env[1337]: time="2024-02-08T23:20:49.517069591Z" level=info msg="cleaning up dead shim" Feb 8 23:20:49.524832 env[1337]: time="2024-02-08T23:20:49.524799244Z" level=warning msg="cleanup 
warnings time=\"2024-02-08T23:20:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2309 runtime=io.containerd.runc.v2\n" Feb 8 23:20:50.249301 kubelet[1858]: E0208 23:20:50.249235 1858 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:50.262359 kubelet[1858]: E0208 23:20:50.262318 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:50.376183 env[1337]: time="2024-02-08T23:20:50.376133400Z" level=info msg="CreateContainer within sandbox \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:20:50.395498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33-rootfs.mount: Deactivated successfully. Feb 8 23:20:50.416488 env[1337]: time="2024-02-08T23:20:50.416439069Z" level=info msg="CreateContainer within sandbox \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c\"" Feb 8 23:20:50.417611 env[1337]: time="2024-02-08T23:20:50.417570776Z" level=info msg="StartContainer for \"2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c\"" Feb 8 23:20:50.443782 systemd[1]: Started cri-containerd-2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c.scope. Feb 8 23:20:50.471177 systemd[1]: cri-containerd-2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c.scope: Deactivated successfully. 
Feb 8 23:20:50.475689 env[1337]: time="2024-02-08T23:20:50.475513363Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod554bb687_e224_4cfe_8c5e_d03b29408c01.slice/cri-containerd-2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c.scope/memory.events\": no such file or directory" Feb 8 23:20:50.478597 env[1337]: time="2024-02-08T23:20:50.478559483Z" level=info msg="StartContainer for \"2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c\" returns successfully" Feb 8 23:20:50.495644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c-rootfs.mount: Deactivated successfully. Feb 8 23:20:50.506979 env[1337]: time="2024-02-08T23:20:50.506215468Z" level=info msg="shim disconnected" id=2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c Feb 8 23:20:50.506979 env[1337]: time="2024-02-08T23:20:50.506264868Z" level=warning msg="cleaning up after shim disconnected" id=2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c namespace=k8s.io Feb 8 23:20:50.506979 env[1337]: time="2024-02-08T23:20:50.506276768Z" level=info msg="cleaning up dead shim" Feb 8 23:20:50.514290 env[1337]: time="2024-02-08T23:20:50.514253121Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:20:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2365 runtime=io.containerd.runc.v2\n" Feb 8 23:20:51.263252 kubelet[1858]: E0208 23:20:51.263215 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:51.380728 env[1337]: time="2024-02-08T23:20:51.380603642Z" level=info msg="CreateContainer within sandbox \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:20:51.409159 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2337774840.mount: Deactivated successfully. Feb 8 23:20:51.426110 env[1337]: time="2024-02-08T23:20:51.426062438Z" level=info msg="CreateContainer within sandbox \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989\"" Feb 8 23:20:51.426823 env[1337]: time="2024-02-08T23:20:51.426787142Z" level=info msg="StartContainer for \"f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989\"" Feb 8 23:20:51.449682 systemd[1]: Started cri-containerd-f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989.scope. Feb 8 23:20:51.482602 env[1337]: time="2024-02-08T23:20:51.482556606Z" level=info msg="StartContainer for \"f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989\" returns successfully" Feb 8 23:20:51.657620 kubelet[1858]: I0208 23:20:51.657583 1858 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 8 23:20:51.989357 kernel: Initializing XFRM netlink socket Feb 8 23:20:52.264272 kubelet[1858]: E0208 23:20:52.264217 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:52.396486 kubelet[1858]: I0208 23:20:52.396436 1858 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9nr9m" podStartSLOduration=11.638944342 podCreationTimestamp="2024-02-08 23:20:31 +0000 UTC" firstStartedPulling="2024-02-08 23:20:33.722739517 +0000 UTC m=+3.866061981" lastFinishedPulling="2024-02-08 23:20:43.480196973 +0000 UTC m=+13.623519537" observedRunningTime="2024-02-08 23:20:52.396266498 +0000 UTC m=+22.539588962" watchObservedRunningTime="2024-02-08 23:20:52.396401898 +0000 UTC m=+22.539724462" Feb 8 23:20:53.264797 kubelet[1858]: E0208 23:20:53.264744 1858 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:53.668514 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 8 23:20:53.672644 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 8 23:20:53.668103 systemd-networkd[1480]: cilium_host: Link UP Feb 8 23:20:53.668308 systemd-networkd[1480]: cilium_net: Link UP Feb 8 23:20:53.673801 systemd-networkd[1480]: cilium_net: Gained carrier Feb 8 23:20:53.675766 systemd-networkd[1480]: cilium_host: Gained carrier Feb 8 23:20:53.676507 systemd-networkd[1480]: cilium_net: Gained IPv6LL Feb 8 23:20:53.677144 systemd-networkd[1480]: cilium_host: Gained IPv6LL Feb 8 23:20:53.730833 kubelet[1858]: I0208 23:20:53.730797 1858 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:20:53.737680 systemd[1]: Created slice kubepods-besteffort-pod22fce782_94b9_447a_badd_f49fa65135d7.slice. Feb 8 23:20:53.834360 kubelet[1858]: I0208 23:20:53.834314 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj26n\" (UniqueName: \"kubernetes.io/projected/22fce782-94b9-447a-badd-f49fa65135d7-kube-api-access-pj26n\") pod \"nginx-deployment-845c78c8b9-9ssjl\" (UID: \"22fce782-94b9-447a-badd-f49fa65135d7\") " pod="default/nginx-deployment-845c78c8b9-9ssjl" Feb 8 23:20:53.877190 systemd-networkd[1480]: cilium_vxlan: Link UP Feb 8 23:20:53.877199 systemd-networkd[1480]: cilium_vxlan: Gained carrier Feb 8 23:20:54.043849 env[1337]: time="2024-02-08T23:20:54.043722623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-9ssjl,Uid:22fce782-94b9-447a-badd-f49fa65135d7,Namespace:default,Attempt:0,}" Feb 8 23:20:54.120534 kernel: NET: Registered PF_ALG protocol family Feb 8 23:20:54.265627 kubelet[1858]: E0208 23:20:54.265568 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:54.849130 
systemd-networkd[1480]: lxc_health: Link UP Feb 8 23:20:54.857905 systemd-networkd[1480]: lxc_health: Gained carrier Feb 8 23:20:54.858425 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:20:55.116067 systemd-networkd[1480]: lxc4fb8f7a57411: Link UP Feb 8 23:20:55.123424 kernel: eth0: renamed from tmp88924 Feb 8 23:20:55.132035 systemd-networkd[1480]: lxc4fb8f7a57411: Gained carrier Feb 8 23:20:55.132344 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4fb8f7a57411: link becomes ready Feb 8 23:20:55.157445 systemd-networkd[1480]: cilium_vxlan: Gained IPv6LL Feb 8 23:20:55.266703 kubelet[1858]: E0208 23:20:55.266634 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:56.053477 systemd-networkd[1480]: lxc_health: Gained IPv6LL Feb 8 23:20:56.267393 kubelet[1858]: E0208 23:20:56.267284 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:56.373534 systemd-networkd[1480]: lxc4fb8f7a57411: Gained IPv6LL Feb 8 23:20:57.268240 kubelet[1858]: E0208 23:20:57.268190 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:58.268381 kubelet[1858]: E0208 23:20:58.268340 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:20:58.783886 env[1337]: time="2024-02-08T23:20:58.783809295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:20:58.784280 env[1337]: time="2024-02-08T23:20:58.783857195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:20:58.784280 env[1337]: time="2024-02-08T23:20:58.783870595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:20:58.784428 env[1337]: time="2024-02-08T23:20:58.784306897Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8892447fdc7f163aae75b89db0b6c591804411d3aa7dc17836a13873bf23eddb pid=2889 runtime=io.containerd.runc.v2 Feb 8 23:20:58.805704 systemd[1]: run-containerd-runc-k8s.io-8892447fdc7f163aae75b89db0b6c591804411d3aa7dc17836a13873bf23eddb-runc.4DAGHD.mount: Deactivated successfully. Feb 8 23:20:58.811059 systemd[1]: Started cri-containerd-8892447fdc7f163aae75b89db0b6c591804411d3aa7dc17836a13873bf23eddb.scope. Feb 8 23:20:58.851535 env[1337]: time="2024-02-08T23:20:58.851484370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-9ssjl,Uid:22fce782-94b9-447a-badd-f49fa65135d7,Namespace:default,Attempt:0,} returns sandbox id \"8892447fdc7f163aae75b89db0b6c591804411d3aa7dc17836a13873bf23eddb\"" Feb 8 23:20:58.853051 env[1337]: time="2024-02-08T23:20:58.853015379Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 8 23:20:59.269039 kubelet[1858]: E0208 23:20:59.269001 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:00.269514 kubelet[1858]: E0208 23:21:00.269463 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:01.270686 kubelet[1858]: E0208 23:21:01.270630 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:02.271127 kubelet[1858]: E0208 23:21:02.271090 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 8 23:21:02.928622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount992326530.mount: Deactivated successfully. Feb 8 23:21:03.271822 kubelet[1858]: E0208 23:21:03.271693 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:04.272243 kubelet[1858]: E0208 23:21:04.272188 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:05.273212 kubelet[1858]: E0208 23:21:05.273141 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:05.925894 env[1337]: time="2024-02-08T23:21:05.925838795Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:05.932445 env[1337]: time="2024-02-08T23:21:05.932404427Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:05.936425 env[1337]: time="2024-02-08T23:21:05.936387846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:05.940707 env[1337]: time="2024-02-08T23:21:05.940611266Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:05.941456 env[1337]: time="2024-02-08T23:21:05.941425170Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 8 23:21:05.943589 
env[1337]: time="2024-02-08T23:21:05.943557880Z" level=info msg="CreateContainer within sandbox \"8892447fdc7f163aae75b89db0b6c591804411d3aa7dc17836a13873bf23eddb\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 8 23:21:05.970251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2946990398.mount: Deactivated successfully. Feb 8 23:21:05.981286 env[1337]: time="2024-02-08T23:21:05.981239260Z" level=info msg="CreateContainer within sandbox \"8892447fdc7f163aae75b89db0b6c591804411d3aa7dc17836a13873bf23eddb\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"a561bd3cceaeec0737bc1692de047855478f667db344401017083f34f883c9f2\"" Feb 8 23:21:05.981859 env[1337]: time="2024-02-08T23:21:05.981823863Z" level=info msg="StartContainer for \"a561bd3cceaeec0737bc1692de047855478f667db344401017083f34f883c9f2\"" Feb 8 23:21:06.001073 systemd[1]: Started cri-containerd-a561bd3cceaeec0737bc1692de047855478f667db344401017083f34f883c9f2.scope. Feb 8 23:21:06.032145 env[1337]: time="2024-02-08T23:21:06.032105700Z" level=info msg="StartContainer for \"a561bd3cceaeec0737bc1692de047855478f667db344401017083f34f883c9f2\" returns successfully" Feb 8 23:21:06.274424 kubelet[1858]: E0208 23:21:06.274255 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:06.420297 kubelet[1858]: I0208 23:21:06.420268 1858 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-845c78c8b9-9ssjl" podStartSLOduration=6.331223122 podCreationTimestamp="2024-02-08 23:20:53 +0000 UTC" firstStartedPulling="2024-02-08 23:20:58.852707577 +0000 UTC m=+28.996030041" lastFinishedPulling="2024-02-08 23:21:05.941726071 +0000 UTC m=+36.085048535" observedRunningTime="2024-02-08 23:21:06.419890114 +0000 UTC m=+36.563212578" watchObservedRunningTime="2024-02-08 23:21:06.420241616 +0000 UTC m=+36.563564080" Feb 8 23:21:07.274996 kubelet[1858]: E0208 23:21:07.274932 1858 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:08.275890 kubelet[1858]: E0208 23:21:08.275831 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:09.276141 kubelet[1858]: E0208 23:21:09.276062 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:10.248874 kubelet[1858]: E0208 23:21:10.248796 1858 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:10.277205 kubelet[1858]: E0208 23:21:10.277153 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:11.254790 kubelet[1858]: I0208 23:21:11.254754 1858 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:21:11.259840 systemd[1]: Created slice kubepods-besteffort-pod723cc476_e7d2_4d68_8003_d0c9c64c2a52.slice. 
Feb 8 23:21:11.277645 kubelet[1858]: E0208 23:21:11.277611 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:11.442395 kubelet[1858]: I0208 23:21:11.442353 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnpgm\" (UniqueName: \"kubernetes.io/projected/723cc476-e7d2-4d68-8003-d0c9c64c2a52-kube-api-access-gnpgm\") pod \"nfs-server-provisioner-0\" (UID: \"723cc476-e7d2-4d68-8003-d0c9c64c2a52\") " pod="default/nfs-server-provisioner-0"
Feb 8 23:21:11.442657 kubelet[1858]: I0208 23:21:11.442636 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/723cc476-e7d2-4d68-8003-d0c9c64c2a52-data\") pod \"nfs-server-provisioner-0\" (UID: \"723cc476-e7d2-4d68-8003-d0c9c64c2a52\") " pod="default/nfs-server-provisioner-0"
Feb 8 23:21:11.564333 env[1337]: time="2024-02-08T23:21:11.564214626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:723cc476-e7d2-4d68-8003-d0c9c64c2a52,Namespace:default,Attempt:0,}"
Feb 8 23:21:11.622613 systemd-networkd[1480]: lxc2fe866ecaee3: Link UP
Feb 8 23:21:11.638210 kernel: eth0: renamed from tmpb913b
Feb 8 23:21:11.652705 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 8 23:21:11.652815 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2fe866ecaee3: link becomes ready
Feb 8 23:21:11.652983 systemd-networkd[1480]: lxc2fe866ecaee3: Gained carrier
Feb 8 23:21:11.851077 env[1337]: time="2024-02-08T23:21:11.850934341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:21:11.851235 env[1337]: time="2024-02-08T23:21:11.850970042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:21:11.851235 env[1337]: time="2024-02-08T23:21:11.850996842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:21:11.851626 env[1337]: time="2024-02-08T23:21:11.851406143Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b913b6ca65654fb09e4f5b47a92601b5fa972158bb446f6f6e539739c513ba8f pid=3013 runtime=io.containerd.runc.v2
Feb 8 23:21:11.870492 systemd[1]: Started cri-containerd-b913b6ca65654fb09e4f5b47a92601b5fa972158bb446f6f6e539739c513ba8f.scope.
Feb 8 23:21:11.910864 env[1337]: time="2024-02-08T23:21:11.910815395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:723cc476-e7d2-4d68-8003-d0c9c64c2a52,Namespace:default,Attempt:0,} returns sandbox id \"b913b6ca65654fb09e4f5b47a92601b5fa972158bb446f6f6e539739c513ba8f\""
Feb 8 23:21:11.912649 env[1337]: time="2024-02-08T23:21:11.912615703Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 8 23:21:12.278015 kubelet[1858]: E0208 23:21:12.277957 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:12.556648 systemd[1]: run-containerd-runc-k8s.io-b913b6ca65654fb09e4f5b47a92601b5fa972158bb446f6f6e539739c513ba8f-runc.Y1MXXa.mount: Deactivated successfully.
Feb 8 23:21:13.278724 kubelet[1858]: E0208 23:21:13.278664 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:13.525566 systemd-networkd[1480]: lxc2fe866ecaee3: Gained IPv6LL
Feb 8 23:21:14.279801 kubelet[1858]: E0208 23:21:14.279756 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:15.198601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1300436718.mount: Deactivated successfully.
Feb 8 23:21:15.280857 kubelet[1858]: E0208 23:21:15.280812 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:16.281865 kubelet[1858]: E0208 23:21:16.281805 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:17.189532 env[1337]: time="2024-02-08T23:21:17.189478227Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:21:17.198416 env[1337]: time="2024-02-08T23:21:17.198370761Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:21:17.203720 env[1337]: time="2024-02-08T23:21:17.203682881Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:21:17.208566 env[1337]: time="2024-02-08T23:21:17.208532599Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:21:17.209146 env[1337]: time="2024-02-08T23:21:17.209111201Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 8 23:21:17.211480 env[1337]: time="2024-02-08T23:21:17.211449810Z" level=info msg="CreateContainer within sandbox \"b913b6ca65654fb09e4f5b47a92601b5fa972158bb446f6f6e539739c513ba8f\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 8 23:21:17.240795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount478906258.mount: Deactivated successfully.
Feb 8 23:21:17.258772 env[1337]: time="2024-02-08T23:21:17.258723790Z" level=info msg="CreateContainer within sandbox \"b913b6ca65654fb09e4f5b47a92601b5fa972158bb446f6f6e539739c513ba8f\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"13076773f58f1a754848b7fe10ec3e70e25bf3f7ef4344faa74d2324a0e94f26\""
Feb 8 23:21:17.259366 env[1337]: time="2024-02-08T23:21:17.259316692Z" level=info msg="StartContainer for \"13076773f58f1a754848b7fe10ec3e70e25bf3f7ef4344faa74d2324a0e94f26\""
Feb 8 23:21:17.281952 systemd[1]: Started cri-containerd-13076773f58f1a754848b7fe10ec3e70e25bf3f7ef4344faa74d2324a0e94f26.scope.
Feb 8 23:21:17.285869 kubelet[1858]: E0208 23:21:17.284335 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:17.316523 env[1337]: time="2024-02-08T23:21:17.316487209Z" level=info msg="StartContainer for \"13076773f58f1a754848b7fe10ec3e70e25bf3f7ef4344faa74d2324a0e94f26\" returns successfully"
Feb 8 23:21:17.446554 kubelet[1858]: I0208 23:21:17.445998 1858 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.148571598 podCreationTimestamp="2024-02-08 23:21:11 +0000 UTC" firstStartedPulling="2024-02-08 23:21:11.912100901 +0000 UTC m=+42.055423465" lastFinishedPulling="2024-02-08 23:21:17.209490403 +0000 UTC m=+47.352812867" observedRunningTime="2024-02-08 23:21:17.445513098 +0000 UTC m=+47.588835562" watchObservedRunningTime="2024-02-08 23:21:17.445961 +0000 UTC m=+47.589283564"
Feb 8 23:21:18.284539 kubelet[1858]: E0208 23:21:18.284475 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:19.285465 kubelet[1858]: E0208 23:21:19.285409 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:20.285983 kubelet[1858]: E0208 23:21:20.285928 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:21.286404 kubelet[1858]: E0208 23:21:21.286344 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:22.287279 kubelet[1858]: E0208 23:21:22.287224 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:23.288287 kubelet[1858]: E0208 23:21:23.288225 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:24.288876 kubelet[1858]: E0208 23:21:24.288817 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:25.289044 kubelet[1858]: E0208 23:21:25.288989 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:26.289715 kubelet[1858]: E0208 23:21:26.289656 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:27.289863 kubelet[1858]: E0208 23:21:27.289817 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:27.330096 kubelet[1858]: I0208 23:21:27.330060 1858 topology_manager.go:212] "Topology Admit Handler"
Feb 8 23:21:27.334755 systemd[1]: Created slice kubepods-besteffort-podbf29b796_4139_49f4_9237_a515323dda3b.slice.
Feb 8 23:21:27.524504 kubelet[1858]: I0208 23:21:27.524441 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22xqn\" (UniqueName: \"kubernetes.io/projected/bf29b796-4139-49f4-9237-a515323dda3b-kube-api-access-22xqn\") pod \"test-pod-1\" (UID: \"bf29b796-4139-49f4-9237-a515323dda3b\") " pod="default/test-pod-1"
Feb 8 23:21:27.524504 kubelet[1858]: I0208 23:21:27.524511 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-927feabd-1e16-4681-bcaa-83a6416389e3\" (UniqueName: \"kubernetes.io/nfs/bf29b796-4139-49f4-9237-a515323dda3b-pvc-927feabd-1e16-4681-bcaa-83a6416389e3\") pod \"test-pod-1\" (UID: \"bf29b796-4139-49f4-9237-a515323dda3b\") " pod="default/test-pod-1"
Feb 8 23:21:27.762360 kernel: FS-Cache: Loaded
Feb 8 23:21:27.936449 kernel: RPC: Registered named UNIX socket transport module.
Feb 8 23:21:27.936600 kernel: RPC: Registered udp transport module.
Feb 8 23:21:27.936632 kernel: RPC: Registered tcp transport module.
Feb 8 23:21:27.941412 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 8 23:21:28.208379 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 8 23:21:28.290175 kubelet[1858]: E0208 23:21:28.290136 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:28.446155 kernel: NFS: Registering the id_resolver key type
Feb 8 23:21:28.446293 kernel: Key type id_resolver registered
Feb 8 23:21:28.446317 kernel: Key type id_legacy registered
Feb 8 23:21:28.798834 nfsidmap[3128]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-eeebf457fd'
Feb 8 23:21:28.820001 nfsidmap[3129]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-eeebf457fd'
Feb 8 23:21:28.839259 env[1337]: time="2024-02-08T23:21:28.839210666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:bf29b796-4139-49f4-9237-a515323dda3b,Namespace:default,Attempt:0,}"
Feb 8 23:21:28.899911 systemd-networkd[1480]: lxcb88de3e95629: Link UP
Feb 8 23:21:28.909425 kernel: eth0: renamed from tmp5c348
Feb 8 23:21:28.922499 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 8 23:21:28.922612 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb88de3e95629: link becomes ready
Feb 8 23:21:28.922790 systemd-networkd[1480]: lxcb88de3e95629: Gained carrier
Feb 8 23:21:29.196252 env[1337]: time="2024-02-08T23:21:29.196181988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:21:29.196476 env[1337]: time="2024-02-08T23:21:29.196220888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:21:29.196476 env[1337]: time="2024-02-08T23:21:29.196234788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:21:29.196616 env[1337]: time="2024-02-08T23:21:29.196547389Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c3489b908c4b430fc787563844c2d2f29926bcb0dbd92afa6704035948cf42d pid=3154 runtime=io.containerd.runc.v2
Feb 8 23:21:29.209620 systemd[1]: Started cri-containerd-5c3489b908c4b430fc787563844c2d2f29926bcb0dbd92afa6704035948cf42d.scope.
Feb 8 23:21:29.253390 env[1337]: time="2024-02-08T23:21:29.253311466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:bf29b796-4139-49f4-9237-a515323dda3b,Namespace:default,Attempt:0,} returns sandbox id \"5c3489b908c4b430fc787563844c2d2f29926bcb0dbd92afa6704035948cf42d\""
Feb 8 23:21:29.255038 env[1337]: time="2024-02-08T23:21:29.255001672Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 8 23:21:29.291676 kubelet[1858]: E0208 23:21:29.291627 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:29.847831 env[1337]: time="2024-02-08T23:21:29.847779522Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:21:29.855288 env[1337]: time="2024-02-08T23:21:29.855245245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:21:29.859112 env[1337]: time="2024-02-08T23:21:29.859074457Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:21:29.864657 env[1337]: time="2024-02-08T23:21:29.864615474Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:21:29.865004 env[1337]: time="2024-02-08T23:21:29.864970075Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 8 23:21:29.867392 env[1337]: time="2024-02-08T23:21:29.867363883Z" level=info msg="CreateContainer within sandbox \"5c3489b908c4b430fc787563844c2d2f29926bcb0dbd92afa6704035948cf42d\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 8 23:21:29.893448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1026007415.mount: Deactivated successfully.
Feb 8 23:21:29.911984 env[1337]: time="2024-02-08T23:21:29.911942022Z" level=info msg="CreateContainer within sandbox \"5c3489b908c4b430fc787563844c2d2f29926bcb0dbd92afa6704035948cf42d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"ce97c031aee3be0fc5a84126fd02e2006afa3c83aaefd8d7b7e7d6337423a0df\""
Feb 8 23:21:29.912629 env[1337]: time="2024-02-08T23:21:29.912596924Z" level=info msg="StartContainer for \"ce97c031aee3be0fc5a84126fd02e2006afa3c83aaefd8d7b7e7d6337423a0df\""
Feb 8 23:21:29.931292 systemd[1]: Started cri-containerd-ce97c031aee3be0fc5a84126fd02e2006afa3c83aaefd8d7b7e7d6337423a0df.scope.
Feb 8 23:21:29.963552 env[1337]: time="2024-02-08T23:21:29.963510183Z" level=info msg="StartContainer for \"ce97c031aee3be0fc5a84126fd02e2006afa3c83aaefd8d7b7e7d6337423a0df\" returns successfully"
Feb 8 23:21:30.249303 kubelet[1858]: E0208 23:21:30.249254 1858 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:30.291779 kubelet[1858]: E0208 23:21:30.291737 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:30.421578 systemd-networkd[1480]: lxcb88de3e95629: Gained IPv6LL
Feb 8 23:21:30.472462 kubelet[1858]: I0208 23:21:30.472427 1858 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.861232741 podCreationTimestamp="2024-02-08 23:21:12 +0000 UTC" firstStartedPulling="2024-02-08 23:21:29.25457057 +0000 UTC m=+59.397893034" lastFinishedPulling="2024-02-08 23:21:29.865735678 +0000 UTC m=+60.009058142" observedRunningTime="2024-02-08 23:21:30.472197249 +0000 UTC m=+60.615519813" watchObservedRunningTime="2024-02-08 23:21:30.472397849 +0000 UTC m=+60.615720313"
Feb 8 23:21:31.292734 kubelet[1858]: E0208 23:21:31.292678 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:32.293186 kubelet[1858]: E0208 23:21:32.293136 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:33.293781 kubelet[1858]: E0208 23:21:33.293733 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:34.294105 kubelet[1858]: E0208 23:21:34.294045 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:35.048520 systemd[1]: run-containerd-runc-k8s.io-f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989-runc.zSR6fG.mount: Deactivated successfully.
Feb 8 23:21:35.065530 env[1337]: time="2024-02-08T23:21:35.065461337Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 8 23:21:35.070698 env[1337]: time="2024-02-08T23:21:35.070661552Z" level=info msg="StopContainer for \"f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989\" with timeout 1 (s)"
Feb 8 23:21:35.071041 env[1337]: time="2024-02-08T23:21:35.071005153Z" level=info msg="Stop container \"f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989\" with signal terminated"
Feb 8 23:21:35.078577 systemd-networkd[1480]: lxc_health: Link DOWN
Feb 8 23:21:35.078586 systemd-networkd[1480]: lxc_health: Lost carrier
Feb 8 23:21:35.102502 systemd[1]: cri-containerd-f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989.scope: Deactivated successfully.
Feb 8 23:21:35.102744 systemd[1]: cri-containerd-f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989.scope: Consumed 7.006s CPU time.
Feb 8 23:21:35.121076 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989-rootfs.mount: Deactivated successfully.
Feb 8 23:21:35.294592 kubelet[1858]: E0208 23:21:35.294535 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:35.337634 kubelet[1858]: E0208 23:21:35.337525 1858 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 8 23:21:36.080538 env[1337]: time="2024-02-08T23:21:36.080470247Z" level=info msg="Kill container \"f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989\""
Feb 8 23:21:36.295560 kubelet[1858]: E0208 23:21:36.295508 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:37.295953 kubelet[1858]: E0208 23:21:37.295900 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:38.296201 kubelet[1858]: E0208 23:21:38.296145 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:21:38.298845 env[1337]: time="2024-02-08T23:21:38.298791572Z" level=info msg="shim disconnected" id=f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989
Feb 8 23:21:38.299213 env[1337]: time="2024-02-08T23:21:38.298875472Z" level=warning msg="cleaning up after shim disconnected" id=f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989 namespace=k8s.io
Feb 8 23:21:38.299213 env[1337]: time="2024-02-08T23:21:38.298890372Z" level=info msg="cleaning up dead shim"
Feb 8 23:21:38.306689 env[1337]: time="2024-02-08T23:21:38.306648094Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3289 runtime=io.containerd.runc.v2\n"
Feb 8 23:21:38.313101 env[1337]: time="2024-02-08T23:21:38.313062111Z" level=info msg="StopContainer for \"f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989\" returns successfully"
Feb 8 23:21:38.313718 env[1337]: time="2024-02-08T23:21:38.313684213Z" level=info msg="StopPodSandbox for \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\""
Feb 8 23:21:38.313832 env[1337]: time="2024-02-08T23:21:38.313754313Z" level=info msg="Container to stop \"4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:21:38.313832 env[1337]: time="2024-02-08T23:21:38.313774013Z" level=info msg="Container to stop \"2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:21:38.313832 env[1337]: time="2024-02-08T23:21:38.313798013Z" level=info msg="Container to stop \"a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:21:38.313832 env[1337]: time="2024-02-08T23:21:38.313813013Z" level=info msg="Container to stop \"f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:21:38.313832 env[1337]: time="2024-02-08T23:21:38.313826813Z" level=info msg="Container to stop \"0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:21:38.315937 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6-shm.mount: Deactivated successfully.
Feb 8 23:21:38.322226 systemd[1]: cri-containerd-7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6.scope: Deactivated successfully.
Feb 8 23:21:38.341875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6-rootfs.mount: Deactivated successfully.
Feb 8 23:21:38.356803 env[1337]: time="2024-02-08T23:21:38.356756732Z" level=info msg="shim disconnected" id=7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6
Feb 8 23:21:38.356991 env[1337]: time="2024-02-08T23:21:38.356804532Z" level=warning msg="cleaning up after shim disconnected" id=7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6 namespace=k8s.io
Feb 8 23:21:38.356991 env[1337]: time="2024-02-08T23:21:38.356816432Z" level=info msg="cleaning up dead shim"
Feb 8 23:21:38.365077 env[1337]: time="2024-02-08T23:21:38.365040955Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3322 runtime=io.containerd.runc.v2\n"
Feb 8 23:21:38.365391 env[1337]: time="2024-02-08T23:21:38.365361356Z" level=info msg="TearDown network for sandbox \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\" successfully"
Feb 8 23:21:38.365483 env[1337]: time="2024-02-08T23:21:38.365387456Z" level=info msg="StopPodSandbox for \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\" returns successfully"
Feb 8 23:21:38.485232 kubelet[1858]: I0208 23:21:38.485197 1858 scope.go:115] "RemoveContainer" containerID="f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989"
Feb 8 23:21:38.486704 env[1337]: time="2024-02-08T23:21:38.486666991Z" level=info msg="RemoveContainer for \"f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989\""
Feb 8 23:21:38.493687 env[1337]: time="2024-02-08T23:21:38.493655710Z" level=info msg="RemoveContainer for \"f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989\" returns successfully"
Feb 8 23:21:38.493873 kubelet[1858]: I0208 23:21:38.493673 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-hostproc" (OuterVolumeSpecName: "hostproc") pod "554bb687-e224-4cfe-8c5e-d03b29408c01" (UID: "554bb687-e224-4cfe-8c5e-d03b29408c01"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:21:38.493873 kubelet[1858]: I0208 23:21:38.493745 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-hostproc\") pod \"554bb687-e224-4cfe-8c5e-d03b29408c01\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") "
Feb 8 23:21:38.493873 kubelet[1858]: I0208 23:21:38.493821 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-cilium-cgroup\") pod \"554bb687-e224-4cfe-8c5e-d03b29408c01\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") "
Feb 8 23:21:38.494026 kubelet[1858]: I0208 23:21:38.493888 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "554bb687-e224-4cfe-8c5e-d03b29408c01" (UID: "554bb687-e224-4cfe-8c5e-d03b29408c01"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:21:38.494026 kubelet[1858]: I0208 23:21:38.493909 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsxz2\" (UniqueName: \"kubernetes.io/projected/554bb687-e224-4cfe-8c5e-d03b29408c01-kube-api-access-jsxz2\") pod \"554bb687-e224-4cfe-8c5e-d03b29408c01\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") "
Feb 8 23:21:38.494026 kubelet[1858]: I0208 23:21:38.493937 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-host-proc-sys-kernel\") pod \"554bb687-e224-4cfe-8c5e-d03b29408c01\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") "
Feb 8 23:21:38.496342 kubelet[1858]: I0208 23:21:38.494418 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-bpf-maps\") pod \"554bb687-e224-4cfe-8c5e-d03b29408c01\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") "
Feb 8 23:21:38.496342 kubelet[1858]: I0208 23:21:38.494454 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/554bb687-e224-4cfe-8c5e-d03b29408c01-clustermesh-secrets\") pod \"554bb687-e224-4cfe-8c5e-d03b29408c01\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") "
Feb 8 23:21:38.496342 kubelet[1858]: I0208 23:21:38.494479 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-host-proc-sys-net\") pod \"554bb687-e224-4cfe-8c5e-d03b29408c01\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") "
Feb 8 23:21:38.496342 kubelet[1858]: I0208 23:21:38.494508 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/554bb687-e224-4cfe-8c5e-d03b29408c01-hubble-tls\") pod \"554bb687-e224-4cfe-8c5e-d03b29408c01\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") "
Feb 8 23:21:38.496342 kubelet[1858]: I0208 23:21:38.494530 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-cilium-run\") pod \"554bb687-e224-4cfe-8c5e-d03b29408c01\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") "
Feb 8 23:21:38.496342 kubelet[1858]: I0208 23:21:38.494559 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/554bb687-e224-4cfe-8c5e-d03b29408c01-cilium-config-path\") pod \"554bb687-e224-4cfe-8c5e-d03b29408c01\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") "
Feb 8 23:21:38.496650 kubelet[1858]: I0208 23:21:38.494584 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-cni-path\") pod \"554bb687-e224-4cfe-8c5e-d03b29408c01\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") "
Feb 8 23:21:38.496650 kubelet[1858]: I0208 23:21:38.494611 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-lib-modules\") pod \"554bb687-e224-4cfe-8c5e-d03b29408c01\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") "
Feb 8 23:21:38.496650 kubelet[1858]: I0208 23:21:38.494634 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-xtables-lock\") pod \"554bb687-e224-4cfe-8c5e-d03b29408c01\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") "
Feb 8 23:21:38.496650 kubelet[1858]: I0208 23:21:38.494658 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-etc-cni-netd\") pod \"554bb687-e224-4cfe-8c5e-d03b29408c01\" (UID: \"554bb687-e224-4cfe-8c5e-d03b29408c01\") "
Feb 8 23:21:38.496650 kubelet[1858]: I0208 23:21:38.494695 1858 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-hostproc\") on node \"10.200.8.10\" DevicePath \"\""
Feb 8 23:21:38.496650 kubelet[1858]: I0208 23:21:38.494711 1858 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-cilium-cgroup\") on node \"10.200.8.10\" DevicePath \"\""
Feb 8 23:21:38.496923 kubelet[1858]: I0208 23:21:38.494738 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "554bb687-e224-4cfe-8c5e-d03b29408c01" (UID: "554bb687-e224-4cfe-8c5e-d03b29408c01"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:21:38.496923 kubelet[1858]: I0208 23:21:38.494767 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "554bb687-e224-4cfe-8c5e-d03b29408c01" (UID: "554bb687-e224-4cfe-8c5e-d03b29408c01"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:21:38.496923 kubelet[1858]: W0208 23:21:38.494898 1858 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/554bb687-e224-4cfe-8c5e-d03b29408c01/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 8 23:21:38.496923 kubelet[1858]: I0208 23:21:38.496452 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-cni-path" (OuterVolumeSpecName: "cni-path") pod "554bb687-e224-4cfe-8c5e-d03b29408c01" (UID: "554bb687-e224-4cfe-8c5e-d03b29408c01"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:21:38.496923 kubelet[1858]: I0208 23:21:38.496500 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "554bb687-e224-4cfe-8c5e-d03b29408c01" (UID: "554bb687-e224-4cfe-8c5e-d03b29408c01"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:21:38.497122 kubelet[1858]: I0208 23:21:38.496529 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "554bb687-e224-4cfe-8c5e-d03b29408c01" (UID: "554bb687-e224-4cfe-8c5e-d03b29408c01"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:21:38.497122 kubelet[1858]: I0208 23:21:38.496561 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "554bb687-e224-4cfe-8c5e-d03b29408c01" (UID: "554bb687-e224-4cfe-8c5e-d03b29408c01"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:21:38.497122 kubelet[1858]: I0208 23:21:38.494058 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "554bb687-e224-4cfe-8c5e-d03b29408c01" (UID: "554bb687-e224-4cfe-8c5e-d03b29408c01"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:21:38.497122 kubelet[1858]: I0208 23:21:38.494360 1858 scope.go:115] "RemoveContainer" containerID="2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c"
Feb 8 23:21:38.497122 kubelet[1858]: I0208 23:21:38.496851 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "554bb687-e224-4cfe-8c5e-d03b29408c01" (UID: "554bb687-e224-4cfe-8c5e-d03b29408c01"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:21:38.498434 kubelet[1858]: I0208 23:21:38.498409 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/554bb687-e224-4cfe-8c5e-d03b29408c01-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "554bb687-e224-4cfe-8c5e-d03b29408c01" (UID: "554bb687-e224-4cfe-8c5e-d03b29408c01"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 8 23:21:38.503084 systemd[1]: var-lib-kubelet-pods-554bb687\x2de224\x2d4cfe\x2d8c5e\x2dd03b29408c01-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 8 23:21:38.504234 env[1337]: time="2024-02-08T23:21:38.504200339Z" level=info msg="RemoveContainer for \"2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c\"" Feb 8 23:21:38.504486 kubelet[1858]: I0208 23:21:38.504441 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/554bb687-e224-4cfe-8c5e-d03b29408c01-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "554bb687-e224-4cfe-8c5e-d03b29408c01" (UID: "554bb687-e224-4cfe-8c5e-d03b29408c01"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:21:38.506610 systemd[1]: var-lib-kubelet-pods-554bb687\x2de224\x2d4cfe\x2d8c5e\x2dd03b29408c01-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djsxz2.mount: Deactivated successfully. Feb 8 23:21:38.509308 systemd[1]: var-lib-kubelet-pods-554bb687\x2de224\x2d4cfe\x2d8c5e\x2dd03b29408c01-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 8 23:21:38.510128 kubelet[1858]: I0208 23:21:38.510103 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/554bb687-e224-4cfe-8c5e-d03b29408c01-kube-api-access-jsxz2" (OuterVolumeSpecName: "kube-api-access-jsxz2") pod "554bb687-e224-4cfe-8c5e-d03b29408c01" (UID: "554bb687-e224-4cfe-8c5e-d03b29408c01"). InnerVolumeSpecName "kube-api-access-jsxz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:21:38.510445 kubelet[1858]: I0208 23:21:38.510419 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/554bb687-e224-4cfe-8c5e-d03b29408c01-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "554bb687-e224-4cfe-8c5e-d03b29408c01" (UID: "554bb687-e224-4cfe-8c5e-d03b29408c01"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:21:38.513931 env[1337]: time="2024-02-08T23:21:38.513902166Z" level=info msg="RemoveContainer for \"2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c\" returns successfully" Feb 8 23:21:38.514075 kubelet[1858]: I0208 23:21:38.514056 1858 scope.go:115] "RemoveContainer" containerID="a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33" Feb 8 23:21:38.515024 env[1337]: time="2024-02-08T23:21:38.514995169Z" level=info msg="RemoveContainer for \"a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33\"" Feb 8 23:21:38.521897 env[1337]: time="2024-02-08T23:21:38.521864088Z" level=info msg="RemoveContainer for \"a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33\" returns successfully" Feb 8 23:21:38.522051 kubelet[1858]: I0208 23:21:38.522019 1858 scope.go:115] "RemoveContainer" containerID="0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2" Feb 8 23:21:38.522918 env[1337]: time="2024-02-08T23:21:38.522893691Z" level=info msg="RemoveContainer for \"0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2\"" Feb 8 23:21:38.528570 env[1337]: time="2024-02-08T23:21:38.528537906Z" level=info msg="RemoveContainer for \"0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2\" returns successfully" Feb 8 23:21:38.528722 kubelet[1858]: I0208 23:21:38.528694 1858 scope.go:115] "RemoveContainer" containerID="4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17" Feb 8 23:21:38.529586 env[1337]: time="2024-02-08T23:21:38.529559709Z" level=info msg="RemoveContainer for \"4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17\"" Feb 8 23:21:38.536108 env[1337]: time="2024-02-08T23:21:38.536071127Z" level=info msg="RemoveContainer for \"4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17\" returns successfully" Feb 8 23:21:38.536253 kubelet[1858]: I0208 23:21:38.536236 1858 scope.go:115] "RemoveContainer" 
containerID="f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989" Feb 8 23:21:38.536538 env[1337]: time="2024-02-08T23:21:38.536463828Z" level=error msg="ContainerStatus for \"f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989\": not found" Feb 8 23:21:38.536697 kubelet[1858]: E0208 23:21:38.536684 1858 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989\": not found" containerID="f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989" Feb 8 23:21:38.536770 kubelet[1858]: I0208 23:21:38.536724 1858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989} err="failed to get container status \"f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989\": rpc error: code = NotFound desc = an error occurred when try to find container \"f044f536aacd5d32cc9d4c020bdbb9a3aac21e27215566771aef237c8b89a989\": not found" Feb 8 23:21:38.536770 kubelet[1858]: I0208 23:21:38.536739 1858 scope.go:115] "RemoveContainer" containerID="2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c" Feb 8 23:21:38.536955 env[1337]: time="2024-02-08T23:21:38.536897529Z" level=error msg="ContainerStatus for \"2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c\": not found" Feb 8 23:21:38.537090 kubelet[1858]: E0208 23:21:38.537073 1858 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c\": not found" containerID="2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c" Feb 8 23:21:38.537162 kubelet[1858]: I0208 23:21:38.537112 1858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c} err="failed to get container status \"2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ca6f61f0020dff3ce40cfb5dd5ba8f085a9cf23eee5164dc68825ac030b7e2c\": not found" Feb 8 23:21:38.537162 kubelet[1858]: I0208 23:21:38.537128 1858 scope.go:115] "RemoveContainer" containerID="a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33" Feb 8 23:21:38.537335 env[1337]: time="2024-02-08T23:21:38.537279230Z" level=error msg="ContainerStatus for \"a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33\": not found" Feb 8 23:21:38.537496 kubelet[1858]: E0208 23:21:38.537478 1858 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33\": not found" containerID="a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33" Feb 8 23:21:38.537561 kubelet[1858]: I0208 23:21:38.537509 1858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33} err="failed to get container status \"a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"a5ba707cc54aaf67516e5b9695074b82d493599ae9f8c6ff03247d567dd0ff33\": not found" Feb 8 23:21:38.537561 kubelet[1858]: I0208 23:21:38.537520 1858 scope.go:115] "RemoveContainer" containerID="0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2" Feb 8 23:21:38.537712 env[1337]: time="2024-02-08T23:21:38.537668431Z" level=error msg="ContainerStatus for \"0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2\": not found" Feb 8 23:21:38.537830 kubelet[1858]: E0208 23:21:38.537807 1858 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2\": not found" containerID="0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2" Feb 8 23:21:38.537902 kubelet[1858]: I0208 23:21:38.537847 1858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2} err="failed to get container status \"0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2\": rpc error: code = NotFound desc = an error occurred when try to find container \"0cb701d95774bd9b5378caf09c6e3dd264eef1761b21656d8bb984b4b4851dd2\": not found" Feb 8 23:21:38.537902 kubelet[1858]: I0208 23:21:38.537869 1858 scope.go:115] "RemoveContainer" containerID="4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17" Feb 8 23:21:38.538079 env[1337]: time="2024-02-08T23:21:38.538028832Z" level=error msg="ContainerStatus for \"4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17\": not found" Feb 8 23:21:38.538183 kubelet[1858]: E0208 23:21:38.538165 1858 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17\": not found" containerID="4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17" Feb 8 23:21:38.538260 kubelet[1858]: I0208 23:21:38.538195 1858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17} err="failed to get container status \"4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ea3d18eca453197b46d7a106e460e0677437a7b2d35f824f88ee2066c62ec17\": not found" Feb 8 23:21:38.595681 kubelet[1858]: I0208 23:21:38.595545 1858 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-cni-path\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:38.595681 kubelet[1858]: I0208 23:21:38.595588 1858 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-lib-modules\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:38.595681 kubelet[1858]: I0208 23:21:38.595602 1858 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-xtables-lock\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:38.595681 kubelet[1858]: I0208 23:21:38.595613 1858 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-etc-cni-netd\") on node \"10.200.8.10\" 
DevicePath \"\"" Feb 8 23:21:38.595681 kubelet[1858]: I0208 23:21:38.595629 1858 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jsxz2\" (UniqueName: \"kubernetes.io/projected/554bb687-e224-4cfe-8c5e-d03b29408c01-kube-api-access-jsxz2\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:38.595681 kubelet[1858]: I0208 23:21:38.595641 1858 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-cilium-run\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:38.595681 kubelet[1858]: I0208 23:21:38.595654 1858 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-host-proc-sys-kernel\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:38.596478 kubelet[1858]: I0208 23:21:38.596370 1858 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-bpf-maps\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:38.596478 kubelet[1858]: I0208 23:21:38.596412 1858 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/554bb687-e224-4cfe-8c5e-d03b29408c01-clustermesh-secrets\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:38.596478 kubelet[1858]: I0208 23:21:38.596428 1858 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/554bb687-e224-4cfe-8c5e-d03b29408c01-host-proc-sys-net\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:38.596478 kubelet[1858]: I0208 23:21:38.596442 1858 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/554bb687-e224-4cfe-8c5e-d03b29408c01-hubble-tls\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:38.596478 kubelet[1858]: I0208 23:21:38.596456 1858 
reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/554bb687-e224-4cfe-8c5e-d03b29408c01-cilium-config-path\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:38.670704 kubelet[1858]: I0208 23:21:38.670652 1858 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:21:38.670903 kubelet[1858]: E0208 23:21:38.670738 1858 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="554bb687-e224-4cfe-8c5e-d03b29408c01" containerName="mount-cgroup" Feb 8 23:21:38.670903 kubelet[1858]: E0208 23:21:38.670755 1858 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="554bb687-e224-4cfe-8c5e-d03b29408c01" containerName="clean-cilium-state" Feb 8 23:21:38.670903 kubelet[1858]: E0208 23:21:38.670765 1858 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="554bb687-e224-4cfe-8c5e-d03b29408c01" containerName="cilium-agent" Feb 8 23:21:38.670903 kubelet[1858]: E0208 23:21:38.670775 1858 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="554bb687-e224-4cfe-8c5e-d03b29408c01" containerName="apply-sysctl-overwrites" Feb 8 23:21:38.670903 kubelet[1858]: E0208 23:21:38.670786 1858 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="554bb687-e224-4cfe-8c5e-d03b29408c01" containerName="mount-bpf-fs" Feb 8 23:21:38.670903 kubelet[1858]: I0208 23:21:38.670817 1858 memory_manager.go:346] "RemoveStaleState removing state" podUID="554bb687-e224-4cfe-8c5e-d03b29408c01" containerName="cilium-agent" Feb 8 23:21:38.671588 kubelet[1858]: I0208 23:21:38.671560 1858 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:21:38.677380 systemd[1]: Created slice kubepods-burstable-pod269058d2_2ed1_4526_9231_02fe113a8ce3.slice. Feb 8 23:21:38.682730 systemd[1]: Created slice kubepods-besteffort-podb907b49c_0963_420c_becf_9fd8c13f3a79.slice. Feb 8 23:21:38.790788 systemd[1]: Removed slice kubepods-burstable-pod554bb687_e224_4cfe_8c5e_d03b29408c01.slice. 
Feb 8 23:21:38.790935 systemd[1]: kubepods-burstable-pod554bb687_e224_4cfe_8c5e_d03b29408c01.slice: Consumed 7.097s CPU time. Feb 8 23:21:38.797634 kubelet[1858]: I0208 23:21:38.797571 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-cilium-cgroup\") pod \"cilium-w4b6p\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " pod="kube-system/cilium-w4b6p" Feb 8 23:21:38.797918 kubelet[1858]: I0208 23:21:38.797900 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-host-proc-sys-net\") pod \"cilium-w4b6p\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " pod="kube-system/cilium-w4b6p" Feb 8 23:21:38.798008 kubelet[1858]: I0208 23:21:38.797946 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58dql\" (UniqueName: \"kubernetes.io/projected/269058d2-2ed1-4526-9231-02fe113a8ce3-kube-api-access-58dql\") pod \"cilium-w4b6p\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " pod="kube-system/cilium-w4b6p" Feb 8 23:21:38.798008 kubelet[1858]: I0208 23:21:38.797997 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b907b49c-0963-420c-becf-9fd8c13f3a79-cilium-config-path\") pod \"cilium-operator-574c4bb98d-9rl7d\" (UID: \"b907b49c-0963-420c-becf-9fd8c13f3a79\") " pod="kube-system/cilium-operator-574c4bb98d-9rl7d" Feb 8 23:21:38.798113 kubelet[1858]: I0208 23:21:38.798045 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-bpf-maps\") pod \"cilium-w4b6p\" (UID: 
\"269058d2-2ed1-4526-9231-02fe113a8ce3\") " pod="kube-system/cilium-w4b6p" Feb 8 23:21:38.798113 kubelet[1858]: I0208 23:21:38.798078 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-xtables-lock\") pod \"cilium-w4b6p\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " pod="kube-system/cilium-w4b6p" Feb 8 23:21:38.798113 kubelet[1858]: I0208 23:21:38.798108 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/269058d2-2ed1-4526-9231-02fe113a8ce3-clustermesh-secrets\") pod \"cilium-w4b6p\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " pod="kube-system/cilium-w4b6p" Feb 8 23:21:38.798237 kubelet[1858]: I0208 23:21:38.798153 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/269058d2-2ed1-4526-9231-02fe113a8ce3-cilium-config-path\") pod \"cilium-w4b6p\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " pod="kube-system/cilium-w4b6p" Feb 8 23:21:38.798237 kubelet[1858]: I0208 23:21:38.798199 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/269058d2-2ed1-4526-9231-02fe113a8ce3-hubble-tls\") pod \"cilium-w4b6p\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " pod="kube-system/cilium-w4b6p" Feb 8 23:21:38.798237 kubelet[1858]: I0208 23:21:38.798234 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8qwq\" (UniqueName: \"kubernetes.io/projected/b907b49c-0963-420c-becf-9fd8c13f3a79-kube-api-access-d8qwq\") pod \"cilium-operator-574c4bb98d-9rl7d\" (UID: \"b907b49c-0963-420c-becf-9fd8c13f3a79\") " 
pod="kube-system/cilium-operator-574c4bb98d-9rl7d" Feb 8 23:21:38.798421 kubelet[1858]: I0208 23:21:38.798265 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-cilium-run\") pod \"cilium-w4b6p\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " pod="kube-system/cilium-w4b6p" Feb 8 23:21:38.798421 kubelet[1858]: I0208 23:21:38.798309 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-cni-path\") pod \"cilium-w4b6p\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " pod="kube-system/cilium-w4b6p" Feb 8 23:21:38.798505 kubelet[1858]: I0208 23:21:38.798474 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-host-proc-sys-kernel\") pod \"cilium-w4b6p\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " pod="kube-system/cilium-w4b6p" Feb 8 23:21:38.798555 kubelet[1858]: I0208 23:21:38.798521 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-hostproc\") pod \"cilium-w4b6p\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " pod="kube-system/cilium-w4b6p" Feb 8 23:21:38.798601 kubelet[1858]: I0208 23:21:38.798555 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-etc-cni-netd\") pod \"cilium-w4b6p\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " pod="kube-system/cilium-w4b6p" Feb 8 23:21:38.798601 kubelet[1858]: I0208 23:21:38.798592 1858 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-lib-modules\") pod \"cilium-w4b6p\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " pod="kube-system/cilium-w4b6p" Feb 8 23:21:38.798680 kubelet[1858]: I0208 23:21:38.798621 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/269058d2-2ed1-4526-9231-02fe113a8ce3-cilium-ipsec-secrets\") pod \"cilium-w4b6p\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " pod="kube-system/cilium-w4b6p" Feb 8 23:21:38.981650 env[1337]: time="2024-02-08T23:21:38.981598957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w4b6p,Uid:269058d2-2ed1-4526-9231-02fe113a8ce3,Namespace:kube-system,Attempt:0,}" Feb 8 23:21:38.986294 env[1337]: time="2024-02-08T23:21:38.986253070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-9rl7d,Uid:b907b49c-0963-420c-becf-9fd8c13f3a79,Namespace:kube-system,Attempt:0,}" Feb 8 23:21:39.031873 env[1337]: time="2024-02-08T23:21:39.031796694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:21:39.032164 env[1337]: time="2024-02-08T23:21:39.032110295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:21:39.032370 env[1337]: time="2024-02-08T23:21:39.032307096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:21:39.032912 env[1337]: time="2024-02-08T23:21:39.032864897Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a pid=3350 runtime=io.containerd.runc.v2 Feb 8 23:21:39.050336 systemd[1]: Started cri-containerd-9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a.scope. Feb 8 23:21:39.057436 env[1337]: time="2024-02-08T23:21:39.057357464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:21:39.057681 env[1337]: time="2024-02-08T23:21:39.057639665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:21:39.057839 env[1337]: time="2024-02-08T23:21:39.057810965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:21:39.059775 env[1337]: time="2024-02-08T23:21:39.058432467Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1fe50660be9f2c320666112f0520c0c0fd5bdc87f3b9f2d69b631133bd4a2f7 pid=3376 runtime=io.containerd.runc.v2 Feb 8 23:21:39.081107 systemd[1]: Started cri-containerd-a1fe50660be9f2c320666112f0520c0c0fd5bdc87f3b9f2d69b631133bd4a2f7.scope. 
Feb 8 23:21:39.099279 env[1337]: time="2024-02-08T23:21:39.099228978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w4b6p,Uid:269058d2-2ed1-4526-9231-02fe113a8ce3,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a\"" Feb 8 23:21:39.102893 env[1337]: time="2024-02-08T23:21:39.102856088Z" level=info msg="CreateContainer within sandbox \"9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:21:39.136556 env[1337]: time="2024-02-08T23:21:39.135441977Z" level=info msg="CreateContainer within sandbox \"9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c5bb7426889e15e79ad83dfa32bbcce7cf4d4c7f97684c7f89ff065857afa511\"" Feb 8 23:21:39.136556 env[1337]: time="2024-02-08T23:21:39.136143579Z" level=info msg="StartContainer for \"c5bb7426889e15e79ad83dfa32bbcce7cf4d4c7f97684c7f89ff065857afa511\"" Feb 8 23:21:39.138460 env[1337]: time="2024-02-08T23:21:39.138420285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-9rl7d,Uid:b907b49c-0963-420c-becf-9fd8c13f3a79,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1fe50660be9f2c320666112f0520c0c0fd5bdc87f3b9f2d69b631133bd4a2f7\"" Feb 8 23:21:39.140344 env[1337]: time="2024-02-08T23:21:39.140296990Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 8 23:21:39.155809 systemd[1]: Started cri-containerd-c5bb7426889e15e79ad83dfa32bbcce7cf4d4c7f97684c7f89ff065857afa511.scope. Feb 8 23:21:39.166153 systemd[1]: cri-containerd-c5bb7426889e15e79ad83dfa32bbcce7cf4d4c7f97684c7f89ff065857afa511.scope: Deactivated successfully. 
Feb 8 23:21:39.166446 systemd[1]: Stopped cri-containerd-c5bb7426889e15e79ad83dfa32bbcce7cf4d4c7f97684c7f89ff065857afa511.scope. Feb 8 23:21:39.194248 env[1337]: time="2024-02-08T23:21:39.194190637Z" level=info msg="shim disconnected" id=c5bb7426889e15e79ad83dfa32bbcce7cf4d4c7f97684c7f89ff065857afa511 Feb 8 23:21:39.194545 env[1337]: time="2024-02-08T23:21:39.194524438Z" level=warning msg="cleaning up after shim disconnected" id=c5bb7426889e15e79ad83dfa32bbcce7cf4d4c7f97684c7f89ff065857afa511 namespace=k8s.io Feb 8 23:21:39.194645 env[1337]: time="2024-02-08T23:21:39.194629338Z" level=info msg="cleaning up dead shim" Feb 8 23:21:39.203868 env[1337]: time="2024-02-08T23:21:39.203831863Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3450 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:21:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c5bb7426889e15e79ad83dfa32bbcce7cf4d4c7f97684c7f89ff065857afa511/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 8 23:21:39.204144 env[1337]: time="2024-02-08T23:21:39.204055164Z" level=error msg="copy shim log" error="read /proc/self/fd/64: file already closed" Feb 8 23:21:39.204398 env[1337]: time="2024-02-08T23:21:39.204300965Z" level=error msg="Failed to pipe stderr of container \"c5bb7426889e15e79ad83dfa32bbcce7cf4d4c7f97684c7f89ff065857afa511\"" error="reading from a closed fifo" Feb 8 23:21:39.205408 env[1337]: time="2024-02-08T23:21:39.205367168Z" level=error msg="Failed to pipe stdout of container \"c5bb7426889e15e79ad83dfa32bbcce7cf4d4c7f97684c7f89ff065857afa511\"" error="reading from a closed fifo" Feb 8 23:21:39.209731 env[1337]: time="2024-02-08T23:21:39.209667779Z" level=error msg="StartContainer for \"c5bb7426889e15e79ad83dfa32bbcce7cf4d4c7f97684c7f89ff065857afa511\" failed" error="failed to create containerd task: failed to create shim 
task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 8 23:21:39.209976 kubelet[1858]: E0208 23:21:39.209955 1858 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c5bb7426889e15e79ad83dfa32bbcce7cf4d4c7f97684c7f89ff065857afa511" Feb 8 23:21:39.210127 kubelet[1858]: E0208 23:21:39.210101 1858 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 8 23:21:39.210127 kubelet[1858]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 8 23:21:39.210127 kubelet[1858]: rm /hostbin/cilium-mount Feb 8 23:21:39.210268 kubelet[1858]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-58dql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-w4b6p_kube-system(269058d2-2ed1-4526-9231-02fe113a8ce3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 8 23:21:39.210268 kubelet[1858]: E0208 23:21:39.210161 1858 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable 
to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w4b6p" podUID=269058d2-2ed1-4526-9231-02fe113a8ce3 Feb 8 23:21:39.296952 kubelet[1858]: E0208 23:21:39.296832 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:39.494033 env[1337]: time="2024-02-08T23:21:39.493949155Z" level=info msg="CreateContainer within sandbox \"9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Feb 8 23:21:39.516913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2635231591.mount: Deactivated successfully. Feb 8 23:21:39.534520 env[1337]: time="2024-02-08T23:21:39.534488065Z" level=info msg="CreateContainer within sandbox \"9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"9197e3610da092d21faea379a28301e7667a95af9814ac81fe0ee6217acc72dd\"" Feb 8 23:21:39.535122 env[1337]: time="2024-02-08T23:21:39.535068167Z" level=info msg="StartContainer for \"9197e3610da092d21faea379a28301e7667a95af9814ac81fe0ee6217acc72dd\"" Feb 8 23:21:39.553743 systemd[1]: Started cri-containerd-9197e3610da092d21faea379a28301e7667a95af9814ac81fe0ee6217acc72dd.scope. Feb 8 23:21:39.564703 systemd[1]: cri-containerd-9197e3610da092d21faea379a28301e7667a95af9814ac81fe0ee6217acc72dd.scope: Deactivated successfully. 
Feb 8 23:21:39.580231 env[1337]: time="2024-02-08T23:21:39.580179890Z" level=info msg="shim disconnected" id=9197e3610da092d21faea379a28301e7667a95af9814ac81fe0ee6217acc72dd Feb 8 23:21:39.580433 env[1337]: time="2024-02-08T23:21:39.580233090Z" level=warning msg="cleaning up after shim disconnected" id=9197e3610da092d21faea379a28301e7667a95af9814ac81fe0ee6217acc72dd namespace=k8s.io Feb 8 23:21:39.580433 env[1337]: time="2024-02-08T23:21:39.580244290Z" level=info msg="cleaning up dead shim" Feb 8 23:21:39.588097 env[1337]: time="2024-02-08T23:21:39.588059911Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3487 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:21:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9197e3610da092d21faea379a28301e7667a95af9814ac81fe0ee6217acc72dd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 8 23:21:39.588383 env[1337]: time="2024-02-08T23:21:39.588301612Z" level=error msg="copy shim log" error="read /proc/self/fd/67: file already closed" Feb 8 23:21:39.590191 env[1337]: time="2024-02-08T23:21:39.590151717Z" level=error msg="Failed to pipe stderr of container \"9197e3610da092d21faea379a28301e7667a95af9814ac81fe0ee6217acc72dd\"" error="reading from a closed fifo" Feb 8 23:21:39.590283 env[1337]: time="2024-02-08T23:21:39.590138017Z" level=error msg="Failed to pipe stdout of container \"9197e3610da092d21faea379a28301e7667a95af9814ac81fe0ee6217acc72dd\"" error="reading from a closed fifo" Feb 8 23:21:39.594698 env[1337]: time="2024-02-08T23:21:39.594642129Z" level=error msg="StartContainer for \"9197e3610da092d21faea379a28301e7667a95af9814ac81fe0ee6217acc72dd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Feb 8 23:21:39.595610 kubelet[1858]: E0208 23:21:39.595584 1858 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9197e3610da092d21faea379a28301e7667a95af9814ac81fe0ee6217acc72dd" Feb 8 23:21:39.595755 kubelet[1858]: E0208 23:21:39.595727 1858 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 8 23:21:39.595755 kubelet[1858]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 8 23:21:39.595755 kubelet[1858]: rm /hostbin/cilium-mount Feb 8 23:21:39.595755 kubelet[1858]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-58dql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-w4b6p_kube-system(269058d2-2ed1-4526-9231-02fe113a8ce3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 8 23:21:39.596000 kubelet[1858]: E0208 23:21:39.595790 1858 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w4b6p" podUID=269058d2-2ed1-4526-9231-02fe113a8ce3 Feb 8 23:21:40.297600 kubelet[1858]: E0208 23:21:40.297542 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:40.316936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9197e3610da092d21faea379a28301e7667a95af9814ac81fe0ee6217acc72dd-rootfs.mount: Deactivated successfully. 
Feb 8 23:21:40.325705 kubelet[1858]: I0208 23:21:40.325674 1858 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=554bb687-e224-4cfe-8c5e-d03b29408c01 path="/var/lib/kubelet/pods/554bb687-e224-4cfe-8c5e-d03b29408c01/volumes" Feb 8 23:21:40.338271 kubelet[1858]: E0208 23:21:40.338247 1858 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:21:40.494895 kubelet[1858]: I0208 23:21:40.494856 1858 scope.go:115] "RemoveContainer" containerID="c5bb7426889e15e79ad83dfa32bbcce7cf4d4c7f97684c7f89ff065857afa511" Feb 8 23:21:40.495484 env[1337]: time="2024-02-08T23:21:40.495439569Z" level=info msg="StopPodSandbox for \"9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a\"" Feb 8 23:21:40.496054 env[1337]: time="2024-02-08T23:21:40.496008770Z" level=info msg="Container to stop \"9197e3610da092d21faea379a28301e7667a95af9814ac81fe0ee6217acc72dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:21:40.496206 env[1337]: time="2024-02-08T23:21:40.496176971Z" level=info msg="Container to stop \"c5bb7426889e15e79ad83dfa32bbcce7cf4d4c7f97684c7f89ff065857afa511\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:21:40.499414 env[1337]: time="2024-02-08T23:21:40.496409271Z" level=info msg="RemoveContainer for \"c5bb7426889e15e79ad83dfa32bbcce7cf4d4c7f97684c7f89ff065857afa511\"" Feb 8 23:21:40.498755 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a-shm.mount: Deactivated successfully. Feb 8 23:21:40.506069 systemd[1]: cri-containerd-9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a.scope: Deactivated successfully. 
Feb 8 23:21:40.525875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a-rootfs.mount: Deactivated successfully. Feb 8 23:21:40.545786 env[1337]: time="2024-02-08T23:21:40.545736204Z" level=info msg="shim disconnected" id=9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a Feb 8 23:21:40.545966 env[1337]: time="2024-02-08T23:21:40.545791204Z" level=warning msg="cleaning up after shim disconnected" id=9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a namespace=k8s.io Feb 8 23:21:40.545966 env[1337]: time="2024-02-08T23:21:40.545803804Z" level=info msg="cleaning up dead shim" Feb 8 23:21:40.553994 env[1337]: time="2024-02-08T23:21:40.553912326Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3518 runtime=io.containerd.runc.v2\n" Feb 8 23:21:40.554705 env[1337]: time="2024-02-08T23:21:40.554670428Z" level=info msg="TearDown network for sandbox \"9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a\" successfully" Feb 8 23:21:40.554705 env[1337]: time="2024-02-08T23:21:40.554698728Z" level=info msg="StopPodSandbox for \"9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a\" returns successfully" Feb 8 23:21:40.568730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1306507910.mount: Deactivated successfully. 
Feb 8 23:21:40.600941 env[1337]: time="2024-02-08T23:21:40.600895753Z" level=info msg="RemoveContainer for \"c5bb7426889e15e79ad83dfa32bbcce7cf4d4c7f97684c7f89ff065857afa511\" returns successfully" Feb 8 23:21:40.710374 kubelet[1858]: I0208 23:21:40.710313 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/269058d2-2ed1-4526-9231-02fe113a8ce3-cilium-ipsec-secrets\") pod \"269058d2-2ed1-4526-9231-02fe113a8ce3\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " Feb 8 23:21:40.710571 kubelet[1858]: I0208 23:21:40.710459 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-etc-cni-netd\") pod \"269058d2-2ed1-4526-9231-02fe113a8ce3\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " Feb 8 23:21:40.710571 kubelet[1858]: I0208 23:21:40.710491 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-cilium-cgroup\") pod \"269058d2-2ed1-4526-9231-02fe113a8ce3\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " Feb 8 23:21:40.710571 kubelet[1858]: I0208 23:21:40.710526 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/269058d2-2ed1-4526-9231-02fe113a8ce3-cilium-config-path\") pod \"269058d2-2ed1-4526-9231-02fe113a8ce3\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " Feb 8 23:21:40.710571 kubelet[1858]: I0208 23:21:40.710556 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-host-proc-sys-kernel\") pod \"269058d2-2ed1-4526-9231-02fe113a8ce3\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " Feb 8 23:21:40.710751 
kubelet[1858]: I0208 23:21:40.710583 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58dql\" (UniqueName: \"kubernetes.io/projected/269058d2-2ed1-4526-9231-02fe113a8ce3-kube-api-access-58dql\") pod \"269058d2-2ed1-4526-9231-02fe113a8ce3\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " Feb 8 23:21:40.710751 kubelet[1858]: I0208 23:21:40.710608 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-xtables-lock\") pod \"269058d2-2ed1-4526-9231-02fe113a8ce3\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " Feb 8 23:21:40.710751 kubelet[1858]: I0208 23:21:40.710633 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-cilium-run\") pod \"269058d2-2ed1-4526-9231-02fe113a8ce3\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " Feb 8 23:21:40.710751 kubelet[1858]: I0208 23:21:40.710659 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/269058d2-2ed1-4526-9231-02fe113a8ce3-hubble-tls\") pod \"269058d2-2ed1-4526-9231-02fe113a8ce3\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " Feb 8 23:21:40.710751 kubelet[1858]: I0208 23:21:40.710683 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-lib-modules\") pod \"269058d2-2ed1-4526-9231-02fe113a8ce3\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " Feb 8 23:21:40.710751 kubelet[1858]: I0208 23:21:40.710711 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-bpf-maps\") pod 
\"269058d2-2ed1-4526-9231-02fe113a8ce3\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " Feb 8 23:21:40.710751 kubelet[1858]: I0208 23:21:40.710736 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-cni-path\") pod \"269058d2-2ed1-4526-9231-02fe113a8ce3\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " Feb 8 23:21:40.711033 kubelet[1858]: I0208 23:21:40.710761 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-hostproc\") pod \"269058d2-2ed1-4526-9231-02fe113a8ce3\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " Feb 8 23:21:40.711033 kubelet[1858]: I0208 23:21:40.710831 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-host-proc-sys-net\") pod \"269058d2-2ed1-4526-9231-02fe113a8ce3\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " Feb 8 23:21:40.711033 kubelet[1858]: I0208 23:21:40.710879 1858 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/269058d2-2ed1-4526-9231-02fe113a8ce3-clustermesh-secrets\") pod \"269058d2-2ed1-4526-9231-02fe113a8ce3\" (UID: \"269058d2-2ed1-4526-9231-02fe113a8ce3\") " Feb 8 23:21:40.711508 kubelet[1858]: I0208 23:21:40.711473 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "269058d2-2ed1-4526-9231-02fe113a8ce3" (UID: "269058d2-2ed1-4526-9231-02fe113a8ce3"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:40.711613 kubelet[1858]: I0208 23:21:40.711539 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "269058d2-2ed1-4526-9231-02fe113a8ce3" (UID: "269058d2-2ed1-4526-9231-02fe113a8ce3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:40.711613 kubelet[1858]: I0208 23:21:40.711563 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "269058d2-2ed1-4526-9231-02fe113a8ce3" (UID: "269058d2-2ed1-4526-9231-02fe113a8ce3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:40.711777 kubelet[1858]: W0208 23:21:40.711727 1858 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/269058d2-2ed1-4526-9231-02fe113a8ce3/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:21:40.714106 kubelet[1858]: I0208 23:21:40.714074 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "269058d2-2ed1-4526-9231-02fe113a8ce3" (UID: "269058d2-2ed1-4526-9231-02fe113a8ce3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:40.714417 kubelet[1858]: I0208 23:21:40.714389 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "269058d2-2ed1-4526-9231-02fe113a8ce3" (UID: "269058d2-2ed1-4526-9231-02fe113a8ce3"). 
InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:40.714508 kubelet[1858]: I0208 23:21:40.714452 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "269058d2-2ed1-4526-9231-02fe113a8ce3" (UID: "269058d2-2ed1-4526-9231-02fe113a8ce3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:40.714726 kubelet[1858]: I0208 23:21:40.714703 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "269058d2-2ed1-4526-9231-02fe113a8ce3" (UID: "269058d2-2ed1-4526-9231-02fe113a8ce3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:40.714797 kubelet[1858]: I0208 23:21:40.714750 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-hostproc" (OuterVolumeSpecName: "hostproc") pod "269058d2-2ed1-4526-9231-02fe113a8ce3" (UID: "269058d2-2ed1-4526-9231-02fe113a8ce3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:40.714797 kubelet[1858]: I0208 23:21:40.714775 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-cni-path" (OuterVolumeSpecName: "cni-path") pod "269058d2-2ed1-4526-9231-02fe113a8ce3" (UID: "269058d2-2ed1-4526-9231-02fe113a8ce3"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:40.714888 kubelet[1858]: I0208 23:21:40.714795 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "269058d2-2ed1-4526-9231-02fe113a8ce3" (UID: "269058d2-2ed1-4526-9231-02fe113a8ce3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:40.716304 kubelet[1858]: I0208 23:21:40.716274 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/269058d2-2ed1-4526-9231-02fe113a8ce3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "269058d2-2ed1-4526-9231-02fe113a8ce3" (UID: "269058d2-2ed1-4526-9231-02fe113a8ce3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:21:40.719744 kubelet[1858]: I0208 23:21:40.719711 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/269058d2-2ed1-4526-9231-02fe113a8ce3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "269058d2-2ed1-4526-9231-02fe113a8ce3" (UID: "269058d2-2ed1-4526-9231-02fe113a8ce3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:21:40.722899 kubelet[1858]: I0208 23:21:40.722865 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/269058d2-2ed1-4526-9231-02fe113a8ce3-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "269058d2-2ed1-4526-9231-02fe113a8ce3" (UID: "269058d2-2ed1-4526-9231-02fe113a8ce3"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:21:40.725682 kubelet[1858]: I0208 23:21:40.725654 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/269058d2-2ed1-4526-9231-02fe113a8ce3-kube-api-access-58dql" (OuterVolumeSpecName: "kube-api-access-58dql") pod "269058d2-2ed1-4526-9231-02fe113a8ce3" (UID: "269058d2-2ed1-4526-9231-02fe113a8ce3"). InnerVolumeSpecName "kube-api-access-58dql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:21:40.728157 kubelet[1858]: I0208 23:21:40.728130 1858 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/269058d2-2ed1-4526-9231-02fe113a8ce3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "269058d2-2ed1-4526-9231-02fe113a8ce3" (UID: "269058d2-2ed1-4526-9231-02fe113a8ce3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:21:40.813358 kubelet[1858]: I0208 23:21:40.811650 1858 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-bpf-maps\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:40.813358 kubelet[1858]: I0208 23:21:40.811690 1858 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-cni-path\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:40.813358 kubelet[1858]: I0208 23:21:40.811702 1858 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-lib-modules\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:40.813358 kubelet[1858]: I0208 23:21:40.811713 1858 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-hostproc\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 
23:21:40.813358 kubelet[1858]: I0208 23:21:40.811727 1858 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-host-proc-sys-net\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:40.813358 kubelet[1858]: I0208 23:21:40.811741 1858 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/269058d2-2ed1-4526-9231-02fe113a8ce3-clustermesh-secrets\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:40.813358 kubelet[1858]: I0208 23:21:40.811753 1858 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-cilium-cgroup\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:40.813358 kubelet[1858]: I0208 23:21:40.811765 1858 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/269058d2-2ed1-4526-9231-02fe113a8ce3-cilium-config-path\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:40.813358 kubelet[1858]: I0208 23:21:40.811777 1858 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/269058d2-2ed1-4526-9231-02fe113a8ce3-cilium-ipsec-secrets\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:40.813358 kubelet[1858]: I0208 23:21:40.811788 1858 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-etc-cni-netd\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:40.813358 kubelet[1858]: I0208 23:21:40.811801 1858 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-58dql\" (UniqueName: \"kubernetes.io/projected/269058d2-2ed1-4526-9231-02fe113a8ce3-kube-api-access-58dql\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:40.813358 kubelet[1858]: I0208 23:21:40.811813 1858 
reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-xtables-lock\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:40.813358 kubelet[1858]: I0208 23:21:40.811827 1858 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-cilium-run\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:40.813358 kubelet[1858]: I0208 23:21:40.811841 1858 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/269058d2-2ed1-4526-9231-02fe113a8ce3-hubble-tls\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:40.813358 kubelet[1858]: I0208 23:21:40.811854 1858 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/269058d2-2ed1-4526-9231-02fe113a8ce3-host-proc-sys-kernel\") on node \"10.200.8.10\" DevicePath \"\"" Feb 8 23:21:41.281315 env[1337]: time="2024-02-08T23:21:41.281263777Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:41.286872 env[1337]: time="2024-02-08T23:21:41.286830591Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:41.290602 env[1337]: time="2024-02-08T23:21:41.290569601Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:21:41.292068 env[1337]: time="2024-02-08T23:21:41.291024903Z" level=info 
msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 8 23:21:41.294248 env[1337]: time="2024-02-08T23:21:41.294214511Z" level=info msg="CreateContainer within sandbox \"a1fe50660be9f2c320666112f0520c0c0fd5bdc87f3b9f2d69b631133bd4a2f7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 8 23:21:41.297982 kubelet[1858]: E0208 23:21:41.297923 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:41.316315 systemd[1]: var-lib-kubelet-pods-269058d2\x2d2ed1\x2d4526\x2d9231\x2d02fe113a8ce3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d58dql.mount: Deactivated successfully. Feb 8 23:21:41.316448 systemd[1]: var-lib-kubelet-pods-269058d2\x2d2ed1\x2d4526\x2d9231\x2d02fe113a8ce3-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 8 23:21:41.316529 systemd[1]: var-lib-kubelet-pods-269058d2\x2d2ed1\x2d4526\x2d9231\x2d02fe113a8ce3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:21:41.316649 systemd[1]: var-lib-kubelet-pods-269058d2\x2d2ed1\x2d4526\x2d9231\x2d02fe113a8ce3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 8 23:21:41.322842 env[1337]: time="2024-02-08T23:21:41.322794387Z" level=info msg="CreateContainer within sandbox \"a1fe50660be9f2c320666112f0520c0c0fd5bdc87f3b9f2d69b631133bd4a2f7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"26a6f80ed144334026037515a4330e2e03c5129dc43fa8d7433d49eb5bb436d1\"" Feb 8 23:21:41.323851 env[1337]: time="2024-02-08T23:21:41.323783490Z" level=info msg="StartContainer for \"26a6f80ed144334026037515a4330e2e03c5129dc43fa8d7433d49eb5bb436d1\"" Feb 8 23:21:41.353266 systemd[1]: run-containerd-runc-k8s.io-26a6f80ed144334026037515a4330e2e03c5129dc43fa8d7433d49eb5bb436d1-runc.5to5t5.mount: Deactivated successfully. Feb 8 23:21:41.356414 systemd[1]: Started cri-containerd-26a6f80ed144334026037515a4330e2e03c5129dc43fa8d7433d49eb5bb436d1.scope. Feb 8 23:21:41.387607 env[1337]: time="2024-02-08T23:21:41.387561259Z" level=info msg="StartContainer for \"26a6f80ed144334026037515a4330e2e03c5129dc43fa8d7433d49eb5bb436d1\" returns successfully" Feb 8 23:21:41.502139 kubelet[1858]: I0208 23:21:41.499724 1858 scope.go:115] "RemoveContainer" containerID="9197e3610da092d21faea379a28301e7667a95af9814ac81fe0ee6217acc72dd" Feb 8 23:21:41.504252 env[1337]: time="2024-02-08T23:21:41.503389868Z" level=info msg="RemoveContainer for \"9197e3610da092d21faea379a28301e7667a95af9814ac81fe0ee6217acc72dd\"" Feb 8 23:21:41.503469 systemd[1]: Removed slice kubepods-burstable-pod269058d2_2ed1_4526_9231_02fe113a8ce3.slice. 
Feb 8 23:21:41.513776 env[1337]: time="2024-02-08T23:21:41.513738295Z" level=info msg="RemoveContainer for \"9197e3610da092d21faea379a28301e7667a95af9814ac81fe0ee6217acc72dd\" returns successfully" Feb 8 23:21:41.519663 kubelet[1858]: I0208 23:21:41.519633 1858 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-9rl7d" podStartSLOduration=1.366953894 podCreationTimestamp="2024-02-08 23:21:38 +0000 UTC" firstStartedPulling="2024-02-08 23:21:39.139769389 +0000 UTC m=+69.283091953" lastFinishedPulling="2024-02-08 23:21:41.292408306 +0000 UTC m=+71.435730770" observedRunningTime="2024-02-08 23:21:41.51923971 +0000 UTC m=+71.662562174" watchObservedRunningTime="2024-02-08 23:21:41.519592711 +0000 UTC m=+71.662915275" Feb 8 23:21:41.555581 kubelet[1858]: I0208 23:21:41.555490 1858 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:21:41.555581 kubelet[1858]: E0208 23:21:41.555544 1858 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="269058d2-2ed1-4526-9231-02fe113a8ce3" containerName="mount-cgroup" Feb 8 23:21:41.555581 kubelet[1858]: E0208 23:21:41.555558 1858 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="269058d2-2ed1-4526-9231-02fe113a8ce3" containerName="mount-cgroup" Feb 8 23:21:41.555581 kubelet[1858]: I0208 23:21:41.555585 1858 memory_manager.go:346] "RemoveStaleState removing state" podUID="269058d2-2ed1-4526-9231-02fe113a8ce3" containerName="mount-cgroup" Feb 8 23:21:41.555851 kubelet[1858]: I0208 23:21:41.555594 1858 memory_manager.go:346] "RemoveStaleState removing state" podUID="269058d2-2ed1-4526-9231-02fe113a8ce3" containerName="mount-cgroup" Feb 8 23:21:41.561638 systemd[1]: Created slice kubepods-burstable-podb612e027_1c44_47f7_a8f9_afb091dcca5e.slice. 
Feb 8 23:21:41.717986 kubelet[1858]: I0208 23:21:41.717926 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b612e027-1c44-47f7-a8f9-afb091dcca5e-xtables-lock\") pod \"cilium-q6579\" (UID: \"b612e027-1c44-47f7-a8f9-afb091dcca5e\") " pod="kube-system/cilium-q6579" Feb 8 23:21:41.717986 kubelet[1858]: I0208 23:21:41.717995 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvffm\" (UniqueName: \"kubernetes.io/projected/b612e027-1c44-47f7-a8f9-afb091dcca5e-kube-api-access-wvffm\") pod \"cilium-q6579\" (UID: \"b612e027-1c44-47f7-a8f9-afb091dcca5e\") " pod="kube-system/cilium-q6579" Feb 8 23:21:41.718288 kubelet[1858]: I0208 23:21:41.718029 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b612e027-1c44-47f7-a8f9-afb091dcca5e-cilium-cgroup\") pod \"cilium-q6579\" (UID: \"b612e027-1c44-47f7-a8f9-afb091dcca5e\") " pod="kube-system/cilium-q6579" Feb 8 23:21:41.718288 kubelet[1858]: I0208 23:21:41.718058 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b612e027-1c44-47f7-a8f9-afb091dcca5e-etc-cni-netd\") pod \"cilium-q6579\" (UID: \"b612e027-1c44-47f7-a8f9-afb091dcca5e\") " pod="kube-system/cilium-q6579" Feb 8 23:21:41.718288 kubelet[1858]: I0208 23:21:41.718086 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b612e027-1c44-47f7-a8f9-afb091dcca5e-lib-modules\") pod \"cilium-q6579\" (UID: \"b612e027-1c44-47f7-a8f9-afb091dcca5e\") " pod="kube-system/cilium-q6579" Feb 8 23:21:41.718288 kubelet[1858]: I0208 23:21:41.718118 1858 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b612e027-1c44-47f7-a8f9-afb091dcca5e-clustermesh-secrets\") pod \"cilium-q6579\" (UID: \"b612e027-1c44-47f7-a8f9-afb091dcca5e\") " pod="kube-system/cilium-q6579" Feb 8 23:21:41.718288 kubelet[1858]: I0208 23:21:41.718149 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b612e027-1c44-47f7-a8f9-afb091dcca5e-hostproc\") pod \"cilium-q6579\" (UID: \"b612e027-1c44-47f7-a8f9-afb091dcca5e\") " pod="kube-system/cilium-q6579" Feb 8 23:21:41.718288 kubelet[1858]: I0208 23:21:41.718205 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b612e027-1c44-47f7-a8f9-afb091dcca5e-cilium-config-path\") pod \"cilium-q6579\" (UID: \"b612e027-1c44-47f7-a8f9-afb091dcca5e\") " pod="kube-system/cilium-q6579" Feb 8 23:21:41.718288 kubelet[1858]: I0208 23:21:41.718241 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b612e027-1c44-47f7-a8f9-afb091dcca5e-cilium-ipsec-secrets\") pod \"cilium-q6579\" (UID: \"b612e027-1c44-47f7-a8f9-afb091dcca5e\") " pod="kube-system/cilium-q6579" Feb 8 23:21:41.718288 kubelet[1858]: I0208 23:21:41.718286 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b612e027-1c44-47f7-a8f9-afb091dcca5e-host-proc-sys-net\") pod \"cilium-q6579\" (UID: \"b612e027-1c44-47f7-a8f9-afb091dcca5e\") " pod="kube-system/cilium-q6579" Feb 8 23:21:41.718772 kubelet[1858]: I0208 23:21:41.718322 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/b612e027-1c44-47f7-a8f9-afb091dcca5e-host-proc-sys-kernel\") pod \"cilium-q6579\" (UID: \"b612e027-1c44-47f7-a8f9-afb091dcca5e\") " pod="kube-system/cilium-q6579" Feb 8 23:21:41.718772 kubelet[1858]: I0208 23:21:41.718397 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b612e027-1c44-47f7-a8f9-afb091dcca5e-cilium-run\") pod \"cilium-q6579\" (UID: \"b612e027-1c44-47f7-a8f9-afb091dcca5e\") " pod="kube-system/cilium-q6579" Feb 8 23:21:41.718772 kubelet[1858]: I0208 23:21:41.718432 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b612e027-1c44-47f7-a8f9-afb091dcca5e-bpf-maps\") pod \"cilium-q6579\" (UID: \"b612e027-1c44-47f7-a8f9-afb091dcca5e\") " pod="kube-system/cilium-q6579" Feb 8 23:21:41.718772 kubelet[1858]: I0208 23:21:41.718467 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b612e027-1c44-47f7-a8f9-afb091dcca5e-cni-path\") pod \"cilium-q6579\" (UID: \"b612e027-1c44-47f7-a8f9-afb091dcca5e\") " pod="kube-system/cilium-q6579" Feb 8 23:21:41.718772 kubelet[1858]: I0208 23:21:41.718503 1858 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b612e027-1c44-47f7-a8f9-afb091dcca5e-hubble-tls\") pod \"cilium-q6579\" (UID: \"b612e027-1c44-47f7-a8f9-afb091dcca5e\") " pod="kube-system/cilium-q6579" Feb 8 23:21:41.868247 env[1337]: time="2024-02-08T23:21:41.867771638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q6579,Uid:b612e027-1c44-47f7-a8f9-afb091dcca5e,Namespace:kube-system,Attempt:0,}" Feb 8 23:21:41.898174 env[1337]: time="2024-02-08T23:21:41.898103518Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:21:41.898174 env[1337]: time="2024-02-08T23:21:41.898137618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:21:41.898480 env[1337]: time="2024-02-08T23:21:41.898152118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:21:41.898480 env[1337]: time="2024-02-08T23:21:41.898357019Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/858dd52e4ca9bf9fe8a681efad4dd93d6a60f94009ef8973f4a7a9af7352c2c4 pid=3584 runtime=io.containerd.runc.v2 Feb 8 23:21:41.910828 systemd[1]: Started cri-containerd-858dd52e4ca9bf9fe8a681efad4dd93d6a60f94009ef8973f4a7a9af7352c2c4.scope. Feb 8 23:21:41.933969 env[1337]: time="2024-02-08T23:21:41.933932114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q6579,Uid:b612e027-1c44-47f7-a8f9-afb091dcca5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"858dd52e4ca9bf9fe8a681efad4dd93d6a60f94009ef8973f4a7a9af7352c2c4\"" Feb 8 23:21:41.936366 env[1337]: time="2024-02-08T23:21:41.936320320Z" level=info msg="CreateContainer within sandbox \"858dd52e4ca9bf9fe8a681efad4dd93d6a60f94009ef8973f4a7a9af7352c2c4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:21:41.971339 env[1337]: time="2024-02-08T23:21:41.971295613Z" level=info msg="CreateContainer within sandbox \"858dd52e4ca9bf9fe8a681efad4dd93d6a60f94009ef8973f4a7a9af7352c2c4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"efb9393dae87cfabb31efbe03f380498ea09c1a6223ac4ab52468a48e5762f4c\"" Feb 8 23:21:41.971951 env[1337]: time="2024-02-08T23:21:41.971916715Z" level=info msg="StartContainer for \"efb9393dae87cfabb31efbe03f380498ea09c1a6223ac4ab52468a48e5762f4c\"" Feb 8 
23:21:41.987393 systemd[1]: Started cri-containerd-efb9393dae87cfabb31efbe03f380498ea09c1a6223ac4ab52468a48e5762f4c.scope. Feb 8 23:21:42.017220 env[1337]: time="2024-02-08T23:21:42.017163935Z" level=info msg="StartContainer for \"efb9393dae87cfabb31efbe03f380498ea09c1a6223ac4ab52468a48e5762f4c\" returns successfully" Feb 8 23:21:42.021848 systemd[1]: cri-containerd-efb9393dae87cfabb31efbe03f380498ea09c1a6223ac4ab52468a48e5762f4c.scope: Deactivated successfully. Feb 8 23:21:42.488418 kubelet[1858]: W0208 23:21:42.297888 1858 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod269058d2_2ed1_4526_9231_02fe113a8ce3.slice/cri-containerd-c5bb7426889e15e79ad83dfa32bbcce7cf4d4c7f97684c7f89ff065857afa511.scope WatchSource:0}: container "c5bb7426889e15e79ad83dfa32bbcce7cf4d4c7f97684c7f89ff065857afa511" in namespace "k8s.io": not found Feb 8 23:21:42.488418 kubelet[1858]: E0208 23:21:42.298318 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:42.489528 kubelet[1858]: I0208 23:21:42.489498 1858 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=269058d2-2ed1-4526-9231-02fe113a8ce3 path="/var/lib/kubelet/pods/269058d2-2ed1-4526-9231-02fe113a8ce3/volumes" Feb 8 23:21:42.541425 env[1337]: time="2024-02-08T23:21:42.541365314Z" level=info msg="shim disconnected" id=efb9393dae87cfabb31efbe03f380498ea09c1a6223ac4ab52468a48e5762f4c Feb 8 23:21:42.541908 env[1337]: time="2024-02-08T23:21:42.541441414Z" level=warning msg="cleaning up after shim disconnected" id=efb9393dae87cfabb31efbe03f380498ea09c1a6223ac4ab52468a48e5762f4c namespace=k8s.io Feb 8 23:21:42.541908 env[1337]: time="2024-02-08T23:21:42.541457614Z" level=info msg="cleaning up dead shim" Feb 8 23:21:42.542237 kubelet[1858]: W0208 23:21:42.542210 1858 container.go:485] Failed to get 
RecentStats("/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb612e027_1c44_47f7_a8f9_afb091dcca5e.slice/cri-containerd-efb9393dae87cfabb31efbe03f380498ea09c1a6223ac4ab52468a48e5762f4c.scope") while determining the next housekeeping: unable to find data in memory cache Feb 8 23:21:42.547706 kubelet[1858]: E0208 23:21:42.547678 1858 cadvisor_stats_provider.go:442] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb612e027_1c44_47f7_a8f9_afb091dcca5e.slice/cri-containerd-efb9393dae87cfabb31efbe03f380498ea09c1a6223ac4ab52468a48e5762f4c.scope\": RecentStats: unable to find data in memory cache]" Feb 8 23:21:42.552610 env[1337]: time="2024-02-08T23:21:42.552572543Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3668 runtime=io.containerd.runc.v2\n" Feb 8 23:21:43.030719 kubelet[1858]: I0208 23:21:43.030678 1858 setters.go:548] "Node became not ready" node="10.200.8.10" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-08 23:21:43.0306013 +0000 UTC m=+73.173923764 LastTransitionTime:2024-02-08 23:21:43.0306013 +0000 UTC m=+73.173923764 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 8 23:21:43.299080 kubelet[1858]: E0208 23:21:43.298949 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:43.509772 env[1337]: time="2024-02-08T23:21:43.509720746Z" level=info msg="CreateContainer within sandbox \"858dd52e4ca9bf9fe8a681efad4dd93d6a60f94009ef8973f4a7a9af7352c2c4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:21:43.537541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount919095160.mount: Deactivated successfully. 
Feb 8 23:21:43.543918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2511629230.mount: Deactivated successfully. Feb 8 23:21:43.556039 env[1337]: time="2024-02-08T23:21:43.555899666Z" level=info msg="CreateContainer within sandbox \"858dd52e4ca9bf9fe8a681efad4dd93d6a60f94009ef8973f4a7a9af7352c2c4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"11d4e95dd31e3cc12d007f9a980eb26a92b6e00f9dcb4d6f492cad9b0c1b5b9c\"" Feb 8 23:21:43.556803 env[1337]: time="2024-02-08T23:21:43.556771968Z" level=info msg="StartContainer for \"11d4e95dd31e3cc12d007f9a980eb26a92b6e00f9dcb4d6f492cad9b0c1b5b9c\"" Feb 8 23:21:43.575271 systemd[1]: Started cri-containerd-11d4e95dd31e3cc12d007f9a980eb26a92b6e00f9dcb4d6f492cad9b0c1b5b9c.scope. Feb 8 23:21:43.607211 env[1337]: time="2024-02-08T23:21:43.607170000Z" level=info msg="StartContainer for \"11d4e95dd31e3cc12d007f9a980eb26a92b6e00f9dcb4d6f492cad9b0c1b5b9c\" returns successfully" Feb 8 23:21:43.609059 systemd[1]: cri-containerd-11d4e95dd31e3cc12d007f9a980eb26a92b6e00f9dcb4d6f492cad9b0c1b5b9c.scope: Deactivated successfully. 
Feb 8 23:21:43.636651 env[1337]: time="2024-02-08T23:21:43.636592676Z" level=info msg="shim disconnected" id=11d4e95dd31e3cc12d007f9a980eb26a92b6e00f9dcb4d6f492cad9b0c1b5b9c Feb 8 23:21:43.636651 env[1337]: time="2024-02-08T23:21:43.636641476Z" level=warning msg="cleaning up after shim disconnected" id=11d4e95dd31e3cc12d007f9a980eb26a92b6e00f9dcb4d6f492cad9b0c1b5b9c namespace=k8s.io Feb 8 23:21:43.636651 env[1337]: time="2024-02-08T23:21:43.636655076Z" level=info msg="cleaning up dead shim" Feb 8 23:21:43.644196 env[1337]: time="2024-02-08T23:21:43.644159896Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3729 runtime=io.containerd.runc.v2\n" Feb 8 23:21:44.299910 kubelet[1858]: E0208 23:21:44.299849 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:44.512838 env[1337]: time="2024-02-08T23:21:44.512795640Z" level=info msg="CreateContainer within sandbox \"858dd52e4ca9bf9fe8a681efad4dd93d6a60f94009ef8973f4a7a9af7352c2c4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:21:44.534414 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11d4e95dd31e3cc12d007f9a980eb26a92b6e00f9dcb4d6f492cad9b0c1b5b9c-rootfs.mount: Deactivated successfully. Feb 8 23:21:44.546509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1653014590.mount: Deactivated successfully. 
Feb 8 23:21:44.563248 env[1337]: time="2024-02-08T23:21:44.563143870Z" level=info msg="CreateContainer within sandbox \"858dd52e4ca9bf9fe8a681efad4dd93d6a60f94009ef8973f4a7a9af7352c2c4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"adb240560f18b0bf456a95cdc2c02d726e8be3432614da850536efe186acda58\"" Feb 8 23:21:44.564165 env[1337]: time="2024-02-08T23:21:44.564133172Z" level=info msg="StartContainer for \"adb240560f18b0bf456a95cdc2c02d726e8be3432614da850536efe186acda58\"" Feb 8 23:21:44.591668 systemd[1]: Started cri-containerd-adb240560f18b0bf456a95cdc2c02d726e8be3432614da850536efe186acda58.scope. Feb 8 23:21:44.623474 systemd[1]: cri-containerd-adb240560f18b0bf456a95cdc2c02d726e8be3432614da850536efe186acda58.scope: Deactivated successfully. Feb 8 23:21:44.625840 env[1337]: time="2024-02-08T23:21:44.625795631Z" level=info msg="StartContainer for \"adb240560f18b0bf456a95cdc2c02d726e8be3432614da850536efe186acda58\" returns successfully" Feb 8 23:21:44.656244 env[1337]: time="2024-02-08T23:21:44.656179609Z" level=info msg="shim disconnected" id=adb240560f18b0bf456a95cdc2c02d726e8be3432614da850536efe186acda58 Feb 8 23:21:44.656244 env[1337]: time="2024-02-08T23:21:44.656243209Z" level=warning msg="cleaning up after shim disconnected" id=adb240560f18b0bf456a95cdc2c02d726e8be3432614da850536efe186acda58 namespace=k8s.io Feb 8 23:21:44.656571 env[1337]: time="2024-02-08T23:21:44.656255709Z" level=info msg="cleaning up dead shim" Feb 8 23:21:44.664825 env[1337]: time="2024-02-08T23:21:44.664785031Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3785 runtime=io.containerd.runc.v2\n" Feb 8 23:21:45.300508 kubelet[1858]: E0208 23:21:45.300454 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:45.339608 kubelet[1858]: E0208 23:21:45.339558 1858 kubelet.go:2760] "Container runtime network 
not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:21:45.517134 env[1337]: time="2024-02-08T23:21:45.517086509Z" level=info msg="CreateContainer within sandbox \"858dd52e4ca9bf9fe8a681efad4dd93d6a60f94009ef8973f4a7a9af7352c2c4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:21:45.534475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adb240560f18b0bf456a95cdc2c02d726e8be3432614da850536efe186acda58-rootfs.mount: Deactivated successfully. Feb 8 23:21:45.554542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439224228.mount: Deactivated successfully. Feb 8 23:21:45.571719 env[1337]: time="2024-02-08T23:21:45.571678148Z" level=info msg="CreateContainer within sandbox \"858dd52e4ca9bf9fe8a681efad4dd93d6a60f94009ef8973f4a7a9af7352c2c4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6d3485b07b66e29fafeba59c25b4971a4dd6e6ab12488da61b4e6b5ac7f11483\"" Feb 8 23:21:45.572437 env[1337]: time="2024-02-08T23:21:45.572400950Z" level=info msg="StartContainer for \"6d3485b07b66e29fafeba59c25b4971a4dd6e6ab12488da61b4e6b5ac7f11483\"" Feb 8 23:21:45.593651 systemd[1]: Started cri-containerd-6d3485b07b66e29fafeba59c25b4971a4dd6e6ab12488da61b4e6b5ac7f11483.scope. Feb 8 23:21:45.621095 systemd[1]: cri-containerd-6d3485b07b66e29fafeba59c25b4971a4dd6e6ab12488da61b4e6b5ac7f11483.scope: Deactivated successfully. 
Feb 8 23:21:45.622875 env[1337]: time="2024-02-08T23:21:45.622708578Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb612e027_1c44_47f7_a8f9_afb091dcca5e.slice/cri-containerd-6d3485b07b66e29fafeba59c25b4971a4dd6e6ab12488da61b4e6b5ac7f11483.scope/memory.events\": no such file or directory" Feb 8 23:21:45.628043 env[1337]: time="2024-02-08T23:21:45.628007091Z" level=info msg="StartContainer for \"6d3485b07b66e29fafeba59c25b4971a4dd6e6ab12488da61b4e6b5ac7f11483\" returns successfully" Feb 8 23:21:45.654819 env[1337]: time="2024-02-08T23:21:45.654769559Z" level=info msg="shim disconnected" id=6d3485b07b66e29fafeba59c25b4971a4dd6e6ab12488da61b4e6b5ac7f11483 Feb 8 23:21:45.655025 env[1337]: time="2024-02-08T23:21:45.654818959Z" level=warning msg="cleaning up after shim disconnected" id=6d3485b07b66e29fafeba59c25b4971a4dd6e6ab12488da61b4e6b5ac7f11483 namespace=k8s.io Feb 8 23:21:45.655025 env[1337]: time="2024-02-08T23:21:45.654830559Z" level=info msg="cleaning up dead shim" Feb 8 23:21:45.661821 env[1337]: time="2024-02-08T23:21:45.661787777Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3841 runtime=io.containerd.runc.v2\n" Feb 8 23:21:46.301697 kubelet[1858]: E0208 23:21:46.301638 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:46.523039 env[1337]: time="2024-02-08T23:21:46.522971954Z" level=info msg="CreateContainer within sandbox \"858dd52e4ca9bf9fe8a681efad4dd93d6a60f94009ef8973f4a7a9af7352c2c4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:21:46.534507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d3485b07b66e29fafeba59c25b4971a4dd6e6ab12488da61b4e6b5ac7f11483-rootfs.mount: Deactivated successfully. 
Feb 8 23:21:46.564476 env[1337]: time="2024-02-08T23:21:46.564064858Z" level=info msg="CreateContainer within sandbox \"858dd52e4ca9bf9fe8a681efad4dd93d6a60f94009ef8973f4a7a9af7352c2c4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e7da7700bdd048e05661c47f470aa1c92881aa6435eaaf531b2bdd5f6ee63ce9\"" Feb 8 23:21:46.564811 env[1337]: time="2024-02-08T23:21:46.564774660Z" level=info msg="StartContainer for \"e7da7700bdd048e05661c47f470aa1c92881aa6435eaaf531b2bdd5f6ee63ce9\"" Feb 8 23:21:46.584478 systemd[1]: Started cri-containerd-e7da7700bdd048e05661c47f470aa1c92881aa6435eaaf531b2bdd5f6ee63ce9.scope. Feb 8 23:21:46.623887 env[1337]: time="2024-02-08T23:21:46.623830008Z" level=info msg="StartContainer for \"e7da7700bdd048e05661c47f470aa1c92881aa6435eaaf531b2bdd5f6ee63ce9\" returns successfully" Feb 8 23:21:46.958363 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 8 23:21:47.302645 kubelet[1858]: E0208 23:21:47.302503 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:47.539871 kubelet[1858]: I0208 23:21:47.539837 1858 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-q6579" podStartSLOduration=6.5398053990000005 podCreationTimestamp="2024-02-08 23:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:21:47.539608598 +0000 UTC m=+77.682931162" watchObservedRunningTime="2024-02-08 23:21:47.539805399 +0000 UTC m=+77.683127963" Feb 8 23:21:48.303100 kubelet[1858]: E0208 23:21:48.303044 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:49.303444 kubelet[1858]: E0208 23:21:49.303393 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 
23:21:49.486705 systemd-networkd[1480]: lxc_health: Link UP Feb 8 23:21:49.506408 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:21:49.506307 systemd-networkd[1480]: lxc_health: Gained carrier Feb 8 23:21:49.543142 systemd[1]: run-containerd-runc-k8s.io-e7da7700bdd048e05661c47f470aa1c92881aa6435eaaf531b2bdd5f6ee63ce9-runc.rUszVL.mount: Deactivated successfully. Feb 8 23:21:50.249526 kubelet[1858]: E0208 23:21:50.249482 1858 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:50.304605 kubelet[1858]: E0208 23:21:50.304562 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:50.709559 systemd-networkd[1480]: lxc_health: Gained IPv6LL Feb 8 23:21:51.305790 kubelet[1858]: E0208 23:21:51.305738 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:51.775831 systemd[1]: run-containerd-runc-k8s.io-e7da7700bdd048e05661c47f470aa1c92881aa6435eaaf531b2bdd5f6ee63ce9-runc.QwnpTv.mount: Deactivated successfully. Feb 8 23:21:52.306732 kubelet[1858]: E0208 23:21:52.306653 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:53.307301 kubelet[1858]: E0208 23:21:53.307255 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:53.933463 systemd[1]: run-containerd-runc-k8s.io-e7da7700bdd048e05661c47f470aa1c92881aa6435eaaf531b2bdd5f6ee63ce9-runc.7xyVvL.mount: Deactivated successfully. 
Feb 8 23:21:54.307974 kubelet[1858]: E0208 23:21:54.307835 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:55.308398 kubelet[1858]: E0208 23:21:55.308263 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:56.308698 kubelet[1858]: E0208 23:21:56.308654 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:57.308924 kubelet[1858]: E0208 23:21:57.308864 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:58.309780 kubelet[1858]: E0208 23:21:58.309727 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:21:59.310497 kubelet[1858]: E0208 23:21:59.310445 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:22:00.311536 kubelet[1858]: E0208 23:22:00.311486 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:22:01.312671 kubelet[1858]: E0208 23:22:01.312610 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:22:02.313243 kubelet[1858]: E0208 23:22:02.313186 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:22:03.313777 kubelet[1858]: E0208 23:22:03.313714 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:22:04.314111 kubelet[1858]: E0208 23:22:04.314053 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 
23:22:05.314504 kubelet[1858]: E0208 23:22:05.314445 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:22:06.314699 kubelet[1858]: E0208 23:22:06.314636 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:22:07.315632 kubelet[1858]: E0208 23:22:07.315580 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:22:08.316226 kubelet[1858]: E0208 23:22:08.316117 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:22:09.317070 kubelet[1858]: E0208 23:22:09.317011 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:22:10.249477 kubelet[1858]: E0208 23:22:10.249419 1858 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:22:10.317978 kubelet[1858]: E0208 23:22:10.317925 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:22:11.318561 kubelet[1858]: E0208 23:22:11.318504 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:22:11.456505 systemd[1]: cri-containerd-26a6f80ed144334026037515a4330e2e03c5129dc43fa8d7433d49eb5bb436d1.scope: Deactivated successfully. Feb 8 23:22:11.476093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26a6f80ed144334026037515a4330e2e03c5129dc43fa8d7433d49eb5bb436d1-rootfs.mount: Deactivated successfully. 
Feb 8 23:22:11.505082 env[1337]: time="2024-02-08T23:22:11.505023080Z" level=info msg="shim disconnected" id=26a6f80ed144334026037515a4330e2e03c5129dc43fa8d7433d49eb5bb436d1 Feb 8 23:22:11.505082 env[1337]: time="2024-02-08T23:22:11.505082480Z" level=warning msg="cleaning up after shim disconnected" id=26a6f80ed144334026037515a4330e2e03c5129dc43fa8d7433d49eb5bb436d1 namespace=k8s.io Feb 8 23:22:11.505618 env[1337]: time="2024-02-08T23:22:11.505094880Z" level=info msg="cleaning up dead shim" Feb 8 23:22:11.512871 env[1337]: time="2024-02-08T23:22:11.512834896Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:22:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4536 runtime=io.containerd.runc.v2\n" Feb 8 23:22:11.585750 kubelet[1858]: I0208 23:22:11.585101 1858 scope.go:115] "RemoveContainer" containerID="26a6f80ed144334026037515a4330e2e03c5129dc43fa8d7433d49eb5bb436d1" Feb 8 23:22:11.587619 env[1337]: time="2024-02-08T23:22:11.587580449Z" level=info msg="CreateContainer within sandbox \"a1fe50660be9f2c320666112f0520c0c0fd5bdc87f3b9f2d69b631133bd4a2f7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Feb 8 23:22:11.611941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3115623250.mount: Deactivated successfully. Feb 8 23:22:11.625437 env[1337]: time="2024-02-08T23:22:11.625393827Z" level=info msg="CreateContainer within sandbox \"a1fe50660be9f2c320666112f0520c0c0fd5bdc87f3b9f2d69b631133bd4a2f7\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"68a14b66927d694c4121ddd4d8911110b666ffc1d4415c0fb362b46b8177f202\"" Feb 8 23:22:11.625901 env[1337]: time="2024-02-08T23:22:11.625868428Z" level=info msg="StartContainer for \"68a14b66927d694c4121ddd4d8911110b666ffc1d4415c0fb362b46b8177f202\"" Feb 8 23:22:11.646814 systemd[1]: Started cri-containerd-68a14b66927d694c4121ddd4d8911110b666ffc1d4415c0fb362b46b8177f202.scope. 
Feb 8 23:22:11.680010 env[1337]: time="2024-02-08T23:22:11.679958139Z" level=info msg="StartContainer for \"68a14b66927d694c4121ddd4d8911110b666ffc1d4415c0fb362b46b8177f202\" returns successfully"
Feb 8 23:22:12.319145 kubelet[1858]: E0208 23:22:12.319091 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:13.319955 kubelet[1858]: E0208 23:22:13.319893 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:13.427237 kubelet[1858]: E0208 23:22:13.427164 1858 kubelet_node_status.go:540] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-02-08T23:22:03Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-08T23:22:03Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-08T23:22:03Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-08T23:22:03Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":57035507},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88\\\",\\\"registry.k8s.io/kube-proxy:v1.27.10\\\"],\\\"sizeBytes\\\":25732783},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}]}}\" for node \"10.200.8.10\": Patch \"https://10.200.8.40:6443/api/v1/nodes/10.200.8.10/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 8 23:22:13.711497 kubelet[1858]: E0208 23:22:13.711383 1858 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.10?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 8 23:22:14.320573 kubelet[1858]: E0208 23:22:14.320520 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:15.321108 kubelet[1858]: E0208 23:22:15.321049 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:15.622387 kubelet[1858]: E0208 23:22:15.622322 1858 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.40:37884->10.200.8.21:2379: read: connection timed out"
Feb 8 23:22:16.321518 kubelet[1858]: E0208 23:22:16.321456 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:17.322131 kubelet[1858]: E0208 23:22:17.322070 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:18.322288 kubelet[1858]: E0208 23:22:18.322222 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:19.322665 kubelet[1858]: E0208 23:22:19.322588 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:20.322968 kubelet[1858]: E0208 23:22:20.322914 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:20.678210 kubelet[1858]: I0208 23:22:20.678161 1858 status_manager.go:809] "Failed to get status for pod" podUID=b907b49c-0963-420c-becf-9fd8c13f3a79 pod="kube-system/cilium-operator-574c4bb98d-9rl7d" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.40:37798->10.200.8.21:2379: read: connection timed out"
Feb 8 23:22:21.323428 kubelet[1858]: E0208 23:22:21.323378 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:22.086436 kubelet[1858]: E0208 23:22:22.086299 1858 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cilium-operator-574c4bb98d-9rl7d.17b206a8fc6f699e", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"cilium-operator-574c4bb98d-9rl7d", UID:"b907b49c-0963-420c-becf-9fd8c13f3a79", APIVersion:"v1", ResourceVersion:"1216", FieldPath:"spec.containers{cilium-operator}"}, Reason:"Pulled", Message:"Container image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" already present on machine", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 22, 11, 586271646, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 22, 11, 586271646, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.40:37698->10.200.8.21:2379: read: connection timed out' (will not retry!)
Feb 8 23:22:22.324204 kubelet[1858]: E0208 23:22:22.324142 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:23.324342 kubelet[1858]: E0208 23:22:23.324269 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:23.428012 kubelet[1858]: E0208 23:22:23.427967 1858 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.8.10\": Get \"https://10.200.8.40:6443/api/v1/nodes/10.200.8.10?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 8 23:22:23.686152 kubelet[1858]: E0208 23:22:23.686108 1858 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.8.10\": rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.40:37796->10.200.8.21:2379: read: connection timed out"
Feb 8 23:22:24.324983 kubelet[1858]: E0208 23:22:24.324921 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:25.325766 kubelet[1858]: E0208 23:22:25.325707 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:25.623435 kubelet[1858]: E0208 23:22:25.623400 1858 controller.go:193] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 10.200.8.10)"
Feb 8 23:22:26.325991 kubelet[1858]: E0208 23:22:26.325962 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:27.326729 kubelet[1858]: E0208 23:22:27.326676 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:28.327294 kubelet[1858]: E0208 23:22:28.327256 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:29.327453 kubelet[1858]: E0208 23:22:29.327402 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:30.248770 kubelet[1858]: E0208 23:22:30.248714 1858 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:30.259139 env[1337]: time="2024-02-08T23:22:30.259091707Z" level=info msg="StopPodSandbox for \"9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a\""
Feb 8 23:22:30.259651 env[1337]: time="2024-02-08T23:22:30.259212008Z" level=info msg="TearDown network for sandbox \"9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a\" successfully"
Feb 8 23:22:30.259651 env[1337]: time="2024-02-08T23:22:30.259270009Z" level=info msg="StopPodSandbox for \"9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a\" returns successfully"
Feb 8 23:22:30.259769 env[1337]: time="2024-02-08T23:22:30.259733614Z" level=info msg="RemovePodSandbox for \"9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a\""
Feb 8 23:22:30.259826 env[1337]: time="2024-02-08T23:22:30.259772815Z" level=info msg="Forcibly stopping sandbox \"9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a\""
Feb 8 23:22:30.259913 env[1337]: time="2024-02-08T23:22:30.259883216Z" level=info msg="TearDown network for sandbox \"9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a\" successfully"
Feb 8 23:22:30.270298 env[1337]: time="2024-02-08T23:22:30.270258442Z" level=info msg="RemovePodSandbox \"9a5641443f910bc34b1d1ffd006ba39f23b600abcdb948ce394ddcda50abd97a\" returns successfully"
Feb 8 23:22:30.270960 env[1337]: time="2024-02-08T23:22:30.270931650Z" level=info msg="StopPodSandbox for \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\""
Feb 8 23:22:30.271190 env[1337]: time="2024-02-08T23:22:30.271144953Z" level=info msg="TearDown network for sandbox \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\" successfully"
Feb 8 23:22:30.271190 env[1337]: time="2024-02-08T23:22:30.271182753Z" level=info msg="StopPodSandbox for \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\" returns successfully"
Feb 8 23:22:30.271478 env[1337]: time="2024-02-08T23:22:30.271453057Z" level=info msg="RemovePodSandbox for \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\""
Feb 8 23:22:30.271559 env[1337]: time="2024-02-08T23:22:30.271483157Z" level=info msg="Forcibly stopping sandbox \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\""
Feb 8 23:22:30.271606 env[1337]: time="2024-02-08T23:22:30.271564758Z" level=info msg="TearDown network for sandbox \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\" successfully"
Feb 8 23:22:30.278265 env[1337]: time="2024-02-08T23:22:30.278237839Z" level=info msg="RemovePodSandbox \"7dc980d799dbf5c24589970b3ef8ad7f81b8f9fb9193ce9135d1f48c630966d6\" returns successfully"
Feb 8 23:22:30.330301 kubelet[1858]: E0208 23:22:30.330082 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:31.330436 kubelet[1858]: E0208 23:22:31.330380 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:32.331436 kubelet[1858]: E0208 23:22:32.331406 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:33.332488 kubelet[1858]: E0208 23:22:33.332379 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:33.686441 kubelet[1858]: E0208 23:22:33.686404 1858 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.8.10\": Get \"https://10.200.8.40:6443/api/v1/nodes/10.200.8.10?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 8 23:22:34.333186 kubelet[1858]: E0208 23:22:34.333138 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:35.333852 kubelet[1858]: E0208 23:22:35.333741 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:35.624062 kubelet[1858]: E0208 23:22:35.623967 1858 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.10?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 8 23:22:36.334708 kubelet[1858]: E0208 23:22:36.334673 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:37.335269 kubelet[1858]: E0208 23:22:37.335162 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:38.336096 kubelet[1858]: E0208 23:22:38.336059 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:39.337218 kubelet[1858]: E0208 23:22:39.337161 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:40.337545 kubelet[1858]: E0208 23:22:40.337508 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:41.338556 kubelet[1858]: E0208 23:22:41.338500 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:42.339078 kubelet[1858]: E0208 23:22:42.339041 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:43.339550 kubelet[1858]: E0208 23:22:43.339494 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:43.686753 kubelet[1858]: E0208 23:22:43.686698 1858 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.8.10\": Get \"https://10.200.8.40:6443/api/v1/nodes/10.200.8.10?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 8 23:22:43.686753 kubelet[1858]: E0208 23:22:43.686744 1858 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
Feb 8 23:22:44.339972 kubelet[1858]: E0208 23:22:44.339919 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:45.340963 kubelet[1858]: E0208 23:22:45.340923 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:45.624809 kubelet[1858]: E0208 23:22:45.624772 1858 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.10?timeout=10s\": context deadline exceeded"
Feb 8 23:22:45.625008 kubelet[1858]: I0208 23:22:45.624819 1858 controller.go:116] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 8 23:22:46.341356 kubelet[1858]: E0208 23:22:46.341297 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:47.341845 kubelet[1858]: E0208 23:22:47.341791 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:48.342428 kubelet[1858]: E0208 23:22:48.342387 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:49.342706 kubelet[1858]: E0208 23:22:49.342655 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:50.248666 kubelet[1858]: E0208 23:22:50.248615 1858 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:50.343698 kubelet[1858]: E0208 23:22:50.343646 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:51.344447 kubelet[1858]: E0208 23:22:51.344392 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:52.344823 kubelet[1858]: E0208 23:22:52.344782 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:53.345709 kubelet[1858]: E0208 23:22:53.345653 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:22:54.346675 kubelet[1858]: E0208 23:22:54.346624 1858 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"