Feb 8 23:23:04.034268 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 8 23:23:04.034290 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:23:04.034300 kernel: BIOS-provided physical RAM map:
Feb 8 23:23:04.034308 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 8 23:23:04.034314 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 8 23:23:04.034321 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 8 23:23:04.034349 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 8 23:23:04.034356 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 8 23:23:04.034363 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 8 23:23:04.034371 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 8 23:23:04.034376 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 8 23:23:04.034382 kernel: printk: bootconsole [earlyser0] enabled
Feb 8 23:23:04.034390 kernel: NX (Execute Disable) protection: active
Feb 8 23:23:04.034397 kernel: efi: EFI v2.70 by Microsoft
Feb 8 23:23:04.034409 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018
Feb 8 23:23:04.034415 kernel: random: crng init done
Feb 8 23:23:04.034422 kernel: SMBIOS 3.1.0 present.
Feb 8 23:23:04.034430 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 8 23:23:04.034437 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 8 23:23:04.034444 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 8 23:23:04.034453 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 8 23:23:04.034459 kernel: Hyper-V: Nested features: 0x1e0101
Feb 8 23:23:04.034468 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 8 23:23:04.034476 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 8 23:23:04.034482 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 8 23:23:04.034493 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 8 23:23:04.034499 kernel: tsc: Detected 2593.908 MHz processor
Feb 8 23:23:04.034506 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 8 23:23:04.034516 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 8 23:23:04.034522 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 8 23:23:04.034528 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 8 23:23:04.034535 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 8 23:23:04.034546 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 8 23:23:04.034552 kernel: Using GB pages for direct mapping
Feb 8 23:23:04.034559 kernel: Secure boot disabled
Feb 8 23:23:04.034565 kernel: ACPI: Early table checksum verification disabled
Feb 8 23:23:04.034574 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 8 23:23:04.034581 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:23:04.034587 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:23:04.034595 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 8 23:23:04.034607 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 8 23:23:04.034614 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:23:04.034622 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:23:04.034631 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:23:04.034639 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:23:04.034648 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:23:04.034656 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:23:04.034665 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:23:04.034673 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 8 23:23:04.034683 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 8 23:23:04.034690 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 8 23:23:04.034697 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 8 23:23:04.034706 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 8 23:23:04.034714 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 8 23:23:04.034725 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 8 23:23:04.034732 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 8 23:23:04.034738 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 8 23:23:04.034748 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 8 23:23:04.034755 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 8 23:23:04.034764 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 8 23:23:04.034772 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 8 23:23:04.034778 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 8 23:23:04.034786 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 8 23:23:04.034797 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 8 23:23:04.034805 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 8 23:23:04.034814 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 8 23:23:04.034820 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 8 23:23:04.034828 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 8 23:23:04.034837 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 8 23:23:04.034844 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 8 23:23:04.034854 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 8 23:23:04.034861 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 8 23:23:04.034871 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 8 23:23:04.034879 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 8 23:23:04.034888 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 8 23:23:04.034896 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 8 23:23:04.034903 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 8 23:23:04.034911 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 8 23:23:04.034920 kernel: Zone ranges:
Feb 8 23:23:04.034928 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 8 23:23:04.034937 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 8 23:23:04.034945 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 8 23:23:04.034954 kernel: Movable zone start for each node
Feb 8 23:23:04.034962 kernel: Early memory node ranges
Feb 8 23:23:04.034970 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 8 23:23:04.034978 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 8 23:23:04.034985 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 8 23:23:04.034993 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 8 23:23:04.035002 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 8 23:23:04.035010 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 8 23:23:04.035020 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 8 23:23:04.035027 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 8 23:23:04.035035 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 8 23:23:04.035044 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 8 23:23:04.035053 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 8 23:23:04.035061 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 8 23:23:04.035068 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 8 23:23:04.035076 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 8 23:23:04.035085 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 8 23:23:04.035093 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 8 23:23:04.035102 kernel: Booting paravirtualized kernel on Hyper-V
Feb 8 23:23:04.035110 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 8 23:23:04.035117 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 8 23:23:04.035124 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 8 23:23:04.035133 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 8 23:23:04.035140 kernel: pcpu-alloc: [0] 0 1
Feb 8 23:23:04.035150 kernel: Hyper-V: PV spinlocks enabled
Feb 8 23:23:04.035157 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 8 23:23:04.035165 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 8 23:23:04.035175 kernel: Policy zone: Normal
Feb 8 23:23:04.035183 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:23:04.035193 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 8 23:23:04.035200 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 8 23:23:04.035207 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 8 23:23:04.035217 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 8 23:23:04.035224 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved)
Feb 8 23:23:04.035235 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 8 23:23:04.035242 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 8 23:23:04.035258 kernel: ftrace: allocated 135 pages with 4 groups
Feb 8 23:23:04.035267 kernel: rcu: Hierarchical RCU implementation.
Feb 8 23:23:04.035278 kernel: rcu: RCU event tracing is enabled.
Feb 8 23:23:04.035285 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 8 23:23:04.035293 kernel: Rude variant of Tasks RCU enabled.
Feb 8 23:23:04.035303 kernel: Tracing variant of Tasks RCU enabled.
Feb 8 23:23:04.035312 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 8 23:23:04.035321 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 8 23:23:04.035333 kernel: Using NULL legacy PIC
Feb 8 23:23:04.035346 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 8 23:23:04.035354 kernel: Console: colour dummy device 80x25
Feb 8 23:23:04.035364 kernel: printk: console [tty1] enabled
Feb 8 23:23:04.035371 kernel: printk: console [ttyS0] enabled
Feb 8 23:23:04.035379 kernel: printk: bootconsole [earlyser0] disabled
Feb 8 23:23:04.035390 kernel: ACPI: Core revision 20210730
Feb 8 23:23:04.035400 kernel: Failed to register legacy timer interrupt
Feb 8 23:23:04.035408 kernel: APIC: Switch to symmetric I/O mode setup
Feb 8 23:23:04.035415 kernel: Hyper-V: Using IPI hypercalls
Feb 8 23:23:04.035426 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593908)
Feb 8 23:23:04.035433 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 8 23:23:04.035444 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 8 23:23:04.035451 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 8 23:23:04.035458 kernel: Spectre V2 : Mitigation: Retpolines
Feb 8 23:23:04.035468 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 8 23:23:04.035479 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 8 23:23:04.035488 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 8 23:23:04.035495 kernel: RETBleed: Vulnerable
Feb 8 23:23:04.035503 kernel: Speculative Store Bypass: Vulnerable
Feb 8 23:23:04.035512 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 8 23:23:04.035520 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 8 23:23:04.035529 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 8 23:23:04.035536 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 8 23:23:04.035544 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 8 23:23:04.035554 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 8 23:23:04.035565 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 8 23:23:04.035573 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 8 23:23:04.035580 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 8 23:23:04.035591 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 8 23:23:04.035598 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 8 23:23:04.035608 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 8 23:23:04.035616 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 8 23:23:04.035623 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 8 23:23:04.035633 kernel: Freeing SMP alternatives memory: 32K
Feb 8 23:23:04.035640 kernel: pid_max: default: 32768 minimum: 301
Feb 8 23:23:04.035647 kernel: LSM: Security Framework initializing
Feb 8 23:23:04.035654 kernel: SELinux: Initializing.
Feb 8 23:23:04.035666 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 8 23:23:04.035673 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 8 23:23:04.035680 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 8 23:23:04.035690 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 8 23:23:04.035698 kernel: signal: max sigframe size: 3632
Feb 8 23:23:04.035707 kernel: rcu: Hierarchical SRCU implementation.
Feb 8 23:23:04.035715 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 8 23:23:04.035722 kernel: smp: Bringing up secondary CPUs ...
Feb 8 23:23:04.035732 kernel: x86: Booting SMP configuration:
Feb 8 23:23:04.035740 kernel: .... node #0, CPUs: #1
Feb 8 23:23:04.035752 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 8 23:23:04.035760 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 8 23:23:04.035767 kernel: smp: Brought up 1 node, 2 CPUs
Feb 8 23:23:04.035777 kernel: smpboot: Max logical packages: 1
Feb 8 23:23:04.035785 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS)
Feb 8 23:23:04.035795 kernel: devtmpfs: initialized
Feb 8 23:23:04.035802 kernel: x86/mm: Memory block size: 128MB
Feb 8 23:23:04.035809 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 8 23:23:04.035821 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 8 23:23:04.035830 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 8 23:23:04.035839 kernel: pinctrl core: initialized pinctrl subsystem
Feb 8 23:23:04.035846 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 8 23:23:04.035855 kernel: audit: initializing netlink subsys (disabled)
Feb 8 23:23:04.035864 kernel: audit: type=2000 audit(1707434582.026:1): state=initialized audit_enabled=0 res=1
Feb 8 23:23:04.035873 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 8 23:23:04.035881 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 8 23:23:04.035888 kernel: cpuidle: using governor menu
Feb 8 23:23:04.035900 kernel: ACPI: bus type PCI registered
Feb 8 23:23:04.035907 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 8 23:23:04.035918 kernel: dca service started, version 1.12.1
Feb 8 23:23:04.035925 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 8 23:23:04.035933 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 8 23:23:04.035943 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 8 23:23:04.035951 kernel: ACPI: Added _OSI(Module Device)
Feb 8 23:23:04.035961 kernel: ACPI: Added _OSI(Processor Device)
Feb 8 23:23:04.035968 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 8 23:23:04.035979 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 8 23:23:04.035987 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 8 23:23:04.035996 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 8 23:23:04.036004 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 8 23:23:04.036012 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 8 23:23:04.036021 kernel: ACPI: Interpreter enabled
Feb 8 23:23:04.036029 kernel: ACPI: PM: (supports S0 S5)
Feb 8 23:23:04.036039 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 8 23:23:04.036047 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 8 23:23:04.036056 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 8 23:23:04.036068 kernel: iommu: Default domain type: Translated
Feb 8 23:23:04.036075 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 8 23:23:04.036085 kernel: vgaarb: loaded
Feb 8 23:23:04.036093 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 8 23:23:04.036100 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 8 23:23:04.036110 kernel: PTP clock support registered
Feb 8 23:23:04.036118 kernel: Registered efivars operations
Feb 8 23:23:04.036128 kernel: PCI: Using ACPI for IRQ routing
Feb 8 23:23:04.036135 kernel: PCI: System does not support PCI
Feb 8 23:23:04.036143 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 8 23:23:04.036154 kernel: VFS: Disk quotas dquot_6.6.0
Feb 8 23:23:04.036161 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 8 23:23:04.036171 kernel: pnp: PnP ACPI init
Feb 8 23:23:04.036178 kernel: pnp: PnP ACPI: found 3 devices
Feb 8 23:23:04.036186 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 8 23:23:04.036196 kernel: NET: Registered PF_INET protocol family
Feb 8 23:23:04.036204 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 8 23:23:04.036215 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 8 23:23:04.036223 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 8 23:23:04.036233 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 8 23:23:04.036241 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 8 23:23:04.036252 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 8 23:23:04.036259 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 8 23:23:04.036267 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 8 23:23:04.036277 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 8 23:23:04.036285 kernel: NET: Registered PF_XDP protocol family
Feb 8 23:23:04.036297 kernel: PCI: CLS 0 bytes, default 64
Feb 8 23:23:04.036304 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 8 23:23:04.036313 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 8 23:23:04.036322 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 8 23:23:04.036337 kernel: Initialise system trusted keyrings
Feb 8 23:23:04.036344 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 8 23:23:04.036352 kernel: Key type asymmetric registered
Feb 8 23:23:04.036361 kernel: Asymmetric key parser 'x509' registered
Feb 8 23:23:04.036369 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 8 23:23:04.036381 kernel: io scheduler mq-deadline registered
Feb 8 23:23:04.036388 kernel: io scheduler kyber registered
Feb 8 23:23:04.036397 kernel: io scheduler bfq registered
Feb 8 23:23:04.036405 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 8 23:23:04.036415 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 8 23:23:04.036423 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 8 23:23:04.036430 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 8 23:23:04.036440 kernel: i8042: PNP: No PS/2 controller found.
Feb 8 23:23:04.036559 kernel: rtc_cmos 00:02: registered as rtc0
Feb 8 23:23:04.036647 kernel: rtc_cmos 00:02: setting system clock to 2024-02-08T23:23:03 UTC (1707434583)
Feb 8 23:23:04.036725 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 8 23:23:04.036737 kernel: fail to initialize ptp_kvm
Feb 8 23:23:04.036746 kernel: intel_pstate: CPU model not supported
Feb 8 23:23:04.036755 kernel: efifb: probing for efifb
Feb 8 23:23:04.036762 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 8 23:23:04.036774 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 8 23:23:04.036781 kernel: efifb: scrolling: redraw
Feb 8 23:23:04.036793 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 8 23:23:04.036801 kernel: Console: switching to colour frame buffer device 128x48
Feb 8 23:23:04.036809 kernel: fb0: EFI VGA frame buffer device
Feb 8 23:23:04.036818 kernel: pstore: Registered efi as persistent store backend
Feb 8 23:23:04.036827 kernel: NET: Registered PF_INET6 protocol family
Feb 8 23:23:04.036836 kernel: Segment Routing with IPv6
Feb 8 23:23:04.036843 kernel: In-situ OAM (IOAM) with IPv6
Feb 8 23:23:04.036852 kernel: NET: Registered PF_PACKET protocol family
Feb 8 23:23:04.036860 kernel: Key type dns_resolver registered
Feb 8 23:23:04.036872 kernel: IPI shorthand broadcast: enabled
Feb 8 23:23:04.036880 kernel: sched_clock: Marking stable (820247500, 24187300)->(1164931900, -320497100)
Feb 8 23:23:04.036887 kernel: registered taskstats version 1
Feb 8 23:23:04.036897 kernel: Loading compiled-in X.509 certificates
Feb 8 23:23:04.036904 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 8 23:23:04.036915 kernel: Key type .fscrypt registered
Feb 8 23:23:04.036922 kernel: Key type fscrypt-provisioning registered
Feb 8 23:23:04.036929 kernel: pstore: Using crash dump compression: deflate
Feb 8 23:23:04.036941 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 8 23:23:04.036950 kernel: ima: Allocated hash algorithm: sha1
Feb 8 23:23:04.036959 kernel: ima: No architecture policies found
Feb 8 23:23:04.036966 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 8 23:23:04.036975 kernel: Write protecting the kernel read-only data: 28672k
Feb 8 23:23:04.036984 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 8 23:23:04.036993 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 8 23:23:04.037001 kernel: Run /init as init process
Feb 8 23:23:04.037009 kernel: with arguments:
Feb 8 23:23:04.037017 kernel: /init
Feb 8 23:23:04.037028 kernel: with environment:
Feb 8 23:23:04.037038 kernel: HOME=/
Feb 8 23:23:04.037045 kernel: TERM=linux
Feb 8 23:23:04.037052 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 8 23:23:04.037064 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 8 23:23:04.037075 systemd[1]: Detected virtualization microsoft.
Feb 8 23:23:04.037085 systemd[1]: Detected architecture x86-64.
Feb 8 23:23:04.037094 systemd[1]: Running in initrd.
Feb 8 23:23:04.037105 systemd[1]: No hostname configured, using default hostname.
Feb 8 23:23:04.037112 systemd[1]: Hostname set to .
Feb 8 23:23:04.037123 systemd[1]: Initializing machine ID from random generator.
Feb 8 23:23:04.037131 systemd[1]: Queued start job for default target initrd.target.
Feb 8 23:23:04.037140 systemd[1]: Started systemd-ask-password-console.path.
Feb 8 23:23:04.037149 systemd[1]: Reached target cryptsetup.target.
Feb 8 23:23:04.037158 systemd[1]: Reached target paths.target.
Feb 8 23:23:04.037167 systemd[1]: Reached target slices.target.
Feb 8 23:23:04.037176 systemd[1]: Reached target swap.target.
Feb 8 23:23:04.037187 systemd[1]: Reached target timers.target.
Feb 8 23:23:04.037196 systemd[1]: Listening on iscsid.socket.
Feb 8 23:23:04.037207 systemd[1]: Listening on iscsiuio.socket.
Feb 8 23:23:04.037214 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 8 23:23:04.037223 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 8 23:23:04.037233 systemd[1]: Listening on systemd-journald.socket.
Feb 8 23:23:04.037245 systemd[1]: Listening on systemd-networkd.socket.
Feb 8 23:23:04.037253 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 8 23:23:04.037261 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 8 23:23:04.037272 systemd[1]: Reached target sockets.target.
Feb 8 23:23:04.037280 systemd[1]: Starting kmod-static-nodes.service...
Feb 8 23:23:04.037290 systemd[1]: Finished network-cleanup.service.
Feb 8 23:23:04.037298 systemd[1]: Starting systemd-fsck-usr.service...
Feb 8 23:23:04.037307 systemd[1]: Starting systemd-journald.service...
Feb 8 23:23:04.037316 systemd[1]: Starting systemd-modules-load.service...
Feb 8 23:23:04.037334 systemd[1]: Starting systemd-resolved.service...
Feb 8 23:23:04.037342 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 8 23:23:04.037354 systemd[1]: Finished kmod-static-nodes.service.
Feb 8 23:23:04.037362 kernel: audit: type=1130 audit(1707434584.032:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:04.037375 systemd-journald[183]: Journal started
Feb 8 23:23:04.037418 systemd-journald[183]: Runtime Journal (/run/log/journal/6b3b2b398efd4e93b7f23b68c0b61179) is 8.0M, max 159.0M, 151.0M free.
Feb 8 23:23:04.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:04.044651 systemd-modules-load[184]: Inserted module 'overlay'
Feb 8 23:23:04.048467 systemd[1]: Started systemd-journald.service.
Feb 8 23:23:04.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:04.057683 systemd[1]: Finished systemd-fsck-usr.service.
Feb 8 23:23:04.073429 kernel: audit: type=1130 audit(1707434584.057:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:04.073550 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 8 23:23:04.078944 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 8 23:23:04.082038 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 8 23:23:04.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:04.108696 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 8 23:23:04.109542 kernel: audit: type=1130 audit(1707434584.073:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:04.115551 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 8 23:23:04.121318 systemd[1]: Starting dracut-cmdline.service...
Feb 8 23:23:04.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:04.139353 kernel: audit: type=1130 audit(1707434584.075:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:04.149342 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 8 23:23:04.155451 kernel: Bridge firewalling registered
Feb 8 23:23:04.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:04.169291 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 8 23:23:04.171816 kernel: audit: type=1130 audit(1707434584.108:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:04.172376 dracut-cmdline[201]: dracut-dracut-053
Feb 8 23:23:04.177465 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:23:04.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:04.217873 kernel: audit: type=1130 audit(1707434584.120:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:04.217928 kernel: SCSI subsystem initialized
Feb 8 23:23:04.220478 systemd-resolved[185]: Positive Trust Anchors:
Feb 8 23:23:04.222844 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 8 23:23:04.222973 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 8 23:23:04.225684 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 8 23:23:04.227058 systemd[1]: Started systemd-resolved.service.
Feb 8 23:23:04.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:04.248724 systemd[1]: Reached target nss-lookup.target.
Feb 8 23:23:04.263535 kernel: audit: type=1130 audit(1707434584.248:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:04.286942 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 8 23:23:04.286982 kernel: device-mapper: uevent: version 1.0.3 Feb 8 23:23:04.287376 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 8 23:23:04.296313 systemd-modules-load[184]: Inserted module 'dm_multipath' Feb 8 23:23:04.296979 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:23:04.323078 kernel: audit: type=1130 audit(1707434584.301:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:04.323108 kernel: Loading iSCSI transport class v2.0-870. Feb 8 23:23:04.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:04.302082 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:23:04.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:04.328263 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:23:04.345077 kernel: audit: type=1130 audit(1707434584.330:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:04.351345 kernel: iscsi: registered transport (tcp) Feb 8 23:23:04.375419 kernel: iscsi: registered transport (qla4xxx) Feb 8 23:23:04.375457 kernel: QLogic iSCSI HBA Driver Feb 8 23:23:04.403496 systemd[1]: Finished dracut-cmdline.service. Feb 8 23:23:04.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:04.406815 systemd[1]: Starting dracut-pre-udev.service... 
Feb 8 23:23:04.458344 kernel: raid6: avx512x4 gen() 18536 MB/s Feb 8 23:23:04.478344 kernel: raid6: avx512x4 xor() 8263 MB/s Feb 8 23:23:04.498337 kernel: raid6: avx512x2 gen() 18610 MB/s Feb 8 23:23:04.518344 kernel: raid6: avx512x2 xor() 29906 MB/s Feb 8 23:23:04.538338 kernel: raid6: avx512x1 gen() 18751 MB/s Feb 8 23:23:04.558339 kernel: raid6: avx512x1 xor() 26917 MB/s Feb 8 23:23:04.579340 kernel: raid6: avx2x4 gen() 18722 MB/s Feb 8 23:23:04.599339 kernel: raid6: avx2x4 xor() 8016 MB/s Feb 8 23:23:04.619338 kernel: raid6: avx2x2 gen() 18651 MB/s Feb 8 23:23:04.639342 kernel: raid6: avx2x2 xor() 22239 MB/s Feb 8 23:23:04.659337 kernel: raid6: avx2x1 gen() 14147 MB/s Feb 8 23:23:04.679336 kernel: raid6: avx2x1 xor() 19422 MB/s Feb 8 23:23:04.700341 kernel: raid6: sse2x4 gen() 11734 MB/s Feb 8 23:23:04.720338 kernel: raid6: sse2x4 xor() 7271 MB/s Feb 8 23:23:04.740338 kernel: raid6: sse2x2 gen() 12944 MB/s Feb 8 23:23:04.760340 kernel: raid6: sse2x2 xor() 7510 MB/s Feb 8 23:23:04.780336 kernel: raid6: sse2x1 gen() 11655 MB/s Feb 8 23:23:04.803484 kernel: raid6: sse2x1 xor() 5912 MB/s Feb 8 23:23:04.803513 kernel: raid6: using algorithm avx512x1 gen() 18751 MB/s Feb 8 23:23:04.803526 kernel: raid6: .... xor() 26917 MB/s, rmw enabled Feb 8 23:23:04.811140 kernel: raid6: using avx512x2 recovery algorithm Feb 8 23:23:04.827347 kernel: xor: automatically using best checksumming function avx Feb 8 23:23:04.922355 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 8 23:23:04.930614 systemd[1]: Finished dracut-pre-udev.service. Feb 8 23:23:04.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:04.934000 audit: BPF prog-id=7 op=LOAD Feb 8 23:23:04.934000 audit: BPF prog-id=8 op=LOAD Feb 8 23:23:04.935525 systemd[1]: Starting systemd-udevd.service... 
Feb 8 23:23:04.950268 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 8 23:23:04.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:04.957208 systemd[1]: Started systemd-udevd.service. Feb 8 23:23:04.960652 systemd[1]: Starting dracut-pre-trigger.service... Feb 8 23:23:04.980771 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation Feb 8 23:23:05.011001 systemd[1]: Finished dracut-pre-trigger.service. Feb 8 23:23:05.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:05.014378 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:23:05.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:05.051848 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:23:05.110341 kernel: cryptd: max_cpu_qlen set to 1000 Feb 8 23:23:05.119339 kernel: hv_vmbus: Vmbus version:5.2 Feb 8 23:23:05.134343 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 8 23:23:05.149339 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 8 23:23:05.149366 kernel: AES CTR mode by8 optimization enabled Feb 8 23:23:05.157348 kernel: hv_vmbus: registering driver hv_storvsc Feb 8 23:23:05.172205 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 8 23:23:05.172234 kernel: scsi host0: storvsc_host_t Feb 8 23:23:05.172273 kernel: scsi host1: storvsc_host_t Feb 8 23:23:05.179428 kernel: hv_vmbus: registering driver hv_netvsc Feb 8 23:23:05.188561 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 8 23:23:05.188598 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 8 23:23:05.196529 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 8 23:23:05.213342 kernel: hv_vmbus: registering driver hid_hyperv Feb 8 23:23:05.227149 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 8 23:23:05.227184 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 8 23:23:05.245076 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 8 23:23:05.245288 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 8 23:23:05.253355 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 8 23:23:05.253523 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 8 23:23:05.253646 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 8 23:23:05.259080 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 8 23:23:05.259242 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 8 23:23:05.259374 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 8 23:23:05.274234 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 8 23:23:05.274263 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 8 23:23:05.316347 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 
scanned by (udev-worker) (440) Feb 8 23:23:05.338059 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 8 23:23:05.343842 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 8 23:23:05.359963 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 8 23:23:05.370834 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 8 23:23:05.381850 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:23:05.398749 kernel: hv_netvsc 000d3a64-9dc5-000d-3a64-9dc5000d3a64 eth0: VF slot 1 added Feb 8 23:23:05.399636 systemd[1]: Starting disk-uuid.service... Feb 8 23:23:05.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:05.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:05.406626 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 8 23:23:05.417805 kernel: hv_vmbus: registering driver hv_pci Feb 8 23:23:05.406719 systemd[1]: Finished disk-uuid.service. Feb 8 23:23:05.411106 systemd[1]: Starting verity-setup.service... 
Feb 8 23:23:05.427338 kernel: hv_pci 773fed88-c043-4163-ae56-0da256a0fb5c: PCI VMBus probing: Using version 0x10004 Feb 8 23:23:05.443721 kernel: hv_pci 773fed88-c043-4163-ae56-0da256a0fb5c: PCI host bridge to bus c043:00 Feb 8 23:23:05.443874 kernel: pci_bus c043:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 8 23:23:05.443996 kernel: pci_bus c043:00: No busn resource found for root bus, will use [bus 00-ff] Feb 8 23:23:05.454744 kernel: pci c043:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 8 23:23:05.467866 kernel: pci c043:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 8 23:23:05.467906 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 8 23:23:05.482727 kernel: pci c043:00:02.0: enabling Extended Tags Feb 8 23:23:05.502661 kernel: pci c043:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c043:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 8 23:23:05.502837 kernel: pci_bus c043:00: busn_res: [bus 00-ff] end is updated to 00 Feb 8 23:23:05.502951 kernel: pci c043:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 8 23:23:05.539028 systemd[1]: Found device dev-mapper-usr.device. Feb 8 23:23:05.545359 systemd[1]: Finished verity-setup.service. Feb 8 23:23:05.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:05.550883 systemd[1]: Mounting sysusr-usr.mount... Feb 8 23:23:05.646344 kernel: mlx5_core c043:00:02.0: firmware version: 14.30.1350 Feb 8 23:23:05.652338 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 8 23:23:05.652628 systemd[1]: Mounted sysusr-usr.mount. Feb 8 23:23:05.656914 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Feb 8 23:23:05.661377 systemd[1]: Starting ignition-setup.service... Feb 8 23:23:05.667043 systemd[1]: Starting parse-ip-for-networkd.service... Feb 8 23:23:05.689343 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:23:05.689378 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:23:05.689392 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:23:05.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:05.758000 audit: BPF prog-id=9 op=LOAD Feb 8 23:23:05.755391 systemd[1]: Finished parse-ip-for-networkd.service. Feb 8 23:23:05.759823 systemd[1]: Starting systemd-networkd.service... Feb 8 23:23:05.791701 systemd-networkd[709]: lo: Link UP Feb 8 23:23:05.792032 systemd-networkd[709]: lo: Gained carrier Feb 8 23:23:05.792504 systemd-networkd[709]: Enumeration completed Feb 8 23:23:05.792567 systemd[1]: Started systemd-networkd.service. Feb 8 23:23:05.803954 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:23:05.793380 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:23:05.802131 systemd-networkd[709]: eth0: Link UP Feb 8 23:23:05.802259 systemd-networkd[709]: eth0: Gained carrier Feb 8 23:23:05.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:05.814133 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 8 23:23:05.814460 systemd[1]: Reached target network.target. Feb 8 23:23:05.817372 systemd[1]: Starting iscsiuio.service... Feb 8 23:23:05.831723 systemd[1]: Started iscsiuio.service. 
Feb 8 23:23:05.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:05.834926 systemd[1]: Starting iscsid.service... Feb 8 23:23:05.841567 iscsid[728]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:23:05.841567 iscsid[728]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 8 23:23:05.841567 iscsid[728]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 8 23:23:05.841567 iscsid[728]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 8 23:23:05.841567 iscsid[728]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 8 23:23:05.841567 iscsid[728]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:23:05.841567 iscsid[728]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 8 23:23:05.840162 systemd[1]: Started iscsid.service. Feb 8 23:23:05.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:05.878426 systemd[1]: Starting dracut-initqueue.service... Feb 8 23:23:05.881908 systemd[1]: Finished ignition-setup.service.
Feb 8 23:23:05.881975 systemd-networkd[709]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:23:05.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:05.893151 systemd[1]: Starting ignition-fetch-offline.service... Feb 8 23:23:05.909954 systemd[1]: Finished dracut-initqueue.service. Feb 8 23:23:05.930867 kernel: mlx5_core c043:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 8 23:23:05.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:05.912401 systemd[1]: Reached target remote-fs-pre.target. Feb 8 23:23:05.914798 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:23:05.917182 systemd[1]: Reached target remote-fs.target. Feb 8 23:23:05.920114 systemd[1]: Starting dracut-pre-mount.service... Feb 8 23:23:05.930759 systemd[1]: Finished dracut-pre-mount.service. Feb 8 23:23:05.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:06.049547 ignition[731]: Ignition 2.14.0 Feb 8 23:23:06.049558 ignition[731]: Stage: fetch-offline Feb 8 23:23:06.049628 ignition[731]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:23:06.049665 ignition[731]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:23:06.065630 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:23:06.068799 ignition[731]: parsed url from cmdline: "" Feb 8 23:23:06.068807 ignition[731]: no config URL provided Feb 8 23:23:06.068815 ignition[731]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:23:06.068827 ignition[731]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:23:06.068835 ignition[731]: failed to fetch config: resource requires networking Feb 8 23:23:06.074255 systemd[1]: Finished ignition-fetch-offline.service. Feb 8 23:23:06.073471 ignition[731]: Ignition finished successfully Feb 8 23:23:06.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:06.086442 systemd[1]: Starting ignition-fetch.service... 
Feb 8 23:23:06.097699 ignition[752]: Ignition 2.14.0 Feb 8 23:23:06.097709 ignition[752]: Stage: fetch Feb 8 23:23:06.097827 ignition[752]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:23:06.097858 ignition[752]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:23:06.106786 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:23:06.120991 kernel: mlx5_core c043:00:02.0: Supported tc offload range - chains: 1, prios: 1 Feb 8 23:23:06.121161 kernel: mlx5_core c043:00:02.0: mlx5e_tc_post_act_init:40:(pid 188): firmware level support is missing Feb 8 23:23:06.106968 ignition[752]: parsed url from cmdline: "" Feb 8 23:23:06.106971 ignition[752]: no config URL provided Feb 8 23:23:06.106978 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:23:06.106986 ignition[752]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:23:06.107014 ignition[752]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 8 23:23:06.139337 kernel: hv_netvsc 000d3a64-9dc5-000d-3a64-9dc5000d3a64 eth0: VF registering: eth1 Feb 8 23:23:06.140424 kernel: mlx5_core c043:00:02.0 eth1: joined to eth0 Feb 8 23:23:06.155337 kernel: mlx5_core c043:00:02.0 enP49219s1: renamed from eth1 Feb 8 23:23:06.158444 systemd-networkd[709]: eth1: Interface name change detected, renamed to enP49219s1. 
Feb 8 23:23:06.229255 ignition[752]: GET result: OK Feb 8 23:23:06.229364 ignition[752]: config has been read from IMDS userdata Feb 8 23:23:06.229403 ignition[752]: parsing config with SHA512: 5bf16fe8030691e9da88ff89f17d01152ef8c113431c957c44ddff3efa49d4860967c3ec8f79e55d32a0f2900cf03a2888be892532b97f86eb17676715fcadae Feb 8 23:23:06.253970 unknown[752]: fetched base config from "system" Feb 8 23:23:06.256830 unknown[752]: fetched base config from "system" Feb 8 23:23:06.256842 unknown[752]: fetched user config from "azure" Feb 8 23:23:06.262205 ignition[752]: fetch: fetch complete Feb 8 23:23:06.262218 ignition[752]: fetch: fetch passed Feb 8 23:23:06.264034 ignition[752]: Ignition finished successfully Feb 8 23:23:06.269001 systemd[1]: Finished ignition-fetch.service. Feb 8 23:23:06.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:06.272071 systemd[1]: Starting ignition-kargs.service... Feb 8 23:23:06.286217 ignition[759]: Ignition 2.14.0 Feb 8 23:23:06.286226 ignition[759]: Stage: kargs Feb 8 23:23:06.286371 ignition[759]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:23:06.286403 ignition[759]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:23:06.290582 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:23:06.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:06.297138 systemd[1]: Finished ignition-kargs.service. Feb 8 23:23:06.293464 ignition[759]: kargs: kargs passed Feb 8 23:23:06.300215 systemd[1]: Starting ignition-disks.service... 
Feb 8 23:23:06.293516 ignition[759]: Ignition finished successfully Feb 8 23:23:06.309896 ignition[765]: Ignition 2.14.0 Feb 8 23:23:06.309902 ignition[765]: Stage: disks Feb 8 23:23:06.310021 ignition[765]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:23:06.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:06.316224 systemd[1]: Finished ignition-disks.service. Feb 8 23:23:06.310043 ignition[765]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:23:06.319856 systemd[1]: Reached target initrd-root-device.target. Feb 8 23:23:06.313033 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:23:06.323133 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:23:06.315622 ignition[765]: disks: disks passed Feb 8 23:23:06.323585 systemd[1]: Reached target local-fs.target. Feb 8 23:23:06.315660 ignition[765]: Ignition finished successfully Feb 8 23:23:06.324020 systemd[1]: Reached target sysinit.target. Feb 8 23:23:06.324482 systemd[1]: Reached target basic.target. Feb 8 23:23:06.325700 systemd[1]: Starting systemd-fsck-root.service... Feb 8 23:23:06.361339 kernel: mlx5_core c043:00:02.0 enP49219s1: Link up Feb 8 23:23:06.361637 systemd-networkd[709]: enP49219s1: Link UP Feb 8 23:23:06.413811 systemd-fsck[773]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks Feb 8 23:23:06.421118 systemd[1]: Finished systemd-fsck-root.service. Feb 8 23:23:06.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:06.427373 systemd[1]: Mounting sysroot.mount... 
Feb 8 23:23:06.439355 kernel: hv_netvsc 000d3a64-9dc5-000d-3a64-9dc5000d3a64 eth0: Data path switched to VF: enP49219s1 Feb 8 23:23:06.451342 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 8 23:23:06.451314 systemd[1]: Mounted sysroot.mount. Feb 8 23:23:06.453332 systemd[1]: Reached target initrd-root-fs.target. Feb 8 23:23:06.487885 systemd[1]: Mounting sysroot-usr.mount... Feb 8 23:23:06.496260 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 8 23:23:06.498084 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 8 23:23:06.498112 systemd[1]: Reached target ignition-diskful.target. Feb 8 23:23:06.511626 systemd[1]: Mounted sysroot-usr.mount. Feb 8 23:23:06.516502 systemd[1]: Starting initrd-setup-root.service... Feb 8 23:23:06.541386 initrd-setup-root[787]: cut: /sysroot/etc/passwd: No such file or directory Feb 8 23:23:06.597259 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:23:06.613348 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (794) Feb 8 23:23:06.622248 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:23:06.622276 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:23:06.622290 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:23:06.628475 initrd-setup-root[814]: cut: /sysroot/etc/group: No such file or directory Feb 8 23:23:06.633105 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:23:06.639143 initrd-setup-root[822]: cut: /sysroot/etc/shadow: No such file or directory Feb 8 23:23:06.661796 initrd-setup-root[830]: cut: /sysroot/etc/gshadow: No such file or directory Feb 8 23:23:06.801500 systemd-networkd[709]: enP49219s1: Gained carrier Feb 8 23:23:07.129636 systemd[1]: Finished initrd-setup-root.service. 
Feb 8 23:23:07.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:07.133469 systemd[1]: Starting ignition-mount.service... Feb 8 23:23:07.141508 systemd[1]: Starting sysroot-boot.service... Feb 8 23:23:07.145450 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 8 23:23:07.145541 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 8 23:23:07.170239 ignition[849]: INFO : Ignition 2.14.0 Feb 8 23:23:07.170239 ignition[849]: INFO : Stage: mount Feb 8 23:23:07.174275 ignition[849]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:23:07.174275 ignition[849]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:23:07.172070 systemd[1]: Finished sysroot-boot.service. Feb 8 23:23:07.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:07.191227 ignition[849]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:23:07.196969 ignition[849]: INFO : mount: mount passed Feb 8 23:23:07.199248 ignition[849]: INFO : Ignition finished successfully Feb 8 23:23:07.202047 systemd[1]: Finished ignition-mount.service. Feb 8 23:23:07.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:07.505573 systemd-networkd[709]: eth0: Gained IPv6LL Feb 8 23:23:08.203598 coreos-metadata[782]: Feb 08 23:23:08.203 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 8 23:23:08.250311 coreos-metadata[782]: Feb 08 23:23:08.250 INFO Fetch successful Feb 8 23:23:08.286542 coreos-metadata[782]: Feb 08 23:23:08.286 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 8 23:23:08.303921 coreos-metadata[782]: Feb 08 23:23:08.303 INFO Fetch successful Feb 8 23:23:08.322713 coreos-metadata[782]: Feb 08 23:23:08.322 INFO wrote hostname ci-3510.3.2-a-5de6cd8e96 to /sysroot/etc/hostname Feb 8 23:23:08.328263 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 8 23:23:08.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:08.331638 systemd[1]: Starting ignition-files.service... Feb 8 23:23:08.354829 kernel: kauditd_printk_skb: 26 callbacks suppressed Feb 8 23:23:08.354859 kernel: audit: type=1130 audit(1707434588.330:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:08.359813 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:23:08.374344 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (861) Feb 8 23:23:08.384145 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:23:08.384177 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:23:08.384190 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:23:08.392178 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 8 23:23:08.403936 ignition[880]: INFO : Ignition 2.14.0 Feb 8 23:23:08.403936 ignition[880]: INFO : Stage: files Feb 8 23:23:08.408501 ignition[880]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:23:08.408501 ignition[880]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:23:08.408501 ignition[880]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:23:08.428406 ignition[880]: DEBUG : files: compiled without relabeling support, skipping Feb 8 23:23:08.431747 ignition[880]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 8 23:23:08.431747 ignition[880]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 8 23:23:08.524474 ignition[880]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 8 23:23:08.529158 ignition[880]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 8 23:23:08.533233 unknown[880]: wrote ssh authorized keys file for user: core Feb 8 23:23:08.535858 ignition[880]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 8 23:23:08.562995 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 8 23:23:08.569570 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 8 23:23:09.243922 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 8 23:23:09.442652 ignition[880]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 
5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 8 23:23:09.451384 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 8 23:23:09.451384 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 8 23:23:09.451384 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 8 23:23:09.941683 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 8 23:23:10.033260 ignition[880]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 8 23:23:10.042107 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 8 23:23:10.042107 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 8 23:23:10.051624 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Feb 8 23:23:10.583565 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 8 23:23:10.897479 ignition[880]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Feb 8 23:23:10.907090 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 8 23:23:10.907090 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 8 23:23:10.907090 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1
Feb 8 23:23:11.026461 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 8 23:23:11.492338 ignition[880]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75
Feb 8 23:23:11.501283 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 8 23:23:11.501283 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Feb 8 23:23:11.501283 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Feb 8 23:23:11.501283 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 8 23:23:11.521062 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 8 23:23:11.521062 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 8 23:23:11.521062 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 8 23:23:11.536212 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 8 23:23:11.546880 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition
Feb 8 23:23:11.546880 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1305305416"
Feb 8 23:23:11.546880 ignition[880]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1305305416": device or resource busy
Feb 8 23:23:11.546880 ignition[880]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1305305416", trying btrfs: device or resource busy
Feb 8 23:23:11.546880 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1305305416"
Feb 8 23:23:11.577307 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (880)
Feb 8 23:23:11.566085 systemd[1]: mnt-oem1305305416.mount: Deactivated successfully.
Feb 8 23:23:11.580441 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1305305416"
Feb 8 23:23:11.580441 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem1305305416"
Feb 8 23:23:11.580441 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem1305305416"
Feb 8 23:23:11.580441 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 8 23:23:11.580441 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 8 23:23:11.580441 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(e): oem config not found in "/usr/share/oem", looking on oem partition
Feb 8 23:23:11.580441 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1535572091"
Feb 8 23:23:11.580441 ignition[880]: CRITICAL : files: createFilesystemsFiles: createFiles: op(e): op(f): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1535572091": device or resource busy
Feb 8 23:23:11.580441 ignition[880]: ERROR : files: createFilesystemsFiles: createFiles: op(e): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1535572091", trying btrfs: device or resource busy
Feb 8 23:23:11.580441 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1535572091"
Feb 8 23:23:11.633989 kernel: audit: type=1130 audit(1707434591.592:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.584576 systemd[1]: mnt-oem1535572091.mount: Deactivated successfully.
Feb 8 23:23:11.634942 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1535572091"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [started] unmounting "/mnt/oem1535572091"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [finished] unmounting "/mnt/oem1535572091"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: op(12): [started] processing unit "waagent.service"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: op(12): [finished] processing unit "waagent.service"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: op(13): [started] processing unit "nvidia.service"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: op(13): [finished] processing unit "nvidia.service"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: op(14): [started] processing unit "prepare-critools.service"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: op(14): op(15): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: op(14): op(15): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: op(14): [finished] processing unit "prepare-critools.service"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: op(16): [started] processing unit "prepare-cni-plugins.service"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: op(16): op(17): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: op(16): op(17): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: op(16): [finished] processing unit "prepare-cni-plugins.service"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: op(18): [started] setting preset to enabled for "waagent.service"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: op(18): [finished] setting preset to enabled for "waagent.service"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service"
Feb 8 23:23:11.634942 ignition[880]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service"
Feb 8 23:23:11.588839 systemd[1]: Finished ignition-files.service.
Feb 8 23:23:11.644747 ignition[880]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-critools.service"
Feb 8 23:23:11.644747 ignition[880]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-critools.service"
Feb 8 23:23:11.644747 ignition[880]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 8 23:23:11.644747 ignition[880]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 8 23:23:11.644747 ignition[880]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 8 23:23:11.644747 ignition[880]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 8 23:23:11.644747 ignition[880]: INFO : files: files passed
Feb 8 23:23:11.644747 ignition[880]: INFO : Ignition finished successfully
Feb 8 23:23:11.619610 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 8 23:23:11.648378 initrd-setup-root-after-ignition[904]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 8 23:23:11.787873 kernel: audit: type=1130 audit(1707434591.762:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.627152 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 8 23:23:11.628793 systemd[1]: Starting ignition-quench.service...
Feb 8 23:23:11.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.649612 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 8 23:23:11.826781 kernel: audit: type=1130 audit(1707434591.795:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.826806 kernel: audit: type=1131 audit(1707434591.795:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.826819 kernel: audit: type=1130 audit(1707434591.826:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.762618 systemd[1]: Reached target ignition-complete.target.
Feb 8 23:23:11.851866 kernel: audit: type=1131 audit(1707434591.826:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.763631 systemd[1]: Starting initrd-parse-etc.service...
Feb 8 23:23:11.792693 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 8 23:23:11.792767 systemd[1]: Finished initrd-parse-etc.service.
Feb 8 23:23:11.795477 systemd[1]: Reached target initrd-fs.target.
Feb 8 23:23:11.822116 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 8 23:23:11.822344 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 8 23:23:11.822418 systemd[1]: Finished ignition-quench.service.
Feb 8 23:23:11.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.827179 systemd[1]: Reached target initrd.target.
Feb 8 23:23:11.889273 kernel: audit: type=1130 audit(1707434591.873:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.856712 systemd[1]: Starting dracut-pre-pivot.service...
Feb 8 23:23:11.869612 systemd[1]: Finished dracut-pre-pivot.service.
Feb 8 23:23:11.886107 systemd[1]: Starting initrd-cleanup.service...
Feb 8 23:23:11.910880 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 8 23:23:11.910980 systemd[1]: Finished initrd-cleanup.service.
Feb 8 23:23:11.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.928871 systemd[1]: Stopped target nss-lookup.target.
Feb 8 23:23:11.945870 kernel: audit: type=1130 audit(1707434591.915:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.947938 kernel: audit: type=1131 audit(1707434591.915:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.945836 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 8 23:23:11.947933 systemd[1]: Stopped target timers.target.
Feb 8 23:23:11.950049 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 8 23:23:11.950095 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 8 23:23:11.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.961249 systemd[1]: Stopped target initrd.target.
Feb 8 23:23:11.965416 systemd[1]: Stopped target basic.target.
Feb 8 23:23:11.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.967344 systemd[1]: Stopped target ignition-complete.target.
Feb 8 23:23:12.010762 iscsid[728]: iscsid shutting down.
Feb 8 23:23:11.967459 systemd[1]: Stopped target ignition-diskful.target.
Feb 8 23:23:11.967902 systemd[1]: Stopped target initrd-root-device.target.
Feb 8 23:23:12.019858 ignition[918]: INFO : Ignition 2.14.0
Feb 8 23:23:12.019858 ignition[918]: INFO : Stage: umount
Feb 8 23:23:12.019858 ignition[918]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:23:12.019858 ignition[918]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:23:12.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.968353 systemd[1]: Stopped target remote-fs.target.
Feb 8 23:23:12.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.051454 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:23:12.051454 ignition[918]: INFO : umount: umount passed
Feb 8 23:23:12.051454 ignition[918]: INFO : Ignition finished successfully
Feb 8 23:23:12.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:11.968782 systemd[1]: Stopped target remote-fs-pre.target.
Feb 8 23:23:11.969240 systemd[1]: Stopped target sysinit.target.
Feb 8 23:23:11.969683 systemd[1]: Stopped target local-fs.target.
Feb 8 23:23:11.970251 systemd[1]: Stopped target local-fs-pre.target.
Feb 8 23:23:11.970691 systemd[1]: Stopped target swap.target.
Feb 8 23:23:11.971177 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 8 23:23:11.971219 systemd[1]: Stopped dracut-pre-mount.service.
Feb 8 23:23:11.971673 systemd[1]: Stopped target cryptsetup.target.
Feb 8 23:23:11.972216 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 8 23:23:11.972251 systemd[1]: Stopped dracut-initqueue.service.
Feb 8 23:23:11.972775 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 8 23:23:11.972807 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 8 23:23:11.973178 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 8 23:23:11.973210 systemd[1]: Stopped ignition-files.service.
Feb 8 23:23:11.973643 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 8 23:23:11.973675 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 8 23:23:11.974768 systemd[1]: Stopping ignition-mount.service...
Feb 8 23:23:11.979343 systemd[1]: Stopping iscsid.service...
Feb 8 23:23:11.979472 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 8 23:23:11.979528 systemd[1]: Stopped kmod-static-nodes.service.
Feb 8 23:23:12.005465 systemd[1]: Stopping sysroot-boot.service...
Feb 8 23:23:12.024410 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 8 23:23:12.024472 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 8 23:23:12.028789 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 8 23:23:12.028836 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 8 23:23:12.033198 systemd[1]: iscsid.service: Deactivated successfully.
Feb 8 23:23:12.033294 systemd[1]: Stopped iscsid.service.
Feb 8 23:23:12.038348 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 8 23:23:12.038437 systemd[1]: Stopped ignition-mount.service.
Feb 8 23:23:12.046875 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 8 23:23:12.046921 systemd[1]: Stopped ignition-disks.service.
Feb 8 23:23:12.051448 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 8 23:23:12.051502 systemd[1]: Stopped ignition-kargs.service.
Feb 8 23:23:12.056975 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 8 23:23:12.061368 systemd[1]: Stopped ignition-fetch.service.
Feb 8 23:23:12.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.136596 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 8 23:23:12.136655 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 8 23:23:12.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.143866 systemd[1]: Stopped target paths.target.
Feb 8 23:23:12.148073 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 8 23:23:12.151371 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 8 23:23:12.156240 systemd[1]: Stopped target slices.target.
Feb 8 23:23:12.158430 systemd[1]: Stopped target sockets.target.
Feb 8 23:23:12.162612 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 8 23:23:12.164555 systemd[1]: Closed iscsid.socket.
Feb 8 23:23:12.170255 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 8 23:23:12.170312 systemd[1]: Stopped ignition-setup.service.
Feb 8 23:23:12.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.177073 systemd[1]: Stopping iscsiuio.service...
Feb 8 23:23:12.179892 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 8 23:23:12.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.179975 systemd[1]: Stopped iscsiuio.service.
Feb 8 23:23:12.183397 systemd[1]: Stopped target network.target.
Feb 8 23:23:12.185778 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 8 23:23:12.185813 systemd[1]: Closed iscsiuio.socket.
Feb 8 23:23:12.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.190142 systemd[1]: Stopping systemd-networkd.service...
Feb 8 23:23:12.194579 systemd[1]: Stopping systemd-resolved.service...
Feb 8 23:23:12.201087 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 8 23:23:12.201177 systemd[1]: Stopped systemd-resolved.service.
Feb 8 23:23:12.216000 audit: BPF prog-id=6 op=UNLOAD
Feb 8 23:23:12.216386 systemd-networkd[709]: eth0: DHCPv6 lease lost
Feb 8 23:23:12.219238 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 8 23:23:12.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.219390 systemd[1]: Stopped systemd-networkd.service.
Feb 8 23:23:12.226830 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 8 23:23:12.229000 audit: BPF prog-id=9 op=UNLOAD
Feb 8 23:23:12.226879 systemd[1]: Closed systemd-networkd.socket.
Feb 8 23:23:12.232658 systemd[1]: Stopping network-cleanup.service...
Feb 8 23:23:12.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.236296 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 8 23:23:12.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.236363 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 8 23:23:12.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.241090 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 8 23:23:12.241142 systemd[1]: Stopped systemd-sysctl.service.
Feb 8 23:23:12.246430 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 8 23:23:12.246485 systemd[1]: Stopped systemd-modules-load.service.
Feb 8 23:23:12.252615 systemd[1]: Stopping systemd-udevd.service...
Feb 8 23:23:12.267900 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 8 23:23:12.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.268436 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 8 23:23:12.268558 systemd[1]: Stopped systemd-udevd.service.
Feb 8 23:23:12.274633 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 8 23:23:12.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.274682 systemd[1]: Closed systemd-udevd-control.socket.
Feb 8 23:23:12.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.279415 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 8 23:23:12.279449 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 8 23:23:12.285250 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 8 23:23:12.285305 systemd[1]: Stopped dracut-pre-udev.service.
Feb 8 23:23:12.290423 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 8 23:23:12.290472 systemd[1]: Stopped dracut-cmdline.service.
Feb 8 23:23:12.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.292647 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 8 23:23:12.292684 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 8 23:23:12.298223 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 8 23:23:12.310189 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 8 23:23:12.310249 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 8 23:23:12.329541 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 8 23:23:12.332359 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 8 23:23:12.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:12.353337 kernel: hv_netvsc 000d3a64-9dc5-000d-3a64-9dc5000d3a64 eth0: Data path switched from VF: enP49219s1
Feb 8 23:23:12.369093 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 8 23:23:12.369204 systemd[1]: Stopped network-cleanup.service.
Feb 8 23:23:12.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:13.066137 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 8 23:23:13.349735 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 8 23:23:13.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:13.349868 systemd[1]: Stopped sysroot-boot.service.
Feb 8 23:23:13.376853 kernel: kauditd_printk_skb: 32 callbacks suppressed
Feb 8 23:23:13.376883 kernel: audit: type=1131 audit(1707434593.355:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:13.355645 systemd[1]: Reached target initrd-switch-root.target.
Feb 8 23:23:13.376817 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 8 23:23:13.376879 systemd[1]: Stopped initrd-setup-root.service.
Feb 8 23:23:13.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:13.380268 systemd[1]: Starting initrd-switch-root.service...
Feb 8 23:23:13.401762 kernel: audit: type=1131 audit(1707434593.379:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:13.408170 systemd[1]: Switching root.
Feb 8 23:23:13.431445 systemd-journald[183]: Journal stopped
Feb 8 23:23:29.170532 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Feb 8 23:23:29.170559 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 8 23:23:29.170572 kernel: SELinux: Class anon_inode not defined in policy.
Feb 8 23:23:29.170581 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 8 23:23:29.170591 kernel: SELinux: policy capability network_peer_controls=1
Feb 8 23:23:29.170601 kernel: SELinux: policy capability open_perms=1
Feb 8 23:23:29.170613 kernel: SELinux: policy capability extended_socket_class=1
Feb 8 23:23:29.170625 kernel: SELinux: policy capability always_check_network=0
Feb 8 23:23:29.170634 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 8 23:23:29.170644 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 8 23:23:29.170652 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 8 23:23:29.170663 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 8 23:23:29.170673 kernel: audit: type=1403 audit(1707434596.163:81): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 8 23:23:29.170684 systemd[1]: Successfully loaded SELinux policy in 405.697ms.
Feb 8 23:23:29.170699 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 38.006ms.
Feb 8 23:23:29.170714 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 8 23:23:29.170723 systemd[1]: Detected virtualization microsoft.
Feb 8 23:23:29.170735 systemd[1]: Detected architecture x86-64.
Feb 8 23:23:29.170749 systemd[1]: Detected first boot.
Feb 8 23:23:29.170758 systemd[1]: Hostname set to .
Feb 8 23:23:29.170771 systemd[1]: Initializing machine ID from random generator.
Feb 8 23:23:29.170782 kernel: audit: type=1400 audit(1707434596.939:82): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:23:29.170792 kernel: audit: type=1400 audit(1707434596.956:83): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:23:29.170804 kernel: audit: type=1400 audit(1707434596.956:84): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:23:29.170816 kernel: audit: type=1334 audit(1707434596.969:85): prog-id=10 op=LOAD Feb 8 23:23:29.170826 kernel: audit: type=1334 audit(1707434596.969:86): prog-id=10 op=UNLOAD Feb 8 23:23:29.170836 kernel: audit: type=1334 audit(1707434596.982:87): prog-id=11 op=LOAD Feb 8 23:23:29.170845 kernel: audit: type=1334 audit(1707434596.982:88): prog-id=11 op=UNLOAD Feb 8 23:23:29.170857 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Feb 8 23:23:29.170866 kernel: audit: type=1400 audit(1707434598.648:89): avc: denied { associate } for pid=952 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:23:29.170879 kernel: audit: type=1300 audit(1707434598.648:89): arch=c000003e syscall=188 success=yes exit=0 a0=c000024802 a1=c00002aae0 a2=c000028d00 a3=32 items=0 ppid=935 pid=952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:29.170891 kernel: audit: type=1327 audit(1707434598.648:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:23:29.170902 kernel: audit: type=1400 audit(1707434598.656:90): avc: denied { associate } for pid=952 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:23:29.170915 kernel: audit: type=1300 audit(1707434598.656:90): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0000248d9 a2=1ed a3=0 items=2 ppid=935 pid=952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:29.170926 kernel: audit: type=1307 audit(1707434598.656:90): cwd="/" Feb 8 23:23:29.170936 kernel: audit: type=1302 audit(1707434598.656:90): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:29.170948 kernel: audit: type=1302 audit(1707434598.656:90): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:29.170959 kernel: audit: type=1327 audit(1707434598.656:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:23:29.170972 systemd[1]: Populated /etc with preset unit settings. Feb 8 23:23:29.170984 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:23:29.171000 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:23:29.171010 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 8 23:23:29.171020 kernel: audit: type=1334 audit(1707434608.630:91): prog-id=12 op=LOAD Feb 8 23:23:29.171028 kernel: audit: type=1334 audit(1707434608.630:92): prog-id=3 op=UNLOAD Feb 8 23:23:29.171037 kernel: audit: type=1334 audit(1707434608.636:93): prog-id=13 op=LOAD Feb 8 23:23:29.171048 kernel: audit: type=1334 audit(1707434608.646:94): prog-id=14 op=LOAD Feb 8 23:23:29.171056 kernel: audit: type=1334 audit(1707434608.646:95): prog-id=4 op=UNLOAD Feb 8 23:23:29.171065 kernel: audit: type=1334 audit(1707434608.646:96): prog-id=5 op=UNLOAD Feb 8 23:23:29.171074 kernel: audit: type=1334 audit(1707434608.651:97): prog-id=15 op=LOAD Feb 8 23:23:29.171082 kernel: audit: type=1334 audit(1707434608.651:98): prog-id=12 op=UNLOAD Feb 8 23:23:29.171091 kernel: audit: type=1334 audit(1707434608.656:99): prog-id=16 op=LOAD Feb 8 23:23:29.171100 kernel: audit: type=1334 audit(1707434608.661:100): prog-id=17 op=LOAD Feb 8 23:23:29.171111 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 8 23:23:29.171121 systemd[1]: Stopped initrd-switch-root.service. Feb 8 23:23:29.171136 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 8 23:23:29.171146 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 8 23:23:29.171158 systemd[1]: Created slice system-addon\x2drun.slice. Feb 8 23:23:29.171171 systemd[1]: Created slice system-getty.slice. Feb 8 23:23:29.171181 systemd[1]: Created slice system-modprobe.slice. Feb 8 23:23:29.171193 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 8 23:23:29.171205 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 8 23:23:29.171215 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 8 23:23:29.171229 systemd[1]: Created slice user.slice. Feb 8 23:23:29.171241 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:23:29.171251 systemd[1]: Started systemd-ask-password-wall.path. Feb 8 23:23:29.171262 systemd[1]: Set up automount boot.automount. 
Feb 8 23:23:29.171273 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 8 23:23:29.171285 systemd[1]: Stopped target initrd-switch-root.target. Feb 8 23:23:29.171295 systemd[1]: Stopped target initrd-fs.target. Feb 8 23:23:29.171306 systemd[1]: Stopped target initrd-root-fs.target. Feb 8 23:23:29.171321 systemd[1]: Reached target integritysetup.target. Feb 8 23:23:29.171343 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:23:29.171353 systemd[1]: Reached target remote-fs.target. Feb 8 23:23:29.171362 systemd[1]: Reached target slices.target. Feb 8 23:23:29.171375 systemd[1]: Reached target swap.target. Feb 8 23:23:29.171392 systemd[1]: Reached target torcx.target. Feb 8 23:23:29.171407 systemd[1]: Reached target veritysetup.target. Feb 8 23:23:29.171417 systemd[1]: Listening on systemd-coredump.socket. Feb 8 23:23:29.171436 systemd[1]: Listening on systemd-initctl.socket. Feb 8 23:23:29.171455 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:23:29.171466 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:23:29.171477 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:23:29.171497 systemd[1]: Listening on systemd-userdbd.socket. Feb 8 23:23:29.171521 systemd[1]: Mounting dev-hugepages.mount... Feb 8 23:23:29.171534 systemd[1]: Mounting dev-mqueue.mount... Feb 8 23:23:29.171544 systemd[1]: Mounting media.mount... Feb 8 23:23:29.171553 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:23:29.171571 systemd[1]: Mounting sys-kernel-debug.mount... Feb 8 23:23:29.171591 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 8 23:23:29.171611 systemd[1]: Mounting tmp.mount... Feb 8 23:23:29.171621 systemd[1]: Starting flatcar-tmpfiles.service... Feb 8 23:23:29.171636 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 8 23:23:29.171657 systemd[1]: Starting kmod-static-nodes.service... 
Feb 8 23:23:29.171668 systemd[1]: Starting modprobe@configfs.service... Feb 8 23:23:29.171684 systemd[1]: Starting modprobe@dm_mod.service... Feb 8 23:23:29.171702 systemd[1]: Starting modprobe@drm.service... Feb 8 23:23:29.171724 systemd[1]: Starting modprobe@efi_pstore.service... Feb 8 23:23:29.171743 systemd[1]: Starting modprobe@fuse.service... Feb 8 23:23:29.171758 systemd[1]: Starting modprobe@loop.service... Feb 8 23:23:29.171768 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 8 23:23:29.171778 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 8 23:23:29.171790 systemd[1]: Stopped systemd-fsck-root.service. Feb 8 23:23:29.171809 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 8 23:23:29.171822 systemd[1]: Stopped systemd-fsck-usr.service. Feb 8 23:23:29.171832 systemd[1]: Stopped systemd-journald.service. Feb 8 23:23:29.171842 systemd[1]: Starting systemd-journald.service... Feb 8 23:23:29.171851 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:23:29.171861 kernel: loop: module loaded Feb 8 23:23:29.171873 systemd[1]: Starting systemd-network-generator.service... Feb 8 23:23:29.171882 systemd[1]: Starting systemd-remount-fs.service... Feb 8 23:23:29.171892 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:23:29.171902 systemd[1]: verity-setup.service: Deactivated successfully. Feb 8 23:23:29.171914 systemd[1]: Stopped verity-setup.service. Feb 8 23:23:29.171931 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:23:29.171942 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:23:29.171954 systemd[1]: Mounted dev-mqueue.mount. Feb 8 23:23:29.171970 systemd[1]: Mounted media.mount. Feb 8 23:23:29.171986 systemd[1]: Mounted sys-kernel-debug.mount. Feb 8 23:23:29.172006 systemd[1]: Mounted sys-kernel-tracing.mount. 
Feb 8 23:23:29.172029 systemd[1]: Mounted tmp.mount. Feb 8 23:23:29.172051 kernel: fuse: init (API version 7.34) Feb 8 23:23:29.172071 systemd[1]: Finished flatcar-tmpfiles.service. Feb 8 23:23:29.172087 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:23:29.172097 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:23:29.172113 systemd[1]: Finished modprobe@configfs.service. Feb 8 23:23:29.172134 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:23:29.172153 systemd[1]: Finished modprobe@dm_mod.service. Feb 8 23:23:29.172167 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 8 23:23:29.172187 systemd-journald[1057]: Journal started Feb 8 23:23:29.172240 systemd-journald[1057]: Runtime Journal (/run/log/journal/f7496c0df8fb4b6fa0857ad7b6a941ea) is 8.0M, max 159.0M, 151.0M free. Feb 8 23:23:16.163000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 8 23:23:16.939000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:23:16.956000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:23:16.956000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:23:16.969000 audit: BPF prog-id=10 op=LOAD Feb 8 23:23:16.969000 audit: BPF prog-id=10 op=UNLOAD Feb 8 23:23:16.982000 audit: BPF prog-id=11 op=LOAD Feb 8 23:23:16.982000 audit: BPF prog-id=11 op=UNLOAD Feb 8 23:23:18.648000 audit[952]: AVC avc: denied { associate } for pid=952 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 
tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:23:18.648000 audit[952]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c000024802 a1=c00002aae0 a2=c000028d00 a3=32 items=0 ppid=935 pid=952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:18.648000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:23:18.656000 audit[952]: AVC avc: denied { associate } for pid=952 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:23:18.656000 audit[952]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0000248d9 a2=1ed a3=0 items=2 ppid=935 pid=952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:18.656000 audit: CWD cwd="/" Feb 8 23:23:18.656000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:18.656000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:18.656000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:23:28.630000 audit: BPF prog-id=12 op=LOAD Feb 8 23:23:28.630000 audit: BPF prog-id=3 op=UNLOAD Feb 8 23:23:28.636000 audit: BPF prog-id=13 op=LOAD Feb 8 23:23:28.646000 audit: BPF prog-id=14 op=LOAD Feb 8 23:23:28.646000 audit: BPF prog-id=4 op=UNLOAD Feb 8 23:23:28.646000 audit: BPF prog-id=5 op=UNLOAD Feb 8 23:23:28.651000 audit: BPF prog-id=15 op=LOAD Feb 8 23:23:28.651000 audit: BPF prog-id=12 op=UNLOAD Feb 8 23:23:28.656000 audit: BPF prog-id=16 op=LOAD Feb 8 23:23:28.661000 audit: BPF prog-id=17 op=LOAD Feb 8 23:23:28.661000 audit: BPF prog-id=13 op=UNLOAD Feb 8 23:23:28.661000 audit: BPF prog-id=14 op=UNLOAD Feb 8 23:23:28.666000 audit: BPF prog-id=18 op=LOAD Feb 8 23:23:28.666000 audit: BPF prog-id=15 op=UNLOAD Feb 8 23:23:28.681000 audit: BPF prog-id=19 op=LOAD Feb 8 23:23:28.682000 audit: BPF prog-id=20 op=LOAD Feb 8 23:23:28.682000 audit: BPF prog-id=16 op=UNLOAD Feb 8 23:23:28.682000 audit: BPF prog-id=17 op=UNLOAD Feb 8 23:23:28.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:28.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:28.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:28.694000 audit: BPF prog-id=18 op=UNLOAD Feb 8 23:23:29.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.030000 audit: BPF prog-id=21 op=LOAD Feb 8 23:23:29.030000 audit: BPF prog-id=22 op=LOAD Feb 8 23:23:29.030000 audit: BPF prog-id=23 op=LOAD Feb 8 23:23:29.030000 audit: BPF prog-id=19 op=UNLOAD Feb 8 23:23:29.031000 audit: BPF prog-id=20 op=UNLOAD Feb 8 23:23:29.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:29.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.167000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:23:29.167000 audit[1057]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe2a9b98f0 a2=4000 a3=7ffe2a9b998c items=0 ppid=1 pid=1057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:29.167000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:23:29.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:18.590650 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:23:28.629875 systemd[1]: Queued start job for default target multi-user.target. Feb 8 23:23:18.605957 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:23:28.682728 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 8 23:23:18.605986 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:23:18.606035 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:18Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 8 23:23:18.606047 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:18Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 8 23:23:18.606097 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:18Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 8 23:23:18.606113 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:18Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 8 23:23:18.606371 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:18Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 8 23:23:18.606413 
/usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:23:18.606434 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:23:18.627066 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 8 23:23:18.627119 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 8 23:23:18.627158 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 8 23:23:18.627179 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 8 23:23:18.627207 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 8 23:23:18.627224 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 8 23:23:27.466972 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:27Z" level=debug msg="image unpacked" 
image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:23:27.467225 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:27Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:23:27.467321 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:27Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:23:27.467496 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:27Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:23:27.467553 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:27Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 8 23:23:27.467616 /usr/lib/systemd/system-generators/torcx-generator[952]: time="2024-02-08T23:23:27Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 8 23:23:29.176354 systemd[1]: Finished modprobe@drm.service. 
Feb 8 23:23:29.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.182343 systemd[1]: Started systemd-journald.service. Feb 8 23:23:29.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.184221 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:23:29.184381 systemd[1]: Finished modprobe@efi_pstore.service. Feb 8 23:23:29.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.186654 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 8 23:23:29.186792 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:23:29.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:29.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.189017 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 8 23:23:29.189153 systemd[1]: Finished modprobe@loop.service. Feb 8 23:23:29.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.191274 systemd[1]: Finished systemd-network-generator.service. Feb 8 23:23:29.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.193845 systemd[1]: Finished systemd-remount-fs.service. Feb 8 23:23:29.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.196304 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:23:29.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.199540 systemd[1]: Reached target network-pre.target. Feb 8 23:23:29.203359 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Feb 8 23:23:29.206805 systemd[1]: Mounting sys-kernel-config.mount... Feb 8 23:23:29.208974 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 8 23:23:29.210463 systemd[1]: Starting systemd-hwdb-update.service... Feb 8 23:23:29.213785 systemd[1]: Starting systemd-journal-flush.service... Feb 8 23:23:29.215958 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 8 23:23:29.217020 systemd[1]: Starting systemd-random-seed.service... Feb 8 23:23:29.219066 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 8 23:23:29.220852 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:23:29.224174 systemd[1]: Starting systemd-sysusers.service... Feb 8 23:23:29.229188 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 8 23:23:29.231846 systemd[1]: Mounted sys-kernel-config.mount. Feb 8 23:23:29.246963 systemd[1]: Finished systemd-random-seed.service. Feb 8 23:23:29.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.249691 systemd[1]: Reached target first-boot-complete.target. Feb 8 23:23:29.287623 systemd-journald[1057]: Time spent on flushing to /var/log/journal/f7496c0df8fb4b6fa0857ad7b6a941ea is 23.837ms for 1180 entries. Feb 8 23:23:29.287623 systemd-journald[1057]: System Journal (/var/log/journal/f7496c0df8fb4b6fa0857ad7b6a941ea) is 8.0M, max 2.6G, 2.6G free. Feb 8 23:23:29.353619 systemd-journald[1057]: Received client request to flush runtime journal. Feb 8 23:23:29.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:29.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.294924 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:23:29.354717 udevadm[1076]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 8 23:23:29.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:29.303971 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:23:29.307671 systemd[1]: Starting systemd-udev-settle.service... Feb 8 23:23:29.354690 systemd[1]: Finished systemd-journal-flush.service. Feb 8 23:23:29.872228 systemd[1]: Finished systemd-sysusers.service. Feb 8 23:23:29.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:30.560602 systemd[1]: Finished systemd-hwdb-update.service. Feb 8 23:23:30.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:30.564000 audit: BPF prog-id=24 op=LOAD Feb 8 23:23:30.564000 audit: BPF prog-id=25 op=LOAD Feb 8 23:23:30.564000 audit: BPF prog-id=7 op=UNLOAD Feb 8 23:23:30.564000 audit: BPF prog-id=8 op=UNLOAD Feb 8 23:23:30.565237 systemd[1]: Starting systemd-udevd.service... Feb 8 23:23:30.582790 systemd-udevd[1078]: Using default interface naming scheme 'v252'. 
Feb 8 23:23:30.917990 systemd[1]: Started systemd-udevd.service. Feb 8 23:23:30.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:30.921000 audit: BPF prog-id=26 op=LOAD Feb 8 23:23:30.923163 systemd[1]: Starting systemd-networkd.service... Feb 8 23:23:30.957683 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 8 23:23:31.043364 kernel: mousedev: PS/2 mouse device common for all mice Feb 8 23:23:31.061000 audit: BPF prog-id=27 op=LOAD Feb 8 23:23:31.061000 audit: BPF prog-id=28 op=LOAD Feb 8 23:23:31.061000 audit: BPF prog-id=29 op=LOAD Feb 8 23:23:31.062728 systemd[1]: Starting systemd-userdbd.service... Feb 8 23:23:31.074349 kernel: hv_vmbus: registering driver hyperv_fb Feb 8 23:23:31.057000 audit[1085]: AVC avc: denied { confidentiality } for pid=1085 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:23:31.079355 kernel: hv_vmbus: registering driver hv_balloon Feb 8 23:23:31.098593 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 8 23:23:31.098669 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 8 23:23:31.098713 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 8 23:23:31.104228 kernel: Console: switching to colour dummy device 80x25 Feb 8 23:23:31.112276 kernel: Console: switching to colour frame buffer device 128x48 Feb 8 23:23:31.113345 kernel: hv_utils: Registering HyperV Utility Driver Feb 8 23:23:31.113404 kernel: hv_vmbus: registering driver hv_utils Feb 8 23:23:31.113431 kernel: hv_utils: Heartbeat IC version 3.0 Feb 8 23:23:31.116606 kernel: hv_utils: Shutdown IC version 3.2 Feb 8 23:23:30.669696 kernel: hv_utils: TimeSync IC version 4.0 Feb 8 23:23:30.720329 
systemd-journald[1057]: Time jumped backwards, rotating. Feb 8 23:23:31.057000 audit[1085]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5603cf0cf9a0 a1=f884 a2=7fe22e554bc5 a3=5 items=12 ppid=1078 pid=1085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:31.057000 audit: CWD cwd="/" Feb 8 23:23:31.057000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:31.057000 audit: PATH item=1 name=(null) inode=15289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:31.057000 audit: PATH item=2 name=(null) inode=15289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:31.057000 audit: PATH item=3 name=(null) inode=15290 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:31.057000 audit: PATH item=4 name=(null) inode=15289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:31.057000 audit: PATH item=5 name=(null) inode=15291 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:31.057000 audit: PATH item=6 name=(null) inode=15289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:31.057000 audit: PATH item=7 
name=(null) inode=15292 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:31.057000 audit: PATH item=8 name=(null) inode=15289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:31.057000 audit: PATH item=9 name=(null) inode=15293 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:31.057000 audit: PATH item=10 name=(null) inode=15289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:31.057000 audit: PATH item=11 name=(null) inode=15294 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:31.057000 audit: PROCTITLE proctitle="(udev-worker)" Feb 8 23:23:30.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:30.692929 systemd[1]: Started systemd-userdbd.service. Feb 8 23:23:30.878392 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Feb 8 23:23:30.946386 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1099) Feb 8 23:23:30.979316 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:23:30.982791 systemd[1]: Finished systemd-udev-settle.service. Feb 8 23:23:30.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:30.986753 systemd[1]: Starting lvm2-activation-early.service... Feb 8 23:23:31.029895 systemd-networkd[1088]: lo: Link UP Feb 8 23:23:31.029905 systemd-networkd[1088]: lo: Gained carrier Feb 8 23:23:31.030489 systemd-networkd[1088]: Enumeration completed Feb 8 23:23:31.030606 systemd[1]: Started systemd-networkd.service. Feb 8 23:23:31.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:31.034823 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:23:31.061352 systemd-networkd[1088]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:23:31.119383 kernel: mlx5_core c043:00:02.0 enP49219s1: Link up Feb 8 23:23:31.158384 kernel: hv_netvsc 000d3a64-9dc5-000d-3a64-9dc5000d3a64 eth0: Data path switched to VF: enP49219s1 Feb 8 23:23:31.159298 systemd-networkd[1088]: enP49219s1: Link UP Feb 8 23:23:31.159592 systemd-networkd[1088]: eth0: Link UP Feb 8 23:23:31.159696 systemd-networkd[1088]: eth0: Gained carrier Feb 8 23:23:31.165637 systemd-networkd[1088]: enP49219s1: Gained carrier Feb 8 23:23:31.202513 systemd-networkd[1088]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:23:31.394535 lvm[1156]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:23:31.424464 systemd[1]: Finished lvm2-activation-early.service. Feb 8 23:23:31.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:31.428093 systemd[1]: Reached target cryptsetup.target. Feb 8 23:23:31.432319 systemd[1]: Starting lvm2-activation.service... 
Feb 8 23:23:31.436839 lvm[1158]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:23:31.454374 systemd[1]: Finished lvm2-activation.service. Feb 8 23:23:31.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:31.456887 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:23:31.459186 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 8 23:23:31.459216 systemd[1]: Reached target local-fs.target. Feb 8 23:23:31.461463 systemd[1]: Reached target machines.target. Feb 8 23:23:31.464748 systemd[1]: Starting ldconfig.service... Feb 8 23:23:31.466872 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 8 23:23:31.466992 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:23:31.468142 systemd[1]: Starting systemd-boot-update.service... Feb 8 23:23:31.471641 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 8 23:23:31.475525 systemd[1]: Starting systemd-machine-id-commit.service... Feb 8 23:23:31.477985 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:23:31.478085 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:23:31.479210 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Feb 8 23:23:31.959683 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1160 (bootctl) Feb 8 23:23:31.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:31.961376 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 8 23:23:31.979664 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 8 23:23:32.080820 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 8 23:23:32.081453 systemd[1]: Finished systemd-machine-id-commit.service. Feb 8 23:23:32.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:32.154158 systemd-tmpfiles[1163]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 8 23:23:32.232855 systemd-tmpfiles[1163]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 8 23:23:32.304953 systemd-tmpfiles[1163]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 8 23:23:32.466632 systemd-networkd[1088]: eth0: Gained IPv6LL Feb 8 23:23:32.472291 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:23:32.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:32.900479 systemd-fsck[1168]: fsck.fat 4.2 (2021-01-31) Feb 8 23:23:32.900479 systemd-fsck[1168]: /dev/sda1: 789 files, 115332/258078 clusters Feb 8 23:23:32.902804 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 8 23:23:32.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:32.908858 systemd[1]: Mounting boot.mount... Feb 8 23:23:32.922072 systemd[1]: Mounted boot.mount. Feb 8 23:23:32.937212 systemd[1]: Finished systemd-boot-update.service. Feb 8 23:23:32.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.816705 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 8 23:23:34.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.820771 systemd[1]: Starting audit-rules.service... Feb 8 23:23:34.822309 kernel: kauditd_printk_skb: 84 callbacks suppressed Feb 8 23:23:34.822376 kernel: audit: type=1130 audit(1707434614.818:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.836772 systemd[1]: Starting clean-ca-certificates.service... Feb 8 23:23:34.840180 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 8 23:23:34.842000 audit: BPF prog-id=30 op=LOAD Feb 8 23:23:34.847065 systemd[1]: Starting systemd-resolved.service... 
Feb 8 23:23:34.849370 kernel: audit: type=1334 audit(1707434614.842:169): prog-id=30 op=LOAD Feb 8 23:23:34.850000 audit: BPF prog-id=31 op=LOAD Feb 8 23:23:34.853236 systemd[1]: Starting systemd-timesyncd.service... Feb 8 23:23:34.857368 kernel: audit: type=1334 audit(1707434614.850:170): prog-id=31 op=LOAD Feb 8 23:23:34.859923 systemd[1]: Starting systemd-update-utmp.service... Feb 8 23:23:34.892000 audit[1185]: SYSTEM_BOOT pid=1185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.907388 kernel: audit: type=1127 audit(1707434614.892:171): pid=1185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.908447 systemd[1]: Finished systemd-update-utmp.service. Feb 8 23:23:34.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.924378 kernel: audit: type=1130 audit(1707434614.909:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.930350 systemd[1]: Finished clean-ca-certificates.service. Feb 8 23:23:34.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.933399 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Feb 8 23:23:34.946373 kernel: audit: type=1130 audit(1707434614.932:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.983979 systemd[1]: Started systemd-timesyncd.service. Feb 8 23:23:34.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.986882 systemd[1]: Reached target time-set.target. Feb 8 23:23:35.000378 kernel: audit: type=1130 audit(1707434614.985:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:35.096083 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 8 23:23:35.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:35.104145 systemd-resolved[1180]: Positive Trust Anchors: Feb 8 23:23:35.104156 systemd-resolved[1180]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:23:35.104188 systemd-resolved[1180]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:23:35.114460 kernel: audit: type=1130 audit(1707434615.098:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:35.184903 systemd-timesyncd[1184]: Contacted time server 89.234.64.77:123 (0.flatcar.pool.ntp.org). Feb 8 23:23:35.185049 systemd-timesyncd[1184]: Initial clock synchronization to Thu 2024-02-08 23:23:35.189448 UTC. Feb 8 23:23:35.184000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:23:35.186934 systemd[1]: Finished audit-rules.service. 
Feb 8 23:23:35.187551 augenrules[1195]: No rules Feb 8 23:23:35.184000 audit[1195]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcae9bca50 a2=420 a3=0 items=0 ppid=1174 pid=1195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:35.213152 kernel: audit: type=1305 audit(1707434615.184:176): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:23:35.213218 kernel: audit: type=1300 audit(1707434615.184:176): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcae9bca50 a2=420 a3=0 items=0 ppid=1174 pid=1195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:35.184000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 8 23:23:35.262463 systemd-resolved[1180]: Using system hostname 'ci-3510.3.2-a-5de6cd8e96'. Feb 8 23:23:35.264340 systemd[1]: Started systemd-resolved.service. Feb 8 23:23:35.267735 systemd[1]: Reached target network.target. Feb 8 23:23:35.270262 systemd[1]: Reached target network-online.target. Feb 8 23:23:35.272845 systemd[1]: Reached target nss-lookup.target. Feb 8 23:23:40.674577 ldconfig[1159]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 8 23:23:40.691138 systemd[1]: Finished ldconfig.service. Feb 8 23:23:40.695324 systemd[1]: Starting systemd-update-done.service... Feb 8 23:23:40.719479 systemd[1]: Finished systemd-update-done.service. Feb 8 23:23:40.722092 systemd[1]: Reached target sysinit.target. Feb 8 23:23:40.724260 systemd[1]: Started motdgen.path. Feb 8 23:23:40.726120 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
Feb 8 23:23:40.729134 systemd[1]: Started logrotate.timer. Feb 8 23:23:40.731248 systemd[1]: Started mdadm.timer. Feb 8 23:23:40.733036 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 8 23:23:40.735278 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 8 23:23:40.735302 systemd[1]: Reached target paths.target. Feb 8 23:23:40.737254 systemd[1]: Reached target timers.target. Feb 8 23:23:40.739485 systemd[1]: Listening on dbus.socket. Feb 8 23:23:40.742418 systemd[1]: Starting docker.socket... Feb 8 23:23:40.746550 systemd[1]: Listening on sshd.socket. Feb 8 23:23:40.748826 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:23:40.749246 systemd[1]: Listening on docker.socket. Feb 8 23:23:40.751595 systemd[1]: Reached target sockets.target. Feb 8 23:23:40.753930 systemd[1]: Reached target basic.target. Feb 8 23:23:40.756115 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:23:40.756148 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:23:40.757058 systemd[1]: Starting containerd.service... Feb 8 23:23:40.760259 systemd[1]: Starting dbus.service... Feb 8 23:23:40.763047 systemd[1]: Starting enable-oem-cloudinit.service... Feb 8 23:23:40.766278 systemd[1]: Starting extend-filesystems.service... Feb 8 23:23:40.768282 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 8 23:23:40.769741 systemd[1]: Starting motdgen.service... Feb 8 23:23:40.776222 systemd[1]: Started nvidia.service. Feb 8 23:23:40.779367 systemd[1]: Starting prepare-cni-plugins.service... 
Feb 8 23:23:40.782602 systemd[1]: Starting prepare-critools.service... Feb 8 23:23:40.785509 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 8 23:23:40.788675 systemd[1]: Starting sshd-keygen.service... Feb 8 23:23:40.793645 systemd[1]: Starting systemd-logind.service... Feb 8 23:23:40.795683 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:23:40.795767 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 8 23:23:40.796277 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 8 23:23:40.797243 systemd[1]: Starting update-engine.service... Feb 8 23:23:40.800428 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 8 23:23:40.807045 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 8 23:23:40.807252 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 8 23:23:40.856264 systemd[1]: motdgen.service: Deactivated successfully. Feb 8 23:23:40.856485 systemd[1]: Finished motdgen.service. 
Feb 8 23:23:40.863634 extend-filesystems[1206]: Found sda Feb 8 23:23:40.866115 extend-filesystems[1206]: Found sda1 Feb 8 23:23:40.866115 extend-filesystems[1206]: Found sda2 Feb 8 23:23:40.866115 extend-filesystems[1206]: Found sda3 Feb 8 23:23:40.866115 extend-filesystems[1206]: Found usr Feb 8 23:23:40.866115 extend-filesystems[1206]: Found sda4 Feb 8 23:23:40.866115 extend-filesystems[1206]: Found sda6 Feb 8 23:23:40.866115 extend-filesystems[1206]: Found sda7 Feb 8 23:23:40.866115 extend-filesystems[1206]: Found sda9 Feb 8 23:23:40.866115 extend-filesystems[1206]: Checking size of /dev/sda9 Feb 8 23:23:40.894967 jq[1219]: true Feb 8 23:23:40.895572 jq[1205]: false Feb 8 23:23:40.896310 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 8 23:23:40.896555 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 8 23:23:40.933251 jq[1234]: true Feb 8 23:23:40.955628 systemd-logind[1217]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 8 23:23:40.960575 systemd-logind[1217]: New seat seat0. Feb 8 23:23:40.981759 env[1228]: time="2024-02-08T23:23:40.981711918Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 8 23:23:41.015735 env[1228]: time="2024-02-08T23:23:41.015694984Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 8 23:23:41.015974 env[1228]: time="2024-02-08T23:23:41.015952139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:23:41.020410 extend-filesystems[1206]: Old size kept for /dev/sda9 Feb 8 23:23:41.032901 extend-filesystems[1206]: Found sr0 Feb 8 23:23:41.038447 env[1228]: time="2024-02-08T23:23:41.028244939Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:23:41.038447 env[1228]: time="2024-02-08T23:23:41.028278947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:23:41.038447 env[1228]: time="2024-02-08T23:23:41.028541302Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:23:41.038447 env[1228]: time="2024-02-08T23:23:41.028565007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 8 23:23:41.038447 env[1228]: time="2024-02-08T23:23:41.028581811Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 8 23:23:41.038447 env[1228]: time="2024-02-08T23:23:41.028595413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 8 23:23:41.038447 env[1228]: time="2024-02-08T23:23:41.028847867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:23:41.038447 env[1228]: time="2024-02-08T23:23:41.029090718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:23:41.038447 env[1228]: time="2024-02-08T23:23:41.029260254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:23:41.038447 env[1228]: time="2024-02-08T23:23:41.029280158Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 8 23:23:41.022721 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 8 23:23:41.038878 tar[1222]: ./ Feb 8 23:23:41.038878 tar[1222]: ./loopback Feb 8 23:23:41.040516 env[1228]: time="2024-02-08T23:23:41.029331469Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 8 23:23:41.040516 env[1228]: time="2024-02-08T23:23:41.029345872Z" level=info msg="metadata content store policy set" policy=shared Feb 8 23:23:41.022875 systemd[1]: Finished extend-filesystems.service. Feb 8 23:23:41.040695 tar[1224]: crictl Feb 8 23:23:41.064435 env[1228]: time="2024-02-08T23:23:41.063381073Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 8 23:23:41.064435 env[1228]: time="2024-02-08T23:23:41.063428083Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 8 23:23:41.064435 env[1228]: time="2024-02-08T23:23:41.063448587Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 8 23:23:41.064435 env[1228]: time="2024-02-08T23:23:41.063494197Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 8 23:23:41.064435 env[1228]: time="2024-02-08T23:23:41.063517002Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 8 23:23:41.064435 env[1228]: time="2024-02-08T23:23:41.063536706Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1
Feb 8 23:23:41.064435 env[1228]: time="2024-02-08T23:23:41.063554109Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 8 23:23:41.064435 env[1228]: time="2024-02-08T23:23:41.063572813Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 8 23:23:41.064435 env[1228]: time="2024-02-08T23:23:41.063590517Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 8 23:23:41.064435 env[1228]: time="2024-02-08T23:23:41.063608521Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 8 23:23:41.064435 env[1228]: time="2024-02-08T23:23:41.063627125Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 8 23:23:41.064435 env[1228]: time="2024-02-08T23:23:41.063646129Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 8 23:23:41.064435 env[1228]: time="2024-02-08T23:23:41.063771655Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 8 23:23:41.067371 env[1228]: time="2024-02-08T23:23:41.064940703Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 8 23:23:41.067371 env[1228]: time="2024-02-08T23:23:41.065295578Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 8 23:23:41.067371 env[1228]: time="2024-02-08T23:23:41.065333186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 8 23:23:41.067371 env[1228]: time="2024-02-08T23:23:41.065349789Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 8 23:23:41.067371 env[1228]: time="2024-02-08T23:23:41.065416804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 8 23:23:41.067371 env[1228]: time="2024-02-08T23:23:41.065437008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 8 23:23:41.067371 env[1228]: time="2024-02-08T23:23:41.065454211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 8 23:23:41.067371 env[1228]: time="2024-02-08T23:23:41.065471815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 8 23:23:41.067371 env[1228]: time="2024-02-08T23:23:41.065489219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 8 23:23:41.067371 env[1228]: time="2024-02-08T23:23:41.065506322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 8 23:23:41.067371 env[1228]: time="2024-02-08T23:23:41.065522926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 8 23:23:41.067371 env[1228]: time="2024-02-08T23:23:41.065539730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 8 23:23:41.067371 env[1228]: time="2024-02-08T23:23:41.065564635Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 8 23:23:41.067371 env[1228]: time="2024-02-08T23:23:41.065700264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 8 23:23:41.067371 env[1228]: time="2024-02-08T23:23:41.065717267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 8 23:23:41.069919 env[1228]: time="2024-02-08T23:23:41.065733471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 8 23:23:41.069919 env[1228]: time="2024-02-08T23:23:41.065750174Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 8 23:23:41.069919 env[1228]: time="2024-02-08T23:23:41.065772079Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 8 23:23:41.069919 env[1228]: time="2024-02-08T23:23:41.065787582Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 8 23:23:41.069919 env[1228]: time="2024-02-08T23:23:41.065813387Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 8 23:23:41.069919 env[1228]: time="2024-02-08T23:23:41.065853496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 8 23:23:41.067416 systemd[1]: Started containerd.service.
Feb 8 23:23:41.070206 env[1228]: time="2024-02-08T23:23:41.066123153Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 8 23:23:41.070206 env[1228]: time="2024-02-08T23:23:41.066197169Z" level=info msg="Connect containerd service"
Feb 8 23:23:41.070206 env[1228]: time="2024-02-08T23:23:41.066255581Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 8 23:23:41.070206 env[1228]: time="2024-02-08T23:23:41.066920822Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 8 23:23:41.070206 env[1228]: time="2024-02-08T23:23:41.067226987Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 8 23:23:41.070206 env[1228]: time="2024-02-08T23:23:41.067274497Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 8 23:23:41.070206 env[1228]: time="2024-02-08T23:23:41.067328708Z" level=info msg="containerd successfully booted in 0.086225s"
Feb 8 23:23:41.128789 env[1228]: time="2024-02-08T23:23:41.070937071Z" level=info msg="Start subscribing containerd event"
Feb 8 23:23:41.128789 env[1228]: time="2024-02-08T23:23:41.070985882Z" level=info msg="Start recovering state"
Feb 8 23:23:41.128789 env[1228]: time="2024-02-08T23:23:41.071059697Z" level=info msg="Start event monitor"
Feb 8 23:23:41.128789 env[1228]: time="2024-02-08T23:23:41.071082502Z" level=info msg="Start snapshots syncer"
Feb 8 23:23:41.128789 env[1228]: time="2024-02-08T23:23:41.071094705Z" level=info msg="Start cni network conf syncer for default"
Feb 8 23:23:41.128789 env[1228]: time="2024-02-08T23:23:41.071104007Z" level=info msg="Start streaming server"
Feb 8 23:23:41.129529 bash[1252]: Updated "/home/core/.ssh/authorized_keys"
Feb 8 23:23:41.129897 systemd[1]: Finished update-ssh-keys-after-ignition.service.
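The "Start cri plugin with config" dump above is the CRI plugin's effective configuration printed as a Go struct. As a rough sketch, the non-default values in that dump correspond to a containerd `config.toml` along these lines (key names follow the containerd 1.x CRI plugin schema; treat this as an illustration reconstructed from the dump, not the file actually present on this host):

```toml
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"       # SandboxImage in the dump
  stream_server_address = "127.0.0.1"               # StreamServerAddress
  enable_selinux = true                             # EnableSelinux
  max_container_log_line_size = 16384               # MaxContainerLogLineSize

[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"                         # Snapshotter
  default_runtime_name = "runc"                     # DefaultRuntimeName

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"            # Runtimes[runc].Type

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true                              # Options:map[SystemdCgroup:true]

[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir = "/opt/cni/bin"                          # NetworkPluginBinDir
  conf_dir = "/etc/cni/net.d"                       # NetworkPluginConfDir
  max_conf_num = 1                                  # NetworkPluginMaxConfNum
```

Note `SystemdCgroup:true` under the runc runtime options: the CRI plugin drives runc through the systemd cgroup driver, which matches the systemd-managed host seen throughout this log.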
Feb 8 23:23:41.133663 tar[1222]: ./bandwidth
Feb 8 23:23:41.136669 dbus-daemon[1204]: [system] SELinux support is enabled
Feb 8 23:23:41.136831 systemd[1]: Started dbus.service.
Feb 8 23:23:41.141069 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 8 23:23:41.141098 systemd[1]: Reached target system-config.target.
Feb 8 23:23:41.143304 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 8 23:23:41.143330 systemd[1]: Reached target user-config.target.
Feb 8 23:23:41.155018 systemd[1]: Started systemd-logind.service.
Feb 8 23:23:41.241991 tar[1222]: ./ptp
Feb 8 23:23:41.257619 systemd[1]: nvidia.service: Deactivated successfully.
Feb 8 23:23:41.329346 tar[1222]: ./vlan
Feb 8 23:23:41.419142 tar[1222]: ./host-device
Feb 8 23:23:41.497473 tar[1222]: ./tuning
Feb 8 23:23:41.571080 tar[1222]: ./vrf
Feb 8 23:23:41.653604 tar[1222]: ./sbr
Feb 8 23:23:41.718343 tar[1222]: ./tap
Feb 8 23:23:41.800528 tar[1222]: ./dhcp
Feb 8 23:23:41.961319 systemd[1]: Finished prepare-critools.service.
Feb 8 23:23:41.970978 update_engine[1218]: I0208 23:23:41.970340 1218 main.cc:92] Flatcar Update Engine starting
Feb 8 23:23:41.989979 tar[1222]: ./static
Feb 8 23:23:42.021660 tar[1222]: ./firewall
Feb 8 23:23:42.031733 systemd[1]: Started update-engine.service.
Feb 8 23:23:42.037257 systemd[1]: Started locksmithd.service.
Feb 8 23:23:42.040484 update_engine[1218]: I0208 23:23:42.040345 1218 update_check_scheduler.cc:74] Next update check in 10m40s
Feb 8 23:23:42.070048 tar[1222]: ./macvlan
Feb 8 23:23:42.116751 tar[1222]: ./dummy
Feb 8 23:23:42.160326 tar[1222]: ./bridge
Feb 8 23:23:42.208186 tar[1222]: ./ipvlan
Feb 8 23:23:42.252402 tar[1222]: ./portmap
Feb 8 23:23:42.294019 tar[1222]: ./host-local
Feb 8 23:23:42.385236 systemd[1]: Finished prepare-cni-plugins.service.
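The `tar[1222]` entries above are the standard CNI plugin binaries (bridge, host-local, portmap, etc.) being unpacked by prepare-cni-plugins.service. The earlier containerd error "no network config found in /etc/cni/net.d" clears once a network configuration using these plugins is installed; a minimal, purely illustrative conflist (hypothetical name and subnet, not taken from this host) would look like:

```json
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Dropped into /etc/cni/net.d (e.g. as a hypothetical `10-example.conflist`), this chains the bridge, host-local, and portmap binaries that were just unpacked.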
Feb 8 23:23:42.731735 sshd_keygen[1226]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 8 23:23:42.751492 systemd[1]: Finished sshd-keygen.service.
Feb 8 23:23:42.755741 systemd[1]: Starting issuegen.service...
Feb 8 23:23:42.759842 systemd[1]: Started waagent.service.
Feb 8 23:23:42.765901 systemd[1]: issuegen.service: Deactivated successfully.
Feb 8 23:23:42.766071 systemd[1]: Finished issuegen.service.
Feb 8 23:23:42.769620 systemd[1]: Starting systemd-user-sessions.service...
Feb 8 23:23:42.777900 systemd[1]: Finished systemd-user-sessions.service.
Feb 8 23:23:42.781700 systemd[1]: Started getty@tty1.service.
Feb 8 23:23:42.785311 systemd[1]: Started serial-getty@ttyS0.service.
Feb 8 23:23:42.788313 systemd[1]: Reached target getty.target.
Feb 8 23:23:42.790662 systemd[1]: Reached target multi-user.target.
Feb 8 23:23:42.794819 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 8 23:23:42.802639 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 8 23:23:42.802801 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 8 23:23:42.806125 systemd[1]: Startup finished in 368ms (firmware) + 1.785s (loader) + 970ms (kernel) + 11.763s (initrd) + 27.783s (userspace) = 42.671s.
Feb 8 23:23:43.421617 login[1328]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 8 23:23:43.423544 login[1329]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 8 23:23:43.451237 systemd[1]: Created slice user-500.slice.
Feb 8 23:23:43.452793 systemd[1]: Starting user-runtime-dir@500.service...
Feb 8 23:23:43.456600 systemd-logind[1217]: New session 1 of user core.
Feb 8 23:23:43.460449 systemd-logind[1217]: New session 2 of user core.
Feb 8 23:23:43.464304 systemd[1]: Finished user-runtime-dir@500.service.
Feb 8 23:23:43.465966 systemd[1]: Starting user@500.service...
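The "Startup finished" line above is systemd's per-stage boot timing (the same breakdown `systemd-analyze` reports). A short sketch of parsing it; note systemd rounds each stage independently, so the stages can sum to slightly less than the printed total (here 42.669s vs 42.671s):

```python
import re

# The boot-timing line as logged above.
line = ("Startup finished in 368ms (firmware) + 1.785s (loader) + 970ms (kernel) "
        "+ 11.763s (initrd) + 27.783s (userspace) = 42.671s.")

def to_seconds(value: str) -> float:
    """Convert systemd's '368ms' / '1.785s' notation to seconds."""
    return float(value[:-2]) / 1000 if value.endswith("ms") else float(value[:-1])

# Pull out each "<duration> (<stage>)" pair and the total after '='.
parts = {name: to_seconds(val)
         for val, name in re.findall(r"([\d.]+m?s) \((\w+)\)", line)}
total = to_seconds(re.search(r"= ([\d.]+s)", line).group(1))

print(parts)
print(f"sum of stages: {sum(parts.values()):.3f}s, reported total: {total}s")
```

Userspace dominates at 27.783s, which is consistent with the long waagent provisioning sequence that follows in this log.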
Feb 8 23:23:43.469352 (systemd)[1332]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:23:43.635606 systemd[1332]: Queued start job for default target default.target.
Feb 8 23:23:43.636486 systemd[1332]: Reached target paths.target.
Feb 8 23:23:43.636522 systemd[1332]: Reached target sockets.target.
Feb 8 23:23:43.636544 systemd[1332]: Reached target timers.target.
Feb 8 23:23:43.636563 systemd[1332]: Reached target basic.target.
Feb 8 23:23:43.636625 systemd[1332]: Reached target default.target.
Feb 8 23:23:43.636669 systemd[1332]: Startup finished in 161ms.
Feb 8 23:23:43.637075 systemd[1]: Started user@500.service.
Feb 8 23:23:43.638657 systemd[1]: Started session-1.scope.
Feb 8 23:23:43.639650 systemd[1]: Started session-2.scope.
Feb 8 23:23:43.734012 locksmithd[1308]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 8 23:23:49.798111 waagent[1322]: 2024-02-08T23:23:49.797997Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Feb 8 23:23:49.826917 waagent[1322]: 2024-02-08T23:23:49.814986Z INFO Daemon Daemon OS: flatcar 3510.3.2
Feb 8 23:23:49.826917 waagent[1322]: 2024-02-08T23:23:49.816139Z INFO Daemon Daemon Python: 3.9.16
Feb 8 23:23:49.826917 waagent[1322]: 2024-02-08T23:23:49.817567Z INFO Daemon Daemon Run daemon
Feb 8 23:23:49.826917 waagent[1322]: 2024-02-08T23:23:49.818742Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2'
Feb 8 23:23:49.831254 waagent[1322]: 2024-02-08T23:23:49.831137Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 8 23:23:49.834693 waagent[1322]: 2024-02-08T23:23:49.834590Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 8 23:23:49.835701 waagent[1322]: 2024-02-08T23:23:49.835650Z INFO Daemon Daemon cloud-init is enabled: False
Feb 8 23:23:49.836676 waagent[1322]: 2024-02-08T23:23:49.836627Z INFO Daemon Daemon Using waagent for provisioning
Feb 8 23:23:49.838121 waagent[1322]: 2024-02-08T23:23:49.838070Z INFO Daemon Daemon Activate resource disk
Feb 8 23:23:49.838962 waagent[1322]: 2024-02-08T23:23:49.838914Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Feb 8 23:23:49.846711 waagent[1322]: 2024-02-08T23:23:49.846658Z INFO Daemon Daemon Found device: None
Feb 8 23:23:49.847637 waagent[1322]: 2024-02-08T23:23:49.847586Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Feb 8 23:23:49.848519 waagent[1322]: 2024-02-08T23:23:49.848470Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Feb 8 23:23:49.850246 waagent[1322]: 2024-02-08T23:23:49.850196Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 8 23:23:49.851559 waagent[1322]: 2024-02-08T23:23:49.851511Z INFO Daemon Daemon Running default provisioning handler
Feb 8 23:23:49.865188 waagent[1322]: 2024-02-08T23:23:49.865089Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 8 23:23:49.867806 waagent[1322]: 2024-02-08T23:23:49.867706Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 8 23:23:49.868845 waagent[1322]: 2024-02-08T23:23:49.868793Z INFO Daemon Daemon cloud-init is enabled: False
Feb 8 23:23:49.869680 waagent[1322]: 2024-02-08T23:23:49.869632Z INFO Daemon Daemon Copying ovf-env.xml
Feb 8 23:23:49.951105 waagent[1322]: 2024-02-08T23:23:49.948219Z INFO Daemon Daemon Successfully mounted dvd
Feb 8 23:23:50.137069 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Feb 8 23:23:50.142151 waagent[1322]: 2024-02-08T23:23:50.142029Z INFO Daemon Daemon Detect protocol endpoint
Feb 8 23:23:50.158068 waagent[1322]: 2024-02-08T23:23:50.143625Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 8 23:23:50.158068 waagent[1322]: 2024-02-08T23:23:50.144769Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Feb 8 23:23:50.158068 waagent[1322]: 2024-02-08T23:23:50.145699Z INFO Daemon Daemon Test for route to 168.63.129.16
Feb 8 23:23:50.158068 waagent[1322]: 2024-02-08T23:23:50.146995Z INFO Daemon Daemon Route to 168.63.129.16 exists
Feb 8 23:23:50.158068 waagent[1322]: 2024-02-08T23:23:50.147744Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Feb 8 23:23:50.315257 waagent[1322]: 2024-02-08T23:23:50.315174Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Feb 8 23:23:50.319271 waagent[1322]: 2024-02-08T23:23:50.319223Z INFO Daemon Daemon Wire protocol version:2012-11-30
Feb 8 23:23:50.322135 waagent[1322]: 2024-02-08T23:23:50.322075Z INFO Daemon Daemon Server preferred version:2015-04-05
Feb 8 23:23:50.656382 waagent[1322]: 2024-02-08T23:23:50.656222Z INFO Daemon Daemon Initializing goal state during protocol detection
Feb 8 23:23:50.668240 waagent[1322]: 2024-02-08T23:23:50.668157Z INFO Daemon Daemon Forcing an update of the goal state..
Feb 8 23:23:50.671551 waagent[1322]: 2024-02-08T23:23:50.671481Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Feb 8 23:23:50.752240 waagent[1322]: 2024-02-08T23:23:50.752112Z INFO Daemon Daemon Found private key matching thumbprint E66D6B71BFD9B9C85D9C3C8F57BE79A5D539F266
Feb 8 23:23:50.762945 waagent[1322]: 2024-02-08T23:23:50.753537Z INFO Daemon Daemon Certificate with thumbprint 1440D3592315C4E140D35E024D8761CD85BF1D0B has no matching private key.
Feb 8 23:23:50.762945 waagent[1322]: 2024-02-08T23:23:50.754522Z INFO Daemon Daemon Fetch goal state completed
Feb 8 23:23:50.801786 waagent[1322]: 2024-02-08T23:23:50.801707Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 6d8a0faa-c6a1-4817-a4fb-5d9a98f057b9 New eTag: 14338721575596293035]
Feb 8 23:23:50.810513 waagent[1322]: 2024-02-08T23:23:50.803617Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Feb 8 23:23:50.816392 waagent[1322]: 2024-02-08T23:23:50.816324Z INFO Daemon Daemon Starting provisioning
Feb 8 23:23:50.823459 waagent[1322]: 2024-02-08T23:23:50.817549Z INFO Daemon Daemon Handle ovf-env.xml.
Feb 8 23:23:50.823459 waagent[1322]: 2024-02-08T23:23:50.818498Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-5de6cd8e96]
Feb 8 23:23:50.838499 waagent[1322]: 2024-02-08T23:23:50.838400Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-5de6cd8e96]
Feb 8 23:23:50.847170 waagent[1322]: 2024-02-08T23:23:50.840208Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Feb 8 23:23:50.847170 waagent[1322]: 2024-02-08T23:23:50.841281Z INFO Daemon Daemon Primary interface is [eth0]
Feb 8 23:23:50.854163 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Feb 8 23:23:50.854446 systemd[1]: Stopped systemd-networkd-wait-online.service.
Feb 8 23:23:50.854529 systemd[1]: Stopping systemd-networkd-wait-online.service...
Feb 8 23:23:50.854892 systemd[1]: Stopping systemd-networkd.service...
Feb 8 23:23:50.860414 systemd-networkd[1088]: eth0: DHCPv6 lease lost
Feb 8 23:23:50.861999 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 8 23:23:50.862157 systemd[1]: Stopped systemd-networkd.service.
Feb 8 23:23:50.864636 systemd[1]: Starting systemd-networkd.service...
Feb 8 23:23:50.894683 systemd-networkd[1377]: enP49219s1: Link UP
Feb 8 23:23:50.894693 systemd-networkd[1377]: enP49219s1: Gained carrier
Feb 8 23:23:50.896008 systemd-networkd[1377]: eth0: Link UP
Feb 8 23:23:50.896017 systemd-networkd[1377]: eth0: Gained carrier
Feb 8 23:23:50.896467 systemd-networkd[1377]: lo: Link UP
Feb 8 23:23:50.896476 systemd-networkd[1377]: lo: Gained carrier
Feb 8 23:23:50.896788 systemd-networkd[1377]: eth0: Gained IPv6LL
Feb 8 23:23:50.897330 systemd-networkd[1377]: Enumeration completed
Feb 8 23:23:50.897436 systemd[1]: Started systemd-networkd.service.
Feb 8 23:23:50.900480 waagent[1322]: 2024-02-08T23:23:50.898895Z INFO Daemon Daemon Create user account if not exists
Feb 8 23:23:50.899673 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 8 23:23:50.904482 waagent[1322]: 2024-02-08T23:23:50.904405Z INFO Daemon Daemon User core already exists, skip useradd
Feb 8 23:23:50.909550 waagent[1322]: 2024-02-08T23:23:50.909438Z INFO Daemon Daemon Configure sudoer
Feb 8 23:23:50.909970 systemd-networkd[1377]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 8 23:23:50.939424 systemd-networkd[1377]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 8 23:23:50.942644 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 8 23:23:50.951626 waagent[1322]: 2024-02-08T23:23:50.951549Z INFO Daemon Daemon Configure sshd
Feb 8 23:23:50.956348 waagent[1322]: 2024-02-08T23:23:50.952873Z INFO Daemon Daemon Deploy ssh public key.
Feb 8 23:23:52.220200 waagent[1322]: 2024-02-08T23:23:52.220106Z INFO Daemon Daemon Provisioning complete
Feb 8 23:23:52.237064 waagent[1322]: 2024-02-08T23:23:52.236991Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Feb 8 23:23:52.240562 waagent[1322]: 2024-02-08T23:23:52.240493Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Feb 8 23:23:52.246509 waagent[1322]: 2024-02-08T23:23:52.246431Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Feb 8 23:23:52.512471 waagent[1386]: 2024-02-08T23:23:52.512302Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Feb 8 23:23:52.513181 waagent[1386]: 2024-02-08T23:23:52.513113Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 8 23:23:52.513329 waagent[1386]: 2024-02-08T23:23:52.513273Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 8 23:23:52.524189 waagent[1386]: 2024-02-08T23:23:52.524114Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Feb 8 23:23:52.524348 waagent[1386]: 2024-02-08T23:23:52.524295Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Feb 8 23:23:52.585441 waagent[1386]: 2024-02-08T23:23:52.585280Z INFO ExtHandler ExtHandler Found private key matching thumbprint E66D6B71BFD9B9C85D9C3C8F57BE79A5D539F266
Feb 8 23:23:52.585673 waagent[1386]: 2024-02-08T23:23:52.585612Z INFO ExtHandler ExtHandler Certificate with thumbprint 1440D3592315C4E140D35E024D8761CD85BF1D0B has no matching private key.
Feb 8 23:23:52.585908 waagent[1386]: 2024-02-08T23:23:52.585857Z INFO ExtHandler ExtHandler Fetch goal state completed
Feb 8 23:23:52.599013 waagent[1386]: 2024-02-08T23:23:52.598919Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 85d7278c-7e9c-48fe-9ece-0aa908022cad New eTag: 14338721575596293035]
Feb 8 23:23:52.599556 waagent[1386]: 2024-02-08T23:23:52.599495Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Feb 8 23:23:52.725531 waagent[1386]: 2024-02-08T23:23:52.725350Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 8 23:23:52.749191 waagent[1386]: 2024-02-08T23:23:52.749099Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1386
Feb 8 23:23:52.752594 waagent[1386]: 2024-02-08T23:23:52.752527Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 8 23:23:52.753816 waagent[1386]: 2024-02-08T23:23:52.753757Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 8 23:23:52.838886 waagent[1386]: 2024-02-08T23:23:52.838770Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 8 23:23:52.839230 waagent[1386]: 2024-02-08T23:23:52.839165Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 8 23:23:52.846888 waagent[1386]: 2024-02-08T23:23:52.846833Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 8 23:23:52.847333 waagent[1386]: 2024-02-08T23:23:52.847277Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 8 23:23:52.848377 waagent[1386]: 2024-02-08T23:23:52.848306Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Feb 8 23:23:52.849638 waagent[1386]: 2024-02-08T23:23:52.849578Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 8 23:23:52.850226 waagent[1386]: 2024-02-08T23:23:52.850155Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 8 23:23:52.850499 waagent[1386]: 2024-02-08T23:23:52.850443Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 8 23:23:52.851022 waagent[1386]: 2024-02-08T23:23:52.850970Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 8 23:23:52.851110 waagent[1386]: 2024-02-08T23:23:52.851054Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 8 23:23:52.851306 waagent[1386]: 2024-02-08T23:23:52.851257Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 8 23:23:52.851638 waagent[1386]: 2024-02-08T23:23:52.851580Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 8 23:23:52.852080 waagent[1386]: 2024-02-08T23:23:52.852025Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 8 23:23:52.852742 waagent[1386]: 2024-02-08T23:23:52.852690Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 8 23:23:52.853561 waagent[1386]: 2024-02-08T23:23:52.853505Z INFO EnvHandler ExtHandler Configure routes
Feb 8 23:23:52.854001 waagent[1386]: 2024-02-08T23:23:52.853943Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 8 23:23:52.854225 waagent[1386]: 2024-02-08T23:23:52.854170Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb 8 23:23:52.854371 waagent[1386]: 2024-02-08T23:23:52.854310Z INFO EnvHandler ExtHandler Gateway:None
Feb 8 23:23:52.854499 waagent[1386]: 2024-02-08T23:23:52.854445Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 8 23:23:52.854499 waagent[1386]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 8 23:23:52.854499 waagent[1386]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Feb 8 23:23:52.854499 waagent[1386]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 8 23:23:52.854499 waagent[1386]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 8 23:23:52.854499 waagent[1386]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 8 23:23:52.854499 waagent[1386]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 8 23:23:52.856801 waagent[1386]: 2024-02-08T23:23:52.856576Z INFO EnvHandler ExtHandler Routes:None
Feb 8 23:23:52.858568 waagent[1386]: 2024-02-08T23:23:52.858514Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 8 23:23:52.870937 waagent[1386]: 2024-02-08T23:23:52.870881Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Feb 8 23:23:52.871636 waagent[1386]: 2024-02-08T23:23:52.871582Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb 8 23:23:52.872548 waagent[1386]: 2024-02-08T23:23:52.872491Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Feb 8 23:23:52.903914 waagent[1386]: 2024-02-08T23:23:52.903815Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1377'
Feb 8 23:23:52.923945 waagent[1386]: 2024-02-08T23:23:52.923878Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Feb 8 23:23:53.004797 waagent[1386]: 2024-02-08T23:23:53.004685Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 8 23:23:53.004797 waagent[1386]: Executing ['ip', '-a', '-o', 'link']:
Feb 8 23:23:53.004797 waagent[1386]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 8 23:23:53.004797 waagent[1386]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:64:9d:c5 brd ff:ff:ff:ff:ff:ff
Feb 8 23:23:53.004797 waagent[1386]: 3: enP49219s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:64:9d:c5 brd ff:ff:ff:ff:ff:ff\ altname enP49219p0s2
Feb 8 23:23:53.004797 waagent[1386]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 8 23:23:53.004797 waagent[1386]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 8 23:23:53.004797 waagent[1386]: 2: eth0 inet 10.200.8.22/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 8 23:23:53.004797 waagent[1386]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 8 23:23:53.004797 waagent[1386]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Feb 8 23:23:53.004797 waagent[1386]: 2: eth0 inet6 fe80::20d:3aff:fe64:9dc5/64 scope link \ valid_lft forever preferred_lft forever
Feb 8 23:23:53.262110 waagent[1386]: 2024-02-08T23:23:53.262042Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting
Feb 8 23:23:54.251598 waagent[1322]: 2024-02-08T23:23:54.251443Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Feb 8 23:23:54.257217 waagent[1322]: 2024-02-08T23:23:54.257153Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent
Feb 8 23:23:55.269944 waagent[1423]: 2024-02-08T23:23:55.269828Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Feb 8 23:23:55.270672 waagent[1423]: 2024-02-08T23:23:55.270602Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2
Feb 8 23:23:55.270820 waagent[1423]: 2024-02-08T23:23:55.270765Z INFO ExtHandler ExtHandler Python: 3.9.16
Feb 8 23:23:55.280167 waagent[1423]: 2024-02-08T23:23:55.280057Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 8 23:23:55.280564 waagent[1423]: 2024-02-08T23:23:55.280506Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 8 23:23:55.280730 waagent[1423]: 2024-02-08T23:23:55.280679Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 8 23:23:55.292135 waagent[1423]: 2024-02-08T23:23:55.292060Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Feb 8 23:23:55.304223 waagent[1423]: 2024-02-08T23:23:55.304160Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143
Feb 8 23:23:55.305134 waagent[1423]: 2024-02-08T23:23:55.305075Z INFO ExtHandler
Feb 8 23:23:55.305285 waagent[1423]: 2024-02-08T23:23:55.305230Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 64169e18-218c-4eb1-8578-b721137bcad4 eTag: 14338721575596293035 source: Fabric]
Feb 8 23:23:55.305983 waagent[1423]: 2024-02-08T23:23:55.305925Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
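A side note on the interface listing the agent prints above: eth0's IPv6 link-local address fe80::20d:3aff:fe64:9dc5 is derived from its MAC 00:0d:3a:64:9d:c5 via the Modified EUI-64 scheme (RFC 4291): insert ff:fe in the middle of the MAC and flip the universal/local bit of the first octet. A small sketch of that derivation:

```python
import ipaddress

def mac_to_link_local(mac: str) -> str:
    """Derive the Modified EUI-64 link-local IPv6 address from a MAC address."""
    octets = bytearray(int(x, 16) for x in mac.split(":"))
    octets[0] ^= 0x02                              # flip the universal/local bit
    eui64 = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
    # fe80::/64 prefix (fe80 + six zero bytes) followed by the 8-byte interface ID
    return str(ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui64))

print(mac_to_link_local("00:0d:3a:64:9d:c5"))  # fe80::20d:3aff:fe64:9dc5
```

The result matches the address shown in the `ip -6 -a -o address` output, confirming eth0's link-local address is EUI-64-derived rather than a stable-privacy address.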
Feb 8 23:23:55.307052 waagent[1423]: 2024-02-08T23:23:55.306992Z INFO ExtHandler
Feb 8 23:23:55.307184 waagent[1423]: 2024-02-08T23:23:55.307132Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Feb 8 23:23:55.314443 waagent[1423]: 2024-02-08T23:23:55.314385Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Feb 8 23:23:55.314864 waagent[1423]: 2024-02-08T23:23:55.314815Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb 8 23:23:55.332849 waagent[1423]: 2024-02-08T23:23:55.332781Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Feb 8 23:23:55.393987 waagent[1423]: 2024-02-08T23:23:55.393848Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E66D6B71BFD9B9C85D9C3C8F57BE79A5D539F266', 'hasPrivateKey': True}
Feb 8 23:23:55.394982 waagent[1423]: 2024-02-08T23:23:55.394914Z INFO ExtHandler Downloaded certificate {'thumbprint': '1440D3592315C4E140D35E024D8761CD85BF1D0B', 'hasPrivateKey': False}
Feb 8 23:23:55.395946 waagent[1423]: 2024-02-08T23:23:55.395880Z INFO ExtHandler Fetch goal state completed
Feb 8 23:23:55.415674 waagent[1423]: 2024-02-08T23:23:55.415603Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1423
Feb 8 23:23:55.418858 waagent[1423]: 2024-02-08T23:23:55.418796Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 8 23:23:55.420253 waagent[1423]: 2024-02-08T23:23:55.420196Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 8 23:23:55.424717 waagent[1423]: 2024-02-08T23:23:55.424646Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 8 23:23:55.425041 waagent[1423]: 2024-02-08T23:23:55.424985Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 8 23:23:55.432570 waagent[1423]: 2024-02-08T23:23:55.432518Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 8 23:23:55.432995 waagent[1423]: 2024-02-08T23:23:55.432940Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 8 23:23:55.438656 waagent[1423]: 2024-02-08T23:23:55.438564Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Feb 8 23:23:55.443150 waagent[1423]: 2024-02-08T23:23:55.443093Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Feb 8 23:23:55.444533 waagent[1423]: 2024-02-08T23:23:55.444476Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 8 23:23:55.445087 waagent[1423]: 2024-02-08T23:23:55.445032Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 8 23:23:55.445243 waagent[1423]: 2024-02-08T23:23:55.445195Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 8 23:23:55.445772 waagent[1423]: 2024-02-08T23:23:55.445715Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 8 23:23:55.446041 waagent[1423]: 2024-02-08T23:23:55.445987Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 8 23:23:55.446041 waagent[1423]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 8 23:23:55.446041 waagent[1423]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 8 23:23:55.446041 waagent[1423]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 8 23:23:55.446041 waagent[1423]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:23:55.446041 waagent[1423]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:23:55.446041 waagent[1423]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:23:55.449190 waagent[1423]: 2024-02-08T23:23:55.449118Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:23:55.449430 waagent[1423]: 2024-02-08T23:23:55.449347Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:23:55.449925 waagent[1423]: 2024-02-08T23:23:55.449854Z INFO EnvHandler ExtHandler Configure routes Feb 8 23:23:55.449925 waagent[1423]: 2024-02-08T23:23:55.448879Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 8 23:23:55.450479 waagent[1423]: 2024-02-08T23:23:55.450420Z INFO EnvHandler ExtHandler Gateway:None Feb 8 23:23:55.452764 waagent[1423]: 2024-02-08T23:23:55.452646Z INFO EnvHandler ExtHandler Routes:None Feb 8 23:23:55.456335 waagent[1423]: 2024-02-08T23:23:55.456085Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 8 23:23:55.456613 waagent[1423]: 2024-02-08T23:23:55.456538Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 8 23:23:55.461630 waagent[1423]: 2024-02-08T23:23:55.461519Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 8 23:23:55.465394 waagent[1423]: 2024-02-08T23:23:55.461077Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 8 23:23:55.465520 waagent[1423]: 2024-02-08T23:23:55.465454Z INFO MonitorHandler ExtHandler Network interfaces: Feb 8 23:23:55.465520 waagent[1423]: Executing ['ip', '-a', '-o', 'link']: Feb 8 23:23:55.465520 waagent[1423]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 8 23:23:55.465520 waagent[1423]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:64:9d:c5 brd ff:ff:ff:ff:ff:ff Feb 8 23:23:55.465520 waagent[1423]: 3: enP49219s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:64:9d:c5 brd ff:ff:ff:ff:ff:ff\ altname enP49219p0s2 Feb 8 23:23:55.465520 waagent[1423]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 8 23:23:55.465520 waagent[1423]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 8 23:23:55.465520 waagent[1423]: 2: eth0 inet 10.200.8.22/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 8 23:23:55.465520 waagent[1423]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 8 23:23:55.465520 waagent[1423]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 8 23:23:55.465520 waagent[1423]: 2: eth0 inet6 fe80::20d:3aff:fe64:9dc5/64 scope link \ valid_lft forever preferred_lft forever Feb 8 23:23:55.472238 waagent[1423]: 2024-02-08T23:23:55.471996Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 8 23:23:55.483108 waagent[1423]: 2024-02-08T23:23:55.483049Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 8 23:23:55.483322 waagent[1423]: 2024-02-08T23:23:55.483270Z INFO ExtHandler ExtHandler 
Downloading manifest Feb 8 23:23:55.521557 waagent[1423]: 2024-02-08T23:23:55.521448Z INFO ExtHandler ExtHandler Feb 8 23:23:55.525792 waagent[1423]: 2024-02-08T23:23:55.525673Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 3504c05a-24ef-4895-bdb2-be60d79033d5 correlation 5225b510-3396-429b-9ece-f692fc909fa5 created: 2024-02-08T23:17:08.709427Z] Feb 8 23:23:55.531001 waagent[1423]: 2024-02-08T23:23:55.530874Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 8 23:23:55.533415 waagent[1423]: 2024-02-08T23:23:55.533340Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 11 ms] Feb 8 23:23:55.556973 waagent[1423]: 2024-02-08T23:23:55.556900Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 8 23:23:55.576840 waagent[1423]: 2024-02-08T23:23:55.576703Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 20CE3D4D-F819-4A6D-9091-9D334ACB0D5A;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 8 23:23:55.591535 waagent[1423]: 2024-02-08T23:23:55.591440Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 8 23:23:55.591535 waagent[1423]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:23:55.591535 waagent[1423]: pkts bytes target prot opt in out source destination Feb 8 23:23:55.591535 waagent[1423]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:23:55.591535 waagent[1423]: pkts bytes target prot opt in out source destination Feb 8 23:23:55.591535 waagent[1423]: Chain OUTPUT (policy ACCEPT 1 packets, 52 bytes) Feb 8 23:23:55.591535 waagent[1423]: pkts bytes target prot opt in out source destination Feb 8 23:23:55.591535 waagent[1423]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 8 23:23:55.591535 waagent[1423]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 8 23:23:55.591535 waagent[1423]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 8 23:23:55.598333 waagent[1423]: 2024-02-08T23:23:55.598238Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 8 23:23:55.598333 waagent[1423]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:23:55.598333 waagent[1423]: pkts bytes target prot opt in out source destination Feb 8 23:23:55.598333 waagent[1423]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:23:55.598333 waagent[1423]: pkts bytes target prot opt in out source destination Feb 8 23:23:55.598333 waagent[1423]: Chain OUTPUT (policy ACCEPT 1 packets, 52 bytes) Feb 8 23:23:55.598333 waagent[1423]: pkts bytes target prot opt in out source destination Feb 8 23:23:55.598333 waagent[1423]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 8 23:23:55.598333 waagent[1423]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 8 23:23:55.598333 waagent[1423]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 8 23:23:55.598873 waagent[1423]: 2024-02-08T23:23:55.598820Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 8 23:24:17.219796 systemd[1]: Created slice system-sshd.slice. 
Feb 8 23:24:17.221686 systemd[1]: Started sshd@0-10.200.8.22:22-10.200.12.6:49878.service. Feb 8 23:24:18.090836 sshd[1468]: Accepted publickey for core from 10.200.12.6 port 49878 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:18.092486 sshd[1468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:18.097429 systemd-logind[1217]: New session 3 of user core. Feb 8 23:24:18.098599 systemd[1]: Started session-3.scope. Feb 8 23:24:18.626805 systemd[1]: Started sshd@1-10.200.8.22:22-10.200.12.6:49894.service. Feb 8 23:24:18.787312 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 8 23:24:19.281388 sshd[1473]: Accepted publickey for core from 10.200.12.6 port 49894 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:19.282989 sshd[1473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:19.287267 systemd-logind[1217]: New session 4 of user core. Feb 8 23:24:19.287867 systemd[1]: Started session-4.scope. Feb 8 23:24:19.718045 sshd[1473]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:19.721207 systemd[1]: sshd@1-10.200.8.22:22-10.200.12.6:49894.service: Deactivated successfully. Feb 8 23:24:19.722270 systemd[1]: session-4.scope: Deactivated successfully. Feb 8 23:24:19.722958 systemd-logind[1217]: Session 4 logged out. Waiting for processes to exit. Feb 8 23:24:19.723692 systemd-logind[1217]: Removed session 4. Feb 8 23:24:19.822980 systemd[1]: Started sshd@2-10.200.8.22:22-10.200.12.6:49908.service. Feb 8 23:24:20.443937 sshd[1479]: Accepted publickey for core from 10.200.12.6 port 49908 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:20.445515 sshd[1479]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:20.450060 systemd[1]: Started session-5.scope. Feb 8 23:24:20.450645 systemd-logind[1217]: New session 5 of user core. 
Feb 8 23:24:20.878417 sshd[1479]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:20.882091 systemd[1]: sshd@2-10.200.8.22:22-10.200.12.6:49908.service: Deactivated successfully. Feb 8 23:24:20.883058 systemd[1]: session-5.scope: Deactivated successfully. Feb 8 23:24:20.883675 systemd-logind[1217]: Session 5 logged out. Waiting for processes to exit. Feb 8 23:24:20.884544 systemd-logind[1217]: Removed session 5. Feb 8 23:24:20.981503 systemd[1]: Started sshd@3-10.200.8.22:22-10.200.12.6:49924.service. Feb 8 23:24:21.599437 sshd[1485]: Accepted publickey for core from 10.200.12.6 port 49924 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:21.600990 sshd[1485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:21.606567 systemd[1]: Started session-6.scope. Feb 8 23:24:21.607004 systemd-logind[1217]: New session 6 of user core. Feb 8 23:24:22.034829 sshd[1485]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:22.037974 systemd[1]: sshd@3-10.200.8.22:22-10.200.12.6:49924.service: Deactivated successfully. Feb 8 23:24:22.038961 systemd[1]: session-6.scope: Deactivated successfully. Feb 8 23:24:22.039737 systemd-logind[1217]: Session 6 logged out. Waiting for processes to exit. Feb 8 23:24:22.040639 systemd-logind[1217]: Removed session 6. Feb 8 23:24:22.140672 systemd[1]: Started sshd@4-10.200.8.22:22-10.200.12.6:49934.service. Feb 8 23:24:22.769539 sshd[1491]: Accepted publickey for core from 10.200.12.6 port 49934 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:24:22.771079 sshd[1491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:24:22.776427 systemd-logind[1217]: New session 7 of user core. Feb 8 23:24:22.776669 systemd[1]: Started session-7.scope. 
Feb 8 23:24:23.353719 sudo[1494]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 8 23:24:23.354057 sudo[1494]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:24:24.134423 systemd[1]: Reloading. Feb 8 23:24:24.196323 /usr/lib/systemd/system-generators/torcx-generator[1523]: time="2024-02-08T23:24:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:24:24.196810 /usr/lib/systemd/system-generators/torcx-generator[1523]: time="2024-02-08T23:24:24Z" level=info msg="torcx already run" Feb 8 23:24:24.299151 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:24:24.299170 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:24:24.315007 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:24:24.401180 systemd[1]: Started kubelet.service. Feb 8 23:24:24.430474 systemd[1]: Starting coreos-metadata.service... 
Feb 8 23:24:24.473801 kubelet[1585]: E0208 23:24:24.473743 1585 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 8 23:24:24.476164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:24:24.476330 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:24:24.488844 coreos-metadata[1593]: Feb 08 23:24:24.488 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 8 23:24:24.490958 coreos-metadata[1593]: Feb 08 23:24:24.490 INFO Fetch successful Feb 8 23:24:24.491163 coreos-metadata[1593]: Feb 08 23:24:24.491 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 8 23:24:24.492309 coreos-metadata[1593]: Feb 08 23:24:24.492 INFO Fetch successful Feb 8 23:24:24.492687 coreos-metadata[1593]: Feb 08 23:24:24.492 INFO Fetching http://168.63.129.16/machine/016c1a47-dfe4-4b9d-9b08-4f6c379d3e6a/ce0a1235%2D8e8e%2D441f%2Da8ee%2D89b5a8e4033f.%5Fci%2D3510.3.2%2Da%2D5de6cd8e96?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 8 23:24:24.493899 coreos-metadata[1593]: Feb 08 23:24:24.493 INFO Fetch successful Feb 8 23:24:24.525847 coreos-metadata[1593]: Feb 08 23:24:24.525 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 8 23:24:24.536940 coreos-metadata[1593]: Feb 08 23:24:24.536 INFO Fetch successful Feb 8 23:24:24.544937 systemd[1]: Finished coreos-metadata.service. Feb 8 23:24:27.292658 update_engine[1218]: I0208 23:24:27.292591 1218 update_attempter.cc:509] Updating boot flags... Feb 8 23:24:27.967399 systemd[1]: Stopped kubelet.service. Feb 8 23:24:27.982131 systemd[1]: Reloading. 
Feb 8 23:24:28.063929 /usr/lib/systemd/system-generators/torcx-generator[1689]: time="2024-02-08T23:24:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:24:28.063969 /usr/lib/systemd/system-generators/torcx-generator[1689]: time="2024-02-08T23:24:28Z" level=info msg="torcx already run" Feb 8 23:24:28.145322 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:24:28.145341 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:24:28.161154 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:24:28.252543 systemd[1]: Started kubelet.service. Feb 8 23:24:28.298500 kubelet[1752]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:24:28.298500 kubelet[1752]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 8 23:24:28.298500 kubelet[1752]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 8 23:24:28.298927 kubelet[1752]: I0208 23:24:28.298547 1752 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:24:28.644125 kubelet[1752]: I0208 23:24:28.643677 1752 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 8 23:24:28.644125 kubelet[1752]: I0208 23:24:28.643703 1752 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:24:28.644125 kubelet[1752]: I0208 23:24:28.643954 1752 server.go:895] "Client rotation is on, will bootstrap in background" Feb 8 23:24:28.646125 kubelet[1752]: I0208 23:24:28.646100 1752 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:24:28.651733 kubelet[1752]: I0208 23:24:28.651713 1752 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 8 23:24:28.651969 kubelet[1752]: I0208 23:24:28.651953 1752 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:24:28.652155 kubelet[1752]: I0208 23:24:28.652136 1752 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 8 23:24:28.652316 kubelet[1752]: I0208 23:24:28.652170 1752 topology_manager.go:138] "Creating topology manager with none policy" Feb 8 23:24:28.652316 kubelet[1752]: I0208 23:24:28.652182 1752 container_manager_linux.go:301] "Creating device plugin manager" Feb 8 23:24:28.652316 kubelet[1752]: I0208 23:24:28.652282 1752 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:24:28.652465 kubelet[1752]: I0208 23:24:28.652401 1752 kubelet.go:393] "Attempting to sync node with API server" Feb 8 23:24:28.652465 kubelet[1752]: I0208 23:24:28.652423 1752 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 
23:24:28.652465 kubelet[1752]: I0208 23:24:28.652454 1752 kubelet.go:309] "Adding apiserver pod source" Feb 8 23:24:28.652573 kubelet[1752]: I0208 23:24:28.652475 1752 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:24:28.652951 kubelet[1752]: E0208 23:24:28.652932 1752 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:28.653035 kubelet[1752]: E0208 23:24:28.652990 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:28.653536 kubelet[1752]: I0208 23:24:28.653521 1752 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:24:28.653758 kubelet[1752]: W0208 23:24:28.653742 1752 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 8 23:24:28.654160 kubelet[1752]: I0208 23:24:28.654142 1752 server.go:1232] "Started kubelet" Feb 8 23:24:28.655717 kubelet[1752]: I0208 23:24:28.655666 1752 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:24:28.656492 kubelet[1752]: I0208 23:24:28.656481 1752 server.go:462] "Adding debug handlers to kubelet server" Feb 8 23:24:28.657669 kubelet[1752]: I0208 23:24:28.657655 1752 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 8 23:24:28.657911 kubelet[1752]: I0208 23:24:28.657902 1752 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 8 23:24:28.660780 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 8 23:24:28.662780 kubelet[1752]: E0208 23:24:28.662569 1752 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:24:28.662780 kubelet[1752]: E0208 23:24:28.662593 1752 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:24:28.662780 kubelet[1752]: I0208 23:24:28.662747 1752 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:24:28.668784 kubelet[1752]: I0208 23:24:28.668764 1752 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 8 23:24:28.669776 kubelet[1752]: W0208 23:24:28.669755 1752 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:24:28.669893 kubelet[1752]: E0208 23:24:28.669881 1752 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:24:28.670124 kubelet[1752]: E0208 23:24:28.670023 1752 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22.17b206c8e650e3ac", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", 
Name:"10.200.8.22", UID:"10.200.8.22", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.22"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 654125996, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 654125996, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.22"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:24:28.670461 kubelet[1752]: W0208 23:24:28.670444 1752 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.22" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:24:28.670571 kubelet[1752]: E0208 23:24:28.670560 1752 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.22" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:24:28.671617 kubelet[1752]: I0208 23:24:28.671599 1752 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:24:28.671696 kubelet[1752]: I0208 23:24:28.671669 1752 reconciler_new.go:29] "Reconciler: start to sync state" Feb 8 23:24:28.673671 kubelet[1752]: E0208 23:24:28.673654 1752 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.8.22\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 8 23:24:28.673859 kubelet[1752]: E0208 23:24:28.673798 1752 
event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22.17b206c8e6d1ec6e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.22", UID:"10.200.8.22", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.22"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 662582382, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 662582382, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.22"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:24:28.676110 kubelet[1752]: W0208 23:24:28.676089 1752 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 8 23:24:28.676232 kubelet[1752]: E0208 23:24:28.676221 1752 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 8 23:24:28.696329 kubelet[1752]: I0208 23:24:28.696308 1752 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:24:28.696329 kubelet[1752]: I0208 23:24:28.696330 1752 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:24:28.696469 kubelet[1752]: I0208 23:24:28.696345 1752 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:24:28.697460 kubelet[1752]: I0208 23:24:28.697447 1752 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 8 23:24:28.698529 kubelet[1752]: I0208 23:24:28.698516 1752 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 8 23:24:28.699644 kubelet[1752]: I0208 23:24:28.698582 1752 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 8 23:24:28.699644 kubelet[1752]: I0208 23:24:28.698596 1752 kubelet.go:2303] "Starting kubelet main sync loop" Feb 8 23:24:28.699644 kubelet[1752]: E0208 23:24:28.698630 1752 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 8 23:24:28.700870 kubelet[1752]: I0208 23:24:28.700849 1752 policy_none.go:49] "None policy: Start" Feb 8 23:24:28.701416 kubelet[1752]: E0208 23:24:28.701337 1752 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22.17b206c8e8c40a87", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.22", UID:"10.200.8.22", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.22 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.22"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 695227015, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 695227015, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.22"}': 'events is forbidden: User "system:anonymous" cannot create 
resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:24:28.701860 kubelet[1752]: I0208 23:24:28.701833 1752 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:24:28.701860 kubelet[1752]: I0208 23:24:28.701861 1752 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:24:28.705386 kubelet[1752]: E0208 23:24:28.705292 1752 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22.17b206c8e8c41f3b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.22", UID:"10.200.8.22", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.22 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.22"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 695232315, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 695232315, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.22"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:24:28.705768 kubelet[1752]: W0208 23:24:28.705757 1752 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 8 23:24:28.705850 kubelet[1752]: E0208 23:24:28.705843 1752 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 8 23:24:28.706063 kubelet[1752]: E0208 23:24:28.706022 1752 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22.17b206c8e8c42a2b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.22", UID:"10.200.8.22", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.22 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.22"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 695235115, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 695235115, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.22"}': 
'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:24:28.709727 systemd[1]: Created slice kubepods.slice. Feb 8 23:24:28.713651 systemd[1]: Created slice kubepods-burstable.slice. Feb 8 23:24:28.716388 systemd[1]: Created slice kubepods-besteffort.slice. Feb 8 23:24:28.723975 kubelet[1752]: I0208 23:24:28.723959 1752 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:24:28.724166 kubelet[1752]: I0208 23:24:28.724148 1752 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:24:28.725398 kubelet[1752]: E0208 23:24:28.725198 1752 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.22\" not found" Feb 8 23:24:28.726820 kubelet[1752]: E0208 23:24:28.726709 1752 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22.17b206c8ea93a6f0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.22", UID:"10.200.8.22", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.22"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 725610224, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 725610224, time.Local), Count:1, Type:"Normal", 
EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.22"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:24:28.769947 kubelet[1752]: I0208 23:24:28.769918 1752 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.22" Feb 8 23:24:28.771047 kubelet[1752]: E0208 23:24:28.771026 1752 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.22" Feb 8 23:24:28.771318 kubelet[1752]: E0208 23:24:28.771253 1752 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22.17b206c8e8c40a87", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.22", UID:"10.200.8.22", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.22 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.22"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 695227015, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 769873575, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), 
Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.22"}': 'events "10.200.8.22.17b206c8e8c40a87" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:24:28.772163 kubelet[1752]: E0208 23:24:28.772090 1752 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22.17b206c8e8c41f3b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.22", UID:"10.200.8.22", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.22 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.22"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 695232315, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 769885375, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.22"}': 'events "10.200.8.22.17b206c8e8c41f3b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:24:28.773521 kubelet[1752]: E0208 23:24:28.773455 1752 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22.17b206c8e8c42a2b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.22", UID:"10.200.8.22", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.22 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.22"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 695235115, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 769889475, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.22"}': 'events "10.200.8.22.17b206c8e8c42a2b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:24:28.874867 kubelet[1752]: E0208 23:24:28.874836 1752 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.8.22\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 8 23:24:28.972685 kubelet[1752]: I0208 23:24:28.972659 1752 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.22" Feb 8 23:24:28.973835 kubelet[1752]: E0208 23:24:28.973751 1752 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22.17b206c8e8c40a87", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.22", UID:"10.200.8.22", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.22 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.22"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 695227015, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 972620641, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.22"}': 'events "10.200.8.22.17b206c8e8c40a87" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:24:28.974061 kubelet[1752]: E0208 23:24:28.973823 1752 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.22" Feb 8 23:24:28.975223 kubelet[1752]: E0208 23:24:28.975152 1752 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22.17b206c8e8c41f3b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.22", UID:"10.200.8.22", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.22 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.22"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 695232315, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 972627741, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.22"}': 'events "10.200.8.22.17b206c8e8c41f3b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:24:28.976003 kubelet[1752]: E0208 23:24:28.975947 1752 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22.17b206c8e8c42a2b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.22", UID:"10.200.8.22", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.22 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.22"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 695235115, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 972631741, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.22"}': 'events "10.200.8.22.17b206c8e8c42a2b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:24:29.277040 kubelet[1752]: E0208 23:24:29.276931 1752 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.8.22\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 8 23:24:29.375104 kubelet[1752]: I0208 23:24:29.375065 1752 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.22" Feb 8 23:24:29.376707 kubelet[1752]: E0208 23:24:29.376682 1752 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.22" Feb 8 23:24:29.376849 kubelet[1752]: E0208 23:24:29.376657 1752 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22.17b206c8e8c40a87", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.22", UID:"10.200.8.22", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.22 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.22"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 695227015, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 24, 29, 374967213, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.22"}': 'events "10.200.8.22.17b206c8e8c40a87" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:24:29.377696 kubelet[1752]: E0208 23:24:29.377611 1752 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22.17b206c8e8c41f3b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.22", UID:"10.200.8.22", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.22 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.22"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 695232315, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 24, 29, 375015113, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.22"}': 'events "10.200.8.22.17b206c8e8c41f3b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:24:29.378926 kubelet[1752]: E0208 23:24:29.378855 1752 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.22.17b206c8e8c42a2b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.22", UID:"10.200.8.22", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.22 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.22"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 24, 28, 695235115, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 24, 29, 375033813, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.22"}': 'events "10.200.8.22.17b206c8e8c42a2b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:24:29.646368 kubelet[1752]: I0208 23:24:29.646054 1752 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 8 23:24:29.653330 kubelet[1752]: E0208 23:24:29.653296 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:30.021772 kubelet[1752]: E0208 23:24:30.021738 1752 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.8.22" not found Feb 8 23:24:30.080846 kubelet[1752]: E0208 23:24:30.080803 1752 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.22\" not found" node="10.200.8.22" Feb 8 23:24:30.178392 kubelet[1752]: I0208 23:24:30.178344 1752 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.22" Feb 8 23:24:30.182647 kubelet[1752]: I0208 23:24:30.182609 1752 kubelet_node_status.go:73] "Successfully registered node" node="10.200.8.22" Feb 8 23:24:30.198527 kubelet[1752]: E0208 23:24:30.198506 1752 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.22\" not found" Feb 8 23:24:30.266089 sudo[1494]: pam_unix(sudo:session): session closed for user root Feb 8 23:24:30.300169 kubelet[1752]: I0208 23:24:30.300069 1752 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 8 23:24:30.300914 env[1228]: time="2024-02-08T23:24:30.300859688Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 8 23:24:30.301313 kubelet[1752]: I0208 23:24:30.301082 1752 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 8 23:24:30.382817 sshd[1491]: pam_unix(sshd:session): session closed for user core Feb 8 23:24:30.385669 systemd[1]: sshd@4-10.200.8.22:22-10.200.12.6:49934.service: Deactivated successfully. Feb 8 23:24:30.386558 systemd[1]: session-7.scope: Deactivated successfully. Feb 8 23:24:30.387239 systemd-logind[1217]: Session 7 logged out. Waiting for processes to exit. Feb 8 23:24:30.388051 systemd-logind[1217]: Removed session 7. Feb 8 23:24:30.654450 kubelet[1752]: E0208 23:24:30.654335 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:30.654894 kubelet[1752]: I0208 23:24:30.654348 1752 apiserver.go:52] "Watching apiserver" Feb 8 23:24:30.657013 kubelet[1752]: I0208 23:24:30.656988 1752 topology_manager.go:215] "Topology Admit Handler" podUID="fbd00318-b8fd-407e-8b79-d63dccdf3906" podNamespace="kube-system" podName="cilium-rhftn" Feb 8 23:24:30.657147 kubelet[1752]: I0208 23:24:30.657131 1752 topology_manager.go:215] "Topology Admit Handler" podUID="916b7475-7d6c-438a-bff4-fa6f2d9885cd" podNamespace="kube-system" podName="kube-proxy-wj2bm" Feb 8 23:24:30.662410 systemd[1]: Created slice kubepods-besteffort-pod916b7475_7d6c_438a_bff4_fa6f2d9885cd.slice. Feb 8 23:24:30.671568 systemd[1]: Created slice kubepods-burstable-podfbd00318_b8fd_407e_8b79_d63dccdf3906.slice. 
Feb 8 23:24:30.672335 kubelet[1752]: I0208 23:24:30.672317 1752 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:24:30.680135 kubelet[1752]: I0208 23:24:30.680112 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-lib-modules\") pod \"cilium-rhftn\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " pod="kube-system/cilium-rhftn" Feb 8 23:24:30.680243 kubelet[1752]: I0208 23:24:30.680166 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkct2\" (UniqueName: \"kubernetes.io/projected/fbd00318-b8fd-407e-8b79-d63dccdf3906-kube-api-access-hkct2\") pod \"cilium-rhftn\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " pod="kube-system/cilium-rhftn" Feb 8 23:24:30.680243 kubelet[1752]: I0208 23:24:30.680196 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/916b7475-7d6c-438a-bff4-fa6f2d9885cd-kube-proxy\") pod \"kube-proxy-wj2bm\" (UID: \"916b7475-7d6c-438a-bff4-fa6f2d9885cd\") " pod="kube-system/kube-proxy-wj2bm" Feb 8 23:24:30.680243 kubelet[1752]: I0208 23:24:30.680234 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/916b7475-7d6c-438a-bff4-fa6f2d9885cd-xtables-lock\") pod \"kube-proxy-wj2bm\" (UID: \"916b7475-7d6c-438a-bff4-fa6f2d9885cd\") " pod="kube-system/kube-proxy-wj2bm" Feb 8 23:24:30.680394 kubelet[1752]: I0208 23:24:30.680270 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-hostproc\") pod \"cilium-rhftn\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " 
pod="kube-system/cilium-rhftn" Feb 8 23:24:30.680394 kubelet[1752]: I0208 23:24:30.680301 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-host-proc-sys-kernel\") pod \"cilium-rhftn\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " pod="kube-system/cilium-rhftn" Feb 8 23:24:30.680394 kubelet[1752]: I0208 23:24:30.680326 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-bpf-maps\") pod \"cilium-rhftn\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " pod="kube-system/cilium-rhftn" Feb 8 23:24:30.680394 kubelet[1752]: I0208 23:24:30.680380 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fbd00318-b8fd-407e-8b79-d63dccdf3906-clustermesh-secrets\") pod \"cilium-rhftn\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " pod="kube-system/cilium-rhftn" Feb 8 23:24:30.681091 kubelet[1752]: I0208 23:24:30.680410 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbd00318-b8fd-407e-8b79-d63dccdf3906-cilium-config-path\") pod \"cilium-rhftn\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " pod="kube-system/cilium-rhftn" Feb 8 23:24:30.681091 kubelet[1752]: I0208 23:24:30.680444 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-host-proc-sys-net\") pod \"cilium-rhftn\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " pod="kube-system/cilium-rhftn" Feb 8 23:24:30.681091 kubelet[1752]: I0208 23:24:30.680468 1752 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52rwg\" (UniqueName: \"kubernetes.io/projected/916b7475-7d6c-438a-bff4-fa6f2d9885cd-kube-api-access-52rwg\") pod \"kube-proxy-wj2bm\" (UID: \"916b7475-7d6c-438a-bff4-fa6f2d9885cd\") " pod="kube-system/kube-proxy-wj2bm" Feb 8 23:24:30.681091 kubelet[1752]: I0208 23:24:30.680493 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-cilium-run\") pod \"cilium-rhftn\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " pod="kube-system/cilium-rhftn" Feb 8 23:24:30.681091 kubelet[1752]: I0208 23:24:30.680516 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-cni-path\") pod \"cilium-rhftn\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " pod="kube-system/cilium-rhftn" Feb 8 23:24:30.681091 kubelet[1752]: I0208 23:24:30.681059 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-etc-cni-netd\") pod \"cilium-rhftn\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " pod="kube-system/cilium-rhftn" Feb 8 23:24:30.684810 kubelet[1752]: I0208 23:24:30.684782 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-xtables-lock\") pod \"cilium-rhftn\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " pod="kube-system/cilium-rhftn" Feb 8 23:24:30.685787 kubelet[1752]: I0208 23:24:30.685765 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/fbd00318-b8fd-407e-8b79-d63dccdf3906-hubble-tls\") pod \"cilium-rhftn\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " pod="kube-system/cilium-rhftn" Feb 8 23:24:30.685879 kubelet[1752]: I0208 23:24:30.685850 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/916b7475-7d6c-438a-bff4-fa6f2d9885cd-lib-modules\") pod \"kube-proxy-wj2bm\" (UID: \"916b7475-7d6c-438a-bff4-fa6f2d9885cd\") " pod="kube-system/kube-proxy-wj2bm" Feb 8 23:24:30.685930 kubelet[1752]: I0208 23:24:30.685922 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-cilium-cgroup\") pod \"cilium-rhftn\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " pod="kube-system/cilium-rhftn" Feb 8 23:24:30.970419 env[1228]: time="2024-02-08T23:24:30.970351584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj2bm,Uid:916b7475-7d6c-438a-bff4-fa6f2d9885cd,Namespace:kube-system,Attempt:0,}" Feb 8 23:24:30.978793 env[1228]: time="2024-02-08T23:24:30.978755260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rhftn,Uid:fbd00318-b8fd-407e-8b79-d63dccdf3906,Namespace:kube-system,Attempt:0,}" Feb 8 23:24:31.654640 kubelet[1752]: E0208 23:24:31.654598 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:32.655675 kubelet[1752]: E0208 23:24:32.655643 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:33.152003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3660857705.mount: Deactivated successfully. 
Feb 8 23:24:33.180748 env[1228]: time="2024-02-08T23:24:33.180700763Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:24:33.182857 env[1228]: time="2024-02-08T23:24:33.182821779Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:24:33.192472 env[1228]: time="2024-02-08T23:24:33.192440150Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:24:33.195132 env[1228]: time="2024-02-08T23:24:33.195097769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:24:33.197460 env[1228]: time="2024-02-08T23:24:33.197430487Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:24:33.200211 env[1228]: time="2024-02-08T23:24:33.200181207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:24:33.202549 env[1228]: time="2024-02-08T23:24:33.202509124Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:24:33.209205 env[1228]: time="2024-02-08T23:24:33.209170773Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:24:33.261149 env[1228]: time="2024-02-08T23:24:33.261083956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:24:33.261307 env[1228]: time="2024-02-08T23:24:33.261130557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:24:33.261307 env[1228]: time="2024-02-08T23:24:33.261144157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:24:33.267393 env[1228]: time="2024-02-08T23:24:33.261810962Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/86836ee84c217178b0ee82d00df34eaddba63b00f038c0d55ca68bbd67f1edca pid=1793 runtime=io.containerd.runc.v2
Feb 8 23:24:33.289960 systemd[1]: Started cri-containerd-86836ee84c217178b0ee82d00df34eaddba63b00f038c0d55ca68bbd67f1edca.scope.
Feb 8 23:24:33.292297 env[1228]: time="2024-02-08T23:24:33.286287842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:24:33.292297 env[1228]: time="2024-02-08T23:24:33.286328943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:24:33.292297 env[1228]: time="2024-02-08T23:24:33.286346243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:24:33.297194 env[1228]: time="2024-02-08T23:24:33.296699819Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b pid=1817 runtime=io.containerd.runc.v2
Feb 8 23:24:33.310576 systemd[1]: Started cri-containerd-619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b.scope.
Feb 8 23:24:33.334747 env[1228]: time="2024-02-08T23:24:33.334708800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj2bm,Uid:916b7475-7d6c-438a-bff4-fa6f2d9885cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"86836ee84c217178b0ee82d00df34eaddba63b00f038c0d55ca68bbd67f1edca\""
Feb 8 23:24:33.338427 env[1228]: time="2024-02-08T23:24:33.338396927Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\""
Feb 8 23:24:33.348437 env[1228]: time="2024-02-08T23:24:33.348404501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rhftn,Uid:fbd00318-b8fd-407e-8b79-d63dccdf3906,Namespace:kube-system,Attempt:0,} returns sandbox id \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\""
Feb 8 23:24:33.656124 kubelet[1752]: E0208 23:24:33.656088 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:34.476682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2125706766.mount: Deactivated successfully.
Feb 8 23:24:34.656927 kubelet[1752]: E0208 23:24:34.656853 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:35.032138 env[1228]: time="2024-02-08T23:24:35.032031036Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:24:35.041044 env[1228]: time="2024-02-08T23:24:35.041007694Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:24:35.044924 env[1228]: time="2024-02-08T23:24:35.044898619Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:24:35.048819 env[1228]: time="2024-02-08T23:24:35.048793045Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:24:35.049178 env[1228]: time="2024-02-08T23:24:35.049150247Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\""
Feb 8 23:24:35.050494 env[1228]: time="2024-02-08T23:24:35.050468956Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 8 23:24:35.051302 env[1228]: time="2024-02-08T23:24:35.051272761Z" level=info msg="CreateContainer within sandbox \"86836ee84c217178b0ee82d00df34eaddba63b00f038c0d55ca68bbd67f1edca\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 8 23:24:35.095633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2299398920.mount: Deactivated successfully.
Feb 8 23:24:35.116248 env[1228]: time="2024-02-08T23:24:35.116148782Z" level=info msg="CreateContainer within sandbox \"86836ee84c217178b0ee82d00df34eaddba63b00f038c0d55ca68bbd67f1edca\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6d04d3052af3269b86879d202c161fdc5b3f779800d2bdd5d669b63f6d4a9e54\""
Feb 8 23:24:35.117217 env[1228]: time="2024-02-08T23:24:35.117186588Z" level=info msg="StartContainer for \"6d04d3052af3269b86879d202c161fdc5b3f779800d2bdd5d669b63f6d4a9e54\""
Feb 8 23:24:35.133720 systemd[1]: Started cri-containerd-6d04d3052af3269b86879d202c161fdc5b3f779800d2bdd5d669b63f6d4a9e54.scope.
Feb 8 23:24:35.146047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount193816270.mount: Deactivated successfully.
Feb 8 23:24:35.169031 env[1228]: time="2024-02-08T23:24:35.168978824Z" level=info msg="StartContainer for \"6d04d3052af3269b86879d202c161fdc5b3f779800d2bdd5d669b63f6d4a9e54\" returns successfully"
Feb 8 23:24:35.657079 kubelet[1752]: E0208 23:24:35.657029 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:35.727393 kubelet[1752]: I0208 23:24:35.727349 1752 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wj2bm" podStartSLOduration=4.014979014 podCreationTimestamp="2024-02-08 23:24:30 +0000 UTC" firstStartedPulling="2024-02-08 23:24:33.337224518 +0000 UTC m=+5.079716265" lastFinishedPulling="2024-02-08 23:24:35.04955995 +0000 UTC m=+6.792051697" observedRunningTime="2024-02-08 23:24:35.727029944 +0000 UTC m=+7.469521691" watchObservedRunningTime="2024-02-08 23:24:35.727314446 +0000 UTC m=+7.469806293"
Feb 8 23:24:36.657276 kubelet[1752]: E0208 23:24:36.657235 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:37.657860 kubelet[1752]: E0208 23:24:37.657760 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:38.658146 kubelet[1752]: E0208 23:24:38.658076 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:39.658843 kubelet[1752]: E0208 23:24:39.658795 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:40.659097 kubelet[1752]: E0208 23:24:40.659019 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:40.682036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2531541118.mount: Deactivated successfully.
Feb 8 23:24:41.659470 kubelet[1752]: E0208 23:24:41.659403 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:42.660410 kubelet[1752]: E0208 23:24:42.660373 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:43.402978 env[1228]: time="2024-02-08T23:24:43.402866554Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:24:43.407691 env[1228]: time="2024-02-08T23:24:43.407655810Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:24:43.412021 env[1228]: time="2024-02-08T23:24:43.411987752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:24:43.412479 env[1228]: time="2024-02-08T23:24:43.412447267Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 8 23:24:43.414623 env[1228]: time="2024-02-08T23:24:43.414593736Z" level=info msg="CreateContainer within sandbox \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 8 23:24:43.445418 env[1228]: time="2024-02-08T23:24:43.445384439Z" level=info msg="CreateContainer within sandbox \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a\""
Feb 8 23:24:43.445893 env[1228]: time="2024-02-08T23:24:43.445860155Z" level=info msg="StartContainer for \"16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a\""
Feb 8 23:24:43.463879 systemd[1]: Started cri-containerd-16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a.scope.
Feb 8 23:24:43.497512 env[1228]: time="2024-02-08T23:24:43.497470536Z" level=info msg="StartContainer for \"16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a\" returns successfully"
Feb 8 23:24:43.504050 systemd[1]: cri-containerd-16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a.scope: Deactivated successfully.
Feb 8 23:24:43.661003 kubelet[1752]: E0208 23:24:43.660907 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:44.429069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a-rootfs.mount: Deactivated successfully.
Feb 8 23:24:44.661303 kubelet[1752]: E0208 23:24:44.661259 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:45.662309 kubelet[1752]: E0208 23:24:45.662267 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:46.663254 kubelet[1752]: E0208 23:24:46.663212 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:47.664177 kubelet[1752]: E0208 23:24:47.664124 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:47.829055 env[1228]: time="2024-02-08T23:24:47.828999366Z" level=info msg="shim disconnected" id=16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a
Feb 8 23:24:47.829861 env[1228]: time="2024-02-08T23:24:47.829063768Z" level=warning msg="cleaning up after shim disconnected" id=16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a namespace=k8s.io
Feb 8 23:24:47.829861 env[1228]: time="2024-02-08T23:24:47.829076669Z" level=info msg="cleaning up dead shim"
Feb 8 23:24:47.837309 env[1228]: time="2024-02-08T23:24:47.837274208Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2076 runtime=io.containerd.runc.v2\n"
Feb 8 23:24:48.653513 kubelet[1752]: E0208 23:24:48.653462 1752 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:48.664774 kubelet[1752]: E0208 23:24:48.664747 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:48.753021 env[1228]: time="2024-02-08T23:24:48.752985522Z" level=info msg="CreateContainer within sandbox \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 8 23:24:48.789084 env[1228]: time="2024-02-08T23:24:48.789046345Z" level=info msg="CreateContainer within sandbox \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e\""
Feb 8 23:24:48.789857 env[1228]: time="2024-02-08T23:24:48.789821667Z" level=info msg="StartContainer for \"d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e\""
Feb 8 23:24:48.814078 systemd[1]: Started cri-containerd-d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e.scope.
Feb 8 23:24:48.845343 env[1228]: time="2024-02-08T23:24:48.845311142Z" level=info msg="StartContainer for \"d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e\" returns successfully"
Feb 8 23:24:48.852390 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 8 23:24:48.852690 systemd[1]: Stopped systemd-sysctl.service.
Feb 8 23:24:48.852849 systemd[1]: Stopping systemd-sysctl.service...
Feb 8 23:24:48.855811 systemd[1]: Starting systemd-sysctl.service...
Feb 8 23:24:48.858083 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 8 23:24:48.858968 systemd[1]: cri-containerd-d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e.scope: Deactivated successfully.
Feb 8 23:24:48.867755 systemd[1]: Finished systemd-sysctl.service.
Feb 8 23:24:48.902551 env[1228]: time="2024-02-08T23:24:48.902507064Z" level=info msg="shim disconnected" id=d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e
Feb 8 23:24:48.902791 env[1228]: time="2024-02-08T23:24:48.902555866Z" level=warning msg="cleaning up after shim disconnected" id=d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e namespace=k8s.io
Feb 8 23:24:48.902791 env[1228]: time="2024-02-08T23:24:48.902570366Z" level=info msg="cleaning up dead shim"
Feb 8 23:24:48.911383 env[1228]: time="2024-02-08T23:24:48.910650595Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2141 runtime=io.containerd.runc.v2\n"
Feb 8 23:24:49.665413 kubelet[1752]: E0208 23:24:49.665371 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:49.756052 env[1228]: time="2024-02-08T23:24:49.756008408Z" level=info msg="CreateContainer within sandbox \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 8 23:24:49.773457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e-rootfs.mount: Deactivated successfully.
Feb 8 23:24:49.785177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount797666162.mount: Deactivated successfully.
Feb 8 23:24:49.802017 env[1228]: time="2024-02-08T23:24:49.801978077Z" level=info msg="CreateContainer within sandbox \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9\""
Feb 8 23:24:49.802560 env[1228]: time="2024-02-08T23:24:49.802525392Z" level=info msg="StartContainer for \"a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9\""
Feb 8 23:24:49.822173 systemd[1]: Started cri-containerd-a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9.scope.
Feb 8 23:24:49.854157 systemd[1]: cri-containerd-a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9.scope: Deactivated successfully.
Feb 8 23:24:49.861612 env[1228]: time="2024-02-08T23:24:49.861574422Z" level=info msg="StartContainer for \"a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9\" returns successfully"
Feb 8 23:24:49.889032 env[1228]: time="2024-02-08T23:24:49.888982579Z" level=info msg="shim disconnected" id=a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9
Feb 8 23:24:49.889271 env[1228]: time="2024-02-08T23:24:49.889032080Z" level=warning msg="cleaning up after shim disconnected" id=a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9 namespace=k8s.io
Feb 8 23:24:49.889271 env[1228]: time="2024-02-08T23:24:49.889044480Z" level=info msg="cleaning up dead shim"
Feb 8 23:24:49.896071 env[1228]: time="2024-02-08T23:24:49.896039974Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2198 runtime=io.containerd.runc.v2\n"
Feb 8 23:24:50.666481 kubelet[1752]: E0208 23:24:50.666441 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:50.760390 env[1228]: time="2024-02-08T23:24:50.760328374Z" level=info msg="CreateContainer within sandbox \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 8 23:24:50.773101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9-rootfs.mount: Deactivated successfully.
Feb 8 23:24:50.786185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3508449193.mount: Deactivated successfully.
Feb 8 23:24:50.802802 env[1228]: time="2024-02-08T23:24:50.802760514Z" level=info msg="CreateContainer within sandbox \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd\""
Feb 8 23:24:50.803213 env[1228]: time="2024-02-08T23:24:50.803167625Z" level=info msg="StartContainer for \"7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd\""
Feb 8 23:24:50.819176 systemd[1]: Started cri-containerd-7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd.scope.
Feb 8 23:24:50.846150 systemd[1]: cri-containerd-7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd.scope: Deactivated successfully.
Feb 8 23:24:50.847599 env[1228]: time="2024-02-08T23:24:50.847474516Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbd00318_b8fd_407e_8b79_d63dccdf3906.slice/cri-containerd-7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd.scope/memory.events\": no such file or directory"
Feb 8 23:24:50.852337 env[1228]: time="2024-02-08T23:24:50.852300945Z" level=info msg="StartContainer for \"7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd\" returns successfully"
Feb 8 23:24:50.884055 env[1228]: time="2024-02-08T23:24:50.884002397Z" level=info msg="shim disconnected" id=7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd
Feb 8 23:24:50.884487 env[1228]: time="2024-02-08T23:24:50.884053898Z" level=warning msg="cleaning up after shim disconnected" id=7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd namespace=k8s.io
Feb 8 23:24:50.884487 env[1228]: time="2024-02-08T23:24:50.884066399Z" level=info msg="cleaning up dead shim"
Feb 8 23:24:50.891501 env[1228]: time="2024-02-08T23:24:50.891469398Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2252 runtime=io.containerd.runc.v2\n"
Feb 8 23:24:51.667211 kubelet[1752]: E0208 23:24:51.667160 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:51.764072 env[1228]: time="2024-02-08T23:24:51.764027699Z" level=info msg="CreateContainer within sandbox \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 8 23:24:51.773147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd-rootfs.mount: Deactivated successfully.
Feb 8 23:24:51.798088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1130470197.mount: Deactivated successfully.
Feb 8 23:24:51.804211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2409854531.mount: Deactivated successfully.
Feb 8 23:24:51.815006 env[1228]: time="2024-02-08T23:24:51.814972032Z" level=info msg="CreateContainer within sandbox \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b\""
Feb 8 23:24:51.815535 env[1228]: time="2024-02-08T23:24:51.815511046Z" level=info msg="StartContainer for \"123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b\""
Feb 8 23:24:51.833532 systemd[1]: Started cri-containerd-123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b.scope.
Feb 8 23:24:51.869757 env[1228]: time="2024-02-08T23:24:51.869717263Z" level=info msg="StartContainer for \"123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b\" returns successfully"
Feb 8 23:24:51.952326 kubelet[1752]: I0208 23:24:51.952297 1752 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 8 23:24:52.379567 kernel: Initializing XFRM netlink socket
Feb 8 23:24:52.668141 kubelet[1752]: E0208 23:24:52.668088 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:52.780771 kubelet[1752]: I0208 23:24:52.780730 1752 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rhftn" podStartSLOduration=12.717405179 podCreationTimestamp="2024-02-08 23:24:30 +0000 UTC" firstStartedPulling="2024-02-08 23:24:33.349406008 +0000 UTC m=+5.091897755" lastFinishedPulling="2024-02-08 23:24:43.412697975 +0000 UTC m=+15.155189722" observedRunningTime="2024-02-08 23:24:52.780677046 +0000 UTC m=+24.523168893" watchObservedRunningTime="2024-02-08 23:24:52.780697146 +0000 UTC m=+24.523188893"
Feb 8 23:24:53.669195 kubelet[1752]: E0208 23:24:53.669086 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:54.013763 systemd-networkd[1377]: cilium_host: Link UP
Feb 8 23:24:54.022813 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 8 23:24:54.022899 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 8 23:24:54.022488 systemd-networkd[1377]: cilium_net: Link UP
Feb 8 23:24:54.024370 systemd-networkd[1377]: cilium_net: Gained carrier
Feb 8 23:24:54.024575 systemd-networkd[1377]: cilium_host: Gained carrier
Feb 8 23:24:54.076507 systemd-networkd[1377]: cilium_net: Gained IPv6LL
Feb 8 23:24:54.219014 systemd-networkd[1377]: cilium_vxlan: Link UP
Feb 8 23:24:54.219023 systemd-networkd[1377]: cilium_vxlan: Gained carrier
Feb 8 23:24:54.448462 kernel: NET: Registered PF_ALG protocol family
Feb 8 23:24:54.669748 kubelet[1752]: E0208 23:24:54.669689 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:54.706479 systemd-networkd[1377]: cilium_host: Gained IPv6LL
Feb 8 23:24:55.160693 systemd-networkd[1377]: lxc_health: Link UP
Feb 8 23:24:55.186427 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 8 23:24:55.186187 systemd-networkd[1377]: lxc_health: Gained carrier
Feb 8 23:24:55.670400 kubelet[1752]: E0208 23:24:55.670303 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:56.050488 systemd-networkd[1377]: cilium_vxlan: Gained IPv6LL
Feb 8 23:24:56.307459 systemd-networkd[1377]: lxc_health: Gained IPv6LL
Feb 8 23:24:56.671510 kubelet[1752]: E0208 23:24:56.671346 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:57.671704 kubelet[1752]: E0208 23:24:57.671647 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:58.440882 kubelet[1752]: I0208 23:24:58.440828 1752 topology_manager.go:215] "Topology Admit Handler" podUID="4ac5cce6-57c7-4dae-b9d6-d4abe80de5a4" podNamespace="default" podName="nginx-deployment-6d5f899847-h5slv"
Feb 8 23:24:58.448795 systemd[1]: Created slice kubepods-besteffort-pod4ac5cce6_57c7_4dae_b9d6_d4abe80de5a4.slice.
Feb 8 23:24:58.466798 kubelet[1752]: I0208 23:24:58.466774 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7777x\" (UniqueName: \"kubernetes.io/projected/4ac5cce6-57c7-4dae-b9d6-d4abe80de5a4-kube-api-access-7777x\") pod \"nginx-deployment-6d5f899847-h5slv\" (UID: \"4ac5cce6-57c7-4dae-b9d6-d4abe80de5a4\") " pod="default/nginx-deployment-6d5f899847-h5slv"
Feb 8 23:24:58.671838 kubelet[1752]: E0208 23:24:58.671808 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:58.771724 env[1228]: time="2024-02-08T23:24:58.767073536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-h5slv,Uid:4ac5cce6-57c7-4dae-b9d6-d4abe80de5a4,Namespace:default,Attempt:0,}"
Feb 8 23:24:58.846463 systemd-networkd[1377]: lxcac3e5517f0ce: Link UP
Feb 8 23:24:58.858385 kernel: eth0: renamed from tmp1acc9
Feb 8 23:24:58.872459 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 8 23:24:58.872559 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcac3e5517f0ce: link becomes ready
Feb 8 23:24:58.870786 systemd-networkd[1377]: lxcac3e5517f0ce: Gained carrier
Feb 8 23:24:59.514082 env[1228]: time="2024-02-08T23:24:59.514011279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:24:59.514082 env[1228]: time="2024-02-08T23:24:59.514050779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:24:59.514391 env[1228]: time="2024-02-08T23:24:59.514063980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:24:59.514506 env[1228]: time="2024-02-08T23:24:59.514402987Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1acc9b69afd9fc54f5e68ef989d4d5f022a386e4d38106cbe977014f476d0b36 pid=2782 runtime=io.containerd.runc.v2
Feb 8 23:24:59.534780 systemd[1]: Started cri-containerd-1acc9b69afd9fc54f5e68ef989d4d5f022a386e4d38106cbe977014f476d0b36.scope.
Feb 8 23:24:59.572494 env[1228]: time="2024-02-08T23:24:59.572457216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-h5slv,Uid:4ac5cce6-57c7-4dae-b9d6-d4abe80de5a4,Namespace:default,Attempt:0,} returns sandbox id \"1acc9b69afd9fc54f5e68ef989d4d5f022a386e4d38106cbe977014f476d0b36\""
Feb 8 23:24:59.574058 env[1228]: time="2024-02-08T23:24:59.574028049Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 8 23:24:59.579646 systemd[1]: run-containerd-runc-k8s.io-1acc9b69afd9fc54f5e68ef989d4d5f022a386e4d38106cbe977014f476d0b36-runc.iYNeOw.mount: Deactivated successfully.
Feb 8 23:24:59.672510 kubelet[1752]: E0208 23:24:59.672437 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:00.018606 systemd-networkd[1377]: lxcac3e5517f0ce: Gained IPv6LL
Feb 8 23:25:00.673475 kubelet[1752]: E0208 23:25:00.673424 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:01.674502 kubelet[1752]: E0208 23:25:01.674446 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:02.674855 kubelet[1752]: E0208 23:25:02.674805 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:03.675348 kubelet[1752]: E0208 23:25:03.675305 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:04.154916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3503927486.mount: Deactivated successfully.
Feb 8 23:25:04.676301 kubelet[1752]: E0208 23:25:04.676243 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:05.121473 env[1228]: time="2024-02-08T23:25:05.121421603Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:25:05.126913 env[1228]: time="2024-02-08T23:25:05.126871402Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:25:05.130897 env[1228]: time="2024-02-08T23:25:05.130866775Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:25:05.134949 env[1228]: time="2024-02-08T23:25:05.134918849Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:25:05.135500 env[1228]: time="2024-02-08T23:25:05.135468959Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 8 23:25:05.137871 env[1228]: time="2024-02-08T23:25:05.137841902Z" level=info msg="CreateContainer within sandbox \"1acc9b69afd9fc54f5e68ef989d4d5f022a386e4d38106cbe977014f476d0b36\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 8 23:25:05.167434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3746447153.mount: Deactivated successfully.
Feb 8 23:25:05.179414 env[1228]: time="2024-02-08T23:25:05.179378857Z" level=info msg="CreateContainer within sandbox \"1acc9b69afd9fc54f5e68ef989d4d5f022a386e4d38106cbe977014f476d0b36\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"af4786af565360e0ede82f02d6cfd7d0058ec10a6acc5a168c6eb192eaae3f7b\""
Feb 8 23:25:05.179909 env[1228]: time="2024-02-08T23:25:05.179877266Z" level=info msg="StartContainer for \"af4786af565360e0ede82f02d6cfd7d0058ec10a6acc5a168c6eb192eaae3f7b\""
Feb 8 23:25:05.203173 systemd[1]: Started cri-containerd-af4786af565360e0ede82f02d6cfd7d0058ec10a6acc5a168c6eb192eaae3f7b.scope.
Feb 8 23:25:05.236589 env[1228]: time="2024-02-08T23:25:05.236547996Z" level=info msg="StartContainer for \"af4786af565360e0ede82f02d6cfd7d0058ec10a6acc5a168c6eb192eaae3f7b\" returns successfully"
Feb 8 23:25:05.677279 kubelet[1752]: E0208 23:25:05.677222 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:05.798628 kubelet[1752]: I0208 23:25:05.798593 1752 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-h5slv" podStartSLOduration=2.236286087 podCreationTimestamp="2024-02-08 23:24:58 +0000 UTC" firstStartedPulling="2024-02-08 23:24:59.573472037 +0000 UTC m=+31.315963884" lastFinishedPulling="2024-02-08 23:25:05.135745864 +0000 UTC m=+36.878237711" observedRunningTime="2024-02-08 23:25:05.798495913 +0000 UTC m=+37.540987660" watchObservedRunningTime="2024-02-08 23:25:05.798559914 +0000 UTC m=+37.541051661"
Feb 8 23:25:06.158896 systemd[1]: run-containerd-runc-k8s.io-af4786af565360e0ede82f02d6cfd7d0058ec10a6acc5a168c6eb192eaae3f7b-runc.EusMyg.mount: Deactivated successfully.
Feb 8 23:25:06.678069 kubelet[1752]: E0208 23:25:06.678019 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:07.679237 kubelet[1752]: E0208 23:25:07.679177 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:08.652825 kubelet[1752]: E0208 23:25:08.652763 1752 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:08.680015 kubelet[1752]: E0208 23:25:08.679979 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:09.410654 kubelet[1752]: I0208 23:25:09.410615 1752 topology_manager.go:215] "Topology Admit Handler" podUID="3a69f250-0d55-4ce9-b090-459eddc2dc78" podNamespace="default" podName="nfs-server-provisioner-0"
Feb 8 23:25:09.417078 systemd[1]: Created slice kubepods-besteffort-pod3a69f250_0d55_4ce9_b090_459eddc2dc78.slice.
Feb 8 23:25:09.418268 kubelet[1752]: I0208 23:25:09.418249 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3a69f250-0d55-4ce9-b090-459eddc2dc78-data\") pod \"nfs-server-provisioner-0\" (UID: \"3a69f250-0d55-4ce9-b090-459eddc2dc78\") " pod="default/nfs-server-provisioner-0" Feb 8 23:25:09.420912 kubelet[1752]: I0208 23:25:09.420896 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p4gr\" (UniqueName: \"kubernetes.io/projected/3a69f250-0d55-4ce9-b090-459eddc2dc78-kube-api-access-2p4gr\") pod \"nfs-server-provisioner-0\" (UID: \"3a69f250-0d55-4ce9-b090-459eddc2dc78\") " pod="default/nfs-server-provisioner-0" Feb 8 23:25:09.680978 kubelet[1752]: E0208 23:25:09.680850 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:09.722030 env[1228]: time="2024-02-08T23:25:09.721978407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3a69f250-0d55-4ce9-b090-459eddc2dc78,Namespace:default,Attempt:0,}" Feb 8 23:25:09.822761 systemd-networkd[1377]: lxc68b0a89e3929: Link UP Feb 8 23:25:09.829401 kernel: eth0: renamed from tmp0fcaa Feb 8 23:25:09.839715 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:25:09.840090 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc68b0a89e3929: link becomes ready Feb 8 23:25:09.839863 systemd-networkd[1377]: lxc68b0a89e3929: Gained carrier Feb 8 23:25:10.023610 env[1228]: time="2024-02-08T23:25:10.023542069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:25:10.023795 env[1228]: time="2024-02-08T23:25:10.023592670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:25:10.023881 env[1228]: time="2024-02-08T23:25:10.023783773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:25:10.024107 env[1228]: time="2024-02-08T23:25:10.024065177Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fcaa6e0e28910f2ac92239727df65272654fb7b8f198af1cf3dee8ec3164082 pid=2910 runtime=io.containerd.runc.v2 Feb 8 23:25:10.044257 systemd[1]: Started cri-containerd-0fcaa6e0e28910f2ac92239727df65272654fb7b8f198af1cf3dee8ec3164082.scope. Feb 8 23:25:10.084265 env[1228]: time="2024-02-08T23:25:10.084222045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3a69f250-0d55-4ce9-b090-459eddc2dc78,Namespace:default,Attempt:0,} returns sandbox id \"0fcaa6e0e28910f2ac92239727df65272654fb7b8f198af1cf3dee8ec3164082\"" Feb 8 23:25:10.085927 env[1228]: time="2024-02-08T23:25:10.085886672Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 8 23:25:10.531845 systemd[1]: run-containerd-runc-k8s.io-0fcaa6e0e28910f2ac92239727df65272654fb7b8f198af1cf3dee8ec3164082-runc.13iHYh.mount: Deactivated successfully. 
Feb 8 23:25:10.681539 kubelet[1752]: E0208 23:25:10.681481 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:10.962593 systemd-networkd[1377]: lxc68b0a89e3929: Gained IPv6LL Feb 8 23:25:11.682553 kubelet[1752]: E0208 23:25:11.682501 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:12.683167 kubelet[1752]: E0208 23:25:12.683121 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:12.963028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1841021520.mount: Deactivated successfully. Feb 8 23:25:13.683899 kubelet[1752]: E0208 23:25:13.683835 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:14.684241 kubelet[1752]: E0208 23:25:14.684177 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:14.963623 env[1228]: time="2024-02-08T23:25:14.963573627Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:25:14.971855 env[1228]: time="2024-02-08T23:25:14.971797448Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:25:14.982705 env[1228]: time="2024-02-08T23:25:14.982667607Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:25:14.988707 env[1228]: time="2024-02-08T23:25:14.988672595Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:25:14.989430 env[1228]: time="2024-02-08T23:25:14.989399505Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 8 23:25:14.991744 env[1228]: time="2024-02-08T23:25:14.991709239Z" level=info msg="CreateContainer within sandbox \"0fcaa6e0e28910f2ac92239727df65272654fb7b8f198af1cf3dee8ec3164082\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 8 23:25:15.021378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2927094586.mount: Deactivated successfully. Feb 8 23:25:15.038005 env[1228]: time="2024-02-08T23:25:15.037966904Z" level=info msg="CreateContainer within sandbox \"0fcaa6e0e28910f2ac92239727df65272654fb7b8f198af1cf3dee8ec3164082\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ea073586c7b6b833b33a325466eb442f8c91af4f08933b2ce68ed8c22ef1bbe3\"" Feb 8 23:25:15.038579 env[1228]: time="2024-02-08T23:25:15.038545812Z" level=info msg="StartContainer for \"ea073586c7b6b833b33a325466eb442f8c91af4f08933b2ce68ed8c22ef1bbe3\"" Feb 8 23:25:15.057518 systemd[1]: Started cri-containerd-ea073586c7b6b833b33a325466eb442f8c91af4f08933b2ce68ed8c22ef1bbe3.scope. 
Feb 8 23:25:15.093089 env[1228]: time="2024-02-08T23:25:15.093046392Z" level=info msg="StartContainer for \"ea073586c7b6b833b33a325466eb442f8c91af4f08933b2ce68ed8c22ef1bbe3\" returns successfully" Feb 8 23:25:15.685029 kubelet[1752]: E0208 23:25:15.684972 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:15.832060 kubelet[1752]: I0208 23:25:15.832023 1752 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.92785722 podCreationTimestamp="2024-02-08 23:25:09 +0000 UTC" firstStartedPulling="2024-02-08 23:25:10.085567567 +0000 UTC m=+41.828059314" lastFinishedPulling="2024-02-08 23:25:14.98969141 +0000 UTC m=+46.732183157" observedRunningTime="2024-02-08 23:25:15.831475156 +0000 UTC m=+47.573967003" watchObservedRunningTime="2024-02-08 23:25:15.831981063 +0000 UTC m=+47.574472910" Feb 8 23:25:16.686152 kubelet[1752]: E0208 23:25:16.686091 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:17.686494 kubelet[1752]: E0208 23:25:17.686435 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:18.687198 kubelet[1752]: E0208 23:25:18.687141 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:19.688210 kubelet[1752]: E0208 23:25:19.688155 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:20.688747 kubelet[1752]: E0208 23:25:20.688690 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:21.688918 kubelet[1752]: E0208 23:25:21.688867 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 8 23:25:22.689757 kubelet[1752]: E0208 23:25:22.689697 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:23.690889 kubelet[1752]: E0208 23:25:23.690827 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:24.451287 kubelet[1752]: I0208 23:25:24.451240 1752 topology_manager.go:215] "Topology Admit Handler" podUID="a4e6f28a-8717-4879-9e05-5dd72acbaf66" podNamespace="default" podName="test-pod-1" Feb 8 23:25:24.460193 systemd[1]: Created slice kubepods-besteffort-poda4e6f28a_8717_4879_9e05_5dd72acbaf66.slice. Feb 8 23:25:24.602782 kubelet[1752]: I0208 23:25:24.602727 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-60a6ad5e-f726-46df-8697-a2e611d5d02a\" (UniqueName: \"kubernetes.io/nfs/a4e6f28a-8717-4879-9e05-5dd72acbaf66-pvc-60a6ad5e-f726-46df-8697-a2e611d5d02a\") pod \"test-pod-1\" (UID: \"a4e6f28a-8717-4879-9e05-5dd72acbaf66\") " pod="default/test-pod-1" Feb 8 23:25:24.602782 kubelet[1752]: I0208 23:25:24.602791 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twsms\" (UniqueName: \"kubernetes.io/projected/a4e6f28a-8717-4879-9e05-5dd72acbaf66-kube-api-access-twsms\") pod \"test-pod-1\" (UID: \"a4e6f28a-8717-4879-9e05-5dd72acbaf66\") " pod="default/test-pod-1" Feb 8 23:25:24.691417 kubelet[1752]: E0208 23:25:24.691379 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:24.792387 kernel: FS-Cache: Loaded Feb 8 23:25:24.871960 kernel: RPC: Registered named UNIX socket transport module. Feb 8 23:25:24.872083 kernel: RPC: Registered udp transport module. Feb 8 23:25:24.872106 kernel: RPC: Registered tcp transport module. 
Feb 8 23:25:24.876022 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 8 23:25:25.095388 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 8 23:25:25.336563 kernel: NFS: Registering the id_resolver key type Feb 8 23:25:25.338185 kernel: Key type id_resolver registered Feb 8 23:25:25.338702 kernel: Key type id_legacy registered Feb 8 23:25:25.542676 nfsidmap[3028]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-5de6cd8e96' Feb 8 23:25:25.549032 nfsidmap[3029]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-5de6cd8e96' Feb 8 23:25:25.665334 env[1228]: time="2024-02-08T23:25:25.664844858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a4e6f28a-8717-4879-9e05-5dd72acbaf66,Namespace:default,Attempt:0,}" Feb 8 23:25:25.692792 kubelet[1752]: E0208 23:25:25.692508 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:25.717749 systemd-networkd[1377]: lxc48d7037db0d7: Link UP Feb 8 23:25:25.726395 kernel: eth0: renamed from tmp9213d Feb 8 23:25:25.741710 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:25:25.741779 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc48d7037db0d7: link becomes ready Feb 8 23:25:25.742006 systemd-networkd[1377]: lxc48d7037db0d7: Gained carrier Feb 8 23:25:25.937503 env[1228]: time="2024-02-08T23:25:25.937437689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:25:25.937503 env[1228]: time="2024-02-08T23:25:25.937471989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:25:25.937503 env[1228]: time="2024-02-08T23:25:25.937485589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:25:25.937916 env[1228]: time="2024-02-08T23:25:25.937870094Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9213df98798148b9b77e584f4d0f1adcb842b48979835c5c8b3344da21ec9ccc pid=3055 runtime=io.containerd.runc.v2 Feb 8 23:25:25.957782 systemd[1]: Started cri-containerd-9213df98798148b9b77e584f4d0f1adcb842b48979835c5c8b3344da21ec9ccc.scope. Feb 8 23:25:25.994385 env[1228]: time="2024-02-08T23:25:25.994332142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a4e6f28a-8717-4879-9e05-5dd72acbaf66,Namespace:default,Attempt:0,} returns sandbox id \"9213df98798148b9b77e584f4d0f1adcb842b48979835c5c8b3344da21ec9ccc\"" Feb 8 23:25:25.996101 env[1228]: time="2024-02-08T23:25:25.996079562Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 8 23:25:26.681670 env[1228]: time="2024-02-08T23:25:26.681623076Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:25:26.689344 env[1228]: time="2024-02-08T23:25:26.689303962Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:25:26.692662 env[1228]: time="2024-02-08T23:25:26.692633899Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:25:26.693378 kubelet[1752]: E0208 23:25:26.693319 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:26.697283 env[1228]: time="2024-02-08T23:25:26.697252851Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:25:26.697923 env[1228]: time="2024-02-08T23:25:26.697891359Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 8 23:25:26.701585 env[1228]: time="2024-02-08T23:25:26.701553300Z" level=info msg="CreateContainer within sandbox \"9213df98798148b9b77e584f4d0f1adcb842b48979835c5c8b3344da21ec9ccc\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 8 23:25:26.727112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount945023209.mount: Deactivated successfully. Feb 8 23:25:26.741266 env[1228]: time="2024-02-08T23:25:26.741233846Z" level=info msg="CreateContainer within sandbox \"9213df98798148b9b77e584f4d0f1adcb842b48979835c5c8b3344da21ec9ccc\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"95edb3c34b2a09ba0a9e1165a6a5ff3e020f396e09afc4e795339d1215ae494b\"" Feb 8 23:25:26.741779 env[1228]: time="2024-02-08T23:25:26.741754052Z" level=info msg="StartContainer for \"95edb3c34b2a09ba0a9e1165a6a5ff3e020f396e09afc4e795339d1215ae494b\"" Feb 8 23:25:26.758679 systemd[1]: Started cri-containerd-95edb3c34b2a09ba0a9e1165a6a5ff3e020f396e09afc4e795339d1215ae494b.scope. 
Feb 8 23:25:26.792005 env[1228]: time="2024-02-08T23:25:26.791962817Z" level=info msg="StartContainer for \"95edb3c34b2a09ba0a9e1165a6a5ff3e020f396e09afc4e795339d1215ae494b\" returns successfully" Feb 8 23:25:26.851505 kubelet[1752]: I0208 23:25:26.851409 1752 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.148674779 podCreationTimestamp="2024-02-08 23:25:10 +0000 UTC" firstStartedPulling="2024-02-08 23:25:25.995453455 +0000 UTC m=+57.737945202" lastFinishedPulling="2024-02-08 23:25:26.698148261 +0000 UTC m=+58.440640008" observedRunningTime="2024-02-08 23:25:26.851183883 +0000 UTC m=+58.593675730" watchObservedRunningTime="2024-02-08 23:25:26.851369585 +0000 UTC m=+58.593861432" Feb 8 23:25:27.090587 systemd-networkd[1377]: lxc48d7037db0d7: Gained IPv6LL Feb 8 23:25:27.694008 kubelet[1752]: E0208 23:25:27.693945 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:28.652743 kubelet[1752]: E0208 23:25:28.652687 1752 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:28.694459 kubelet[1752]: E0208 23:25:28.694419 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:29.695391 kubelet[1752]: E0208 23:25:29.695333 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:30.696029 kubelet[1752]: E0208 23:25:30.695975 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:31.696513 kubelet[1752]: E0208 23:25:31.696457 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:32.696910 kubelet[1752]: E0208 23:25:32.696846 1752 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:33.141551 env[1228]: time="2024-02-08T23:25:33.141432593Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:25:33.146483 env[1228]: time="2024-02-08T23:25:33.146448642Z" level=info msg="StopContainer for \"123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b\" with timeout 2 (s)" Feb 8 23:25:33.146744 env[1228]: time="2024-02-08T23:25:33.146714345Z" level=info msg="Stop container \"123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b\" with signal terminated" Feb 8 23:25:33.153790 systemd-networkd[1377]: lxc_health: Link DOWN Feb 8 23:25:33.153800 systemd-networkd[1377]: lxc_health: Lost carrier Feb 8 23:25:33.175865 systemd[1]: cri-containerd-123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b.scope: Deactivated successfully. Feb 8 23:25:33.176159 systemd[1]: cri-containerd-123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b.scope: Consumed 6.376s CPU time. Feb 8 23:25:33.194759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b-rootfs.mount: Deactivated successfully. 
Feb 8 23:25:33.697318 kubelet[1752]: E0208 23:25:33.697263 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:33.743326 kubelet[1752]: E0208 23:25:33.743294 1752 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:25:34.698124 kubelet[1752]: E0208 23:25:34.698072 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:35.153781 env[1228]: time="2024-02-08T23:25:35.153647361Z" level=info msg="Kill container \"123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b\"" Feb 8 23:25:35.699002 kubelet[1752]: E0208 23:25:35.698946 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:36.469126 env[1228]: time="2024-02-08T23:25:36.469068088Z" level=info msg="shim disconnected" id=123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b Feb 8 23:25:36.469126 env[1228]: time="2024-02-08T23:25:36.469123388Z" level=warning msg="cleaning up after shim disconnected" id=123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b namespace=k8s.io Feb 8 23:25:36.469126 env[1228]: time="2024-02-08T23:25:36.469134788Z" level=info msg="cleaning up dead shim" Feb 8 23:25:36.476949 env[1228]: time="2024-02-08T23:25:36.476907060Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3190 runtime=io.containerd.runc.v2\n" Feb 8 23:25:36.483384 env[1228]: time="2024-02-08T23:25:36.483321520Z" level=info msg="StopContainer for \"123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b\" returns successfully" Feb 8 23:25:36.484069 env[1228]: time="2024-02-08T23:25:36.483965626Z" level=info msg="StopPodSandbox for 
\"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\"" Feb 8 23:25:36.484069 env[1228]: time="2024-02-08T23:25:36.484035426Z" level=info msg="Container to stop \"16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:25:36.484069 env[1228]: time="2024-02-08T23:25:36.484056427Z" level=info msg="Container to stop \"d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:25:36.489522 env[1228]: time="2024-02-08T23:25:36.484071927Z" level=info msg="Container to stop \"7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:25:36.489522 env[1228]: time="2024-02-08T23:25:36.484086827Z" level=info msg="Container to stop \"123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:25:36.489522 env[1228]: time="2024-02-08T23:25:36.484101227Z" level=info msg="Container to stop \"a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:25:36.486402 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b-shm.mount: Deactivated successfully. Feb 8 23:25:36.491691 systemd[1]: cri-containerd-619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b.scope: Deactivated successfully. Feb 8 23:25:36.508827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b-rootfs.mount: Deactivated successfully. 
Feb 8 23:25:36.521539 env[1228]: time="2024-02-08T23:25:36.521490073Z" level=info msg="shim disconnected" id=619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b Feb 8 23:25:36.521802 env[1228]: time="2024-02-08T23:25:36.521773976Z" level=warning msg="cleaning up after shim disconnected" id=619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b namespace=k8s.io Feb 8 23:25:36.521875 env[1228]: time="2024-02-08T23:25:36.521800276Z" level=info msg="cleaning up dead shim" Feb 8 23:25:36.529751 env[1228]: time="2024-02-08T23:25:36.529717049Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3220 runtime=io.containerd.runc.v2\n" Feb 8 23:25:36.530049 env[1228]: time="2024-02-08T23:25:36.530022552Z" level=info msg="TearDown network for sandbox \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\" successfully" Feb 8 23:25:36.530121 env[1228]: time="2024-02-08T23:25:36.530047153Z" level=info msg="StopPodSandbox for \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\" returns successfully" Feb 8 23:25:36.675174 kubelet[1752]: I0208 23:25:36.675113 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-cilium-run\") pod \"fbd00318-b8fd-407e-8b79-d63dccdf3906\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " Feb 8 23:25:36.675174 kubelet[1752]: I0208 23:25:36.675177 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-bpf-maps\") pod \"fbd00318-b8fd-407e-8b79-d63dccdf3906\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " Feb 8 23:25:36.675521 kubelet[1752]: I0208 23:25:36.675223 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/fbd00318-b8fd-407e-8b79-d63dccdf3906-cilium-config-path\") pod \"fbd00318-b8fd-407e-8b79-d63dccdf3906\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " Feb 8 23:25:36.675521 kubelet[1752]: I0208 23:25:36.675249 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-hostproc\") pod \"fbd00318-b8fd-407e-8b79-d63dccdf3906\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " Feb 8 23:25:36.675521 kubelet[1752]: I0208 23:25:36.675277 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-xtables-lock\") pod \"fbd00318-b8fd-407e-8b79-d63dccdf3906\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " Feb 8 23:25:36.675521 kubelet[1752]: I0208 23:25:36.675305 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-etc-cni-netd\") pod \"fbd00318-b8fd-407e-8b79-d63dccdf3906\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " Feb 8 23:25:36.675521 kubelet[1752]: I0208 23:25:36.675333 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-cilium-cgroup\") pod \"fbd00318-b8fd-407e-8b79-d63dccdf3906\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " Feb 8 23:25:36.675521 kubelet[1752]: I0208 23:25:36.675390 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fbd00318-b8fd-407e-8b79-d63dccdf3906-clustermesh-secrets\") pod \"fbd00318-b8fd-407e-8b79-d63dccdf3906\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") " Feb 8 23:25:36.675839 kubelet[1752]: I0208 23:25:36.675507 1752 
operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fbd00318-b8fd-407e-8b79-d63dccdf3906" (UID: "fbd00318-b8fd-407e-8b79-d63dccdf3906"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:25:36.675839 kubelet[1752]: I0208 23:25:36.675563 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fbd00318-b8fd-407e-8b79-d63dccdf3906" (UID: "fbd00318-b8fd-407e-8b79-d63dccdf3906"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:25:36.675839 kubelet[1752]: I0208 23:25:36.675591 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fbd00318-b8fd-407e-8b79-d63dccdf3906" (UID: "fbd00318-b8fd-407e-8b79-d63dccdf3906"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:25:36.676959 kubelet[1752]: I0208 23:25:36.676112 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fbd00318-b8fd-407e-8b79-d63dccdf3906" (UID: "fbd00318-b8fd-407e-8b79-d63dccdf3906"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:25:36.676959 kubelet[1752]: I0208 23:25:36.676160 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-hostproc" (OuterVolumeSpecName: "hostproc") pod "fbd00318-b8fd-407e-8b79-d63dccdf3906" (UID: "fbd00318-b8fd-407e-8b79-d63dccdf3906"). 
InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:25:36.676959 kubelet[1752]: I0208 23:25:36.676189 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fbd00318-b8fd-407e-8b79-d63dccdf3906" (UID: "fbd00318-b8fd-407e-8b79-d63dccdf3906"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:25:36.676959 kubelet[1752]: I0208 23:25:36.676206 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-host-proc-sys-net\") pod \"fbd00318-b8fd-407e-8b79-d63dccdf3906\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") "
Feb 8 23:25:36.676959 kubelet[1752]: I0208 23:25:36.676250 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-cni-path\") pod \"fbd00318-b8fd-407e-8b79-d63dccdf3906\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") "
Feb 8 23:25:36.677298 kubelet[1752]: I0208 23:25:36.676293 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkct2\" (UniqueName: \"kubernetes.io/projected/fbd00318-b8fd-407e-8b79-d63dccdf3906-kube-api-access-hkct2\") pod \"fbd00318-b8fd-407e-8b79-d63dccdf3906\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") "
Feb 8 23:25:36.677298 kubelet[1752]: I0208 23:25:36.676327 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fbd00318-b8fd-407e-8b79-d63dccdf3906-hubble-tls\") pod \"fbd00318-b8fd-407e-8b79-d63dccdf3906\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") "
Feb 8 23:25:36.677298 kubelet[1752]: I0208 23:25:36.676377 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-lib-modules\") pod \"fbd00318-b8fd-407e-8b79-d63dccdf3906\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") "
Feb 8 23:25:36.677298 kubelet[1752]: I0208 23:25:36.676411 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-host-proc-sys-kernel\") pod \"fbd00318-b8fd-407e-8b79-d63dccdf3906\" (UID: \"fbd00318-b8fd-407e-8b79-d63dccdf3906\") "
Feb 8 23:25:36.677298 kubelet[1752]: I0208 23:25:36.676468 1752 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-host-proc-sys-net\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:36.677298 kubelet[1752]: I0208 23:25:36.676492 1752 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-cilium-run\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:36.677298 kubelet[1752]: I0208 23:25:36.676511 1752 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-bpf-maps\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:36.677699 kubelet[1752]: I0208 23:25:36.676530 1752 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-hostproc\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:36.677699 kubelet[1752]: I0208 23:25:36.676548 1752 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-xtables-lock\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:36.677699 kubelet[1752]: I0208 23:25:36.676566 1752 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-etc-cni-netd\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:36.677699 kubelet[1752]: I0208 23:25:36.676591 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fbd00318-b8fd-407e-8b79-d63dccdf3906" (UID: "fbd00318-b8fd-407e-8b79-d63dccdf3906"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:25:36.677699 kubelet[1752]: I0208 23:25:36.676623 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-cni-path" (OuterVolumeSpecName: "cni-path") pod "fbd00318-b8fd-407e-8b79-d63dccdf3906" (UID: "fbd00318-b8fd-407e-8b79-d63dccdf3906"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:25:36.678925 kubelet[1752]: I0208 23:25:36.678886 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbd00318-b8fd-407e-8b79-d63dccdf3906-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fbd00318-b8fd-407e-8b79-d63dccdf3906" (UID: "fbd00318-b8fd-407e-8b79-d63dccdf3906"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 8 23:25:36.679086 kubelet[1752]: I0208 23:25:36.678973 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fbd00318-b8fd-407e-8b79-d63dccdf3906" (UID: "fbd00318-b8fd-407e-8b79-d63dccdf3906"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:25:36.682093 kubelet[1752]: I0208 23:25:36.682066 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbd00318-b8fd-407e-8b79-d63dccdf3906-kube-api-access-hkct2" (OuterVolumeSpecName: "kube-api-access-hkct2") pod "fbd00318-b8fd-407e-8b79-d63dccdf3906" (UID: "fbd00318-b8fd-407e-8b79-d63dccdf3906"). InnerVolumeSpecName "kube-api-access-hkct2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 8 23:25:36.683758 systemd[1]: var-lib-kubelet-pods-fbd00318\x2db8fd\x2d407e\x2d8b79\x2dd63dccdf3906-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhkct2.mount: Deactivated successfully.
Feb 8 23:25:36.686950 systemd[1]: var-lib-kubelet-pods-fbd00318\x2db8fd\x2d407e\x2d8b79\x2dd63dccdf3906-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 8 23:25:36.688333 kubelet[1752]: I0208 23:25:36.688308 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fbd00318-b8fd-407e-8b79-d63dccdf3906" (UID: "fbd00318-b8fd-407e-8b79-d63dccdf3906"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:25:36.688523 kubelet[1752]: I0208 23:25:36.688503 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbd00318-b8fd-407e-8b79-d63dccdf3906-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fbd00318-b8fd-407e-8b79-d63dccdf3906" (UID: "fbd00318-b8fd-407e-8b79-d63dccdf3906"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 8 23:25:36.690903 kubelet[1752]: I0208 23:25:36.690880 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbd00318-b8fd-407e-8b79-d63dccdf3906-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fbd00318-b8fd-407e-8b79-d63dccdf3906" (UID: "fbd00318-b8fd-407e-8b79-d63dccdf3906"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 8 23:25:36.690913 systemd[1]: var-lib-kubelet-pods-fbd00318\x2db8fd\x2d407e\x2d8b79\x2dd63dccdf3906-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 8 23:25:36.699264 kubelet[1752]: E0208 23:25:36.699246 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:36.704207 systemd[1]: Removed slice kubepods-burstable-podfbd00318_b8fd_407e_8b79_d63dccdf3906.slice.
Feb 8 23:25:36.704315 systemd[1]: kubepods-burstable-podfbd00318_b8fd_407e_8b79_d63dccdf3906.slice: Consumed 6.478s CPU time.
Feb 8 23:25:36.777534 kubelet[1752]: I0208 23:25:36.777326 1752 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-cilium-cgroup\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:36.777534 kubelet[1752]: I0208 23:25:36.777389 1752 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hkct2\" (UniqueName: \"kubernetes.io/projected/fbd00318-b8fd-407e-8b79-d63dccdf3906-kube-api-access-hkct2\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:36.777534 kubelet[1752]: I0208 23:25:36.777406 1752 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fbd00318-b8fd-407e-8b79-d63dccdf3906-hubble-tls\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:36.777534 kubelet[1752]: I0208 23:25:36.777425 1752 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-lib-modules\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:36.777534 kubelet[1752]: I0208 23:25:36.777442 1752 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-host-proc-sys-kernel\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:36.777534 kubelet[1752]: I0208 23:25:36.777458 1752 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fbd00318-b8fd-407e-8b79-d63dccdf3906-clustermesh-secrets\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:36.777534 kubelet[1752]: I0208 23:25:36.777472 1752 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fbd00318-b8fd-407e-8b79-d63dccdf3906-cni-path\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:36.777534 kubelet[1752]: I0208 23:25:36.777491 1752 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbd00318-b8fd-407e-8b79-d63dccdf3906-cilium-config-path\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:36.866623 kubelet[1752]: I0208 23:25:36.866594 1752 scope.go:117] "RemoveContainer" containerID="123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b"
Feb 8 23:25:36.869907 env[1228]: time="2024-02-08T23:25:36.869861900Z" level=info msg="RemoveContainer for \"123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b\""
Feb 8 23:25:36.877007 env[1228]: time="2024-02-08T23:25:36.876969365Z" level=info msg="RemoveContainer for \"123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b\" returns successfully"
Feb 8 23:25:36.877245 kubelet[1752]: I0208 23:25:36.877223 1752 scope.go:117] "RemoveContainer" containerID="7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd"
Feb 8 23:25:36.878264 env[1228]: time="2024-02-08T23:25:36.878235977Z" level=info msg="RemoveContainer for \"7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd\""
Feb 8 23:25:36.886404 env[1228]: time="2024-02-08T23:25:36.886369652Z" level=info msg="RemoveContainer for \"7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd\" returns successfully"
Feb 8 23:25:36.886561 kubelet[1752]: I0208 23:25:36.886544 1752 scope.go:117] "RemoveContainer" containerID="a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9"
Feb 8 23:25:36.887525 env[1228]: time="2024-02-08T23:25:36.887497163Z" level=info msg="RemoveContainer for \"a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9\""
Feb 8 23:25:36.896599 env[1228]: time="2024-02-08T23:25:36.896568147Z" level=info msg="RemoveContainer for \"a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9\" returns successfully"
Feb 8 23:25:36.896754 kubelet[1752]: I0208 23:25:36.896736 1752 scope.go:117] "RemoveContainer" containerID="d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e"
Feb 8 23:25:36.897829 env[1228]: time="2024-02-08T23:25:36.897800758Z" level=info msg="RemoveContainer for \"d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e\""
Feb 8 23:25:36.898624 kubelet[1752]: I0208 23:25:36.898601 1752 topology_manager.go:215] "Topology Admit Handler" podUID="094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" podNamespace="kube-system" podName="cilium-8n98j"
Feb 8 23:25:36.898720 kubelet[1752]: E0208 23:25:36.898651 1752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbd00318-b8fd-407e-8b79-d63dccdf3906" containerName="mount-cgroup"
Feb 8 23:25:36.898720 kubelet[1752]: E0208 23:25:36.898664 1752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbd00318-b8fd-407e-8b79-d63dccdf3906" containerName="clean-cilium-state"
Feb 8 23:25:36.898720 kubelet[1752]: E0208 23:25:36.898673 1752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbd00318-b8fd-407e-8b79-d63dccdf3906" containerName="apply-sysctl-overwrites"
Feb 8 23:25:36.898720 kubelet[1752]: E0208 23:25:36.898682 1752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbd00318-b8fd-407e-8b79-d63dccdf3906" containerName="mount-bpf-fs"
Feb 8 23:25:36.898720 kubelet[1752]: E0208 23:25:36.898715 1752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbd00318-b8fd-407e-8b79-d63dccdf3906" containerName="cilium-agent"
Feb 8 23:25:36.898919 kubelet[1752]: I0208 23:25:36.898749 1752 memory_manager.go:346] "RemoveStaleState removing state" podUID="fbd00318-b8fd-407e-8b79-d63dccdf3906" containerName="cilium-agent"
Feb 8 23:25:36.901883 kubelet[1752]: I0208 23:25:36.901549 1752 topology_manager.go:215] "Topology Admit Handler" podUID="af463ab4-e693-4b71-8401-a0f85de5fe40" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-4lbkt"
Feb 8 23:25:36.904503 systemd[1]: Created slice kubepods-burstable-pod094cfa5b_ab14_4485_9a5a_1cd6b15bb5ad.slice.
Feb 8 23:25:36.906264 env[1228]: time="2024-02-08T23:25:36.906231736Z" level=info msg="RemoveContainer for \"d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e\" returns successfully"
Feb 8 23:25:36.906700 kubelet[1752]: I0208 23:25:36.906681 1752 scope.go:117] "RemoveContainer" containerID="16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a"
Feb 8 23:25:36.907711 env[1228]: time="2024-02-08T23:25:36.907673050Z" level=info msg="RemoveContainer for \"16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a\""
Feb 8 23:25:36.912320 systemd[1]: Created slice kubepods-besteffort-podaf463ab4_e693_4b71_8401_a0f85de5fe40.slice.
Feb 8 23:25:36.917752 env[1228]: time="2024-02-08T23:25:36.917717643Z" level=info msg="RemoveContainer for \"16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a\" returns successfully"
Feb 8 23:25:36.917921 kubelet[1752]: I0208 23:25:36.917902 1752 scope.go:117] "RemoveContainer" containerID="123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b"
Feb 8 23:25:36.918156 env[1228]: time="2024-02-08T23:25:36.918094646Z" level=error msg="ContainerStatus for \"123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b\": not found"
Feb 8 23:25:36.918311 kubelet[1752]: E0208 23:25:36.918294 1752 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b\": not found" containerID="123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b"
Feb 8 23:25:36.918446 kubelet[1752]: I0208 23:25:36.918428 1752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b"} err="failed to get container status \"123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b\": rpc error: code = NotFound desc = an error occurred when try to find container \"123d464c47ede82b115211f0ae89983093e4fab58510373ca2ae3c8bc6731d9b\": not found"
Feb 8 23:25:36.918516 kubelet[1752]: I0208 23:25:36.918449 1752 scope.go:117] "RemoveContainer" containerID="7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd"
Feb 8 23:25:36.918717 env[1228]: time="2024-02-08T23:25:36.918669652Z" level=error msg="ContainerStatus for \"7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd\": not found"
Feb 8 23:25:36.918865 kubelet[1752]: E0208 23:25:36.918849 1752 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd\": not found" containerID="7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd"
Feb 8 23:25:36.918942 kubelet[1752]: I0208 23:25:36.918884 1752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd"} err="failed to get container status \"7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"7ce7211bb4f820be0e3673d8ce0d90a1d405707ab2d717ed8df7365cf7d725bd\": not found"
Feb 8 23:25:36.918942 kubelet[1752]: I0208 23:25:36.918908 1752 scope.go:117] "RemoveContainer" containerID="a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9"
Feb 8 23:25:36.919137 env[1228]: time="2024-02-08T23:25:36.919088955Z" level=error msg="ContainerStatus for \"a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9\": not found"
Feb 8 23:25:36.919249 kubelet[1752]: E0208 23:25:36.919231 1752 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9\": not found" containerID="a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9"
Feb 8 23:25:36.919320 kubelet[1752]: I0208 23:25:36.919261 1752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9"} err="failed to get container status \"a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9\": rpc error: code = NotFound desc = an error occurred when try to find container \"a5b7afb5066ab8ea31e8924d1df06421c27727c0a361ce18d1ed52a20df3f5c9\": not found"
Feb 8 23:25:36.919320 kubelet[1752]: I0208 23:25:36.919273 1752 scope.go:117] "RemoveContainer" containerID="d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e"
Feb 8 23:25:36.919504 env[1228]: time="2024-02-08T23:25:36.919453759Z" level=error msg="ContainerStatus for \"d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e\": not found"
Feb 8 23:25:36.919653 kubelet[1752]: E0208 23:25:36.919635 1752 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e\": not found" containerID="d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e"
Feb 8 23:25:36.919727 kubelet[1752]: I0208 23:25:36.919665 1752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e"} err="failed to get container status \"d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4a44b678ec0f8ed01f13cda0792caf98c05984064ab8edc9bcab41e5460159e\": not found"
Feb 8 23:25:36.919727 kubelet[1752]: I0208 23:25:36.919677 1752 scope.go:117] "RemoveContainer" containerID="16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a"
Feb 8 23:25:36.919957 env[1228]: time="2024-02-08T23:25:36.919913463Z" level=error msg="ContainerStatus for \"16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a\": not found"
Feb 8 23:25:36.920120 kubelet[1752]: E0208 23:25:36.920103 1752 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a\": not found" containerID="16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a"
Feb 8 23:25:36.920181 kubelet[1752]: I0208 23:25:36.920148 1752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a"} err="failed to get container status \"16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a\": rpc error: code = NotFound desc = an error occurred when try to find container \"16f66901659090d2a80895200903d60c29e8bc2733a63e1e447846fbb342290a\": not found"
Feb 8 23:25:37.080773 kubelet[1752]: I0208 23:25:37.078867 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-bpf-maps\") pod \"cilium-8n98j\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") " pod="kube-system/cilium-8n98j"
Feb 8 23:25:37.080773 kubelet[1752]: I0208 23:25:37.079017 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-hostproc\") pod \"cilium-8n98j\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") " pod="kube-system/cilium-8n98j"
Feb 8 23:25:37.080773 kubelet[1752]: I0208 23:25:37.079155 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cilium-cgroup\") pod \"cilium-8n98j\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") " pod="kube-system/cilium-8n98j"
Feb 8 23:25:37.080773 kubelet[1752]: I0208 23:25:37.079245 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-lib-modules\") pod \"cilium-8n98j\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") " pod="kube-system/cilium-8n98j"
Feb 8 23:25:37.080773 kubelet[1752]: I0208 23:25:37.079328 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-xtables-lock\") pod \"cilium-8n98j\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") " pod="kube-system/cilium-8n98j"
Feb 8 23:25:37.080773 kubelet[1752]: I0208 23:25:37.079402 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-clustermesh-secrets\") pod \"cilium-8n98j\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") " pod="kube-system/cilium-8n98j"
Feb 8 23:25:37.081167 kubelet[1752]: I0208 23:25:37.079487 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cilium-ipsec-secrets\") pod \"cilium-8n98j\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") " pod="kube-system/cilium-8n98j"
Feb 8 23:25:37.081167 kubelet[1752]: I0208 23:25:37.079565 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-hubble-tls\") pod \"cilium-8n98j\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") " pod="kube-system/cilium-8n98j"
Feb 8 23:25:37.081167 kubelet[1752]: I0208 23:25:37.079647 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-host-proc-sys-net\") pod \"cilium-8n98j\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") " pod="kube-system/cilium-8n98j"
Feb 8 23:25:37.081167 kubelet[1752]: I0208 23:25:37.079683 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cilium-run\") pod \"cilium-8n98j\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") " pod="kube-system/cilium-8n98j"
Feb 8 23:25:37.081167 kubelet[1752]: I0208 23:25:37.079784 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cni-path\") pod \"cilium-8n98j\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") " pod="kube-system/cilium-8n98j"
Feb 8 23:25:37.081167 kubelet[1752]: I0208 23:25:37.079861 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cilium-config-path\") pod \"cilium-8n98j\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") " pod="kube-system/cilium-8n98j"
Feb 8 23:25:37.081399 kubelet[1752]: I0208 23:25:37.079944 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p57wb\" (UniqueName: \"kubernetes.io/projected/af463ab4-e693-4b71-8401-a0f85de5fe40-kube-api-access-p57wb\") pod \"cilium-operator-6bc8ccdb58-4lbkt\" (UID: \"af463ab4-e693-4b71-8401-a0f85de5fe40\") " pod="kube-system/cilium-operator-6bc8ccdb58-4lbkt"
Feb 8 23:25:37.081399 kubelet[1752]: I0208 23:25:37.080023 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc92f\" (UniqueName: \"kubernetes.io/projected/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-kube-api-access-zc92f\") pod \"cilium-8n98j\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") " pod="kube-system/cilium-8n98j"
Feb 8 23:25:37.081399 kubelet[1752]: I0208 23:25:37.080102 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af463ab4-e693-4b71-8401-a0f85de5fe40-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-4lbkt\" (UID: \"af463ab4-e693-4b71-8401-a0f85de5fe40\") " pod="kube-system/cilium-operator-6bc8ccdb58-4lbkt"
Feb 8 23:25:37.081399 kubelet[1752]: I0208 23:25:37.080176 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-etc-cni-netd\") pod \"cilium-8n98j\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") " pod="kube-system/cilium-8n98j"
Feb 8 23:25:37.081399 kubelet[1752]: I0208 23:25:37.080216 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-host-proc-sys-kernel\") pod \"cilium-8n98j\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") " pod="kube-system/cilium-8n98j"
Feb 8 23:25:37.215717 env[1228]: time="2024-02-08T23:25:37.215670467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-4lbkt,Uid:af463ab4-e693-4b71-8401-a0f85de5fe40,Namespace:kube-system,Attempt:0,}"
Feb 8 23:25:37.243858 env[1228]: time="2024-02-08T23:25:37.243787022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:25:37.243858 env[1228]: time="2024-02-08T23:25:37.243823823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:25:37.243858 env[1228]: time="2024-02-08T23:25:37.243838323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:25:37.244273 env[1228]: time="2024-02-08T23:25:37.244230526Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee2c42d6266ba1d8c05d04ce84e93d7fae37351922e1c8a13ed272319df6b48c pid=3249 runtime=io.containerd.runc.v2
Feb 8 23:25:37.257424 systemd[1]: Started cri-containerd-ee2c42d6266ba1d8c05d04ce84e93d7fae37351922e1c8a13ed272319df6b48c.scope.
Feb 8 23:25:37.298809 env[1228]: time="2024-02-08T23:25:37.298765322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-4lbkt,Uid:af463ab4-e693-4b71-8401-a0f85de5fe40,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee2c42d6266ba1d8c05d04ce84e93d7fae37351922e1c8a13ed272319df6b48c\""
Feb 8 23:25:37.300613 env[1228]: time="2024-02-08T23:25:37.300579039Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 8 23:25:37.516680 env[1228]: time="2024-02-08T23:25:37.516628904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8n98j,Uid:094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad,Namespace:kube-system,Attempt:0,}"
Feb 8 23:25:37.558748 env[1228]: time="2024-02-08T23:25:37.558673786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:25:37.558942 env[1228]: time="2024-02-08T23:25:37.558717686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:25:37.558942 env[1228]: time="2024-02-08T23:25:37.558731487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:25:37.559203 env[1228]: time="2024-02-08T23:25:37.559157190Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a pid=3290 runtime=io.containerd.runc.v2
Feb 8 23:25:37.589327 systemd[1]: Started cri-containerd-82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a.scope.
Feb 8 23:25:37.612583 env[1228]: time="2024-02-08T23:25:37.612547776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8n98j,Uid:094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a\""
Feb 8 23:25:37.617136 env[1228]: time="2024-02-08T23:25:37.616976916Z" level=info msg="CreateContainer within sandbox \"82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 8 23:25:37.657790 env[1228]: time="2024-02-08T23:25:37.657720987Z" level=info msg="CreateContainer within sandbox \"82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"46b04d6b5af4a0909719f5e0e7253cdd61854456fae8ed20084d0a9eb89758ee\""
Feb 8 23:25:37.658303 env[1228]: time="2024-02-08T23:25:37.658270892Z" level=info msg="StartContainer for \"46b04d6b5af4a0909719f5e0e7253cdd61854456fae8ed20084d0a9eb89758ee\""
Feb 8 23:25:37.674159 systemd[1]: Started cri-containerd-46b04d6b5af4a0909719f5e0e7253cdd61854456fae8ed20084d0a9eb89758ee.scope.
Feb 8 23:25:37.685202 systemd[1]: cri-containerd-46b04d6b5af4a0909719f5e0e7253cdd61854456fae8ed20084d0a9eb89758ee.scope: Deactivated successfully.
Feb 8 23:25:37.700471 kubelet[1752]: E0208 23:25:37.700439 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:37.713230 env[1228]: time="2024-02-08T23:25:37.713175591Z" level=info msg="shim disconnected" id=46b04d6b5af4a0909719f5e0e7253cdd61854456fae8ed20084d0a9eb89758ee
Feb 8 23:25:37.713230 env[1228]: time="2024-02-08T23:25:37.713229992Z" level=warning msg="cleaning up after shim disconnected" id=46b04d6b5af4a0909719f5e0e7253cdd61854456fae8ed20084d0a9eb89758ee namespace=k8s.io
Feb 8 23:25:37.713471 env[1228]: time="2024-02-08T23:25:37.713241492Z" level=info msg="cleaning up dead shim"
Feb 8 23:25:37.721811 env[1228]: time="2024-02-08T23:25:37.721769869Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3351 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:25:37Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/46b04d6b5af4a0909719f5e0e7253cdd61854456fae8ed20084d0a9eb89758ee/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 8 23:25:37.722126 env[1228]: time="2024-02-08T23:25:37.722023372Z" level=error msg="copy shim log" error="read /proc/self/fd/69: file already closed"
Feb 8 23:25:37.722448 env[1228]: time="2024-02-08T23:25:37.722400875Z" level=error msg="Failed to pipe stderr of container \"46b04d6b5af4a0909719f5e0e7253cdd61854456fae8ed20084d0a9eb89758ee\"" error="reading from a closed fifo"
Feb 8 23:25:37.723439 env[1228]: time="2024-02-08T23:25:37.723396584Z" level=error msg="Failed to pipe stdout of container \"46b04d6b5af4a0909719f5e0e7253cdd61854456fae8ed20084d0a9eb89758ee\"" error="reading from a closed fifo"
Feb 8 23:25:37.727392 env[1228]: time="2024-02-08T23:25:37.727336020Z" level=error msg="StartContainer for \"46b04d6b5af4a0909719f5e0e7253cdd61854456fae8ed20084d0a9eb89758ee\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 8 23:25:37.727623 kubelet[1752]: E0208 23:25:37.727589 1752 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="46b04d6b5af4a0909719f5e0e7253cdd61854456fae8ed20084d0a9eb89758ee"
Feb 8 23:25:37.727767 kubelet[1752]: E0208 23:25:37.727749 1752 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 8 23:25:37.727767 kubelet[1752]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 8 23:25:37.727767 kubelet[1752]: rm /hostbin/cilium-mount
Feb 8 23:25:37.727901 kubelet[1752]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zc92f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-8n98j_kube-system(094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 8 23:25:37.727901 kubelet[1752]: E0208 23:25:37.727811 1752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-8n98j" podUID="094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad"
Feb 8 23:25:37.874165 env[1228]: time="2024-02-08T23:25:37.874056554Z" level=info msg="CreateContainer within sandbox \"82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Feb 8 23:25:37.935801 env[1228]: time="2024-02-08T23:25:37.935748715Z" level=info msg="CreateContainer within sandbox \"82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"8b51ba46b8f3e1ed1141bab106fc30aafbb4ccdbc1efa85378c228376c0b0413\""
Feb 8 23:25:37.936321 env[1228]: time="2024-02-08T23:25:37.936287220Z" level=info msg="StartContainer for \"8b51ba46b8f3e1ed1141bab106fc30aafbb4ccdbc1efa85378c228376c0b0413\""
Feb 8 23:25:37.952784 systemd[1]: Started cri-containerd-8b51ba46b8f3e1ed1141bab106fc30aafbb4ccdbc1efa85378c228376c0b0413.scope.
Feb 8 23:25:37.963228 systemd[1]: cri-containerd-8b51ba46b8f3e1ed1141bab106fc30aafbb4ccdbc1efa85378c228376c0b0413.scope: Deactivated successfully.
Feb 8 23:25:37.963531 systemd[1]: Stopped cri-containerd-8b51ba46b8f3e1ed1141bab106fc30aafbb4ccdbc1efa85378c228376c0b0413.scope.
Feb 8 23:25:37.988201 env[1228]: time="2024-02-08T23:25:37.988136792Z" level=info msg="shim disconnected" id=8b51ba46b8f3e1ed1141bab106fc30aafbb4ccdbc1efa85378c228376c0b0413
Feb 8 23:25:37.988201 env[1228]: time="2024-02-08T23:25:37.988201492Z" level=warning msg="cleaning up after shim disconnected" id=8b51ba46b8f3e1ed1141bab106fc30aafbb4ccdbc1efa85378c228376c0b0413 namespace=k8s.io
Feb 8 23:25:37.988485 env[1228]: time="2024-02-08T23:25:37.988213693Z" level=info msg="cleaning up dead shim"
Feb 8 23:25:37.996557 env[1228]: time="2024-02-08T23:25:37.996505968Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3386 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:25:37Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8b51ba46b8f3e1ed1141bab106fc30aafbb4ccdbc1efa85378c228376c0b0413/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 8 23:25:37.996805 env[1228]: time="2024-02-08T23:25:37.996750170Z" level=error msg="copy shim log" error="read /proc/self/fd/69: file already closed"
Feb 8 23:25:38.000482 env[1228]: time="2024-02-08T23:25:38.000417904Z" level=error msg="Failed to pipe stdout of container \"8b51ba46b8f3e1ed1141bab106fc30aafbb4ccdbc1efa85378c228376c0b0413\"" error="reading from a closed fifo"
Feb 8 23:25:38.000682 env[1228]: time="2024-02-08T23:25:38.000636005Z" level=error msg="Failed to pipe stderr of container \"8b51ba46b8f3e1ed1141bab106fc30aafbb4ccdbc1efa85378c228376c0b0413\"" error="reading from a closed fifo"
Feb 8 23:25:38.005297 env[1228]: time="2024-02-08T23:25:38.005253247Z" level=error msg="StartContainer for \"8b51ba46b8f3e1ed1141bab106fc30aafbb4ccdbc1efa85378c228376c0b0413\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 8 23:25:38.005523 kubelet[1752]: E0208 23:25:38.005501 1752 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8b51ba46b8f3e1ed1141bab106fc30aafbb4ccdbc1efa85378c228376c0b0413"
Feb 8 23:25:38.005958 kubelet[1752]: E0208 23:25:38.005939 1752 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 8 23:25:38.005958 kubelet[1752]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 8 23:25:38.005958 kubelet[1752]: rm /hostbin/cilium-mount
Feb 8 23:25:38.005958 kubelet[1752]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zc92f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-8n98j_kube-system(094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 8 23:25:38.006176 kubelet[1752]: E0208 23:25:38.006024 1752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-8n98j" podUID="094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad"
Feb 8 23:25:38.489051 systemd[1]: run-containerd-runc-k8s.io-82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a-runc.AB4ZdQ.mount: Deactivated successfully.
Feb 8 23:25:38.680492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2484396690.mount: Deactivated successfully.
Feb 8 23:25:38.700557 kubelet[1752]: E0208 23:25:38.700508 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:38.701828 kubelet[1752]: I0208 23:25:38.701632 1752 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fbd00318-b8fd-407e-8b79-d63dccdf3906" path="/var/lib/kubelet/pods/fbd00318-b8fd-407e-8b79-d63dccdf3906/volumes"
Feb 8 23:25:38.744402 kubelet[1752]: E0208 23:25:38.743969 1752 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 8 23:25:38.879196 kubelet[1752]: I0208 23:25:38.879162 1752 scope.go:117] "RemoveContainer" containerID="46b04d6b5af4a0909719f5e0e7253cdd61854456fae8ed20084d0a9eb89758ee"
Feb 8 23:25:38.879897 env[1228]: time="2024-02-08T23:25:38.879853160Z" level=info msg="StopPodSandbox for \"82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a\""
Feb 8 23:25:38.880334 env[1228]: time="2024-02-08T23:25:38.880306364Z" level=info msg="Container to stop \"46b04d6b5af4a0909719f5e0e7253cdd61854456fae8ed20084d0a9eb89758ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:25:38.880452 env[1228]: time="2024-02-08T23:25:38.880425465Z" level=info msg="Container to stop \"8b51ba46b8f3e1ed1141bab106fc30aafbb4ccdbc1efa85378c228376c0b0413\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:25:38.882706 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a-shm.mount: Deactivated successfully.
Feb 8 23:25:38.885779 env[1228]: time="2024-02-08T23:25:38.885742112Z" level=info msg="RemoveContainer for \"46b04d6b5af4a0909719f5e0e7253cdd61854456fae8ed20084d0a9eb89758ee\""
Feb 8 23:25:38.892667 systemd[1]: cri-containerd-82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a.scope: Deactivated successfully.
Feb 8 23:25:38.899855 env[1228]: time="2024-02-08T23:25:38.899818738Z" level=info msg="RemoveContainer for \"46b04d6b5af4a0909719f5e0e7253cdd61854456fae8ed20084d0a9eb89758ee\" returns successfully"
Feb 8 23:25:38.919168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a-rootfs.mount: Deactivated successfully.
Feb 8 23:25:38.965502 env[1228]: time="2024-02-08T23:25:38.965445724Z" level=info msg="shim disconnected" id=82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a
Feb 8 23:25:38.965728 env[1228]: time="2024-02-08T23:25:38.965507725Z" level=warning msg="cleaning up after shim disconnected" id=82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a namespace=k8s.io
Feb 8 23:25:38.965728 env[1228]: time="2024-02-08T23:25:38.965520325Z" level=info msg="cleaning up dead shim"
Feb 8 23:25:38.975876 env[1228]: time="2024-02-08T23:25:38.975832417Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3420 runtime=io.containerd.runc.v2\n"
Feb 8 23:25:38.976180 env[1228]: time="2024-02-08T23:25:38.976146520Z" level=info msg="TearDown network for sandbox \"82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a\" successfully"
Feb 8 23:25:38.976254 env[1228]: time="2024-02-08T23:25:38.976182420Z" level=info msg="StopPodSandbox for \"82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a\" returns successfully"
Feb 8 23:25:39.093553 kubelet[1752]: I0208 23:25:39.092640 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cilium-cgroup\") pod \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") "
Feb 8 23:25:39.093553 kubelet[1752]: I0208 23:25:39.092799 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cilium-run\") pod \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") "
Feb 8 23:25:39.093553 kubelet[1752]: I0208 23:25:39.092850 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-hostproc\") pod \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") "
Feb 8 23:25:39.093553 kubelet[1752]: I0208 23:25:39.092880 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-host-proc-sys-kernel\") pod \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") "
Feb 8 23:25:39.093553 kubelet[1752]: I0208 23:25:39.092929 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-clustermesh-secrets\") pod \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") "
Feb 8 23:25:39.093553 kubelet[1752]: I0208 23:25:39.092954 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-xtables-lock\") pod \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") "
Feb 8 23:25:39.093553 kubelet[1752]: I0208 23:25:39.092982 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-hubble-tls\") pod \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") "
Feb 8 23:25:39.093553 kubelet[1752]: I0208 23:25:39.093019 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-host-proc-sys-net\") pod \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") "
Feb 8 23:25:39.093553 kubelet[1752]: I0208 23:25:39.093044 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cni-path\") pod \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") "
Feb 8 23:25:39.093553 kubelet[1752]: I0208 23:25:39.093087 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc92f\" (UniqueName: \"kubernetes.io/projected/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-kube-api-access-zc92f\") pod \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") "
Feb 8 23:25:39.093553 kubelet[1752]: I0208 23:25:39.093115 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-lib-modules\") pod \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") "
Feb 8 23:25:39.093553 kubelet[1752]: I0208 23:25:39.093153 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cilium-ipsec-secrets\") pod \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") "
Feb 8 23:25:39.093553 kubelet[1752]: I0208 23:25:39.093195 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cilium-config-path\") pod \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") "
Feb 8 23:25:39.093553 kubelet[1752]: I0208 23:25:39.093234 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-etc-cni-netd\") pod \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") "
Feb 8 23:25:39.093553 kubelet[1752]: I0208 23:25:39.093263 1752 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-bpf-maps\") pod \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\" (UID: \"094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad\") "
Feb 8 23:25:39.093553 kubelet[1752]: I0208 23:25:39.093318 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" (UID: "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:25:39.094375 kubelet[1752]: I0208 23:25:39.092735 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" (UID: "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:25:39.094375 kubelet[1752]: I0208 23:25:39.093390 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" (UID: "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:25:39.094375 kubelet[1752]: I0208 23:25:39.093413 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-hostproc" (OuterVolumeSpecName: "hostproc") pod "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" (UID: "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:25:39.094375 kubelet[1752]: I0208 23:25:39.093430 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" (UID: "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:25:39.094375 kubelet[1752]: I0208 23:25:39.094090 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" (UID: "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:25:39.094375 kubelet[1752]: I0208 23:25:39.094328 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" (UID: "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:25:39.094659 kubelet[1752]: I0208 23:25:39.094379 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cni-path" (OuterVolumeSpecName: "cni-path") pod "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" (UID: "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:25:39.094659 kubelet[1752]: I0208 23:25:39.094612 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" (UID: "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:25:39.097179 kubelet[1752]: I0208 23:25:39.097149 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" (UID: "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 8 23:25:39.097287 kubelet[1752]: I0208 23:25:39.097201 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" (UID: "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:25:39.103083 kubelet[1752]: I0208 23:25:39.103051 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" (UID: "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 8 23:25:39.105551 kubelet[1752]: I0208 23:25:39.105520 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-kube-api-access-zc92f" (OuterVolumeSpecName: "kube-api-access-zc92f") pod "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" (UID: "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad"). InnerVolumeSpecName "kube-api-access-zc92f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 8 23:25:39.105787 kubelet[1752]: I0208 23:25:39.105764 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" (UID: "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 8 23:25:39.108182 kubelet[1752]: I0208 23:25:39.108152 1752 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" (UID: "094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 8 23:25:39.193711 kubelet[1752]: I0208 23:25:39.193454 1752 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-clustermesh-secrets\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:39.193711 kubelet[1752]: I0208 23:25:39.193493 1752 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-xtables-lock\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:39.193711 kubelet[1752]: I0208 23:25:39.193509 1752 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cni-path\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:39.193711 kubelet[1752]: I0208 23:25:39.193526 1752 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zc92f\" (UniqueName: \"kubernetes.io/projected/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-kube-api-access-zc92f\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:39.193711 kubelet[1752]: I0208 23:25:39.193540 1752 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-lib-modules\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:39.193711 kubelet[1752]: I0208 23:25:39.193554 1752 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-hubble-tls\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:39.193711 kubelet[1752]: I0208 23:25:39.193568 1752 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-host-proc-sys-net\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:39.193711 kubelet[1752]: I0208 23:25:39.193583 1752 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-etc-cni-netd\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:39.193711 kubelet[1752]: I0208 23:25:39.193596 1752 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-bpf-maps\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:39.193711 kubelet[1752]: I0208 23:25:39.193611 1752 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cilium-ipsec-secrets\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:39.193711 kubelet[1752]: I0208 23:25:39.193625 1752 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cilium-config-path\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:39.193711 kubelet[1752]: I0208 23:25:39.193638 1752 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cilium-run\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:39.193711 kubelet[1752]: I0208 23:25:39.193652 1752 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-hostproc\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:39.193711 kubelet[1752]: I0208 23:25:39.193667 1752 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-cilium-cgroup\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:39.193711 kubelet[1752]: I0208 23:25:39.193681 1752 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad-host-proc-sys-kernel\") on node \"10.200.8.22\" DevicePath \"\""
Feb 8 23:25:39.446142 env[1228]: time="2024-02-08T23:25:39.446086449Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:25:39.451220 env[1228]: time="2024-02-08T23:25:39.451179293Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:25:39.454818 env[1228]: time="2024-02-08T23:25:39.454786025Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:25:39.455238 env[1228]: time="2024-02-08T23:25:39.455199829Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 8 23:25:39.457228 env[1228]: time="2024-02-08T23:25:39.457197446Z" level=info msg="CreateContainer within sandbox \"ee2c42d6266ba1d8c05d04ce84e93d7fae37351922e1c8a13ed272319df6b48c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 8 23:25:39.485869 env[1228]: time="2024-02-08T23:25:39.485748397Z" level=info msg="CreateContainer within sandbox \"ee2c42d6266ba1d8c05d04ce84e93d7fae37351922e1c8a13ed272319df6b48c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"267c298fb85cb02880c2cc0f81cb1d290bf7aab28f308fb561a50ef7e3698596\""
Feb 8 23:25:39.486671 env[1228]: time="2024-02-08T23:25:39.486611404Z" level=info msg="StartContainer for \"267c298fb85cb02880c2cc0f81cb1d290bf7aab28f308fb561a50ef7e3698596\""
Feb 8 23:25:39.488975 systemd[1]: var-lib-kubelet-pods-094cfa5b\x2dab14\x2d4485\x2d9a5a\x2d1cd6b15bb5ad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzc92f.mount: Deactivated successfully.
Feb 8 23:25:39.489113 systemd[1]: var-lib-kubelet-pods-094cfa5b\x2dab14\x2d4485\x2d9a5a\x2d1cd6b15bb5ad-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 8 23:25:39.489198 systemd[1]: var-lib-kubelet-pods-094cfa5b\x2dab14\x2d4485\x2d9a5a\x2d1cd6b15bb5ad-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 8 23:25:39.489269 systemd[1]: var-lib-kubelet-pods-094cfa5b\x2dab14\x2d4485\x2d9a5a\x2d1cd6b15bb5ad-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 8 23:25:39.512226 systemd[1]: run-containerd-runc-k8s.io-267c298fb85cb02880c2cc0f81cb1d290bf7aab28f308fb561a50ef7e3698596-runc.9UguGz.mount: Deactivated successfully.
Feb 8 23:25:39.516203 systemd[1]: Started cri-containerd-267c298fb85cb02880c2cc0f81cb1d290bf7aab28f308fb561a50ef7e3698596.scope.
Feb 8 23:25:39.549932 env[1228]: time="2024-02-08T23:25:39.549875660Z" level=info msg="StartContainer for \"267c298fb85cb02880c2cc0f81cb1d290bf7aab28f308fb561a50ef7e3698596\" returns successfully"
Feb 8 23:25:39.701675 kubelet[1752]: E0208 23:25:39.701568 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:39.882754 kubelet[1752]: I0208 23:25:39.882670 1752 scope.go:117] "RemoveContainer" containerID="8b51ba46b8f3e1ed1141bab106fc30aafbb4ccdbc1efa85378c228376c0b0413"
Feb 8 23:25:39.887478 systemd[1]: Removed slice kubepods-burstable-pod094cfa5b_ab14_4485_9a5a_1cd6b15bb5ad.slice.
Feb 8 23:25:39.890290 env[1228]: time="2024-02-08T23:25:39.890247047Z" level=info msg="RemoveContainer for \"8b51ba46b8f3e1ed1141bab106fc30aafbb4ccdbc1efa85378c228376c0b0413\""
Feb 8 23:25:39.899932 env[1228]: time="2024-02-08T23:25:39.899897132Z" level=info msg="RemoveContainer for \"8b51ba46b8f3e1ed1141bab106fc30aafbb4ccdbc1efa85378c228376c0b0413\" returns successfully"
Feb 8 23:25:39.910443 kubelet[1752]: I0208 23:25:39.910397 1752 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-4lbkt" podStartSLOduration=1.754891427 podCreationTimestamp="2024-02-08 23:25:36 +0000 UTC" firstStartedPulling="2024-02-08 23:25:37.300071934 +0000 UTC m=+69.042563681" lastFinishedPulling="2024-02-08 23:25:39.455529631 +0000 UTC m=+71.198021378" observedRunningTime="2024-02-08 23:25:39.896207799 +0000 UTC m=+71.638699646" watchObservedRunningTime="2024-02-08 23:25:39.910349124 +0000 UTC m=+71.652840871"
Feb 8 23:25:39.929630 kubelet[1752]: I0208 23:25:39.929606 1752 topology_manager.go:215] "Topology Admit Handler" podUID="c92adedc-c87e-4e12-bd36-bed3fdb71c51" podNamespace="kube-system" podName="cilium-pn7zq"
Feb 8 23:25:39.929748 kubelet[1752]: E0208 23:25:39.929657 1752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" containerName="mount-cgroup"
Feb 8 23:25:39.929748 kubelet[1752]: I0208 23:25:39.929687 1752 memory_manager.go:346] "RemoveStaleState removing state" podUID="094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" containerName="mount-cgroup"
Feb 8 23:25:39.929748 kubelet[1752]: E0208 23:25:39.929710 1752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" containerName="mount-cgroup"
Feb 8 23:25:39.929748 kubelet[1752]: I0208 23:25:39.929732 1752 memory_manager.go:346] "RemoveStaleState removing state" podUID="094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" containerName="mount-cgroup"
Feb 8 23:25:39.934877 systemd[1]: Created slice kubepods-burstable-podc92adedc_c87e_4e12_bd36_bed3fdb71c51.slice.
Feb 8 23:25:40.099621 kubelet[1752]: I0208 23:25:40.099474 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c92adedc-c87e-4e12-bd36-bed3fdb71c51-bpf-maps\") pod \"cilium-pn7zq\" (UID: \"c92adedc-c87e-4e12-bd36-bed3fdb71c51\") " pod="kube-system/cilium-pn7zq"
Feb 8 23:25:40.099621 kubelet[1752]: I0208 23:25:40.099540 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c92adedc-c87e-4e12-bd36-bed3fdb71c51-hostproc\") pod \"cilium-pn7zq\" (UID: \"c92adedc-c87e-4e12-bd36-bed3fdb71c51\") " pod="kube-system/cilium-pn7zq"
Feb 8 23:25:40.099621 kubelet[1752]: I0208 23:25:40.099574 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c92adedc-c87e-4e12-bd36-bed3fdb71c51-etc-cni-netd\") pod \"cilium-pn7zq\" (UID: \"c92adedc-c87e-4e12-bd36-bed3fdb71c51\") " pod="kube-system/cilium-pn7zq"
Feb 8 23:25:40.100478 kubelet[1752]: I0208 23:25:40.100443 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c92adedc-c87e-4e12-bd36-bed3fdb71c51-lib-modules\") pod \"cilium-pn7zq\" (UID: \"c92adedc-c87e-4e12-bd36-bed3fdb71c51\") " pod="kube-system/cilium-pn7zq"
Feb 8 23:25:40.100613 kubelet[1752]: I0208 23:25:40.100522 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c92adedc-c87e-4e12-bd36-bed3fdb71c51-clustermesh-secrets\") pod \"cilium-pn7zq\" (UID: \"c92adedc-c87e-4e12-bd36-bed3fdb71c51\") " pod="kube-system/cilium-pn7zq"
Feb 8 23:25:40.100613 kubelet[1752]: I0208 23:25:40.100560 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c92adedc-c87e-4e12-bd36-bed3fdb71c51-cilium-ipsec-secrets\") pod \"cilium-pn7zq\" (UID: \"c92adedc-c87e-4e12-bd36-bed3fdb71c51\") " pod="kube-system/cilium-pn7zq"
Feb 8 23:25:40.100613 kubelet[1752]: I0208 23:25:40.100599 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c92adedc-c87e-4e12-bd36-bed3fdb71c51-cilium-cgroup\") pod \"cilium-pn7zq\" (UID: \"c92adedc-c87e-4e12-bd36-bed3fdb71c51\") " pod="kube-system/cilium-pn7zq"
Feb 8 23:25:40.100813 kubelet[1752]: I0208 23:25:40.100634 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c92adedc-c87e-4e12-bd36-bed3fdb71c51-cni-path\") pod \"cilium-pn7zq\" (UID: \"c92adedc-c87e-4e12-bd36-bed3fdb71c51\") " pod="kube-system/cilium-pn7zq"
Feb 8 23:25:40.100813 kubelet[1752]: I0208 23:25:40.100671 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName:
\"kubernetes.io/host-path/c92adedc-c87e-4e12-bd36-bed3fdb71c51-xtables-lock\") pod \"cilium-pn7zq\" (UID: \"c92adedc-c87e-4e12-bd36-bed3fdb71c51\") " pod="kube-system/cilium-pn7zq" Feb 8 23:25:40.100813 kubelet[1752]: I0208 23:25:40.100708 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c92adedc-c87e-4e12-bd36-bed3fdb71c51-host-proc-sys-net\") pod \"cilium-pn7zq\" (UID: \"c92adedc-c87e-4e12-bd36-bed3fdb71c51\") " pod="kube-system/cilium-pn7zq" Feb 8 23:25:40.100813 kubelet[1752]: I0208 23:25:40.100749 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c92adedc-c87e-4e12-bd36-bed3fdb71c51-hubble-tls\") pod \"cilium-pn7zq\" (UID: \"c92adedc-c87e-4e12-bd36-bed3fdb71c51\") " pod="kube-system/cilium-pn7zq" Feb 8 23:25:40.100813 kubelet[1752]: I0208 23:25:40.100785 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c92adedc-c87e-4e12-bd36-bed3fdb71c51-host-proc-sys-kernel\") pod \"cilium-pn7zq\" (UID: \"c92adedc-c87e-4e12-bd36-bed3fdb71c51\") " pod="kube-system/cilium-pn7zq" Feb 8 23:25:40.101076 kubelet[1752]: I0208 23:25:40.100823 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8tm2\" (UniqueName: \"kubernetes.io/projected/c92adedc-c87e-4e12-bd36-bed3fdb71c51-kube-api-access-d8tm2\") pod \"cilium-pn7zq\" (UID: \"c92adedc-c87e-4e12-bd36-bed3fdb71c51\") " pod="kube-system/cilium-pn7zq" Feb 8 23:25:40.101076 kubelet[1752]: I0208 23:25:40.100862 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c92adedc-c87e-4e12-bd36-bed3fdb71c51-cilium-config-path\") pod 
\"cilium-pn7zq\" (UID: \"c92adedc-c87e-4e12-bd36-bed3fdb71c51\") " pod="kube-system/cilium-pn7zq" Feb 8 23:25:40.101076 kubelet[1752]: I0208 23:25:40.100901 1752 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c92adedc-c87e-4e12-bd36-bed3fdb71c51-cilium-run\") pod \"cilium-pn7zq\" (UID: \"c92adedc-c87e-4e12-bd36-bed3fdb71c51\") " pod="kube-system/cilium-pn7zq" Feb 8 23:25:40.241829 env[1228]: time="2024-02-08T23:25:40.241780696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pn7zq,Uid:c92adedc-c87e-4e12-bd36-bed3fdb71c51,Namespace:kube-system,Attempt:0,}" Feb 8 23:25:40.276261 env[1228]: time="2024-02-08T23:25:40.276190193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:25:40.276261 env[1228]: time="2024-02-08T23:25:40.276223393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:25:40.276261 env[1228]: time="2024-02-08T23:25:40.276236693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:25:40.276677 env[1228]: time="2024-02-08T23:25:40.276626597Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/20e1de448908a20b7f7076519e785aba3548048d81f1f2feb0dbb8e347377461 pid=3488 runtime=io.containerd.runc.v2 Feb 8 23:25:40.290494 systemd[1]: Started cri-containerd-20e1de448908a20b7f7076519e785aba3548048d81f1f2feb0dbb8e347377461.scope. 
Feb 8 23:25:40.312869 env[1228]: time="2024-02-08T23:25:40.312822109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pn7zq,Uid:c92adedc-c87e-4e12-bd36-bed3fdb71c51,Namespace:kube-system,Attempt:0,} returns sandbox id \"20e1de448908a20b7f7076519e785aba3548048d81f1f2feb0dbb8e347377461\"" Feb 8 23:25:40.315486 env[1228]: time="2024-02-08T23:25:40.315451932Z" level=info msg="CreateContainer within sandbox \"20e1de448908a20b7f7076519e785aba3548048d81f1f2feb0dbb8e347377461\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:25:40.365781 env[1228]: time="2024-02-08T23:25:40.364750157Z" level=info msg="CreateContainer within sandbox \"20e1de448908a20b7f7076519e785aba3548048d81f1f2feb0dbb8e347377461\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5da3245a9823e4e02b9f6757a40d3898f51dac5bdf3de2449d7edeefe553fd80\"" Feb 8 23:25:40.365781 env[1228]: time="2024-02-08T23:25:40.365411362Z" level=info msg="StartContainer for \"5da3245a9823e4e02b9f6757a40d3898f51dac5bdf3de2449d7edeefe553fd80\"" Feb 8 23:25:40.381244 systemd[1]: Started cri-containerd-5da3245a9823e4e02b9f6757a40d3898f51dac5bdf3de2449d7edeefe553fd80.scope. Feb 8 23:25:40.413270 env[1228]: time="2024-02-08T23:25:40.413225075Z" level=info msg="StartContainer for \"5da3245a9823e4e02b9f6757a40d3898f51dac5bdf3de2449d7edeefe553fd80\" returns successfully" Feb 8 23:25:40.417099 systemd[1]: cri-containerd-5da3245a9823e4e02b9f6757a40d3898f51dac5bdf3de2449d7edeefe553fd80.scope: Deactivated successfully. 
Feb 8 23:25:40.747460 kubelet[1752]: E0208 23:25:40.702435 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:40.796598 kubelet[1752]: I0208 23:25:40.796528 1752 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad" path="/var/lib/kubelet/pods/094cfa5b-ab14-4485-9a5a-1cd6b15bb5ad/volumes" Feb 8 23:25:40.818890 kubelet[1752]: W0208 23:25:40.818818 1752 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod094cfa5b_ab14_4485_9a5a_1cd6b15bb5ad.slice/cri-containerd-46b04d6b5af4a0909719f5e0e7253cdd61854456fae8ed20084d0a9eb89758ee.scope WatchSource:0}: container "46b04d6b5af4a0909719f5e0e7253cdd61854456fae8ed20084d0a9eb89758ee" in namespace "k8s.io": not found Feb 8 23:25:40.820861 env[1228]: time="2024-02-08T23:25:40.820793791Z" level=info msg="shim disconnected" id=5da3245a9823e4e02b9f6757a40d3898f51dac5bdf3de2449d7edeefe553fd80 Feb 8 23:25:40.820999 env[1228]: time="2024-02-08T23:25:40.820867091Z" level=warning msg="cleaning up after shim disconnected" id=5da3245a9823e4e02b9f6757a40d3898f51dac5bdf3de2449d7edeefe553fd80 namespace=k8s.io Feb 8 23:25:40.820999 env[1228]: time="2024-02-08T23:25:40.820881091Z" level=info msg="cleaning up dead shim" Feb 8 23:25:40.834699 env[1228]: time="2024-02-08T23:25:40.834655310Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3569 runtime=io.containerd.runc.v2\n" Feb 8 23:25:40.896448 env[1228]: time="2024-02-08T23:25:40.896400143Z" level=info msg="CreateContainer within sandbox \"20e1de448908a20b7f7076519e785aba3548048d81f1f2feb0dbb8e347377461\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:25:40.935291 env[1228]: time="2024-02-08T23:25:40.935246578Z" level=info msg="CreateContainer within sandbox 
\"20e1de448908a20b7f7076519e785aba3548048d81f1f2feb0dbb8e347377461\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f43d0748a196dbf9d203cb2c7a6b08253c7e07b9c2565ceca12c40eeca084e14\"" Feb 8 23:25:40.936033 env[1228]: time="2024-02-08T23:25:40.936002184Z" level=info msg="StartContainer for \"f43d0748a196dbf9d203cb2c7a6b08253c7e07b9c2565ceca12c40eeca084e14\"" Feb 8 23:25:40.960451 systemd[1]: Started cri-containerd-f43d0748a196dbf9d203cb2c7a6b08253c7e07b9c2565ceca12c40eeca084e14.scope. Feb 8 23:25:40.989194 env[1228]: time="2024-02-08T23:25:40.989146243Z" level=info msg="StartContainer for \"f43d0748a196dbf9d203cb2c7a6b08253c7e07b9c2565ceca12c40eeca084e14\" returns successfully" Feb 8 23:25:40.992141 systemd[1]: cri-containerd-f43d0748a196dbf9d203cb2c7a6b08253c7e07b9c2565ceca12c40eeca084e14.scope: Deactivated successfully. Feb 8 23:25:40.996911 kubelet[1752]: I0208 23:25:40.996881 1752 setters.go:552] "Node became not ready" node="10.200.8.22" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-08T23:25:40Z","lastTransitionTime":"2024-02-08T23:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 8 23:25:41.028345 env[1228]: time="2024-02-08T23:25:41.028228376Z" level=info msg="shim disconnected" id=f43d0748a196dbf9d203cb2c7a6b08253c7e07b9c2565ceca12c40eeca084e14 Feb 8 23:25:41.028345 env[1228]: time="2024-02-08T23:25:41.028288576Z" level=warning msg="cleaning up after shim disconnected" id=f43d0748a196dbf9d203cb2c7a6b08253c7e07b9c2565ceca12c40eeca084e14 namespace=k8s.io Feb 8 23:25:41.028345 env[1228]: time="2024-02-08T23:25:41.028302476Z" level=info msg="cleaning up dead shim" Feb 8 23:25:41.036229 env[1228]: time="2024-02-08T23:25:41.036193343Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:41Z\" level=info msg=\"starting signal loop\" 
namespace=k8s.io pid=3634 runtime=io.containerd.runc.v2\n" Feb 8 23:25:41.488956 systemd[1]: run-containerd-runc-k8s.io-f43d0748a196dbf9d203cb2c7a6b08253c7e07b9c2565ceca12c40eeca084e14-runc.7TGiBC.mount: Deactivated successfully. Feb 8 23:25:41.489099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f43d0748a196dbf9d203cb2c7a6b08253c7e07b9c2565ceca12c40eeca084e14-rootfs.mount: Deactivated successfully. Feb 8 23:25:41.702891 kubelet[1752]: E0208 23:25:41.702840 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:41.900215 env[1228]: time="2024-02-08T23:25:41.900094369Z" level=info msg="CreateContainer within sandbox \"20e1de448908a20b7f7076519e785aba3548048d81f1f2feb0dbb8e347377461\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:25:41.931176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1890063294.mount: Deactivated successfully. Feb 8 23:25:41.958079 env[1228]: time="2024-02-08T23:25:41.958028160Z" level=info msg="CreateContainer within sandbox \"20e1de448908a20b7f7076519e785aba3548048d81f1f2feb0dbb8e347377461\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4565785ba5c4d69d851463f5abcbf50102cccf1e5623efba75e882bf9cd1eb25\"" Feb 8 23:25:41.958601 env[1228]: time="2024-02-08T23:25:41.958525464Z" level=info msg="StartContainer for \"4565785ba5c4d69d851463f5abcbf50102cccf1e5623efba75e882bf9cd1eb25\"" Feb 8 23:25:41.976557 systemd[1]: Started cri-containerd-4565785ba5c4d69d851463f5abcbf50102cccf1e5623efba75e882bf9cd1eb25.scope. Feb 8 23:25:42.005277 systemd[1]: cri-containerd-4565785ba5c4d69d851463f5abcbf50102cccf1e5623efba75e882bf9cd1eb25.scope: Deactivated successfully. 
Feb 8 23:25:42.009084 env[1228]: time="2024-02-08T23:25:42.009045892Z" level=info msg="StartContainer for \"4565785ba5c4d69d851463f5abcbf50102cccf1e5623efba75e882bf9cd1eb25\" returns successfully" Feb 8 23:25:42.042915 env[1228]: time="2024-02-08T23:25:42.042863674Z" level=info msg="shim disconnected" id=4565785ba5c4d69d851463f5abcbf50102cccf1e5623efba75e882bf9cd1eb25 Feb 8 23:25:42.043118 env[1228]: time="2024-02-08T23:25:42.042933074Z" level=warning msg="cleaning up after shim disconnected" id=4565785ba5c4d69d851463f5abcbf50102cccf1e5623efba75e882bf9cd1eb25 namespace=k8s.io Feb 8 23:25:42.043118 env[1228]: time="2024-02-08T23:25:42.042946374Z" level=info msg="cleaning up dead shim" Feb 8 23:25:42.050104 env[1228]: time="2024-02-08T23:25:42.050074334Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3692 runtime=io.containerd.runc.v2\n" Feb 8 23:25:42.488855 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4565785ba5c4d69d851463f5abcbf50102cccf1e5623efba75e882bf9cd1eb25-rootfs.mount: Deactivated successfully. 
Feb 8 23:25:42.702978 kubelet[1752]: E0208 23:25:42.702944 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:42.904769 env[1228]: time="2024-02-08T23:25:42.904636459Z" level=info msg="CreateContainer within sandbox \"20e1de448908a20b7f7076519e785aba3548048d81f1f2feb0dbb8e347377461\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:25:42.946758 env[1228]: time="2024-02-08T23:25:42.946715209Z" level=info msg="CreateContainer within sandbox \"20e1de448908a20b7f7076519e785aba3548048d81f1f2feb0dbb8e347377461\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"37c0a3cc202b17e4a24afee138392bc8e10c5141844be0c76821e8fd76088c78\"" Feb 8 23:25:42.947176 env[1228]: time="2024-02-08T23:25:42.947144713Z" level=info msg="StartContainer for \"37c0a3cc202b17e4a24afee138392bc8e10c5141844be0c76821e8fd76088c78\"" Feb 8 23:25:42.970719 systemd[1]: Started cri-containerd-37c0a3cc202b17e4a24afee138392bc8e10c5141844be0c76821e8fd76088c78.scope. Feb 8 23:25:42.994103 systemd[1]: cri-containerd-37c0a3cc202b17e4a24afee138392bc8e10c5141844be0c76821e8fd76088c78.scope: Deactivated successfully. 
Feb 8 23:25:42.998515 env[1228]: time="2024-02-08T23:25:42.998423241Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc92adedc_c87e_4e12_bd36_bed3fdb71c51.slice/cri-containerd-37c0a3cc202b17e4a24afee138392bc8e10c5141844be0c76821e8fd76088c78.scope/memory.events\": no such file or directory" Feb 8 23:25:43.000680 env[1228]: time="2024-02-08T23:25:43.000647359Z" level=info msg="StartContainer for \"37c0a3cc202b17e4a24afee138392bc8e10c5141844be0c76821e8fd76088c78\" returns successfully" Feb 8 23:25:43.028624 env[1228]: time="2024-02-08T23:25:43.028568188Z" level=info msg="shim disconnected" id=37c0a3cc202b17e4a24afee138392bc8e10c5141844be0c76821e8fd76088c78 Feb 8 23:25:43.028835 env[1228]: time="2024-02-08T23:25:43.028627189Z" level=warning msg="cleaning up after shim disconnected" id=37c0a3cc202b17e4a24afee138392bc8e10c5141844be0c76821e8fd76088c78 namespace=k8s.io Feb 8 23:25:43.028835 env[1228]: time="2024-02-08T23:25:43.028639289Z" level=info msg="cleaning up dead shim" Feb 8 23:25:43.035843 env[1228]: time="2024-02-08T23:25:43.035806048Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3749 runtime=io.containerd.runc.v2\n" Feb 8 23:25:43.488908 systemd[1]: run-containerd-runc-k8s.io-37c0a3cc202b17e4a24afee138392bc8e10c5141844be0c76821e8fd76088c78-runc.oMiQ29.mount: Deactivated successfully. Feb 8 23:25:43.489053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37c0a3cc202b17e4a24afee138392bc8e10c5141844be0c76821e8fd76088c78-rootfs.mount: Deactivated successfully. 
Feb 8 23:25:43.704093 kubelet[1752]: E0208 23:25:43.704035 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:43.745309 kubelet[1752]: E0208 23:25:43.745195 1752 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:25:43.909215 env[1228]: time="2024-02-08T23:25:43.909169909Z" level=info msg="CreateContainer within sandbox \"20e1de448908a20b7f7076519e785aba3548048d81f1f2feb0dbb8e347377461\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:25:43.938560 kubelet[1752]: W0208 23:25:43.938524 1752 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc92adedc_c87e_4e12_bd36_bed3fdb71c51.slice/cri-containerd-5da3245a9823e4e02b9f6757a40d3898f51dac5bdf3de2449d7edeefe553fd80.scope WatchSource:0}: task 5da3245a9823e4e02b9f6757a40d3898f51dac5bdf3de2449d7edeefe553fd80 not found: not found Feb 8 23:25:43.951883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2030293415.mount: Deactivated successfully. Feb 8 23:25:43.964176 env[1228]: time="2024-02-08T23:25:43.964129960Z" level=info msg="CreateContainer within sandbox \"20e1de448908a20b7f7076519e785aba3548048d81f1f2feb0dbb8e347377461\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dab823487311e51072950be1145406c6206d5bc842cd9232d08e53b2a4fe42e5\"" Feb 8 23:25:43.964700 env[1228]: time="2024-02-08T23:25:43.964665664Z" level=info msg="StartContainer for \"dab823487311e51072950be1145406c6206d5bc842cd9232d08e53b2a4fe42e5\"" Feb 8 23:25:43.983916 systemd[1]: Started cri-containerd-dab823487311e51072950be1145406c6206d5bc842cd9232d08e53b2a4fe42e5.scope. 
Feb 8 23:25:44.019854 env[1228]: time="2024-02-08T23:25:44.019743713Z" level=info msg="StartContainer for \"dab823487311e51072950be1145406c6206d5bc842cd9232d08e53b2a4fe42e5\" returns successfully" Feb 8 23:25:44.349391 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 8 23:25:44.708706 kubelet[1752]: E0208 23:25:44.708664 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:44.929776 kubelet[1752]: I0208 23:25:44.929739 1752 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-pn7zq" podStartSLOduration=5.929707254 podCreationTimestamp="2024-02-08 23:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:25:44.929653854 +0000 UTC m=+76.672145601" watchObservedRunningTime="2024-02-08 23:25:44.929707254 +0000 UTC m=+76.672199101" Feb 8 23:25:45.709819 kubelet[1752]: E0208 23:25:45.709765 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:46.710855 kubelet[1752]: E0208 23:25:46.710819 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:46.859424 systemd-networkd[1377]: lxc_health: Link UP Feb 8 23:25:46.871682 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:25:46.871887 systemd-networkd[1377]: lxc_health: Gained carrier Feb 8 23:25:47.046568 kubelet[1752]: W0208 23:25:47.046445 1752 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc92adedc_c87e_4e12_bd36_bed3fdb71c51.slice/cri-containerd-f43d0748a196dbf9d203cb2c7a6b08253c7e07b9c2565ceca12c40eeca084e14.scope WatchSource:0}: task f43d0748a196dbf9d203cb2c7a6b08253c7e07b9c2565ceca12c40eeca084e14 not found: 
not found Feb 8 23:25:47.613611 systemd[1]: run-containerd-runc-k8s.io-dab823487311e51072950be1145406c6206d5bc842cd9232d08e53b2a4fe42e5-runc.AhoC4B.mount: Deactivated successfully. Feb 8 23:25:47.711177 kubelet[1752]: E0208 23:25:47.711126 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:48.653047 kubelet[1752]: E0208 23:25:48.652990 1752 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:48.711527 kubelet[1752]: E0208 23:25:48.711496 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:48.914663 systemd-networkd[1377]: lxc_health: Gained IPv6LL Feb 8 23:25:49.712753 kubelet[1752]: E0208 23:25:49.712707 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:49.823422 systemd[1]: run-containerd-runc-k8s.io-dab823487311e51072950be1145406c6206d5bc842cd9232d08e53b2a4fe42e5-runc.V42FgW.mount: Deactivated successfully. 
Feb 8 23:25:50.156138 kubelet[1752]: W0208 23:25:50.156026 1752 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc92adedc_c87e_4e12_bd36_bed3fdb71c51.slice/cri-containerd-4565785ba5c4d69d851463f5abcbf50102cccf1e5623efba75e882bf9cd1eb25.scope WatchSource:0}: task 4565785ba5c4d69d851463f5abcbf50102cccf1e5623efba75e882bf9cd1eb25 not found: not found Feb 8 23:25:50.714280 kubelet[1752]: E0208 23:25:50.714239 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:51.714809 kubelet[1752]: E0208 23:25:51.714765 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:51.983806 systemd[1]: run-containerd-runc-k8s.io-dab823487311e51072950be1145406c6206d5bc842cd9232d08e53b2a4fe42e5-runc.TPnCdF.mount: Deactivated successfully. Feb 8 23:25:52.715769 kubelet[1752]: E0208 23:25:52.715718 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:53.263698 kubelet[1752]: W0208 23:25:53.263649 1752 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc92adedc_c87e_4e12_bd36_bed3fdb71c51.slice/cri-containerd-37c0a3cc202b17e4a24afee138392bc8e10c5141844be0c76821e8fd76088c78.scope WatchSource:0}: task 37c0a3cc202b17e4a24afee138392bc8e10c5141844be0c76821e8fd76088c78 not found: not found Feb 8 23:25:53.716563 kubelet[1752]: E0208 23:25:53.716525 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:54.140859 systemd[1]: run-containerd-runc-k8s.io-dab823487311e51072950be1145406c6206d5bc842cd9232d08e53b2a4fe42e5-runc.WUgtSe.mount: Deactivated successfully. 
Feb 8 23:25:54.716737 kubelet[1752]: E0208 23:25:54.716689 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:55.717685 kubelet[1752]: E0208 23:25:55.717633 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:56.718159 kubelet[1752]: E0208 23:25:56.718121 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:57.718326 kubelet[1752]: E0208 23:25:57.718266 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:58.718680 kubelet[1752]: E0208 23:25:58.718638 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:25:59.719822 kubelet[1752]: E0208 23:25:59.719762 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:26:00.720018 kubelet[1752]: E0208 23:26:00.719973 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:26:01.720744 kubelet[1752]: E0208 23:26:01.720692 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:26:02.720896 kubelet[1752]: E0208 23:26:02.720861 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:26:03.721993 kubelet[1752]: E0208 23:26:03.721937 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:26:04.722821 kubelet[1752]: E0208 23:26:04.722767 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 
23:26:05.723580 kubelet[1752]: E0208 23:26:05.723521 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:26:06.724045 kubelet[1752]: E0208 23:26:06.724007 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:26:07.594697 systemd[1]: cri-containerd-267c298fb85cb02880c2cc0f81cb1d290bf7aab28f308fb561a50ef7e3698596.scope: Deactivated successfully. Feb 8 23:26:07.613668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-267c298fb85cb02880c2cc0f81cb1d290bf7aab28f308fb561a50ef7e3698596-rootfs.mount: Deactivated successfully. Feb 8 23:26:07.647241 env[1228]: time="2024-02-08T23:26:07.647178542Z" level=info msg="shim disconnected" id=267c298fb85cb02880c2cc0f81cb1d290bf7aab28f308fb561a50ef7e3698596 Feb 8 23:26:07.647241 env[1228]: time="2024-02-08T23:26:07.647238443Z" level=warning msg="cleaning up after shim disconnected" id=267c298fb85cb02880c2cc0f81cb1d290bf7aab28f308fb561a50ef7e3698596 namespace=k8s.io Feb 8 23:26:07.647241 env[1228]: time="2024-02-08T23:26:07.647251943Z" level=info msg="cleaning up dead shim" Feb 8 23:26:07.655604 env[1228]: time="2024-02-08T23:26:07.655558692Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:26:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4466 runtime=io.containerd.runc.v2\n" Feb 8 23:26:07.724937 kubelet[1752]: E0208 23:26:07.724891 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:26:07.961995 kubelet[1752]: I0208 23:26:07.961867 1752 scope.go:117] "RemoveContainer" containerID="267c298fb85cb02880c2cc0f81cb1d290bf7aab28f308fb561a50ef7e3698596" Feb 8 23:26:07.964672 env[1228]: time="2024-02-08T23:26:07.964618224Z" level=info msg="CreateContainer within sandbox \"ee2c42d6266ba1d8c05d04ce84e93d7fae37351922e1c8a13ed272319df6b48c\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:1,}"
Feb 8 23:26:08.002587 env[1228]: time="2024-02-08T23:26:08.002537749Z" level=info msg="CreateContainer within sandbox \"ee2c42d6266ba1d8c05d04ce84e93d7fae37351922e1c8a13ed272319df6b48c\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"8599207750c2eac855e1075a438da417d3f010b96ec7e68d3f0c04c6c116398d\""
Feb 8 23:26:08.003129 env[1228]: time="2024-02-08T23:26:08.003071752Z" level=info msg="StartContainer for \"8599207750c2eac855e1075a438da417d3f010b96ec7e68d3f0c04c6c116398d\""
Feb 8 23:26:08.022730 systemd[1]: Started cri-containerd-8599207750c2eac855e1075a438da417d3f010b96ec7e68d3f0c04c6c116398d.scope.
Feb 8 23:26:08.059531 env[1228]: time="2024-02-08T23:26:08.059477683Z" level=info msg="StartContainer for \"8599207750c2eac855e1075a438da417d3f010b96ec7e68d3f0c04c6c116398d\" returns successfully"
Feb 8 23:26:08.652724 kubelet[1752]: E0208 23:26:08.652689 1752 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:08.725279 kubelet[1752]: E0208 23:26:08.725227 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:09.726026 kubelet[1752]: E0208 23:26:09.725964 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:10.726660 kubelet[1752]: E0208 23:26:10.726621 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:11.341044 kubelet[1752]: E0208 23:26:11.340994 1752 kubelet_node_status.go:540] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-02-08T23:26:01Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-08T23:26:01Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-08T23:26:01Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-08T23:26:01Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":57035507},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a\\\",\\\"registry.k8s.io/kube-proxy:v1.28.6\\\"],\\\"sizeBytes\\\":26354482},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}]}}\" for node \"10.200.8.22\": the server was unable to return a response in the time allotted, but may still be processing the request (patch nodes 10.200.8.22)"
Feb 8 23:26:11.727561 kubelet[1752]: E0208 23:26:11.727503 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:11.913420 kubelet[1752]: E0208 23:26:11.913375 1752 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.8.22\": rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:40210->10.200.8.13:2379: read: connection timed out"
Feb 8 23:26:12.074950 kubelet[1752]: E0208 23:26:12.074811 1752 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.22?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 8 23:26:12.554208 kubelet[1752]: E0208 23:26:12.554170 1752 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:40302->10.200.8.13:2379: read: connection timed out"
Feb 8 23:26:12.616598 kubelet[1752]: E0208 23:26:12.616468 1752 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cilium-operator-6bc8ccdb58-4lbkt.17b206e005938b01", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"cilium-operator-6bc8ccdb58-4lbkt", UID:"af463ab4-e693-4b71-8401-a0f85de5fe40", APIVersion:"v1", ResourceVersion:"1607", FieldPath:"spec.containers{cilium-operator}"}, Reason:"Pulled", Message:"Container image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" already present on machine", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.22"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 26, 7, 962835713, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 26, 7, 962835713, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.22"}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:40120->10.200.8.13:2379: read: connection timed out' (will not retry!)
Feb 8 23:26:12.727966 kubelet[1752]: E0208 23:26:12.727917 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:13.728941 kubelet[1752]: E0208 23:26:13.728865 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:14.729107 kubelet[1752]: E0208 23:26:14.729070 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:15.730021 kubelet[1752]: E0208 23:26:15.729966 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:16.730385 kubelet[1752]: E0208 23:26:16.730326 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:17.731286 kubelet[1752]: E0208 23:26:17.731236 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:18.376508 kubelet[1752]: I0208 23:26:18.376462 1752 status_manager.go:853] "Failed to get status for pod" podUID="af463ab4-e693-4b71-8401-a0f85de5fe40" pod="kube-system/cilium-operator-6bc8ccdb58-4lbkt" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:40224->10.200.8.13:2379: read: connection timed out"
Feb 8 23:26:18.731884 kubelet[1752]: E0208 23:26:18.731843 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:19.732907 kubelet[1752]: E0208 23:26:19.732846 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:20.733040 kubelet[1752]: E0208 23:26:20.733001 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:21.733204 kubelet[1752]: E0208 23:26:21.733119 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:21.914573 kubelet[1752]: E0208 23:26:21.914528 1752 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.8.22\": Get \"https://10.200.8.39:6443/api/v1/nodes/10.200.8.22?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 8 23:26:22.555323 kubelet[1752]: E0208 23:26:22.555274 1752 controller.go:193] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 10.200.8.22)"
Feb 8 23:26:22.734205 kubelet[1752]: E0208 23:26:22.734165 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:23.734426 kubelet[1752]: E0208 23:26:23.734366 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:24.735376 kubelet[1752]: E0208 23:26:24.735322 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:25.735558 kubelet[1752]: E0208 23:26:25.735502 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:26.736161 kubelet[1752]: E0208 23:26:26.736109 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:27.736623 kubelet[1752]: E0208 23:26:27.736566 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:28.652684 kubelet[1752]: E0208 23:26:28.652634 1752 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:28.670234 env[1228]: time="2024-02-08T23:26:28.670195805Z" level=info msg="StopPodSandbox for \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\""
Feb 8 23:26:28.670722 env[1228]: time="2024-02-08T23:26:28.670659008Z" level=info msg="TearDown network for sandbox \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\" successfully"
Feb 8 23:26:28.670811 env[1228]: time="2024-02-08T23:26:28.670719108Z" level=info msg="StopPodSandbox for \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\" returns successfully"
Feb 8 23:26:28.672145 env[1228]: time="2024-02-08T23:26:28.672115415Z" level=info msg="RemovePodSandbox for \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\""
Feb 8 23:26:28.672254 env[1228]: time="2024-02-08T23:26:28.672151815Z" level=info msg="Forcibly stopping sandbox \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\""
Feb 8 23:26:28.672304 env[1228]: time="2024-02-08T23:26:28.672253516Z" level=info msg="TearDown network for sandbox \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\" successfully"
Feb 8 23:26:28.687170 env[1228]: time="2024-02-08T23:26:28.687129489Z" level=info msg="RemovePodSandbox \"619f578e6c2939c8ccff7285d7725d25fe43703215e3c1b222690deabe82de1b\" returns successfully"
Feb 8 23:26:28.687922 env[1228]: time="2024-02-08T23:26:28.687881393Z" level=info msg="StopPodSandbox for \"82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a\""
Feb 8 23:26:28.688069 env[1228]: time="2024-02-08T23:26:28.687999394Z" level=info msg="TearDown network for sandbox \"82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a\" successfully"
Feb 8 23:26:28.688145 env[1228]: time="2024-02-08T23:26:28.688065394Z" level=info msg="StopPodSandbox for \"82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a\" returns successfully"
Feb 8 23:26:28.688541 env[1228]: time="2024-02-08T23:26:28.688494996Z" level=info msg="RemovePodSandbox for \"82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a\""
Feb 8 23:26:28.688649 env[1228]: time="2024-02-08T23:26:28.688537896Z" level=info msg="Forcibly stopping sandbox \"82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a\""
Feb 8 23:26:28.688649 env[1228]: time="2024-02-08T23:26:28.688635597Z" level=info msg="TearDown network for sandbox \"82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a\" successfully"
Feb 8 23:26:28.696956 env[1228]: time="2024-02-08T23:26:28.696928438Z" level=info msg="RemovePodSandbox \"82c31e3b4b459cddde0cbff2d09ac7700797bea36b8fa30be6834ac60001ca4a\" returns successfully"
Feb 8 23:26:28.737002 kubelet[1752]: E0208 23:26:28.736940 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:29.737338 kubelet[1752]: E0208 23:26:29.737281 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:30.738334 kubelet[1752]: E0208 23:26:30.738291 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:31.739276 kubelet[1752]: E0208 23:26:31.739220 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:31.915546 kubelet[1752]: E0208 23:26:31.915499 1752 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.8.22\": Get \"https://10.200.8.39:6443/api/v1/nodes/10.200.8.22?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 8 23:26:32.555622 kubelet[1752]: E0208 23:26:32.555521 1752 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.22?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 8 23:26:32.739913 kubelet[1752]: E0208 23:26:32.739854 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:33.741051 kubelet[1752]: E0208 23:26:33.740988 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:34.741178 kubelet[1752]: E0208 23:26:34.741122 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:35.742063 kubelet[1752]: E0208 23:26:35.742006 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:36.742588 kubelet[1752]: E0208 23:26:36.742535 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:37.742801 kubelet[1752]: E0208 23:26:37.742738 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:38.743651 kubelet[1752]: E0208 23:26:38.743598 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:39.743999 kubelet[1752]: E0208 23:26:39.743940 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:40.744460 kubelet[1752]: E0208 23:26:40.744405 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:41.744943 kubelet[1752]: E0208 23:26:41.744879 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:41.915874 kubelet[1752]: E0208 23:26:41.915819 1752 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.8.22\": Get \"https://10.200.8.39:6443/api/v1/nodes/10.200.8.22?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 8 23:26:41.915874 kubelet[1752]: E0208 23:26:41.915863 1752 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
Feb 8 23:26:42.555909 kubelet[1752]: E0208 23:26:42.555859 1752 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.22?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 8 23:26:42.555909 kubelet[1752]: I0208 23:26:42.555908 1752 controller.go:116] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 8 23:26:42.746063 kubelet[1752]: E0208 23:26:42.746011 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:43.746272 kubelet[1752]: E0208 23:26:43.746209 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:44.747371 kubelet[1752]: E0208 23:26:44.747298 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:45.747859 kubelet[1752]: E0208 23:26:45.747806 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:46.748464 kubelet[1752]: E0208 23:26:46.748395 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:47.748655 kubelet[1752]: E0208 23:26:47.748592 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:48.653455 kubelet[1752]: E0208 23:26:48.653392 1752 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:48.749430 kubelet[1752]: E0208 23:26:48.749375 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:26:49.750313 kubelet[1752]: E0208 23:26:49.750252 1752 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"