Feb 9 19:42:57.904372 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024 Feb 9 19:42:57.904401 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:42:57.904415 kernel: BIOS-provided physical RAM map: Feb 9 19:42:57.904422 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 9 19:42:57.904429 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Feb 9 19:42:57.904435 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 9 19:42:57.904444 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Feb 9 19:42:57.904451 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 9 19:42:57.904458 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Feb 9 19:42:57.904466 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Feb 9 19:42:57.904473 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Feb 9 19:42:57.904480 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Feb 9 19:42:57.904487 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Feb 9 19:42:57.904493 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 9 19:42:57.904502 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Feb 9 19:42:57.904511 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Feb 9 19:42:57.904518 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 9 19:42:57.904525 kernel: NX (Execute Disable) protection: active Feb 9 19:42:57.904533 kernel: e820: update [mem 0x9b3fa018-0x9b403c57] usable ==> usable Feb 9 19:42:57.904540 kernel: e820: update [mem 0x9b3fa018-0x9b403c57] usable ==> usable Feb 9 19:42:57.904547 kernel: e820: update [mem 0x9b3bd018-0x9b3f9e57] usable ==> usable Feb 9 19:42:57.904554 kernel: e820: update [mem 0x9b3bd018-0x9b3f9e57] usable ==> usable Feb 9 19:42:57.904564 kernel: extended physical RAM map: Feb 9 19:42:57.904571 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 9 19:42:57.904578 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Feb 9 19:42:57.904587 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 9 19:42:57.904595 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Feb 9 19:42:57.904602 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 9 19:42:57.904609 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Feb 9 19:42:57.904616 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Feb 9 19:42:57.904623 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b3bd017] usable Feb 9 19:42:57.904631 kernel: reserve setup_data: [mem 0x000000009b3bd018-0x000000009b3f9e57] usable Feb 9 19:42:57.904640 kernel: reserve setup_data: [mem 0x000000009b3f9e58-0x000000009b3fa017] usable Feb 9 19:42:57.904647 kernel: reserve setup_data: [mem 0x000000009b3fa018-0x000000009b403c57] 
usable Feb 9 19:42:57.904655 kernel: reserve setup_data: [mem 0x000000009b403c58-0x000000009c8eefff] usable Feb 9 19:42:57.904663 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Feb 9 19:42:57.904675 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Feb 9 19:42:57.904683 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 9 19:42:57.904691 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Feb 9 19:42:57.904699 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Feb 9 19:42:57.904711 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 9 19:42:57.904719 kernel: efi: EFI v2.70 by EDK II Feb 9 19:42:57.904728 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018 Feb 9 19:42:57.904740 kernel: random: crng init done Feb 9 19:42:57.904748 kernel: SMBIOS 2.8 present. Feb 9 19:42:57.904756 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Feb 9 19:42:57.904764 kernel: Hypervisor detected: KVM Feb 9 19:42:57.904771 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 9 19:42:57.904779 kernel: kvm-clock: cpu 0, msr 4efaa001, primary cpu clock Feb 9 19:42:57.904787 kernel: kvm-clock: using sched offset of 6171769182 cycles Feb 9 19:42:57.904796 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 9 19:42:57.904805 kernel: tsc: Detected 2794.750 MHz processor Feb 9 19:42:57.904822 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 19:42:57.904831 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 19:42:57.904839 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Feb 9 19:42:57.904848 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 19:42:57.904912 kernel: Using GB pages for direct mapping Feb 9 19:42:57.904923 kernel: Secure boot disabled Feb 9 19:42:57.904932 kernel: ACPI: Early table checksum verification disabled Feb 9 19:42:57.904941 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Feb 9 19:42:57.904950 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Feb 9 19:42:57.904972 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:42:57.904982 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:42:57.904991 kernel: ACPI: FACS 0x000000009CBDD000 000040 Feb 9 19:42:57.905000 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:42:57.905010 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:42:57.905019 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:42:57.905028 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Feb 9 19:42:57.905038 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Feb 9 19:42:57.905051 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Feb 9 19:42:57.905066 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Feb 9 19:42:57.905075 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Feb 9 19:42:57.905084 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Feb 9 19:42:57.905093 kernel: ACPI: Reserving WAET table memory at [mem 
0x9cb77000-0x9cb77027] Feb 9 19:42:57.905102 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Feb 9 19:42:57.905111 kernel: No NUMA configuration found Feb 9 19:42:57.905121 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Feb 9 19:42:57.905130 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Feb 9 19:42:57.905139 kernel: Zone ranges: Feb 9 19:42:57.905155 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 19:42:57.905164 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Feb 9 19:42:57.905173 kernel: Normal empty Feb 9 19:42:57.905183 kernel: Movable zone start for each node Feb 9 19:42:57.905192 kernel: Early memory node ranges Feb 9 19:42:57.905204 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 9 19:42:57.905213 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Feb 9 19:42:57.905222 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Feb 9 19:42:57.905232 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Feb 9 19:42:57.905246 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Feb 9 19:42:57.905256 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Feb 9 19:42:57.905265 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Feb 9 19:42:57.905274 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 19:42:57.905283 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 9 19:42:57.905293 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Feb 9 19:42:57.905302 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 19:42:57.905311 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Feb 9 19:42:57.905321 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Feb 9 19:42:57.905333 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Feb 9 19:42:57.905342 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 9 19:42:57.905351 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 9 19:42:57.905360 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 9 19:42:57.905370 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 9 19:42:57.905379 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 9 19:42:57.905388 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 9 19:42:57.905398 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 9 19:42:57.905407 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 9 19:42:57.905419 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 19:42:57.905428 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 9 19:42:57.905437 kernel: TSC deadline timer available Feb 9 19:42:57.905446 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 9 19:42:57.905455 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 9 19:42:57.905464 kernel: kvm-guest: setup PV sched yield Feb 9 19:42:57.905477 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Feb 9 19:42:57.905486 kernel: Booting paravirtualized kernel on KVM Feb 9 19:42:57.905496 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 19:42:57.905505 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Feb 9 19:42:57.905522 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Feb 9 19:42:57.905532 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Feb 
9 19:42:57.905556 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 9 19:42:57.905570 kernel: kvm-guest: setup async PF for cpu 0 Feb 9 19:42:57.905580 kernel: kvm-guest: stealtime: cpu 0, msr 9b01c0c0 Feb 9 19:42:57.905590 kernel: kvm-guest: PV spinlocks enabled Feb 9 19:42:57.905599 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 9 19:42:57.905609 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Feb 9 19:42:57.905619 kernel: Policy zone: DMA32 Feb 9 19:42:57.905630 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:42:57.905640 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 19:42:57.905653 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 19:42:57.905664 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 19:42:57.905675 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 19:42:57.905687 kernel: Memory: 2400512K/2567000K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 166228K reserved, 0K cma-reserved) Feb 9 19:42:57.905703 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 9 19:42:57.905713 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 19:42:57.905722 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 19:42:57.905732 kernel: rcu: Hierarchical RCU implementation. Feb 9 19:42:57.905742 kernel: rcu: RCU event tracing is enabled. Feb 9 19:42:57.905752 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 9 19:42:57.905762 kernel: Rude variant of Tasks RCU enabled. Feb 9 19:42:57.905772 kernel: Tracing variant of Tasks RCU enabled. Feb 9 19:42:57.905782 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 9 19:42:57.905794 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 9 19:42:57.905803 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 9 19:42:57.905813 kernel: Console: colour dummy device 80x25 Feb 9 19:42:57.905822 kernel: printk: console [ttyS0] enabled Feb 9 19:42:57.905832 kernel: ACPI: Core revision 20210730 Feb 9 19:42:57.905842 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 9 19:42:57.905873 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 19:42:57.905884 kernel: x2apic enabled Feb 9 19:42:57.905894 kernel: Switched APIC routing to physical x2apic. Feb 9 19:42:57.905904 kernel: kvm-guest: setup PV IPIs Feb 9 19:42:57.905921 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 9 19:42:57.905931 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 9 19:42:57.905960 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Feb 9 19:42:57.905972 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 9 19:42:57.905982 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 9 19:42:57.905991 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 9 19:42:57.906001 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 19:42:57.906011 kernel: Spectre V2 : Mitigation: Retpolines Feb 9 19:42:57.906025 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 19:42:57.906035 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 9 19:42:57.906045 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 9 19:42:57.906054 kernel: RETBleed: Mitigation: untrained return thunk Feb 9 19:42:57.906068 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 9 19:42:57.906078 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 9 19:42:57.906088 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 9 19:42:57.906097 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 9 19:42:57.906111 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 9 19:42:57.906123 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 9 19:42:57.906133 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 9 19:42:57.906143 kernel: Freeing SMP alternatives memory: 32K Feb 9 19:42:57.906153 kernel: pid_max: default: 32768 minimum: 301 Feb 9 19:42:57.906162 kernel: LSM: Security Framework initializing Feb 9 19:42:57.906172 kernel: SELinux: Initializing. Feb 9 19:42:57.906181 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 19:42:57.906191 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 19:42:57.906201 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 9 19:42:57.906214 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 9 19:42:57.906223 kernel: ... version: 0 Feb 9 19:42:57.906233 kernel: ... bit width: 48 Feb 9 19:42:57.906242 kernel: ... generic registers: 6 Feb 9 19:42:57.906252 kernel: ... value mask: 0000ffffffffffff Feb 9 19:42:57.906261 kernel: ... max period: 00007fffffffffff Feb 9 19:42:57.906271 kernel: ... fixed-purpose events: 0 Feb 9 19:42:57.906281 kernel: ... event mask: 000000000000003f Feb 9 19:42:57.906290 kernel: signal: max sigframe size: 1776 Feb 9 19:42:57.906306 kernel: rcu: Hierarchical SRCU implementation. Feb 9 19:42:57.906316 kernel: smp: Bringing up secondary CPUs ... Feb 9 19:42:57.906325 kernel: x86: Booting SMP configuration: Feb 9 19:42:57.906335 kernel: .... 
node #0, CPUs: #1 Feb 9 19:42:57.906345 kernel: kvm-clock: cpu 1, msr 4efaa041, secondary cpu clock Feb 9 19:42:57.906354 kernel: kvm-guest: setup async PF for cpu 1 Feb 9 19:42:57.906364 kernel: kvm-guest: stealtime: cpu 1, msr 9b09c0c0 Feb 9 19:42:57.906373 kernel: #2 Feb 9 19:42:57.906383 kernel: kvm-clock: cpu 2, msr 4efaa081, secondary cpu clock Feb 9 19:42:57.906393 kernel: kvm-guest: setup async PF for cpu 2 Feb 9 19:42:57.906405 kernel: kvm-guest: stealtime: cpu 2, msr 9b11c0c0 Feb 9 19:42:57.906415 kernel: #3 Feb 9 19:42:57.906424 kernel: kvm-clock: cpu 3, msr 4efaa0c1, secondary cpu clock Feb 9 19:42:57.906434 kernel: kvm-guest: setup async PF for cpu 3 Feb 9 19:42:57.906444 kernel: kvm-guest: stealtime: cpu 3, msr 9b19c0c0 Feb 9 19:42:57.906453 kernel: smp: Brought up 1 node, 4 CPUs Feb 9 19:42:57.906463 kernel: smpboot: Max logical packages: 1 Feb 9 19:42:57.906473 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Feb 9 19:42:57.906482 kernel: devtmpfs: initialized Feb 9 19:42:57.906494 kernel: x86/mm: Memory block size: 128MB Feb 9 19:42:57.906504 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Feb 9 19:42:57.906514 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Feb 9 19:42:57.906524 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Feb 9 19:42:57.906534 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Feb 9 19:42:57.906547 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Feb 9 19:42:57.906557 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 19:42:57.906567 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 9 19:42:57.906576 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 19:42:57.906589 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 19:42:57.906598 kernel: audit: initializing netlink subsys (disabled) Feb 9 19:42:57.906608 kernel: audit: type=2000 audit(1707507777.678:1): state=initialized audit_enabled=0 res=1 Feb 9 19:42:57.906617 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 19:42:57.906627 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 19:42:57.906637 kernel: cpuidle: using governor menu Feb 9 19:42:57.906646 kernel: ACPI: bus type PCI registered Feb 9 19:42:57.906656 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 19:42:57.906665 kernel: dca service started, version 1.12.1 Feb 9 19:42:57.906678 kernel: PCI: Using configuration type 1 for base access Feb 9 19:42:57.906688 kernel: PCI: Using configuration type 1 for extended access Feb 9 19:42:57.906698 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 9 19:42:57.906707 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 19:42:57.906717 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 19:42:57.906727 kernel: ACPI: Added _OSI(Module Device) Feb 9 19:42:57.906736 kernel: ACPI: Added _OSI(Processor Device) Feb 9 19:42:57.906746 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 19:42:57.906755 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 19:42:57.906767 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 19:42:57.906777 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 19:42:57.906787 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 19:42:57.906796 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 19:42:57.906806 kernel: ACPI: Interpreter enabled Feb 9 19:42:57.906816 kernel: ACPI: PM: (supports S0 S3 S5) Feb 9 19:42:57.906825 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 19:42:57.906835 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 19:42:57.906845 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 9 19:42:57.906871 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 9 19:42:57.907067 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 9 19:42:57.907086 kernel: acpiphp: Slot [3] registered Feb 9 19:42:57.907096 kernel: acpiphp: Slot [4] registered Feb 9 19:42:57.907105 kernel: acpiphp: Slot [5] registered Feb 9 19:42:57.907115 kernel: acpiphp: Slot [6] registered Feb 9 19:42:57.907124 kernel: acpiphp: Slot [7] registered Feb 9 19:42:57.907134 kernel: acpiphp: Slot [8] registered Feb 9 19:42:57.907151 kernel: acpiphp: Slot [9] registered Feb 9 19:42:57.907161 kernel: acpiphp: Slot [10] registered Feb 9 19:42:57.907170 kernel: acpiphp: Slot [11] registered Feb 9 19:42:57.907180 kernel: acpiphp: Slot [12] registered Feb 9 19:42:57.907189 kernel: acpiphp: Slot [13] registered Feb 9 19:42:57.907199 kernel: acpiphp: Slot [14] registered Feb 9 19:42:57.907208 kernel: acpiphp: Slot [15] registered Feb 9 19:42:57.907218 kernel: acpiphp: Slot [16] registered Feb 9 19:42:57.907227 kernel: acpiphp: Slot [17] registered Feb 9 19:42:57.907237 kernel: acpiphp: Slot [18] registered Feb 9 19:42:57.907252 kernel: acpiphp: Slot [19] registered Feb 9 19:42:57.907262 kernel: acpiphp: Slot [20] registered Feb 9 19:42:57.907271 kernel: acpiphp: Slot [21] registered Feb 9 19:42:57.907281 kernel: acpiphp: Slot [22] registered Feb 9 19:42:57.907291 kernel: acpiphp: Slot [23] registered Feb 9 19:42:57.907300 kernel: acpiphp: Slot [24] registered Feb 9 19:42:57.907310 kernel: acpiphp: Slot [25] registered Feb 9 19:42:57.907319 kernel: acpiphp: Slot [26] registered Feb 9 19:42:57.907329 kernel: acpiphp: Slot [27] registered Feb 9 19:42:57.907344 kernel: acpiphp: Slot [28] registered Feb 9 19:42:57.907354 kernel: acpiphp: Slot [29] registered Feb 9 19:42:57.907363 kernel: acpiphp: Slot [30] registered Feb 9 19:42:57.907372 kernel: acpiphp: Slot [31] registered Feb 9 19:42:57.907382 kernel: PCI host bridge to bus 0000:00 Feb 9 19:42:57.907557 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 9 19:42:57.907675 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 9 19:42:57.907969 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 9 19:42:57.908130 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Feb 9 19:42:57.908238 kernel: pci_bus 0000:00: 
root bus resource [mem 0x800000000-0x87fffffff window] Feb 9 19:42:57.908338 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 9 19:42:57.908478 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 9 19:42:57.908633 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 9 19:42:57.908764 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 9 19:42:57.908919 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Feb 9 19:42:57.909038 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 9 19:42:57.909131 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 9 19:42:57.909221 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 9 19:42:57.909315 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 9 19:42:57.909442 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 9 19:42:57.909556 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 9 19:42:57.909688 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Feb 9 19:42:57.909809 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Feb 9 19:42:57.909937 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Feb 9 19:42:57.910058 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Feb 9 19:42:57.910167 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Feb 9 19:42:57.910276 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Feb 9 19:42:57.910385 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 19:42:57.910521 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Feb 9 19:42:57.910634 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Feb 9 19:42:57.910754 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Feb 9 19:42:57.910879 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Feb 9 19:42:57.911021 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 9 19:42:57.911132 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 9 19:42:57.911243 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Feb 9 19:42:57.911359 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Feb 9 19:42:57.911493 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Feb 9 19:42:57.911604 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Feb 9 19:42:57.911718 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Feb 9 19:42:57.911827 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Feb 9 19:42:57.911950 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Feb 9 19:42:57.911974 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 9 19:42:57.911988 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 9 19:42:57.912023 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 9 19:42:57.912048 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 9 19:42:57.912062 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 9 19:42:57.912072 kernel: iommu: Default domain type: Translated Feb 9 19:42:57.912081 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 19:42:57.912306 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 9 19:42:57.912449 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 19:42:57.912596 kernel: pci 
0000:00:02.0: vgaarb: bridge control possible Feb 9 19:42:57.912616 kernel: vgaarb: loaded Feb 9 19:42:57.912626 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 19:42:57.912635 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 19:42:57.912645 kernel: PTP clock support registered Feb 9 19:42:57.912654 kernel: Registered efivars operations Feb 9 19:42:57.912663 kernel: PCI: Using ACPI for IRQ routing Feb 9 19:42:57.912672 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 19:42:57.912680 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Feb 9 19:42:57.912687 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Feb 9 19:42:57.912696 kernel: e820: reserve RAM buffer [mem 0x9b3bd018-0x9bffffff] Feb 9 19:42:57.912713 kernel: e820: reserve RAM buffer [mem 0x9b3fa018-0x9bffffff] Feb 9 19:42:57.912728 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Feb 9 19:42:57.912738 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Feb 9 19:42:57.912747 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 9 19:42:57.912756 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 9 19:42:57.912765 kernel: clocksource: Switched to clocksource kvm-clock Feb 9 19:42:57.912774 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 19:42:57.912787 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 19:42:57.912796 kernel: pnp: PnP ACPI init Feb 9 19:42:57.912981 kernel: pnp 00:02: [dma 2] Feb 9 19:42:57.913011 kernel: pnp: PnP ACPI: found 6 devices Feb 9 19:42:57.913022 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 19:42:57.913031 kernel: NET: Registered PF_INET protocol family Feb 9 19:42:57.913045 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 19:42:57.913055 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 9 19:42:57.913064 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 19:42:57.913086 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 19:42:57.913101 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 9 19:42:57.913111 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 9 19:42:57.913120 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 19:42:57.913130 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 19:42:57.913139 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 19:42:57.913149 kernel: NET: Registered PF_XDP protocol family Feb 9 19:42:57.913288 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Feb 9 19:42:57.913464 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Feb 9 19:42:57.913591 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 19:42:57.913731 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 19:42:57.913843 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 19:42:57.914041 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Feb 9 19:42:57.914178 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Feb 9 19:42:57.914300 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 9 19:42:57.914395 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 9 19:42:57.914477 kernel: pci 
0000:00:01.0: Activating ISA DMA hang workarounds Feb 9 19:42:57.914487 kernel: PCI: CLS 0 bytes, default 64 Feb 9 19:42:57.914494 kernel: Initialise system trusted keyrings Feb 9 19:42:57.914502 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 9 19:42:57.914510 kernel: Key type asymmetric registered Feb 9 19:42:57.914517 kernel: Asymmetric key parser 'x509' registered Feb 9 19:42:57.914524 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 19:42:57.914531 kernel: io scheduler mq-deadline registered Feb 9 19:42:57.914541 kernel: io scheduler kyber registered Feb 9 19:42:57.914548 kernel: io scheduler bfq registered Feb 9 19:42:57.914555 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 19:42:57.914563 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 9 19:42:57.914571 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 9 19:42:57.914578 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 9 19:42:57.914585 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 19:42:57.914592 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 19:42:57.914600 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 9 19:42:57.914614 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 9 19:42:57.914621 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 9 19:42:57.914629 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 9 19:42:57.914719 kernel: rtc_cmos 00:05: RTC can wake from S4 Feb 9 19:42:57.914790 kernel: rtc_cmos 00:05: registered as rtc0 Feb 9 19:42:57.914880 kernel: rtc_cmos 00:05: setting system clock to 2024-02-09T19:42:57 UTC (1707507777) Feb 9 19:42:57.914982 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Feb 9 19:42:57.914997 kernel: efifb: probing for efifb Feb 9 19:42:57.915008 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Feb 9 19:42:57.915018 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Feb 9 19:42:57.915028 kernel: efifb: scrolling: redraw Feb 9 19:42:57.915038 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 9 19:42:57.915049 kernel: Console: switching to colour frame buffer device 160x50 Feb 9 19:42:57.915058 kernel: fb0: EFI VGA frame buffer device Feb 9 19:42:57.915072 kernel: pstore: Registered efi as persistent store backend Feb 9 19:42:57.915082 kernel: NET: Registered PF_INET6 protocol family Feb 9 19:42:57.915092 kernel: Segment Routing with IPv6 Feb 9 19:42:57.915102 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 19:42:57.915114 kernel: NET: Registered PF_PACKET protocol family Feb 9 19:42:57.915124 kernel: Key type dns_resolver registered Feb 9 19:42:57.915134 kernel: IPI shorthand broadcast: enabled Feb 9 19:42:57.915144 kernel: sched_clock: Marking stable (413367178, 91679990)->(560878286, -55831118) Feb 9 19:42:57.915154 kernel: registered taskstats version 1 Feb 9 19:42:57.915166 kernel: Loading compiled-in X.509 certificates Feb 9 19:42:57.915176 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 9 19:42:57.915186 kernel: Key type .fscrypt registered Feb 9 19:42:57.915195 kernel: Key type fscrypt-provisioning registered Feb 9 19:42:57.915207 kernel: pstore: Using crash dump compression: deflate Feb 9 19:42:57.915217 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 9 19:42:57.915227 kernel: ima: Allocated hash algorithm: sha1 Feb 9 19:42:57.915238 kernel: ima: No architecture policies found Feb 9 19:42:57.915247 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 19:42:57.915264 kernel: Write protecting the kernel read-only data: 28672k Feb 9 19:42:57.915275 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 19:42:57.915285 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 19:42:57.915295 kernel: Run /init as init process Feb 9 19:42:57.915305 kernel: with arguments: Feb 9 19:42:57.915314 kernel: /init Feb 9 19:42:57.915324 kernel: with environment: Feb 9 19:42:57.915333 kernel: HOME=/ Feb 9 19:42:57.915343 kernel: TERM=linux Feb 9 19:42:57.915355 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 19:42:57.915368 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:42:57.915381 systemd[1]: Detected virtualization kvm. Feb 9 19:42:57.915393 systemd[1]: Detected architecture x86-64. Feb 9 19:42:57.915403 systemd[1]: Running in initrd. Feb 9 19:42:57.915414 systemd[1]: No hostname configured, using default hostname. Feb 9 19:42:57.915424 systemd[1]: Hostname set to . Feb 9 19:42:57.915438 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:42:57.915449 systemd[1]: Queued start job for default target initrd.target. Feb 9 19:42:57.915459 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:42:57.915469 systemd[1]: Reached target cryptsetup.target. Feb 9 19:42:57.915480 systemd[1]: Reached target paths.target. Feb 9 19:42:57.915490 systemd[1]: Reached target slices.target. Feb 9 19:42:57.915501 systemd[1]: Reached target swap.target. Feb 9 19:42:57.915511 systemd[1]: Reached target timers.target. Feb 9 19:42:57.915525 systemd[1]: Listening on iscsid.socket. Feb 9 19:42:57.915536 systemd[1]: Listening on iscsiuio.socket. Feb 9 19:42:57.915546 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:42:57.915554 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:42:57.915562 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:42:57.915570 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:42:57.915578 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:42:57.915585 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:42:57.915593 systemd[1]: Reached target sockets.target. Feb 9 19:42:57.915603 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:42:57.915611 systemd[1]: Finished network-cleanup.service. Feb 9 19:42:57.915618 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 19:42:57.915626 systemd[1]: Starting systemd-journald.service... Feb 9 19:42:57.915634 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:42:57.915642 systemd[1]: Starting systemd-resolved.service... Feb 9 19:42:57.915650 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 19:42:57.915657 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:42:57.915665 kernel: audit: type=1130 audit(1707507777.904:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:42:57.915678 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 19:42:57.915686 kernel: audit: type=1130 audit(1707507777.908:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:57.915697 systemd-journald[198]: Journal started Feb 9 19:42:57.915744 systemd-journald[198]: Runtime Journal (/run/log/journal/16492c3ebe6048e1bb8fd5905b701e8c) is 6.0M, max 48.4M, 42.4M free. Feb 9 19:42:57.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:57.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:57.908055 systemd-modules-load[199]: Inserted module 'overlay' Feb 9 19:42:57.917202 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 19:42:57.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:57.920876 kernel: audit: type=1130 audit(1707507777.917:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:57.920901 systemd[1]: Started systemd-journald.service. Feb 9 19:42:57.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:57.922433 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 19:42:57.925435 kernel: audit: type=1130 audit(1707507777.920:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:57.922681 systemd-resolved[200]: Positive Trust Anchors: Feb 9 19:42:57.922700 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:42:57.922728 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:42:57.924811 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:42:57.926009 systemd-resolved[200]: Defaulting to hostname 'linux'. Feb 9 19:42:57.926824 systemd[1]: Started systemd-resolved.service. Feb 9 19:42:57.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:57.927942 systemd[1]: Reached target nss-lookup.target. 
Feb 9 19:42:57.930144 kernel: audit: type=1130 audit(1707507777.926:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:57.941127 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:42:57.944414 kernel: audit: type=1130 audit(1707507777.940:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:57.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:57.941651 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:42:57.948547 kernel: audit: type=1130 audit(1707507777.943:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:57.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:57.945671 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:42:57.953874 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 19:42:57.955110 dracut-cmdline[214]: dracut-dracut-053 Feb 9 19:42:57.957015 systemd-modules-load[199]: Inserted module 'br_netfilter' Feb 9 19:42:57.958108 kernel: Bridge firewalling registered Feb 9 19:42:57.958131 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:42:57.975879 kernel: SCSI subsystem initialized Feb 9 19:42:57.986643 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 19:42:57.986692 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:42:57.986702 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:42:57.989381 systemd-modules-load[199]: Inserted module 'dm_multipath' Feb 9 19:42:57.990722 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:42:57.993905 kernel: audit: type=1130 audit(1707507777.990:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:57.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:57.993956 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:42:58.001362 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:42:58.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:42:58.004889 kernel: audit: type=1130 audit(1707507778.001:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:58.015906 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:42:58.026888 kernel: iscsi: registered transport (tcp) Feb 9 19:42:58.047185 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:42:58.047264 kernel: QLogic iSCSI HBA Driver Feb 9 19:42:58.080094 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:42:58.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:58.083395 systemd[1]: Starting dracut-pre-udev.service... Feb 9 19:42:58.128907 kernel: raid6: avx2x4 gen() 29819 MB/s Feb 9 19:42:58.145889 kernel: raid6: avx2x4 xor() 8005 MB/s Feb 9 19:42:58.162888 kernel: raid6: avx2x2 gen() 32501 MB/s Feb 9 19:42:58.179892 kernel: raid6: avx2x2 xor() 19072 MB/s Feb 9 19:42:58.196886 kernel: raid6: avx2x1 gen() 26458 MB/s Feb 9 19:42:58.213891 kernel: raid6: avx2x1 xor() 12601 MB/s Feb 9 19:42:58.230900 kernel: raid6: sse2x4 gen() 13813 MB/s Feb 9 19:42:58.247889 kernel: raid6: sse2x4 xor() 6935 MB/s Feb 9 19:42:58.264888 kernel: raid6: sse2x2 gen() 15295 MB/s Feb 9 19:42:58.281887 kernel: raid6: sse2x2 xor() 9654 MB/s Feb 9 19:42:58.298875 kernel: raid6: sse2x1 gen() 12385 MB/s Feb 9 19:42:58.316084 kernel: raid6: sse2x1 xor() 7368 MB/s Feb 9 19:42:58.316123 kernel: raid6: using algorithm avx2x2 gen() 32501 MB/s Feb 9 19:42:58.316136 kernel: raid6: .... xor() 19072 MB/s, rmw enabled Feb 9 19:42:58.317281 kernel: raid6: using avx2x2 recovery algorithm Feb 9 19:42:58.329887 kernel: xor: automatically using best checksumming function avx Feb 9 19:42:58.422902 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:42:58.431850 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:42:58.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:58.432000 audit: BPF prog-id=7 op=LOAD Feb 9 19:42:58.432000 audit: BPF prog-id=8 op=LOAD Feb 9 19:42:58.433439 systemd[1]: Starting systemd-udevd.service... Feb 9 19:42:58.445687 systemd-udevd[400]: Using default interface naming scheme 'v252'. Feb 9 19:42:58.449891 systemd[1]: Started systemd-udevd.service. Feb 9 19:42:58.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:58.450899 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:42:58.459656 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Feb 9 19:42:58.486879 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:42:58.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:58.488612 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:42:58.527673 systemd[1]: Finished systemd-udev-trigger.service. 
Feb 9 19:42:58.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:58.569032 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 9 19:42:58.569258 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:42:58.583894 kernel: libata version 3.00 loaded. Feb 9 19:42:58.588432 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 19:42:58.588497 kernel: AES CTR mode by8 optimization enabled Feb 9 19:42:58.588509 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 9 19:42:58.589996 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 19:42:58.591140 kernel: GPT:9289727 != 19775487 Feb 9 19:42:58.591176 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 19:42:58.591191 kernel: GPT:9289727 != 19775487 Feb 9 19:42:58.593709 kernel: scsi host0: ata_piix Feb 9 19:42:58.594013 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 19:42:58.594028 kernel: scsi host1: ata_piix Feb 9 19:42:58.594190 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:42:58.594201 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 9 19:42:58.595840 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 9 19:42:58.615947 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (452) Feb 9 19:42:58.614983 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:42:58.620047 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:42:58.633232 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:42:58.633708 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:42:58.640802 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:42:58.641979 systemd[1]: Starting disk-uuid.service... Feb 9 19:42:58.649353 disk-uuid[515]: Primary Header is updated. Feb 9 19:42:58.649353 disk-uuid[515]: Secondary Entries is updated. Feb 9 19:42:58.649353 disk-uuid[515]: Secondary Header is updated. Feb 9 19:42:58.652027 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:42:58.750888 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 9 19:42:58.754737 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 9 19:42:58.786305 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 9 19:42:58.786574 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 19:42:58.803927 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 9 19:42:59.661890 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:42:59.661966 disk-uuid[516]: The operation has completed successfully. Feb 9 19:42:59.687695 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:42:59.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:59.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:59.687778 systemd[1]: Finished disk-uuid.service. Feb 9 19:42:59.690203 systemd[1]: Starting verity-setup.service... 
Feb 9 19:42:59.701885 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 9 19:42:59.720190 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:42:59.722993 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:42:59.724603 systemd[1]: Finished verity-setup.service. Feb 9 19:42:59.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:59.784878 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:42:59.785398 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:42:59.786544 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:42:59.787355 systemd[1]: Starting ignition-setup.service... Feb 9 19:42:59.788522 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:42:59.794953 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:42:59.795006 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:42:59.795017 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:42:59.803470 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:42:59.811196 systemd[1]: Finished ignition-setup.service. Feb 9 19:42:59.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:59.812361 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:42:59.867766 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:42:59.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:59.868000 audit: BPF prog-id=9 op=LOAD Feb 9 19:42:59.869580 systemd[1]: Starting systemd-networkd.service... Feb 9 19:42:59.885941 ignition[625]: Ignition 2.14.0 Feb 9 19:42:59.885970 ignition[625]: Stage: fetch-offline Feb 9 19:42:59.886046 ignition[625]: no configs at "/usr/lib/ignition/base.d" Feb 9 19:42:59.886060 ignition[625]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:42:59.886224 ignition[625]: parsed url from cmdline: "" Feb 9 19:42:59.886229 ignition[625]: no config URL provided Feb 9 19:42:59.886236 ignition[625]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:42:59.886246 ignition[625]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:42:59.886273 ignition[625]: op(1): [started] loading QEMU firmware config module Feb 9 19:42:59.886295 ignition[625]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 9 19:42:59.891944 systemd-networkd[708]: lo: Link UP Feb 9 19:42:59.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:59.892012 ignition[625]: op(1): [finished] loading QEMU firmware config module Feb 9 19:42:59.891951 systemd-networkd[708]: lo: Gained carrier Feb 9 19:42:59.892362 systemd-networkd[708]: Enumeration completed Feb 9 19:42:59.892427 systemd[1]: Started systemd-networkd.service. Feb 9 19:42:59.893223 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:42:59.893341 systemd[1]: Reached target network.target. 
Feb 9 19:42:59.894676 systemd-networkd[708]: eth0: Link UP Feb 9 19:42:59.894679 systemd-networkd[708]: eth0: Gained carrier Feb 9 19:42:59.896079 systemd[1]: Starting iscsiuio.service... Feb 9 19:42:59.913363 systemd[1]: Started iscsiuio.service. Feb 9 19:42:59.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:59.915455 systemd[1]: Starting iscsid.service... Feb 9 19:42:59.918929 iscsid[714]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:42:59.918929 iscsid[714]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 19:42:59.918929 iscsid[714]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:42:59.918929 iscsid[714]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:42:59.918929 iscsid[714]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:42:59.925371 iscsid[714]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:42:59.924928 systemd[1]: Started iscsid.service. Feb 9 19:42:59.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:59.929471 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:42:59.941986 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:42:59.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:59.943493 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:42:59.944807 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:42:59.946119 systemd[1]: Reached target remote-fs.target. Feb 9 19:42:59.947929 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:42:59.955435 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:42:59.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:42:59.971736 ignition[625]: parsing config with SHA512: 94577cc41e363a93ec4c409ac84d1abd520c20f0e16c3bba3ab2d21cd0bbc24006a32e5bf0f3ec0f882a77c0143cd0a81df3848d65514043006bb264249fb2c3 Feb 9 19:42:59.977029 systemd-networkd[708]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 19:43:00.067101 unknown[625]: fetched base config from "system" Feb 9 19:43:00.067119 unknown[625]: fetched user config from "qemu" Feb 9 19:43:00.068931 ignition[625]: fetch-offline: fetch-offline passed Feb 9 19:43:00.069569 ignition[625]: Ignition finished successfully Feb 9 19:43:00.071217 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:43:00.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 19:43:00.071623 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 19:43:00.072429 systemd[1]: Starting ignition-kargs.service... Feb 9 19:43:00.083369 ignition[729]: Ignition 2.14.0 Feb 9 19:43:00.083380 ignition[729]: Stage: kargs Feb 9 19:43:00.083487 ignition[729]: no configs at "/usr/lib/ignition/base.d" Feb 9 19:43:00.083497 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:43:00.085627 ignition[729]: kargs: kargs passed Feb 9 19:43:00.085691 ignition[729]: Ignition finished successfully Feb 9 19:43:00.087868 systemd[1]: Finished ignition-kargs.service. Feb 9 19:43:00.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:00.089050 systemd[1]: Starting ignition-disks.service... Feb 9 19:43:00.098270 ignition[735]: Ignition 2.14.0 Feb 9 19:43:00.098280 ignition[735]: Stage: disks Feb 9 19:43:00.098383 ignition[735]: no configs at "/usr/lib/ignition/base.d" Feb 9 19:43:00.098393 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:43:00.101431 ignition[735]: disks: disks passed Feb 9 19:43:00.101898 ignition[735]: Ignition finished successfully Feb 9 19:43:00.103163 systemd[1]: Finished ignition-disks.service. Feb 9 19:43:00.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:00.103545 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:43:00.104481 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:43:00.104685 systemd[1]: Reached target local-fs.target. Feb 9 19:43:00.106512 systemd[1]: Reached target sysinit.target. Feb 9 19:43:00.107494 systemd[1]: Reached target basic.target. Feb 9 19:43:00.109172 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:43:00.121187 systemd-fsck[743]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 19:43:00.125694 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:43:00.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:00.127246 systemd[1]: Mounting sysroot.mount... Feb 9 19:43:00.133878 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:43:00.133850 systemd[1]: Mounted sysroot.mount. Feb 9 19:43:00.134443 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:43:00.136080 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:43:00.136929 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 19:43:00.136961 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:43:00.136978 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:43:00.139187 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:43:00.141446 systemd[1]: Starting initrd-setup-root.service... 
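[Editor's note] The iscsid warning earlier in the log spells out the expected contents of /etc/iscsi/initiatorname.iscsi. Below is a minimal sketch that writes a syntactically valid entry; in practice the IQN is usually generated with iscsi-iname, and the date/domain used here are placeholders, not values from this system:

    import socket

    def write_initiator_name(path: str = "/etc/iscsi/initiatorname.iscsi") -> str:
        # IQN format per the warning: iqn.yyyy-mm.<reversed domain name>[:identifier]
        iqn = f"iqn.2001-04.com.example:{socket.gethostname()}"
        with open(path, "w") as f:
            f.write(f"InitiatorName={iqn}\n")
        return iqn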
Feb 9 19:43:00.146204 initrd-setup-root[753]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:43:00.150219 initrd-setup-root[761]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:43:00.153773 initrd-setup-root[769]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:43:00.157393 initrd-setup-root[777]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:43:00.181210 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:43:00.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:00.182522 systemd[1]: Starting ignition-mount.service... Feb 9 19:43:00.183682 systemd[1]: Starting sysroot-boot.service... Feb 9 19:43:00.188320 bash[794]: umount: /sysroot/usr/share/oem: not mounted. Feb 9 19:43:00.200951 systemd[1]: Finished sysroot-boot.service. Feb 9 19:43:00.202181 ignition[796]: INFO : Ignition 2.14.0 Feb 9 19:43:00.202181 ignition[796]: INFO : Stage: mount Feb 9 19:43:00.202181 ignition[796]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 19:43:00.202181 ignition[796]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:43:00.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:00.205257 ignition[796]: INFO : mount: mount passed Feb 9 19:43:00.205257 ignition[796]: INFO : Ignition finished successfully Feb 9 19:43:00.207308 systemd[1]: Finished ignition-mount.service. Feb 9 19:43:00.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:00.733984 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:43:00.740874 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (804) Feb 9 19:43:00.740911 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:43:00.742135 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:43:00.742155 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:43:00.745491 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:43:00.747504 systemd[1]: Starting ignition-files.service... 
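[Editor's note] The "cut: ... No such file or directory" entries above are initrd-setup-root probing account files that do not yet exist on a first boot; the real unit is a shell script piping through cut. A rough Python equivalent of one such probe, for illustration only:

    def passwd_entry(path: str, user: str):
        """Return the passwd-style line for `user`, or None if absent."""
        try:
            with open(path) as f:
                for line in f:
                    if line.split(":", 1)[0] == user:
                        return line.rstrip("\n")
        except FileNotFoundError:
            return None  # the condition the log reports for /sysroot/etc/passwd
        return None

    print(passwd_entry("/sysroot/etc/passwd", "core"))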
Feb 9 19:43:00.761459 ignition[824]: INFO : Ignition 2.14.0 Feb 9 19:43:00.761459 ignition[824]: INFO : Stage: files Feb 9 19:43:00.762732 ignition[824]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 19:43:00.762732 ignition[824]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:43:00.764483 ignition[824]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:43:00.765938 ignition[824]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:43:00.765938 ignition[824]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:43:00.769402 ignition[824]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:43:00.770505 ignition[824]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:43:00.772013 unknown[824]: wrote ssh authorized keys file for user: core Feb 9 19:43:00.772884 ignition[824]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:43:00.774349 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:43:00.775733 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 19:43:00.829712 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 19:43:00.924498 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:43:00.926213 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:43:00.926213 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 19:43:01.305084 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 19:43:01.533668 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 19:43:01.536004 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:43:01.536004 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:43:01.536004 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 19:43:01.813244 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 19:43:01.909036 systemd-networkd[708]: eth0: Gained IPv6LL Feb 9 19:43:01.940728 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 19:43:01.942940 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:43:01.942940 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:43:01.942940 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:43:01.942940 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:43:01.942940 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 19:43:02.012851 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 19:43:02.348875 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 19:43:02.351657 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:43:02.351657 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:43:02.351657 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 9 19:43:02.397521 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 19:43:02.718003 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 9 19:43:02.718003 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:43:02.721246 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:43:02.721246 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 19:43:02.767444 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 9 19:43:03.405927 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 19:43:03.408610 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:43:03.408610 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:43:03.408610 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:43:03.408610 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:43:03.408610 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 9 19:43:03.727499 ignition[824]: INFO : files: 
createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 9 19:43:03.830506 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:43:03.831925 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:43:03.831925 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:43:03.831925 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:43:03.831925 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:43:03.831925 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:43:03.831925 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:43:03.831925 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:43:03.831925 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:43:03.831925 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:43:03.831925 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:43:03.831925 ignition[824]: INFO : files: op(11): [started] processing unit "containerd.service" Feb 9 19:43:03.831925 ignition[824]: INFO : files: op(11): op(12): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:43:03.831925 ignition[824]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:43:03.831925 ignition[824]: INFO : files: op(11): [finished] processing unit "containerd.service" Feb 9 19:43:03.831925 ignition[824]: INFO : files: op(13): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:43:03.831925 ignition[824]: INFO : files: op(13): op(14): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:43:03.853022 ignition[824]: INFO : files: op(13): op(14): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:43:03.853022 ignition[824]: INFO : files: op(13): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:43:03.853022 ignition[824]: INFO : files: op(15): [started] processing unit "prepare-critools.service" Feb 9 19:43:03.853022 ignition[824]: INFO : files: op(15): op(16): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:43:03.853022 ignition[824]: INFO : files: op(15): op(16): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:43:03.853022 ignition[824]: INFO : files: op(15): [finished] processing unit "prepare-critools.service" Feb 9 19:43:03.853022 ignition[824]: 
INFO : files: op(17): [started] processing unit "prepare-helm.service" Feb 9 19:43:03.853022 ignition[824]: INFO : files: op(17): op(18): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:43:03.853022 ignition[824]: INFO : files: op(17): op(18): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:43:03.853022 ignition[824]: INFO : files: op(17): [finished] processing unit "prepare-helm.service" Feb 9 19:43:03.853022 ignition[824]: INFO : files: op(19): [started] processing unit "coreos-metadata.service" Feb 9 19:43:03.853022 ignition[824]: INFO : files: op(19): op(1a): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 19:43:03.853022 ignition[824]: INFO : files: op(19): op(1a): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 19:43:03.853022 ignition[824]: INFO : files: op(19): [finished] processing unit "coreos-metadata.service" Feb 9 19:43:03.853022 ignition[824]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:43:03.853022 ignition[824]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:43:03.853022 ignition[824]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:43:03.853022 ignition[824]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:43:03.853022 ignition[824]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-helm.service" Feb 9 19:43:03.875361 ignition[824]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 19:43:03.875361 ignition[824]: INFO : files: op(1e): [started] setting preset to disabled for "coreos-metadata.service" Feb 9 19:43:03.875361 ignition[824]: INFO : files: op(1e): op(1f): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 19:43:03.880994 ignition[824]: INFO : files: op(1e): op(1f): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 19:43:03.882264 ignition[824]: INFO : files: op(1e): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 19:43:03.882264 ignition[824]: INFO : files: createResultFile: createFiles: op(20): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:43:03.882264 ignition[824]: INFO : files: createResultFile: createFiles: op(20): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:43:03.882264 ignition[824]: INFO : files: files passed Feb 9 19:43:03.882264 ignition[824]: INFO : Ignition finished successfully Feb 9 19:43:03.887619 systemd[1]: Finished ignition-files.service. Feb 9 19:43:03.891266 kernel: kauditd_printk_skb: 25 callbacks suppressed Feb 9 19:43:03.891296 kernel: audit: type=1130 audit(1707507783.887:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.891360 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
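[Editor's note] Each "file matches expected sum of:" entry above is Ignition verifying a downloaded file against a SHA512 digest pinned in the config. The same check expressed in Python; the digest below is copied from the op(7) kubeadm entry in the log:

    import hashlib

    def sha512_matches(path: str, expected_hex: str) -> bool:
        h = hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
                h.update(chunk)
        return h.hexdigest() == expected_hex

    expected = ("1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051"
                "ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660")
    print(sha512_matches("/sysroot/opt/bin/kubeadm", expected))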
Feb 9 19:43:03.892706 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:43:03.893459 systemd[1]: Starting ignition-quench.service... Feb 9 19:43:03.896362 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:43:03.897174 systemd[1]: Finished ignition-quench.service. Feb 9 19:43:03.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.899216 initrd-setup-root-after-ignition[849]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 19:43:03.907074 kernel: audit: type=1130 audit(1707507783.897:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.907099 kernel: audit: type=1131 audit(1707507783.897:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.907109 kernel: audit: type=1130 audit(1707507783.901:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.907182 initrd-setup-root-after-ignition[851]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:43:03.901834 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:43:03.902671 systemd[1]: Reached target ignition-complete.target. Feb 9 19:43:03.906063 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:43:03.918576 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:43:03.918658 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:43:03.924919 kernel: audit: type=1130 audit(1707507783.919:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.924935 kernel: audit: type=1131 audit(1707507783.919:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.920049 systemd[1]: Reached target initrd-fs.target. Feb 9 19:43:03.924935 systemd[1]: Reached target initrd.target. 
Feb 9 19:43:03.925537 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:43:03.926342 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:43:03.936435 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:43:03.940064 kernel: audit: type=1130 audit(1707507783.936:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.937761 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:43:03.945776 systemd[1]: Stopped target network.target. Feb 9 19:43:03.946443 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:43:03.947477 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:43:03.948662 systemd[1]: Stopped target timers.target. Feb 9 19:43:03.949779 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:43:03.953913 kernel: audit: type=1131 audit(1707507783.949:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.949887 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:43:03.950968 systemd[1]: Stopped target initrd.target. Feb 9 19:43:03.953995 systemd[1]: Stopped target basic.target. Feb 9 19:43:03.955109 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:43:03.956242 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:43:03.957352 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:43:03.958600 systemd[1]: Stopped target remote-fs.target. Feb 9 19:43:03.959762 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:43:03.960978 systemd[1]: Stopped target sysinit.target. Feb 9 19:43:03.962075 systemd[1]: Stopped target local-fs.target. Feb 9 19:43:03.963194 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:43:03.964484 systemd[1]: Stopped target swap.target. Feb 9 19:43:03.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.965510 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:43:03.971217 kernel: audit: type=1131 audit(1707507783.965:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.965602 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:43:03.974586 kernel: audit: type=1131 audit(1707507783.970:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:43:03.966874 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:43:03.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.969875 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:43:03.969967 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:43:03.971311 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:43:03.971399 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:43:03.974702 systemd[1]: Stopped target paths.target. Feb 9 19:43:03.975768 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:43:03.976916 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:43:03.977841 systemd[1]: Stopped target slices.target. Feb 9 19:43:03.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.979117 systemd[1]: Stopped target sockets.target. Feb 9 19:43:03.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.980267 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:43:03.980336 systemd[1]: Closed iscsid.socket. Feb 9 19:43:03.981275 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:43:03.981341 systemd[1]: Closed iscsiuio.socket. Feb 9 19:43:03.982540 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:43:03.982626 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:43:03.983884 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:43:03.983971 systemd[1]: Stopped ignition-files.service. Feb 9 19:43:03.985707 systemd[1]: Stopping ignition-mount.service... Feb 9 19:43:03.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:03.987598 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:43:03.988897 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:43:03.990764 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:43:03.991974 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:43:03.998279 ignition[864]: INFO : Ignition 2.14.0 Feb 9 19:43:03.998279 ignition[864]: INFO : Stage: umount Feb 9 19:43:03.998279 ignition[864]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 19:43:03.998279 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:43:03.992710 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:43:03.994629 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:43:03.996002 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:43:03.996949 systemd-networkd[708]: eth0: DHCPv6 lease lost Feb 9 19:43:04.009729 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:43:04.010764 systemd[1]: Finished initrd-cleanup.service. 
Feb 9 19:43:03.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.015091 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:43:04.016311 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:43:04.017881 ignition[864]: INFO : umount: umount passed Feb 9 19:43:04.017881 ignition[864]: INFO : Ignition finished successfully Feb 9 19:43:04.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.020402 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:43:04.020606 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:43:04.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.021000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:43:04.021693 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:43:04.021797 systemd[1]: Stopped ignition-mount.service. Feb 9 19:43:04.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.024850 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:43:04.025000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:43:04.024906 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:43:04.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.025880 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:43:04.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.025982 systemd[1]: Stopped ignition-disks.service. Feb 9 19:43:04.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.026637 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:43:04.026669 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:43:04.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.026886 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:43:04.026916 systemd[1]: Stopped ignition-setup.service. 
Feb 9 19:43:04.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.028107 systemd[1]: Stopping network-cleanup.service... Feb 9 19:43:04.030683 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:43:04.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.030729 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:43:04.031085 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:43:04.031125 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:43:04.034508 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:43:04.034561 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:43:04.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.035545 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:43:04.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.038999 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:43:04.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.039945 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:43:04.042281 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:43:04.042371 systemd[1]: Stopped network-cleanup.service. Feb 9 19:43:04.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.043602 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:43:04.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:43:04.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.043687 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:43:04.045179 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:43:04.045281 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:43:04.046934 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:43:04.046964 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:43:04.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:04.048516 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:43:04.048552 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:43:04.050036 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:43:04.050076 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:43:04.051326 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:43:04.051359 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:43:04.051574 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:43:04.051606 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:43:04.051727 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:43:04.051755 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:43:04.052557 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:43:04.052766 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 19:43:04.070000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:43:04.070000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:43:04.052807 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 19:43:04.054511 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:43:04.071000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:43:04.071000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:43:04.071000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:43:04.054544 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:43:04.055871 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:43:04.055927 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:43:04.057635 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 19:43:04.058968 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:43:04.059041 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:43:04.060191 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:43:04.062095 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:43:04.068414 systemd[1]: Switching root. Feb 9 19:43:04.087487 iscsid[714]: iscsid shutting down. Feb 9 19:43:04.088080 systemd-journald[198]: Received SIGTERM from PID 1 (n/a). Feb 9 19:43:04.088111 systemd-journald[198]: Journal stopped Feb 9 19:43:07.828659 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:43:07.828717 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 19:43:07.828736 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:43:07.828750 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:43:07.828762 kernel: SELinux: policy capability open_perms=1 Feb 9 19:43:07.828773 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:43:07.828783 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:43:07.828797 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:43:07.828807 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:43:07.828816 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:43:07.828826 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:43:07.828837 systemd[1]: Successfully loaded SELinux policy in 40.980ms. Feb 9 19:43:07.828872 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.006ms. Feb 9 19:43:07.828886 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:43:07.828904 systemd[1]: Detected virtualization kvm. Feb 9 19:43:07.828914 systemd[1]: Detected architecture x86-64. Feb 9 19:43:07.828924 systemd[1]: Detected first boot. Feb 9 19:43:07.828934 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:43:07.828945 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:43:07.828955 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:43:07.828970 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:43:07.828985 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:43:07.828996 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:43:07.829008 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:43:07.829018 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 19:43:07.829029 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:43:07.829039 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:43:07.829049 systemd[1]: Created slice system-getty.slice. Feb 9 19:43:07.829062 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:43:07.829072 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:43:07.829083 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:43:07.829093 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:43:07.829103 systemd[1]: Created slice user.slice. Feb 9 19:43:07.829114 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:43:07.829124 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:43:07.829135 systemd[1]: Set up automount boot.automount. Feb 9 19:43:07.829146 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:43:07.829156 systemd[1]: Reached target integritysetup.target. Feb 9 19:43:07.829168 systemd[1]: Reached target remote-cryptsetup.target. 
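[Editor's note] The "policy capability ...=1/0" flags printed by the kernel above are also exposed at runtime under /sys/fs/selinux/policy_capabilities/. A small sketch that re-reads them on a running system (assumes an SELinux-enabled kernel with selinuxfs mounted):

    import os

    CAP_DIR = "/sys/fs/selinux/policy_capabilities"

    def policy_capabilities() -> dict:
        caps = {}
        for name in sorted(os.listdir(CAP_DIR)):
            with open(os.path.join(CAP_DIR, name)) as f:
                caps[name] = f.read().strip() == "1"
        return caps

    for name, enabled in policy_capabilities().items():
        print(f"SELinux: policy capability {name}={int(enabled)}")  # mirrors the log format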
Feb 9 19:43:07.829181 systemd[1]: Reached target remote-fs.target. Feb 9 19:43:07.829192 systemd[1]: Reached target slices.target. Feb 9 19:43:07.829202 systemd[1]: Reached target swap.target. Feb 9 19:43:07.829212 systemd[1]: Reached target torcx.target. Feb 9 19:43:07.829222 systemd[1]: Reached target veritysetup.target. Feb 9 19:43:07.829233 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:43:07.829243 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:43:07.829254 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:43:07.829264 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:43:07.829275 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:43:07.829285 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:43:07.829296 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:43:07.829306 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:43:07.829316 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:43:07.829327 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:43:07.829337 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:43:07.829348 systemd[1]: Mounting media.mount... Feb 9 19:43:07.829360 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:43:07.829373 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:43:07.829384 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:43:07.829396 systemd[1]: Mounting tmp.mount... Feb 9 19:43:07.829407 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:43:07.829417 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:43:07.829428 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:43:07.829440 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:43:07.829454 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:43:07.829466 systemd[1]: Starting modprobe@drm.service... Feb 9 19:43:07.829476 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:43:07.829486 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:43:07.829497 systemd[1]: Starting modprobe@loop.service... Feb 9 19:43:07.829509 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:43:07.829520 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 19:43:07.829531 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 19:43:07.829542 systemd[1]: Starting systemd-journald.service... Feb 9 19:43:07.829552 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:43:07.829562 kernel: fuse: init (API version 7.34) Feb 9 19:43:07.829572 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:43:07.829583 kernel: loop: module loaded Feb 9 19:43:07.829594 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:43:07.829606 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:43:07.829616 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:43:07.829627 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:43:07.829638 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:43:07.829648 systemd[1]: Mounted media.mount. Feb 9 19:43:07.829658 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:43:07.829669 systemd[1]: Mounted sys-kernel-tracing.mount. 
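[Editor's note] The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop units being started above are instances of systemd's modprobe@.service template, which simply runs modprobe against the instance name. A hedged Python equivalent (the -abq flags match the stock template to the best of my knowledge):

    import subprocess

    def modprobe_unit(instance: str) -> None:
        # modprobe@.service runs roughly: modprobe -abq %i
        subprocess.run(["modprobe", "-abq", instance], check=False)

    for mod in ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"):
        modprobe_unit(mod)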
Feb 9 19:43:07.829679 systemd[1]: Mounted tmp.mount. Feb 9 19:43:07.829690 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:43:07.829702 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:43:07.829715 systemd-journald[1004]: Journal started Feb 9 19:43:07.829759 systemd-journald[1004]: Runtime Journal (/run/log/journal/16492c3ebe6048e1bb8fd5905b701e8c) is 6.0M, max 48.4M, 42.4M free. Feb 9 19:43:07.752000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:43:07.752000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 19:43:07.826000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:43:07.826000 audit[1004]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe5ac94500 a2=4000 a3=7ffe5ac9459c items=0 ppid=1 pid=1004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:43:07.826000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:43:07.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.831161 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:43:07.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.832877 systemd[1]: Started systemd-journald.service. Feb 9 19:43:07.833317 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:43:07.833528 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:43:07.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.834507 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:43:07.834739 systemd[1]: Finished modprobe@drm.service. Feb 9 19:43:07.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:43:07.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.835509 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:43:07.835755 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:43:07.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.836867 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:43:07.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.837637 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:43:07.837845 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:43:07.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.838616 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:43:07.838878 systemd[1]: Finished modprobe@loop.service. Feb 9 19:43:07.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.840066 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:43:07.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.841266 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:43:07.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.842300 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:43:07.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.843408 systemd[1]: Reached target network-pre.target. Feb 9 19:43:07.845164 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Feb 9 19:43:07.846807 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:43:07.847536 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:43:07.849312 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:43:07.851334 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:43:07.852115 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:43:07.853168 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:43:07.853989 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:43:07.855079 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:43:07.857523 systemd-journald[1004]: Time spent on flushing to /var/log/journal/16492c3ebe6048e1bb8fd5905b701e8c is 23.173ms for 1129 entries. Feb 9 19:43:07.857523 systemd-journald[1004]: System Journal (/var/log/journal/16492c3ebe6048e1bb8fd5905b701e8c) is 8.0M, max 195.6M, 187.6M free. Feb 9 19:43:07.894230 systemd-journald[1004]: Received client request to flush runtime journal. Feb 9 19:43:07.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.856783 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:43:07.860776 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:43:07.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:07.861651 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:43:07.896160 udevadm[1051]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 19:43:07.862873 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:43:07.863571 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:43:07.871137 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:43:07.873070 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:43:07.874015 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:43:07.876643 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:43:07.878463 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:43:07.895092 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:43:07.898352 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
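[Editor's note] Once systemd-journal-flush.service has finished (above), the runtime journal is flushed to /var/log/journal and these messages can be queried offline. A sketch using the python-systemd bindings, assuming that package is installed and the initrd journal was persisted as logged:

    from systemd import journal

    j = journal.Reader()
    j.this_boot()  # restrict to the boot shown in this log
    j.add_match(_SYSTEMD_UNIT="ignition-files.service")
    for entry in j:
        print(entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE"))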
Feb 9 19:43:07.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:08.558275 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:43:08.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:08.560513 systemd[1]: Starting systemd-udevd.service... Feb 9 19:43:08.576366 systemd-udevd[1060]: Using default interface naming scheme 'v252'. Feb 9 19:43:08.588062 systemd[1]: Started systemd-udevd.service. Feb 9 19:43:08.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:08.590524 systemd[1]: Starting systemd-networkd.service... Feb 9 19:43:08.596580 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:43:08.611788 systemd[1]: Found device dev-ttyS0.device. Feb 9 19:43:08.634296 systemd[1]: Started systemd-userdbd.service. Feb 9 19:43:08.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:08.655358 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:43:08.667916 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 19:43:08.671877 kernel: ACPI: button: Power Button [PWRF] Feb 9 19:43:08.683890 systemd-networkd[1069]: lo: Link UP Feb 9 19:43:08.684157 systemd-networkd[1069]: lo: Gained carrier Feb 9 19:43:08.688682 systemd-networkd[1069]: Enumeration completed Feb 9 19:43:08.688885 systemd-networkd[1069]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:43:08.688930 systemd[1]: Started systemd-networkd.service. Feb 9 19:43:08.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:43:08.690665 systemd-networkd[1069]: eth0: Link UP Feb 9 19:43:08.690758 systemd-networkd[1069]: eth0: Gained carrier Feb 9 19:43:08.703073 systemd-networkd[1069]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 19:43:08.685000 audit[1089]: AVC avc: denied { confidentiality } for pid=1089 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:43:08.685000 audit[1089]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556547051460 a1=32194 a2=7f56eb9f5bc5 a3=5 items=108 ppid=1060 pid=1089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:43:08.685000 audit: CWD cwd="/" Feb 9 19:43:08.685000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=1 name=(null) inode=11996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=2 name=(null) inode=11996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=3 name=(null) inode=11997 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=4 name=(null) inode=11996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=5 name=(null) inode=11998 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=6 name=(null) inode=11996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=7 name=(null) inode=11999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=8 name=(null) inode=11999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=9 name=(null) inode=12000 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=10 name=(null) inode=11999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=11 name=(null) inode=12001 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=12 name=(null) inode=11999 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=13 name=(null) inode=12002 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=14 name=(null) inode=11999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=15 name=(null) inode=12003 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=16 name=(null) inode=11999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=17 name=(null) inode=12004 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=18 name=(null) inode=11996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=19 name=(null) inode=12005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=20 name=(null) inode=12005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=21 name=(null) inode=12006 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=22 name=(null) inode=12005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=23 name=(null) inode=12007 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=24 name=(null) inode=12005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=25 name=(null) inode=12008 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=26 name=(null) inode=12005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=27 name=(null) inode=12009 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=28 name=(null) inode=12005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=29 name=(null) inode=12010 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=30 name=(null) inode=11996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=31 name=(null) inode=12011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=32 name=(null) inode=12011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=33 name=(null) inode=12012 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=34 name=(null) inode=12011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=35 name=(null) inode=12013 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=36 name=(null) inode=12011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=37 name=(null) inode=12014 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=38 name=(null) inode=12011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=39 name=(null) inode=12015 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=40 name=(null) inode=12011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=41 name=(null) inode=12016 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=42 name=(null) inode=11996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=43 name=(null) inode=12017 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=44 name=(null) inode=12017 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 
audit: PATH item=45 name=(null) inode=12018 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=46 name=(null) inode=12017 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=47 name=(null) inode=12019 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=48 name=(null) inode=12017 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=49 name=(null) inode=12020 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=50 name=(null) inode=12017 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=51 name=(null) inode=12021 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=52 name=(null) inode=12017 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=53 name=(null) inode=12022 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=55 name=(null) inode=12023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=56 name=(null) inode=12023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=57 name=(null) inode=12024 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=58 name=(null) inode=12023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=59 name=(null) inode=12025 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=60 name=(null) inode=12023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=61 name=(null) inode=12026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=62 name=(null) inode=12026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=63 name=(null) inode=12027 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=64 name=(null) inode=12026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=65 name=(null) inode=12028 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=66 name=(null) inode=12026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=67 name=(null) inode=12029 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=68 name=(null) inode=12026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=69 name=(null) inode=12030 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=70 name=(null) inode=12026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=71 name=(null) inode=12031 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=72 name=(null) inode=12023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=73 name=(null) inode=12032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=74 name=(null) inode=12032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=75 name=(null) inode=12033 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=76 name=(null) inode=12032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=77 name=(null) inode=12034 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=78 name=(null) inode=12032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=79 name=(null) inode=12035 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=80 name=(null) inode=12032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=81 name=(null) inode=12036 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=82 name=(null) inode=12032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=83 name=(null) inode=12037 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=84 name=(null) inode=12023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=85 name=(null) inode=12038 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=86 name=(null) inode=12038 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=87 name=(null) inode=12039 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=88 name=(null) inode=12038 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=89 name=(null) inode=12040 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=90 name=(null) inode=12038 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=91 name=(null) inode=12041 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=92 name=(null) inode=12038 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=93 name=(null) inode=12042 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=94 name=(null) inode=12038 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=95 name=(null) inode=12043 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=96 name=(null) inode=12023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=97 name=(null) inode=12044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=98 name=(null) inode=12044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=99 name=(null) inode=12045 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=100 name=(null) inode=12044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=101 name=(null) inode=12046 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=102 name=(null) inode=12044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=103 name=(null) inode=12047 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=104 name=(null) inode=12044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=105 name=(null) inode=12048 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=106 name=(null) inode=12044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PATH item=107 name=(null) inode=12049 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:08.685000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:43:08.724878 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 9 19:43:08.724931 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Feb 9 19:43:08.730002 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:43:08.779182 kernel: kvm: Nested Virtualization enabled Feb 9 19:43:08.779287 kernel: SVM: kvm: Nested Paging enabled Feb 9 19:43:08.780155 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 9 19:43:08.780189 kernel: SVM: 
Virtual GIF supported Feb 9 19:43:08.794880 kernel: EDAC MC: Ver: 3.0.0 Feb 9 19:43:08.812341 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:43:08.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:08.814220 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:43:08.821114 lvm[1098]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:43:08.854628 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:43:08.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:08.855449 systemd[1]: Reached target cryptsetup.target. Feb 9 19:43:08.857347 systemd[1]: Starting lvm2-activation.service... Feb 9 19:43:08.861520 lvm[1100]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:43:08.892100 systemd[1]: Finished lvm2-activation.service. Feb 9 19:43:08.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:08.892967 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:43:08.896335 kernel: kauditd_printk_skb: 193 callbacks suppressed Feb 9 19:43:08.896381 kernel: audit: type=1130 audit(1707507788.892:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:08.897017 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:43:08.897043 systemd[1]: Reached target local-fs.target. Feb 9 19:43:08.897660 systemd[1]: Reached target machines.target. Feb 9 19:43:08.899511 systemd[1]: Starting ldconfig.service... Feb 9 19:43:08.900318 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:43:08.900375 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:43:08.901369 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:43:08.903289 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:43:08.905741 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:43:08.906554 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:43:08.906604 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:43:08.907605 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:43:08.912439 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1103 (bootctl) Feb 9 19:43:08.913886 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:43:08.918951 systemd-tmpfiles[1106]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:43:08.919823 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Feb 9 19:43:08.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:08.921775 systemd-tmpfiles[1106]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:43:08.925789 systemd-tmpfiles[1106]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:43:08.926045 kernel: audit: type=1130 audit(1707507788.920:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:08.955259 systemd-fsck[1112]: fsck.fat 4.2 (2021-01-31) Feb 9 19:43:08.955259 systemd-fsck[1112]: /dev/vda1: 790 files, 115362/258078 clusters Feb 9 19:43:08.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:08.956570 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:43:08.964943 kernel: audit: type=1130 audit(1707507788.957:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:08.959613 systemd[1]: Mounting boot.mount... Feb 9 19:43:09.026047 systemd[1]: Mounted boot.mount. Feb 9 19:43:09.865778 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:43:09.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:09.869914 kernel: audit: type=1130 audit(1707507789.865:122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:09.887488 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:43:09.888221 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:43:09.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:09.891880 kernel: audit: type=1130 audit(1707507789.888:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:09.897939 ldconfig[1102]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:43:09.903328 systemd[1]: Finished ldconfig.service. Feb 9 19:43:09.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:43:09.906883 kernel: audit: type=1130 audit(1707507789.904:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:09.923103 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:43:09.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:09.925467 systemd[1]: Starting audit-rules.service... Feb 9 19:43:09.926874 kernel: audit: type=1130 audit(1707507789.923:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:09.928441 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:43:09.930384 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:43:09.932602 systemd[1]: Starting systemd-resolved.service... Feb 9 19:43:09.934827 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:43:09.936331 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:43:09.937424 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:43:09.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:09.940265 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:43:09.943333 kernel: audit: type=1130 audit(1707507789.939:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:09.943000 audit[1131]: SYSTEM_BOOT pid=1131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:43:09.946879 kernel: audit: type=1127 audit(1707507789.943:127): pid=1131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:43:09.948619 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:43:09.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:09.951871 kernel: audit: type=1130 audit(1707507789.948:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:09.956456 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:43:09.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:43:09.966620 systemd[1]: Starting systemd-update-done.service... Feb 9 19:43:09.968000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:43:09.968000 audit[1145]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffc0c48ce0 a2=420 a3=0 items=0 ppid=1121 pid=1145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:43:09.968000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:43:09.969882 augenrules[1145]: No rules Feb 9 19:43:09.970395 systemd[1]: Finished audit-rules.service. Feb 9 19:43:09.978888 systemd[1]: Finished systemd-update-done.service. Feb 9 19:43:10.012940 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:43:10.649239 systemd-timesyncd[1128]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 19:43:10.649282 systemd-timesyncd[1128]: Initial clock synchronization to Fri 2024-02-09 19:43:10.649161 UTC. Feb 9 19:43:10.649421 systemd[1]: Reached target time-set.target. Feb 9 19:43:10.651574 systemd-resolved[1126]: Positive Trust Anchors: Feb 9 19:43:10.651598 systemd-resolved[1126]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:43:10.651634 systemd-resolved[1126]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:43:10.660610 systemd-resolved[1126]: Defaulting to hostname 'linux'. Feb 9 19:43:10.662152 systemd[1]: Started systemd-resolved.service. Feb 9 19:43:10.666348 systemd[1]: Reached target network.target. Feb 9 19:43:10.666903 systemd[1]: Reached target nss-lookup.target. Feb 9 19:43:10.667467 systemd[1]: Reached target sysinit.target. Feb 9 19:43:10.668150 systemd[1]: Started motdgen.path. Feb 9 19:43:10.668671 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:43:10.669597 systemd[1]: Started logrotate.timer. Feb 9 19:43:10.670190 systemd[1]: Started mdadm.timer. Feb 9 19:43:10.670671 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:43:10.682339 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:43:10.682359 systemd[1]: Reached target paths.target. Feb 9 19:43:10.682919 systemd[1]: Reached target timers.target. Feb 9 19:43:10.683768 systemd[1]: Listening on dbus.socket. Feb 9 19:43:10.685553 systemd[1]: Starting docker.socket... Feb 9 19:43:10.687022 systemd[1]: Listening on sshd.socket. Feb 9 19:43:10.687620 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:43:10.687868 systemd[1]: Listening on docker.socket. Feb 9 19:43:10.688424 systemd[1]: Reached target sockets.target. Feb 9 19:43:10.689083 systemd[1]: Reached target basic.target. 
Feb 9 19:43:10.689718 systemd[1]: System is tainted: cgroupsv1 Feb 9 19:43:10.689758 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:43:10.689774 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:43:10.690698 systemd[1]: Starting containerd.service... Feb 9 19:43:10.692238 systemd[1]: Starting dbus.service... Feb 9 19:43:10.693938 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:43:10.695493 systemd[1]: Starting extend-filesystems.service... Feb 9 19:43:10.698604 jq[1158]: false Feb 9 19:43:10.710616 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:43:10.712241 systemd[1]: Starting motdgen.service... Feb 9 19:43:10.714010 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:43:10.715809 systemd[1]: Starting prepare-critools.service... Feb 9 19:43:10.718687 systemd[1]: Starting prepare-helm.service... Feb 9 19:43:10.720373 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:43:10.726183 dbus-daemon[1156]: [system] SELinux support is enabled Feb 9 19:43:10.728668 systemd[1]: Starting sshd-keygen.service... Feb 9 19:43:10.732088 extend-filesystems[1159]: Found sr0 Feb 9 19:43:10.732088 extend-filesystems[1159]: Found vda Feb 9 19:43:10.732088 extend-filesystems[1159]: Found vda1 Feb 9 19:43:10.732088 extend-filesystems[1159]: Found vda2 Feb 9 19:43:10.732088 extend-filesystems[1159]: Found vda3 Feb 9 19:43:10.732088 extend-filesystems[1159]: Found usr Feb 9 19:43:10.732088 extend-filesystems[1159]: Found vda4 Feb 9 19:43:10.732088 extend-filesystems[1159]: Found vda6 Feb 9 19:43:10.732088 extend-filesystems[1159]: Found vda7 Feb 9 19:43:10.732088 extend-filesystems[1159]: Found vda9 Feb 9 19:43:10.732088 extend-filesystems[1159]: Checking size of /dev/vda9 Feb 9 19:43:10.731775 systemd[1]: Starting systemd-logind.service... Feb 9 19:43:10.732350 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:43:10.732438 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:43:10.733686 systemd[1]: Starting update-engine.service... Feb 9 19:43:10.789004 jq[1184]: true Feb 9 19:43:10.735324 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:43:10.736723 systemd[1]: Started dbus.service. Feb 9 19:43:10.790833 tar[1191]: ./ Feb 9 19:43:10.790833 tar[1191]: ./macvlan Feb 9 19:43:10.739909 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:43:10.791169 tar[1192]: crictl Feb 9 19:43:10.740164 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:43:10.791429 tar[1193]: linux-amd64/helm Feb 9 19:43:10.741558 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:43:10.791688 jq[1197]: true Feb 9 19:43:10.741757 systemd[1]: Finished motdgen.service. Feb 9 19:43:10.748234 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:43:10.748470 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:43:10.769265 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Feb 9 19:43:10.769294 systemd[1]: Reached target system-config.target. Feb 9 19:43:10.769952 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:43:10.769967 systemd[1]: Reached target user-config.target. Feb 9 19:43:10.795740 update_engine[1183]: I0209 19:43:10.795511 1183 main.cc:92] Flatcar Update Engine starting Feb 9 19:43:10.795935 extend-filesystems[1159]: Resized partition /dev/vda9 Feb 9 19:43:10.797424 systemd[1]: Started update-engine.service. Feb 9 19:43:10.798346 update_engine[1183]: I0209 19:43:10.797473 1183 update_check_scheduler.cc:74] Next update check in 7m37s Feb 9 19:43:10.799355 systemd[1]: Started locksmithd.service. Feb 9 19:43:10.801230 systemd-networkd[1069]: eth0: Gained IPv6LL Feb 9 19:43:10.806899 extend-filesystems[1205]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:43:10.809087 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 19:43:10.818335 tar[1191]: ./static Feb 9 19:43:10.821442 systemd-logind[1179]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 19:43:10.821472 systemd-logind[1179]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:43:10.821632 systemd-logind[1179]: New seat seat0. Feb 9 19:43:10.822772 systemd[1]: Started systemd-logind.service. Feb 9 19:43:10.842089 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 19:43:10.861919 extend-filesystems[1205]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 19:43:10.861919 extend-filesystems[1205]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 19:43:10.861919 extend-filesystems[1205]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 19:43:10.869074 extend-filesystems[1159]: Resized filesystem in /dev/vda9 Feb 9 19:43:10.863411 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:43:10.869862 env[1198]: time="2024-02-09T19:43:10.862074145Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:43:10.863640 systemd[1]: Finished extend-filesystems.service. Feb 9 19:43:10.870208 bash[1224]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:43:10.870837 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:43:10.873132 tar[1191]: ./vlan Feb 9 19:43:10.905908 tar[1191]: ./portmap Feb 9 19:43:10.922102 env[1198]: time="2024-02-09T19:43:10.922017228Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:43:10.939262 env[1198]: time="2024-02-09T19:43:10.939209554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:43:11.000495 env[1198]: time="2024-02-09T19:43:11.000442476Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:43:11.000702 env[1198]: time="2024-02-09T19:43:11.000680943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:43:11.001107 env[1198]: time="2024-02-09T19:43:11.001085641Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:43:11.001200 env[1198]: time="2024-02-09T19:43:11.001179487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:43:11.001289 env[1198]: time="2024-02-09T19:43:11.001267693Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:43:11.001368 env[1198]: time="2024-02-09T19:43:11.001347953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:43:11.001521 env[1198]: time="2024-02-09T19:43:11.001503234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:43:11.001830 env[1198]: time="2024-02-09T19:43:11.001812104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:43:11.002056 env[1198]: time="2024-02-09T19:43:11.002036074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:43:11.002152 env[1198]: time="2024-02-09T19:43:11.002132304Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:43:11.002288 env[1198]: time="2024-02-09T19:43:11.002269000Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:43:11.002370 env[1198]: time="2024-02-09T19:43:11.002348409Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:43:11.004833 tar[1191]: ./host-local Feb 9 19:43:11.039399 tar[1191]: ./vrf Feb 9 19:43:11.086520 tar[1191]: ./bridge Feb 9 19:43:11.099638 env[1198]: time="2024-02-09T19:43:11.099574238Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:43:11.099638 env[1198]: time="2024-02-09T19:43:11.099633388Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:43:11.099638 env[1198]: time="2024-02-09T19:43:11.099649829Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:43:11.099854 env[1198]: time="2024-02-09T19:43:11.099689363Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:43:11.099854 env[1198]: time="2024-02-09T19:43:11.099705033Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:43:11.099854 env[1198]: time="2024-02-09T19:43:11.099717436Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:43:11.099854 env[1198]: time="2024-02-09T19:43:11.099729118Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:43:11.099854 env[1198]: time="2024-02-09T19:43:11.099742593Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 9 19:43:11.099854 env[1198]: time="2024-02-09T19:43:11.099757060Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:43:11.099854 env[1198]: time="2024-02-09T19:43:11.099770045Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:43:11.099854 env[1198]: time="2024-02-09T19:43:11.099781486Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:43:11.099854 env[1198]: time="2024-02-09T19:43:11.099793879Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:43:11.100045 env[1198]: time="2024-02-09T19:43:11.099947988Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:43:11.100045 env[1198]: time="2024-02-09T19:43:11.100017699Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:43:11.100357 env[1198]: time="2024-02-09T19:43:11.100331638Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:43:11.100406 env[1198]: time="2024-02-09T19:43:11.100361474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:43:11.100406 env[1198]: time="2024-02-09T19:43:11.100375560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:43:11.100457 env[1198]: time="2024-02-09T19:43:11.100419763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:43:11.100457 env[1198]: time="2024-02-09T19:43:11.100433248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:43:11.100457 env[1198]: time="2024-02-09T19:43:11.100444589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:43:11.100457 env[1198]: time="2024-02-09T19:43:11.100454698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:43:11.100544 env[1198]: time="2024-02-09T19:43:11.100465869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:43:11.100544 env[1198]: time="2024-02-09T19:43:11.100477702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:43:11.100544 env[1198]: time="2024-02-09T19:43:11.100488903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:43:11.100544 env[1198]: time="2024-02-09T19:43:11.100498901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:43:11.100544 env[1198]: time="2024-02-09T19:43:11.100510463Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:43:11.100654 env[1198]: time="2024-02-09T19:43:11.100628124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:43:11.100654 env[1198]: time="2024-02-09T19:43:11.100641849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 9 19:43:11.100654 env[1198]: time="2024-02-09T19:43:11.100652840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:43:11.100721 env[1198]: time="2024-02-09T19:43:11.100663109Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:43:11.100721 env[1198]: time="2024-02-09T19:43:11.100677396Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:43:11.100721 env[1198]: time="2024-02-09T19:43:11.100686854Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:43:11.100721 env[1198]: time="2024-02-09T19:43:11.100703846Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:43:11.100799 env[1198]: time="2024-02-09T19:43:11.100736777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 19:43:11.100976 env[1198]: time="2024-02-09T19:43:11.100922215Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:43:11.100976 env[1198]: time="2024-02-09T19:43:11.100976737Z" level=info msg="Connect containerd service" Feb 9 19:43:11.101864 env[1198]: time="2024-02-09T19:43:11.101010450Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 
19:43:11.101864 env[1198]: time="2024-02-09T19:43:11.101564800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:43:11.101864 env[1198]: time="2024-02-09T19:43:11.101817664Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:43:11.101864 env[1198]: time="2024-02-09T19:43:11.101851728Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:43:11.102002 systemd[1]: Started containerd.service. Feb 9 19:43:11.102637 env[1198]: time="2024-02-09T19:43:11.102258110Z" level=info msg="containerd successfully booted in 0.250070s" Feb 9 19:43:11.111231 env[1198]: time="2024-02-09T19:43:11.111136958Z" level=info msg="Start subscribing containerd event" Feb 9 19:43:11.111375 env[1198]: time="2024-02-09T19:43:11.111260680Z" level=info msg="Start recovering state" Feb 9 19:43:11.111375 env[1198]: time="2024-02-09T19:43:11.111364224Z" level=info msg="Start event monitor" Feb 9 19:43:11.111440 env[1198]: time="2024-02-09T19:43:11.111377509Z" level=info msg="Start snapshots syncer" Feb 9 19:43:11.111440 env[1198]: time="2024-02-09T19:43:11.111392447Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:43:11.111440 env[1198]: time="2024-02-09T19:43:11.111401975Z" level=info msg="Start streaming server" Feb 9 19:43:11.141008 tar[1191]: ./tuning Feb 9 19:43:11.175292 tar[1191]: ./firewall Feb 9 19:43:11.243470 tar[1191]: ./host-device Feb 9 19:43:11.281104 tar[1191]: ./sbr Feb 9 19:43:11.328363 tar[1191]: ./loopback Feb 9 19:43:11.359588 tar[1191]: ./dhcp Feb 9 19:43:11.388012 systemd[1]: Created slice system-sshd.slice. Feb 9 19:43:11.414270 tar[1193]: linux-amd64/LICENSE Feb 9 19:43:11.414561 tar[1193]: linux-amd64/README.md Feb 9 19:43:11.419046 systemd[1]: Finished prepare-helm.service. Feb 9 19:43:11.437394 locksmithd[1206]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:43:11.461139 tar[1191]: ./ptp Feb 9 19:43:11.469286 systemd[1]: Finished prepare-critools.service. Feb 9 19:43:11.493384 tar[1191]: ./ipvlan Feb 9 19:43:11.520723 tar[1191]: ./bandwidth Feb 9 19:43:11.554311 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:43:11.814271 sshd_keygen[1188]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:43:11.831666 systemd[1]: Finished sshd-keygen.service. Feb 9 19:43:11.833912 systemd[1]: Starting issuegen.service... Feb 9 19:43:11.835408 systemd[1]: Started sshd@0-10.0.0.49:22-10.0.0.1:51760.service. Feb 9 19:43:11.838986 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:43:11.839193 systemd[1]: Finished issuegen.service. Feb 9 19:43:11.841200 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:43:11.846133 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:43:11.848094 systemd[1]: Started getty@tty1.service. Feb 9 19:43:11.849847 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:43:11.850590 systemd[1]: Reached target getty.target. Feb 9 19:43:11.851201 systemd[1]: Reached target multi-user.target. Feb 9 19:43:11.852838 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:43:11.859551 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:43:11.859724 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
Feb 9 19:43:11.860497 systemd[1]: Startup finished in 7.032s (kernel) + 7.092s (userspace) = 14.124s. Feb 9 19:43:11.871638 sshd[1260]: Accepted publickey for core from 10.0.0.1 port 51760 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:43:11.873023 sshd[1260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:43:11.880902 systemd-logind[1179]: New session 1 of user core. Feb 9 19:43:11.881763 systemd[1]: Created slice user-500.slice. Feb 9 19:43:11.882676 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:43:11.890010 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:43:11.891122 systemd[1]: Starting user@500.service... Feb 9 19:43:11.893585 (systemd)[1273]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:43:11.957315 systemd[1273]: Queued start job for default target default.target. Feb 9 19:43:11.957500 systemd[1273]: Reached target paths.target. Feb 9 19:43:11.957515 systemd[1273]: Reached target sockets.target. Feb 9 19:43:11.957527 systemd[1273]: Reached target timers.target. Feb 9 19:43:11.957538 systemd[1273]: Reached target basic.target. Feb 9 19:43:11.957573 systemd[1273]: Reached target default.target. Feb 9 19:43:11.957594 systemd[1273]: Startup finished in 59ms. Feb 9 19:43:11.957672 systemd[1]: Started user@500.service. Feb 9 19:43:11.958549 systemd[1]: Started session-1.scope. Feb 9 19:43:12.008164 systemd[1]: Started sshd@1-10.0.0.49:22-10.0.0.1:51762.service. Feb 9 19:43:12.038796 sshd[1283]: Accepted publickey for core from 10.0.0.1 port 51762 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:43:12.039983 sshd[1283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:43:12.043509 systemd-logind[1179]: New session 2 of user core. Feb 9 19:43:12.044357 systemd[1]: Started session-2.scope. Feb 9 19:43:12.097616 sshd[1283]: pam_unix(sshd:session): session closed for user core Feb 9 19:43:12.099989 systemd[1]: Started sshd@2-10.0.0.49:22-10.0.0.1:51776.service. Feb 9 19:43:12.100497 systemd[1]: sshd@1-10.0.0.49:22-10.0.0.1:51762.service: Deactivated successfully. Feb 9 19:43:12.101664 systemd-logind[1179]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:43:12.101816 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:43:12.102940 systemd-logind[1179]: Removed session 2. Feb 9 19:43:12.129569 sshd[1288]: Accepted publickey for core from 10.0.0.1 port 51776 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:43:12.130546 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:43:12.133778 systemd-logind[1179]: New session 3 of user core. Feb 9 19:43:12.134553 systemd[1]: Started session-3.scope. Feb 9 19:43:12.183682 sshd[1288]: pam_unix(sshd:session): session closed for user core Feb 9 19:43:12.185887 systemd[1]: Started sshd@3-10.0.0.49:22-10.0.0.1:51782.service. Feb 9 19:43:12.186312 systemd[1]: sshd@2-10.0.0.49:22-10.0.0.1:51776.service: Deactivated successfully. Feb 9 19:43:12.187215 systemd-logind[1179]: Session 3 logged out. Waiting for processes to exit. Feb 9 19:43:12.187246 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 19:43:12.188193 systemd-logind[1179]: Removed session 3. 
Feb 9 19:43:12.215536 sshd[1295]: Accepted publickey for core from 10.0.0.1 port 51782 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:43:12.216465 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:43:12.219404 systemd-logind[1179]: New session 4 of user core. Feb 9 19:43:12.220122 systemd[1]: Started session-4.scope. Feb 9 19:43:12.273417 sshd[1295]: pam_unix(sshd:session): session closed for user core Feb 9 19:43:12.275355 systemd[1]: Started sshd@4-10.0.0.49:22-10.0.0.1:51792.service. Feb 9 19:43:12.275758 systemd[1]: sshd@3-10.0.0.49:22-10.0.0.1:51782.service: Deactivated successfully. Feb 9 19:43:12.276505 systemd-logind[1179]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:43:12.276543 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:43:12.277377 systemd-logind[1179]: Removed session 4. Feb 9 19:43:12.304795 sshd[1302]: Accepted publickey for core from 10.0.0.1 port 51792 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:43:12.305752 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:43:12.308914 systemd-logind[1179]: New session 5 of user core. Feb 9 19:43:12.309677 systemd[1]: Started session-5.scope. Feb 9 19:43:12.363159 sudo[1308]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:43:12.363330 sudo[1308]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:43:12.882458 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:43:13.873864 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:43:13.874173 systemd[1]: Reached target network-online.target. Feb 9 19:43:13.875621 systemd[1]: Starting docker.service... 
Feb 9 19:43:13.917059 env[1327]: time="2024-02-09T19:43:13.916995953Z" level=info msg="Starting up" Feb 9 19:43:13.918646 env[1327]: time="2024-02-09T19:43:13.918605340Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:43:13.918646 env[1327]: time="2024-02-09T19:43:13.918633273Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:43:13.918749 env[1327]: time="2024-02-09T19:43:13.918663149Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:43:13.918749 env[1327]: time="2024-02-09T19:43:13.918683938Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:43:13.920438 env[1327]: time="2024-02-09T19:43:13.920403442Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:43:13.920438 env[1327]: time="2024-02-09T19:43:13.920425222Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:43:13.920526 env[1327]: time="2024-02-09T19:43:13.920453846Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:43:13.920526 env[1327]: time="2024-02-09T19:43:13.920476448Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:43:14.558575 env[1327]: time="2024-02-09T19:43:14.558514890Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 19:43:14.558575 env[1327]: time="2024-02-09T19:43:14.558541641Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 19:43:14.558858 env[1327]: time="2024-02-09T19:43:14.558706079Z" level=info msg="Loading containers: start." Feb 9 19:43:14.646088 kernel: Initializing XFRM netlink socket Feb 9 19:43:14.670921 env[1327]: time="2024-02-09T19:43:14.670883351Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 19:43:14.721676 systemd-networkd[1069]: docker0: Link UP Feb 9 19:43:14.731001 env[1327]: time="2024-02-09T19:43:14.730955827Z" level=info msg="Loading containers: done." Feb 9 19:43:14.741661 env[1327]: time="2024-02-09T19:43:14.741607960Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:43:14.741850 env[1327]: time="2024-02-09T19:43:14.741823183Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:43:14.741944 env[1327]: time="2024-02-09T19:43:14.741921257Z" level=info msg="Daemon has completed initialization" Feb 9 19:43:14.757347 systemd[1]: Started docker.service. Feb 9 19:43:14.761319 env[1327]: time="2024-02-09T19:43:14.761270968Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:43:14.778343 systemd[1]: Reloading. 
Feb 9 19:43:14.834906 /usr/lib/systemd/system-generators/torcx-generator[1471]: time="2024-02-09T19:43:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:43:14.834940 /usr/lib/systemd/system-generators/torcx-generator[1471]: time="2024-02-09T19:43:14Z" level=info msg="torcx already run" Feb 9 19:43:14.901695 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:43:14.901713 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:43:14.920440 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:43:14.986148 systemd[1]: Started kubelet.service. Feb 9 19:43:15.045514 kubelet[1515]: E0209 19:43:15.045431 1515 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:43:15.047226 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:43:15.047383 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:43:15.356440 env[1198]: time="2024-02-09T19:43:15.356386590Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 19:43:16.260411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount181346303.mount: Deactivated successfully. 
Feb 9 19:43:18.361418 env[1198]: time="2024-02-09T19:43:18.361343974Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:18.363086 env[1198]: time="2024-02-09T19:43:18.363036127Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:18.365254 env[1198]: time="2024-02-09T19:43:18.365199122Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:18.367083 env[1198]: time="2024-02-09T19:43:18.366799653Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:18.368109 env[1198]: time="2024-02-09T19:43:18.368010063Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 19:43:18.381085 env[1198]: time="2024-02-09T19:43:18.380974651Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 19:43:21.092506 env[1198]: time="2024-02-09T19:43:21.092426874Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:21.094335 env[1198]: time="2024-02-09T19:43:21.094286741Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:21.096190 env[1198]: time="2024-02-09T19:43:21.096156226Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:21.097822 env[1198]: time="2024-02-09T19:43:21.097793797Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:21.098420 env[1198]: time="2024-02-09T19:43:21.098380216Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 9 19:43:21.113336 env[1198]: time="2024-02-09T19:43:21.113299209Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 19:43:22.698388 env[1198]: time="2024-02-09T19:43:22.698315805Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:22.700043 env[1198]: time="2024-02-09T19:43:22.699983392Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:22.702786 env[1198]: 
time="2024-02-09T19:43:22.702733448Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:22.704416 env[1198]: time="2024-02-09T19:43:22.704385466Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:22.705030 env[1198]: time="2024-02-09T19:43:22.704996581Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 9 19:43:22.715314 env[1198]: time="2024-02-09T19:43:22.715277288Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:43:24.382535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount552765010.mount: Deactivated successfully. Feb 9 19:43:24.863589 env[1198]: time="2024-02-09T19:43:24.863520383Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:24.866424 env[1198]: time="2024-02-09T19:43:24.866402056Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:24.868293 env[1198]: time="2024-02-09T19:43:24.868267333Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:24.871174 env[1198]: time="2024-02-09T19:43:24.871124130Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:24.871579 env[1198]: time="2024-02-09T19:43:24.871539178Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:43:24.880705 env[1198]: time="2024-02-09T19:43:24.880671281Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:43:25.178839 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:43:25.179031 systemd[1]: Stopped kubelet.service. Feb 9 19:43:25.180960 systemd[1]: Started kubelet.service. Feb 9 19:43:25.222315 kubelet[1567]: E0209 19:43:25.222223 1567 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:43:25.225646 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:43:25.225809 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:43:25.410781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount433642142.mount: Deactivated successfully. 
Feb 9 19:43:25.417104 env[1198]: time="2024-02-09T19:43:25.417032859Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:25.419252 env[1198]: time="2024-02-09T19:43:25.419212095Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:25.420790 env[1198]: time="2024-02-09T19:43:25.420766519Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:25.422136 env[1198]: time="2024-02-09T19:43:25.422107283Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:25.422614 env[1198]: time="2024-02-09T19:43:25.422586341Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 19:43:25.432001 env[1198]: time="2024-02-09T19:43:25.431909392Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 19:43:26.205978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1108062732.mount: Deactivated successfully. Feb 9 19:43:31.333605 env[1198]: time="2024-02-09T19:43:31.333492669Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:31.335604 env[1198]: time="2024-02-09T19:43:31.335542002Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:31.339086 env[1198]: time="2024-02-09T19:43:31.339034009Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:31.340629 env[1198]: time="2024-02-09T19:43:31.340597230Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:31.341460 env[1198]: time="2024-02-09T19:43:31.341429150Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 9 19:43:31.352047 env[1198]: time="2024-02-09T19:43:31.351998678Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 19:43:32.113271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount850506315.mount: Deactivated successfully. 
Feb 9 19:43:33.543514 env[1198]: time="2024-02-09T19:43:33.543451872Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:33.545309 env[1198]: time="2024-02-09T19:43:33.545272907Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:33.546728 env[1198]: time="2024-02-09T19:43:33.546683461Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:33.547942 env[1198]: time="2024-02-09T19:43:33.547917906Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:33.548379 env[1198]: time="2024-02-09T19:43:33.548342412Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 19:43:35.428981 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 19:43:35.429271 systemd[1]: Stopped kubelet.service. Feb 9 19:43:35.431126 systemd[1]: Started kubelet.service. Feb 9 19:43:35.475331 kubelet[1655]: E0209 19:43:35.475257 1655 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:43:35.477612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:43:35.477792 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:43:35.529695 systemd[1]: Stopped kubelet.service. Feb 9 19:43:35.541820 systemd[1]: Reloading. Feb 9 19:43:35.607656 /usr/lib/systemd/system-generators/torcx-generator[1687]: time="2024-02-09T19:43:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:43:35.608179 /usr/lib/systemd/system-generators/torcx-generator[1687]: time="2024-02-09T19:43:35Z" level=info msg="torcx already run" Feb 9 19:43:35.678262 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:43:35.678285 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:43:35.697111 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:43:35.768258 systemd[1]: Started kubelet.service. Feb 9 19:43:35.808542 kubelet[1734]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 9 19:43:35.808542 kubelet[1734]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:43:35.808835 kubelet[1734]: I0209 19:43:35.808582 1734 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:43:35.811565 kubelet[1734]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:43:35.811565 kubelet[1734]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:43:36.282540 kubelet[1734]: I0209 19:43:36.282492 1734 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:43:36.282540 kubelet[1734]: I0209 19:43:36.282527 1734 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:43:36.282780 kubelet[1734]: I0209 19:43:36.282754 1734 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:43:36.285511 kubelet[1734]: I0209 19:43:36.285484 1734 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:43:36.286228 kubelet[1734]: E0209 19:43:36.286208 1734 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:36.290029 kubelet[1734]: I0209 19:43:36.289999 1734 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:43:36.290372 kubelet[1734]: I0209 19:43:36.290351 1734 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:43:36.290452 kubelet[1734]: I0209 19:43:36.290432 1734 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:43:36.290554 kubelet[1734]: I0209 19:43:36.290457 1734 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:43:36.290554 kubelet[1734]: I0209 19:43:36.290468 1734 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:43:36.290554 kubelet[1734]: I0209 19:43:36.290553 1734 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:43:36.293341 kubelet[1734]: I0209 19:43:36.293318 1734 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:43:36.293458 kubelet[1734]: I0209 19:43:36.293346 1734 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:43:36.293458 kubelet[1734]: I0209 19:43:36.293386 1734 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:43:36.293458 kubelet[1734]: I0209 19:43:36.293412 1734 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:43:36.294400 kubelet[1734]: W0209 19:43:36.294335 1734 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:36.294496 kubelet[1734]: W0209 19:43:36.294449 1734 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:36.294496 kubelet[1734]: I0209 19:43:36.294384 1734 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:43:36.294681 kubelet[1734]: E0209 19:43:36.294666 1734 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:36.294844 kubelet[1734]: W0209 19:43:36.294829 1734 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:43:36.295031 kubelet[1734]: E0209 19:43:36.295013 1734 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:36.295632 kubelet[1734]: I0209 19:43:36.295609 1734 server.go:1186] "Started kubelet" Feb 9 19:43:36.296123 kubelet[1734]: I0209 19:43:36.296105 1734 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:43:36.296735 kubelet[1734]: E0209 19:43:36.296615 1734 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2494fe98038bc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 295577788, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 295577788, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.49:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.49:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:43:36.297386 kubelet[1734]: I0209 19:43:36.297368 1734 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:43:36.299758 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 19:43:36.299844 kubelet[1734]: E0209 19:43:36.299766 1734 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:43:36.299896 kubelet[1734]: E0209 19:43:36.299873 1734 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:43:36.299968 kubelet[1734]: I0209 19:43:36.299944 1734 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:43:36.300789 kubelet[1734]: E0209 19:43:36.300773 1734 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 19:43:36.300962 kubelet[1734]: I0209 19:43:36.300931 1734 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:43:36.301142 kubelet[1734]: E0209 19:43:36.301113 1734 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:36.301394 kubelet[1734]: I0209 19:43:36.301367 1734 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:43:36.301648 kubelet[1734]: W0209 19:43:36.301607 1734 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:36.301648 kubelet[1734]: E0209 19:43:36.301647 1734 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:36.333964 kubelet[1734]: I0209 19:43:36.333925 1734 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:43:36.338558 kubelet[1734]: I0209 19:43:36.338529 1734 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:43:36.338558 kubelet[1734]: I0209 19:43:36.338553 1734 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:43:36.338702 kubelet[1734]: I0209 19:43:36.338568 1734 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:43:36.341392 kubelet[1734]: I0209 19:43:36.341359 1734 policy_none.go:49] "None policy: Start" Feb 9 19:43:36.342026 kubelet[1734]: I0209 19:43:36.341994 1734 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:43:36.342175 kubelet[1734]: I0209 19:43:36.342053 1734 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:43:36.347924 kubelet[1734]: I0209 19:43:36.347892 1734 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:43:36.348187 kubelet[1734]: I0209 19:43:36.348153 1734 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:43:36.348915 kubelet[1734]: E0209 19:43:36.348887 1734 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 19:43:36.354427 kubelet[1734]: I0209 19:43:36.354401 1734 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:43:36.354427 kubelet[1734]: I0209 19:43:36.354430 1734 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:43:36.354508 kubelet[1734]: I0209 19:43:36.354454 1734 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:43:36.354532 kubelet[1734]: E0209 19:43:36.354522 1734 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:43:36.354803 kubelet[1734]: W0209 19:43:36.354779 1734 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:36.354863 kubelet[1734]: E0209 19:43:36.354809 1734 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:36.402866 kubelet[1734]: I0209 19:43:36.402829 1734 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:43:36.403265 kubelet[1734]: E0209 19:43:36.403242 1734 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Feb 9 19:43:36.455409 kubelet[1734]: I0209 19:43:36.455374 1734 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:43:36.456474 kubelet[1734]: I0209 19:43:36.456436 1734 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:43:36.457228 kubelet[1734]: I0209 19:43:36.457203 1734 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:43:36.458102 kubelet[1734]: I0209 19:43:36.458078 1734 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.49:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.49:6443: connect: connection refused" Feb 9 19:43:36.459204 kubelet[1734]: I0209 19:43:36.459181 1734 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.49:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.49:6443: connect: connection refused" Feb 9 19:43:36.459608 kubelet[1734]: I0209 19:43:36.459580 1734 status_manager.go:698] "Failed to get status for pod" podUID=d6c2564eaffda21aa116fb933e78ce0c pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.49:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.49:6443: connect: connection refused" Feb 9 19:43:36.501720 kubelet[1734]: E0209 19:43:36.501661 1734 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:36.502824 kubelet[1734]: I0209 19:43:36.502785 1734 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 9 19:43:36.502972 kubelet[1734]: I0209 19:43:36.502910 1734 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:43:36.503061 kubelet[1734]: I0209 19:43:36.503043 1734 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6c2564eaffda21aa116fb933e78ce0c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d6c2564eaffda21aa116fb933e78ce0c\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:43:36.503110 kubelet[1734]: I0209 19:43:36.503100 1734 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:43:36.503140 kubelet[1734]: I0209 19:43:36.503124 1734 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:43:36.503167 kubelet[1734]: I0209 19:43:36.503154 1734 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:43:36.503193 kubelet[1734]: I0209 19:43:36.503174 1734 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 19:43:36.503193 kubelet[1734]: I0209 19:43:36.503193 1734 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6c2564eaffda21aa116fb933e78ce0c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d6c2564eaffda21aa116fb933e78ce0c\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:43:36.503242 kubelet[1734]: I0209 19:43:36.503212 1734 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6c2564eaffda21aa116fb933e78ce0c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d6c2564eaffda21aa116fb933e78ce0c\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:43:36.605322 kubelet[1734]: I0209 19:43:36.605215 1734 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:43:36.605688 kubelet[1734]: E0209 19:43:36.605668 1734 kubelet_node_status.go:92] "Unable to register node with API server" err="Post 
\"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Feb 9 19:43:36.762629 kubelet[1734]: E0209 19:43:36.762558 1734 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:36.762629 kubelet[1734]: E0209 19:43:36.762588 1734 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:36.763409 env[1198]: time="2024-02-09T19:43:36.763366632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 9 19:43:36.763829 env[1198]: time="2024-02-09T19:43:36.763585682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 9 19:43:36.764673 kubelet[1734]: E0209 19:43:36.764634 1734 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:36.765217 env[1198]: time="2024-02-09T19:43:36.765092748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d6c2564eaffda21aa116fb933e78ce0c,Namespace:kube-system,Attempt:0,}" Feb 9 19:43:36.902708 kubelet[1734]: E0209 19:43:36.902568 1734 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:37.007011 kubelet[1734]: I0209 19:43:37.006972 1734 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:43:37.007576 kubelet[1734]: E0209 19:43:37.007542 1734 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Feb 9 19:43:37.166860 kubelet[1734]: W0209 19:43:37.166566 1734 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:37.166860 kubelet[1734]: E0209 19:43:37.166637 1734 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:37.280350 kubelet[1734]: W0209 19:43:37.280263 1734 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:37.280350 kubelet[1734]: E0209 19:43:37.280342 1734 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:37.703390 kubelet[1734]: 
E0209 19:43:37.703342 1734 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:37.757293 kubelet[1734]: W0209 19:43:37.757192 1734 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:37.757293 kubelet[1734]: E0209 19:43:37.757264 1734 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:37.784574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount967384028.mount: Deactivated successfully. Feb 9 19:43:37.788351 env[1198]: time="2024-02-09T19:43:37.788293529Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:37.790637 env[1198]: time="2024-02-09T19:43:37.790588913Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:37.791594 env[1198]: time="2024-02-09T19:43:37.791565634Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:37.793152 env[1198]: time="2024-02-09T19:43:37.793124287Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:37.793881 kubelet[1734]: W0209 19:43:37.793814 1734 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:37.793940 kubelet[1734]: E0209 19:43:37.793899 1734 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 9 19:43:37.794670 env[1198]: time="2024-02-09T19:43:37.794646872Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:37.796104 env[1198]: time="2024-02-09T19:43:37.796081772Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:37.797967 env[1198]: time="2024-02-09T19:43:37.797937993Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:37.801649 env[1198]: time="2024-02-09T19:43:37.801592855Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:37.803997 env[1198]: time="2024-02-09T19:43:37.803962889Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:37.805594 env[1198]: time="2024-02-09T19:43:37.805566085Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:37.806334 env[1198]: time="2024-02-09T19:43:37.806293078Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:37.807399 env[1198]: time="2024-02-09T19:43:37.807352645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:43:37.809357 kubelet[1734]: I0209 19:43:37.809336 1734 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:43:37.809720 kubelet[1734]: E0209 19:43:37.809685 1734 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Feb 9 19:43:37.824955 env[1198]: time="2024-02-09T19:43:37.824873126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:43:37.824955 env[1198]: time="2024-02-09T19:43:37.824911267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:43:37.824955 env[1198]: time="2024-02-09T19:43:37.824923891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:43:37.825269 env[1198]: time="2024-02-09T19:43:37.825143393Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c34325042cf3ede7a0bc560c2b4745a8e630b9721cc4961eb1501b9d8782bd35 pid=1811 runtime=io.containerd.runc.v2 Feb 9 19:43:37.832263 env[1198]: time="2024-02-09T19:43:37.831982035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:43:37.832263 env[1198]: time="2024-02-09T19:43:37.832112019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:43:37.832263 env[1198]: time="2024-02-09T19:43:37.832133138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:43:37.832448 env[1198]: time="2024-02-09T19:43:37.832321001Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec051b8f5eb1dce456b981d260517e2f18d54e93f228d5b0398dac3fefc34cba pid=1833 runtime=io.containerd.runc.v2 Feb 9 19:43:37.834166 env[1198]: time="2024-02-09T19:43:37.834085179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:43:37.834344 env[1198]: time="2024-02-09T19:43:37.834130884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:43:37.834344 env[1198]: time="2024-02-09T19:43:37.834147706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:43:37.835037 env[1198]: time="2024-02-09T19:43:37.834720339Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/70750a6a7324603f9dcdd99ce35273159753e34f5a2e8e1658f7cee0a8eaae59 pid=1845 runtime=io.containerd.runc.v2 Feb 9 19:43:37.882468 env[1198]: time="2024-02-09T19:43:37.882420878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d6c2564eaffda21aa116fb933e78ce0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec051b8f5eb1dce456b981d260517e2f18d54e93f228d5b0398dac3fefc34cba\"" Feb 9 19:43:37.883346 kubelet[1734]: E0209 19:43:37.883325 1734 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:37.886348 env[1198]: time="2024-02-09T19:43:37.886311753Z" level=info msg="CreateContainer within sandbox \"ec051b8f5eb1dce456b981d260517e2f18d54e93f228d5b0398dac3fefc34cba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 19:43:37.886473 env[1198]: time="2024-02-09T19:43:37.886444132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c34325042cf3ede7a0bc560c2b4745a8e630b9721cc4961eb1501b9d8782bd35\"" Feb 9 19:43:37.886946 kubelet[1734]: E0209 19:43:37.886911 1734 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:37.890984 env[1198]: time="2024-02-09T19:43:37.890910887Z" level=info msg="CreateContainer within sandbox \"c34325042cf3ede7a0bc560c2b4745a8e630b9721cc4961eb1501b9d8782bd35\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 19:43:37.897868 env[1198]: time="2024-02-09T19:43:37.897825952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"70750a6a7324603f9dcdd99ce35273159753e34f5a2e8e1658f7cee0a8eaae59\"" Feb 9 19:43:37.898699 kubelet[1734]: E0209 19:43:37.898663 1734 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:37.900654 env[1198]: time="2024-02-09T19:43:37.900622436Z" level=info msg="CreateContainer within sandbox 
\"70750a6a7324603f9dcdd99ce35273159753e34f5a2e8e1658f7cee0a8eaae59\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 19:43:37.921041 env[1198]: time="2024-02-09T19:43:37.920992740Z" level=info msg="CreateContainer within sandbox \"c34325042cf3ede7a0bc560c2b4745a8e630b9721cc4961eb1501b9d8782bd35\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7064c6b28314c5b5dae658c9466c3362035772400bb42504d7c26af25249e416\"" Feb 9 19:43:37.921710 env[1198]: time="2024-02-09T19:43:37.921677905Z" level=info msg="StartContainer for \"7064c6b28314c5b5dae658c9466c3362035772400bb42504d7c26af25249e416\"" Feb 9 19:43:37.921992 env[1198]: time="2024-02-09T19:43:37.921743338Z" level=info msg="CreateContainer within sandbox \"ec051b8f5eb1dce456b981d260517e2f18d54e93f228d5b0398dac3fefc34cba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a68b3a4219a08004c7367d1cb8bcacc76598053429c4c47501f3e461004e2596\"" Feb 9 19:43:37.922546 env[1198]: time="2024-02-09T19:43:37.922512319Z" level=info msg="StartContainer for \"a68b3a4219a08004c7367d1cb8bcacc76598053429c4c47501f3e461004e2596\"" Feb 9 19:43:37.930834 env[1198]: time="2024-02-09T19:43:37.930787646Z" level=info msg="CreateContainer within sandbox \"70750a6a7324603f9dcdd99ce35273159753e34f5a2e8e1658f7cee0a8eaae59\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"de48331966b35d13e3841c95c42dfdb5ef5b7e8ea765b963eba43b3599debb85\"" Feb 9 19:43:37.931588 env[1198]: time="2024-02-09T19:43:37.931565995Z" level=info msg="StartContainer for \"de48331966b35d13e3841c95c42dfdb5ef5b7e8ea765b963eba43b3599debb85\"" Feb 9 19:43:37.987386 env[1198]: time="2024-02-09T19:43:37.986245148Z" level=info msg="StartContainer for \"a68b3a4219a08004c7367d1cb8bcacc76598053429c4c47501f3e461004e2596\" returns successfully" Feb 9 19:43:38.001416 env[1198]: time="2024-02-09T19:43:37.999236998Z" level=info msg="StartContainer for \"7064c6b28314c5b5dae658c9466c3362035772400bb42504d7c26af25249e416\" returns successfully" Feb 9 19:43:38.005206 env[1198]: time="2024-02-09T19:43:38.005073842Z" level=info msg="StartContainer for \"de48331966b35d13e3841c95c42dfdb5ef5b7e8ea765b963eba43b3599debb85\" returns successfully" Feb 9 19:43:38.361152 kubelet[1734]: E0209 19:43:38.360631 1734 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:38.363213 kubelet[1734]: E0209 19:43:38.363158 1734 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:38.364745 kubelet[1734]: E0209 19:43:38.364731 1734 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:39.333837 kubelet[1734]: E0209 19:43:39.333766 1734 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 9 19:43:39.366812 kubelet[1734]: E0209 19:43:39.366772 1734 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:39.366812 kubelet[1734]: E0209 19:43:39.366776 1734 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:39.367273 kubelet[1734]: E0209 19:43:39.366850 1734 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:39.411000 kubelet[1734]: I0209 19:43:39.410963 1734 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:43:39.697953 kubelet[1734]: I0209 19:43:39.697824 1734 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 19:43:39.704420 kubelet[1734]: E0209 19:43:39.704380 1734 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 19:43:39.805117 kubelet[1734]: E0209 19:43:39.805076 1734 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 19:43:39.853816 kubelet[1734]: E0209 19:43:39.853726 1734 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2494fe98038bc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 295577788, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 295577788, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
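[Note] The earlier "Unable to register node ... connection refused" error resolves here: the kubelet's own static kube-apiserver pod was not yet serving on 10.0.0.49:6443 when registration was first attempted, and the kubelet simply retries until the port accepts connections, hence "Attempting to register node" followed by "Successfully registered node". A minimal sketch of that wait-for-port pattern, using the endpoint from the log (the poll interval is an arbitrary choice here):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Poll the API server's TCP port the way the kubelet's retry loop
// effectively does while the static kube-apiserver pod is starting up.
func main() {
	for {
		conn, err := net.DialTimeout("tcp", "10.0.0.49:6443", 2*time.Second)
		if err != nil {
			// e.g. "connect: connection refused", as in the log above
			fmt.Println("apiserver not ready:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		conn.Close()
		fmt.Println("apiserver is accepting connections")
		return
	}
}
```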
Feb 9 19:43:39.905632 kubelet[1734]: E0209 19:43:39.905581 1734 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 19:43:39.906872 kubelet[1734]: E0209 19:43:39.906805 1734 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2494fe9c059e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 299780578, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 299780578, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:43:39.959658 kubelet[1734]: E0209 19:43:39.959521 1734 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2494fec05a15f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 337875295, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 337875295, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:43:40.005823 kubelet[1734]: E0209 19:43:40.005781 1734 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 19:43:40.012287 kubelet[1734]: E0209 19:43:40.012169 1734 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2494fec05cf0b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 337886987, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 337886987, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:43:40.065954 kubelet[1734]: E0209 19:43:40.065843 1734 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2494fec05e275", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 337891957, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 337891957, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:43:40.106303 kubelet[1734]: E0209 19:43:40.106271 1734 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 19:43:40.118651 kubelet[1734]: E0209 19:43:40.118586 1734 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2494fecab930f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 348750607, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 348750607, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:43:40.173284 kubelet[1734]: E0209 19:43:40.173166 1734 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2494fec05a15f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 337875295, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 402780402, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:43:40.206740 kubelet[1734]: E0209 19:43:40.206661 1734 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 19:43:40.229231 kubelet[1734]: E0209 19:43:40.229033 1734 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2494fec05cf0b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 337886987, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 402795290, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:43:40.283297 kubelet[1734]: E0209 19:43:40.283208 1734 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2494fec05e275", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 337891957, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 43, 36, 402799348, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
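[Note] The run of "Server rejected event ... namespaces \"default\" not found (will not retry!)" entries above is a first-boot ordering artifact: the kubelet records node lifecycle events (Starting, InvalidDiskCapacity, NodeHasSufficientMemory, and so on) in the "default" namespace, but the freshly started API server has not created that namespace yet, so each POST fails with NotFound and the event recorder drops the event rather than retrying. A hedged client-go sketch of the same call and failure mode (the kubeconfig path is an assumption, not taken from this log):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Shaped like the "Starting" event the kubelet tried to post above.
	ev := &corev1.Event{
		ObjectMeta:     metav1.ObjectMeta{GenerateName: "localhost.", Namespace: "default"},
		InvolvedObject: corev1.ObjectReference{Kind: "Node", Name: "localhost", UID: "localhost"},
		Reason:         "Starting",
		Message:        "Starting kubelet.",
		Type:           corev1.EventTypeNormal,
		Source:         corev1.EventSource{Component: "kubelet", Host: "localhost"},
	}
	_, err = cs.CoreV1().Events("default").Create(context.TODO(), ev, metav1.CreateOptions{})
	if apierrors.IsNotFound(err) {
		// Matches the log: the "default" namespace does not exist yet.
		fmt.Println("rejected:", err)
	}
}
```

Once the API server finishes bootstrapping and the default namespace exists, these rejections stop, which is exactly what the later entries show.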
Feb 9 19:43:40.307813 kubelet[1734]: E0209 19:43:40.307771 1734 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 19:43:40.368172 kubelet[1734]: E0209 19:43:40.368133 1734 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:40.368670 kubelet[1734]: E0209 19:43:40.368622 1734 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:40.408892 kubelet[1734]: E0209 19:43:40.408848 1734 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 19:43:40.510038 kubelet[1734]: E0209 19:43:40.509906 1734 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 19:43:40.610922 kubelet[1734]: E0209 19:43:40.610841 1734 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 19:43:41.296194 kubelet[1734]: I0209 19:43:41.296145 1734 apiserver.go:52] "Watching apiserver" Feb 9 19:43:41.301972 kubelet[1734]: I0209 19:43:41.301945 1734 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:43:41.331789 kubelet[1734]: I0209 19:43:41.331751 1734 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:43:41.839661 systemd[1]: Reloading. Feb 9 19:43:41.904381 /usr/lib/systemd/system-generators/torcx-generator[2075]: time="2024-02-09T19:43:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:43:41.904409 /usr/lib/systemd/system-generators/torcx-generator[2075]: time="2024-02-09T19:43:41Z" level=info msg="torcx already run" Feb 9 19:43:41.965393 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:43:41.965408 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:43:41.984134 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:43:42.060134 systemd[1]: Stopping kubelet.service... Feb 9 19:43:42.079549 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:43:42.079868 systemd[1]: Stopped kubelet.service. Feb 9 19:43:42.081695 systemd[1]: Started kubelet.service. Feb 9 19:43:42.142521 kubelet[2124]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:43:42.142521 kubelet[2124]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
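[Note] The restarted kubelet (pid 2124) warns that --pod-infra-container-image and --volume-plugin-dir are deprecated: sandbox-image selection has moved to the CRI runtime (containerd's CRI plugin config, where the pause image is set), and --volume-plugin-dir is meant to move into the file passed via --config. A sketch of the equivalent config-file stanza, built with the public KubeletConfiguration type and marshalled to YAML; the directory value shown is the upstream default, assumed for illustration rather than read from this log:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// Replaces the deprecated --volume-plugin-dir flag; this is the
		// upstream default path, used here only as an example value.
		VolumePluginDir: "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/",
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // prints the kubelet config file stanza
}
```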
Feb 9 19:43:42.142521 kubelet[2124]: I0209 19:43:42.142481 2124 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:43:42.143905 kubelet[2124]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:43:42.143905 kubelet[2124]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:43:42.147023 kubelet[2124]: I0209 19:43:42.146986 2124 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:43:42.147023 kubelet[2124]: I0209 19:43:42.147023 2124 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:43:42.148034 kubelet[2124]: I0209 19:43:42.148008 2124 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:43:42.150820 kubelet[2124]: I0209 19:43:42.150795 2124 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 19:43:42.151612 kubelet[2124]: I0209 19:43:42.151595 2124 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:43:42.155831 kubelet[2124]: I0209 19:43:42.155812 2124 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:43:42.156262 kubelet[2124]: I0209 19:43:42.156245 2124 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:43:42.156319 kubelet[2124]: I0209 19:43:42.156311 2124 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:43:42.156399 kubelet[2124]: I0209 19:43:42.156332 2124 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:43:42.156399 kubelet[2124]: I0209 19:43:42.156342 2124 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:43:42.156399 kubelet[2124]: I0209 19:43:42.156371 2124 state_mem.go:36] "Initialized new in-memory state 
store" Feb 9 19:43:42.159237 kubelet[2124]: I0209 19:43:42.159214 2124 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:43:42.159237 kubelet[2124]: I0209 19:43:42.159238 2124 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:43:42.159361 kubelet[2124]: I0209 19:43:42.159269 2124 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:43:42.159361 kubelet[2124]: I0209 19:43:42.159305 2124 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:43:42.160164 kubelet[2124]: I0209 19:43:42.159894 2124 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:43:42.170264 kubelet[2124]: I0209 19:43:42.162637 2124 server.go:1186] "Started kubelet" Feb 9 19:43:42.170264 kubelet[2124]: I0209 19:43:42.162920 2124 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:43:42.170264 kubelet[2124]: I0209 19:43:42.163580 2124 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:43:42.170766 kubelet[2124]: I0209 19:43:42.170484 2124 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:43:42.170968 kubelet[2124]: E0209 19:43:42.170923 2124 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:43:42.170968 kubelet[2124]: E0209 19:43:42.170972 2124 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:43:42.177120 kubelet[2124]: I0209 19:43:42.177086 2124 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:43:42.177387 kubelet[2124]: I0209 19:43:42.177329 2124 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:43:42.207763 kubelet[2124]: I0209 19:43:42.207731 2124 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:43:42.221456 kubelet[2124]: I0209 19:43:42.221416 2124 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:43:42.221603 kubelet[2124]: I0209 19:43:42.221479 2124 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:43:42.221603 kubelet[2124]: I0209 19:43:42.221506 2124 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:43:42.222235 kubelet[2124]: E0209 19:43:42.222195 2124 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 19:43:42.225195 sudo[2174]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 19:43:42.225376 sudo[2174]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 19:43:42.244737 kubelet[2124]: I0209 19:43:42.244701 2124 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:43:42.244737 kubelet[2124]: I0209 19:43:42.244724 2124 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:43:42.244922 kubelet[2124]: I0209 19:43:42.244749 2124 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:43:42.244922 kubelet[2124]: I0209 19:43:42.244914 2124 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 19:43:42.244993 kubelet[2124]: I0209 19:43:42.244930 2124 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 19:43:42.244993 kubelet[2124]: I0209 19:43:42.244938 2124 policy_none.go:49] "None policy: Start" Feb 9 19:43:42.245512 kubelet[2124]: I0209 19:43:42.245496 2124 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:43:42.245608 kubelet[2124]: I0209 19:43:42.245591 2124 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:43:42.245787 kubelet[2124]: I0209 19:43:42.245774 2124 state_mem.go:75] "Updated machine memory state" Feb 9 19:43:42.247563 kubelet[2124]: I0209 19:43:42.247547 2124 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:43:42.250251 kubelet[2124]: I0209 19:43:42.249916 2124 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:43:42.322814 kubelet[2124]: I0209 19:43:42.322761 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:43:42.322991 kubelet[2124]: I0209 19:43:42.322889 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:43:42.322991 kubelet[2124]: I0209 19:43:42.322929 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:43:42.358058 kubelet[2124]: I0209 19:43:42.358028 2124 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:43:42.479031 kubelet[2124]: I0209 19:43:42.478968 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6c2564eaffda21aa116fb933e78ce0c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d6c2564eaffda21aa116fb933e78ce0c\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:43:42.479252 kubelet[2124]: I0209 19:43:42.479117 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:43:42.479252 kubelet[2124]: I0209 19:43:42.479174 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:43:42.479354 kubelet[2124]: I0209 19:43:42.479261 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6c2564eaffda21aa116fb933e78ce0c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d6c2564eaffda21aa116fb933e78ce0c\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:43:42.479354 kubelet[2124]: I0209 19:43:42.479325 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6c2564eaffda21aa116fb933e78ce0c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d6c2564eaffda21aa116fb933e78ce0c\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:43:42.479427 kubelet[2124]: I0209 19:43:42.479362 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:43:42.479427 kubelet[2124]: I0209 19:43:42.479394 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:43:42.479501 kubelet[2124]: I0209 19:43:42.479431 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:43:42.479501 kubelet[2124]: I0209 19:43:42.479472 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 19:43:42.630956 kubelet[2124]: E0209 19:43:42.630920 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:42.630956 kubelet[2124]: E0209 19:43:42.630986 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:42.631232 kubelet[2124]: E0209 19:43:42.631101 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:42.715627 sudo[2174]: pam_unix(sudo:session): session closed for user root Feb 9 19:43:42.962460 kubelet[2124]: I0209 19:43:42.962420 2124 kubelet_node_status.go:108] "Node was previously registered" node="localhost" 
Feb 9 19:43:42.962689 kubelet[2124]: I0209 19:43:42.962516 2124 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 19:43:43.160430 kubelet[2124]: I0209 19:43:43.160373 2124 apiserver.go:52] "Watching apiserver" Feb 9 19:43:43.177814 kubelet[2124]: I0209 19:43:43.177779 2124 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:43:43.182931 kubelet[2124]: I0209 19:43:43.182905 2124 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:43:43.232035 kubelet[2124]: E0209 19:43:43.231908 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:43.570140 kubelet[2124]: E0209 19:43:43.569992 2124 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 19:43:43.570772 kubelet[2124]: E0209 19:43:43.570754 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:43.797365 kubelet[2124]: E0209 19:43:43.797307 2124 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 19:43:43.797668 kubelet[2124]: E0209 19:43:43.797614 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:44.233354 kubelet[2124]: E0209 19:43:44.233309 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:44.234272 kubelet[2124]: E0209 19:43:44.233552 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:44.234712 kubelet[2124]: E0209 19:43:44.234681 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:44.239724 sudo[1308]: pam_unix(sudo:session): session closed for user root Feb 9 19:43:44.241366 sshd[1302]: pam_unix(sshd:session): session closed for user core Feb 9 19:43:44.244751 systemd[1]: sshd@4-10.0.0.49:22-10.0.0.1:51792.service: Deactivated successfully. Feb 9 19:43:44.246164 systemd-logind[1179]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:43:44.246180 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:43:44.246939 systemd-logind[1179]: Removed session 5. 
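[Note] The "Failed creating a mirror pod for ... already exists" errors above are expected after a kubelet restart: static pods exist only in the kubelet's manifest directory, and on startup the kubelet re-creates their API-side mirror pods; the mirrors left behind by the previous kubelet (pid 1734) are still present, so the create is rejected and the existing object is reused. A hedged sketch of that benign error path (names and the kubeconfig path are illustrative, not the kubelet's actual code):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	mirror := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "kube-scheduler-localhost", Namespace: "kube-system"},
		Spec: corev1.PodSpec{Containers: []corev1.Container{
			{Name: "kube-scheduler", Image: "registry.k8s.io/kube-scheduler:v1.26.5"}, // illustrative
		}},
	}
	_, err = cs.CoreV1().Pods("kube-system").Create(context.TODO(), mirror, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		// The condition logged above; the surviving mirror pod is reused.
		fmt.Println("mirror pod already exists, reusing it")
	}
}
```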
Feb 9 19:43:44.364503 kubelet[2124]: I0209 19:43:44.364459 2124 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.364366434 pod.CreationTimestamp="2024-02-09 19:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:43:44.102240324 +0000 UTC m=+2.017201540" watchObservedRunningTime="2024-02-09 19:43:44.364366434 +0000 UTC m=+2.279327620" Feb 9 19:43:44.765209 kubelet[2124]: I0209 19:43:44.765145 2124 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.765106818 pod.CreationTimestamp="2024-02-09 19:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:43:44.364757662 +0000 UTC m=+2.279718859" watchObservedRunningTime="2024-02-09 19:43:44.765106818 +0000 UTC m=+2.680068014" Feb 9 19:43:44.765209 kubelet[2124]: I0209 19:43:44.765207 2124 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.765194115 pod.CreationTimestamp="2024-02-09 19:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:43:44.765102841 +0000 UTC m=+2.680064057" watchObservedRunningTime="2024-02-09 19:43:44.765194115 +0000 UTC m=+2.680155311" Feb 9 19:43:50.839952 kubelet[2124]: E0209 19:43:50.839902 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:51.099569 kubelet[2124]: E0209 19:43:51.099021 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:51.241720 kubelet[2124]: E0209 19:43:51.241690 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:51.242193 kubelet[2124]: E0209 19:43:51.242133 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:52.106198 kubelet[2124]: E0209 19:43:52.106157 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:52.243365 kubelet[2124]: E0209 19:43:52.243335 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:52.243863 kubelet[2124]: E0209 19:43:52.243828 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:55.740424 update_engine[1183]: I0209 19:43:55.740371 1183 update_attempter.cc:509] Updating boot flags... 
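[Note] The pod_startup_latency_tracker entries are plain arithmetic: podStartSLOduration is observedRunningTime minus the pod's CreationTimestamp, and the zero-valued firstStartedPulling/lastFinishedPulling fields mean no pull time was subtracted because the control-plane images were already on disk. For kube-apiserver-localhost: 19:43:44.364366434 minus 19:43:42 equals 2.364366434 s, exactly the logged value. A quick check:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the log entries above.
	created, _ := time.Parse(time.RFC3339, "2024-02-09T19:43:42Z")
	running, _ := time.Parse(time.RFC3339Nano, "2024-02-09T19:43:44.364366434Z")
	// Prints podStartSLOduration=2.364366434, matching the log.
	fmt.Printf("podStartSLOduration=%.9f\n", running.Sub(created).Seconds())
}
```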
Feb 9 19:43:55.825160 kubelet[2124]: I0209 19:43:55.824746 2124 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 19:43:55.825573 env[1198]: time="2024-02-09T19:43:55.825049136Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:43:55.825871 kubelet[2124]: I0209 19:43:55.825256 2124 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 19:43:56.719514 kubelet[2124]: I0209 19:43:56.719466 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:43:56.725654 kubelet[2124]: I0209 19:43:56.725610 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:43:56.773407 kubelet[2124]: I0209 19:43:56.773355 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpkrn\" (UniqueName: \"kubernetes.io/projected/07902f82-ebb8-4f91-82de-20a3499783fe-kube-api-access-dpkrn\") pod \"kube-proxy-nwrg8\" (UID: \"07902f82-ebb8-4f91-82de-20a3499783fe\") " pod="kube-system/kube-proxy-nwrg8" Feb 9 19:43:56.773407 kubelet[2124]: I0209 19:43:56.773405 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-hostproc\") pod \"cilium-w44bs\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " pod="kube-system/cilium-w44bs" Feb 9 19:43:56.773633 kubelet[2124]: I0209 19:43:56.773435 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0892f3e6-1c32-4bbd-96da-da7da388feac-hubble-tls\") pod \"cilium-w44bs\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " pod="kube-system/cilium-w44bs" Feb 9 19:43:56.773633 kubelet[2124]: I0209 19:43:56.773453 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07902f82-ebb8-4f91-82de-20a3499783fe-xtables-lock\") pod \"kube-proxy-nwrg8\" (UID: \"07902f82-ebb8-4f91-82de-20a3499783fe\") " pod="kube-system/kube-proxy-nwrg8" Feb 9 19:43:56.773633 kubelet[2124]: I0209 19:43:56.773470 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-cilium-cgroup\") pod \"cilium-w44bs\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " pod="kube-system/cilium-w44bs" Feb 9 19:43:56.773633 kubelet[2124]: I0209 19:43:56.773486 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-lib-modules\") pod \"cilium-w44bs\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " pod="kube-system/cilium-w44bs" Feb 9 19:43:56.773633 kubelet[2124]: I0209 19:43:56.773504 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-bpf-maps\") pod \"cilium-w44bs\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " pod="kube-system/cilium-w44bs" Feb 9 19:43:56.773633 kubelet[2124]: I0209 19:43:56.773525 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tzpn\" (UniqueName: 
\"kubernetes.io/projected/0892f3e6-1c32-4bbd-96da-da7da388feac-kube-api-access-4tzpn\") pod \"cilium-w44bs\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " pod="kube-system/cilium-w44bs" Feb 9 19:43:56.773786 kubelet[2124]: I0209 19:43:56.773559 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/07902f82-ebb8-4f91-82de-20a3499783fe-kube-proxy\") pod \"kube-proxy-nwrg8\" (UID: \"07902f82-ebb8-4f91-82de-20a3499783fe\") " pod="kube-system/kube-proxy-nwrg8" Feb 9 19:43:56.773786 kubelet[2124]: I0209 19:43:56.773589 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-cilium-run\") pod \"cilium-w44bs\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " pod="kube-system/cilium-w44bs" Feb 9 19:43:56.773786 kubelet[2124]: I0209 19:43:56.773606 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0892f3e6-1c32-4bbd-96da-da7da388feac-clustermesh-secrets\") pod \"cilium-w44bs\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " pod="kube-system/cilium-w44bs" Feb 9 19:43:56.773786 kubelet[2124]: I0209 19:43:56.773624 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0892f3e6-1c32-4bbd-96da-da7da388feac-cilium-config-path\") pod \"cilium-w44bs\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " pod="kube-system/cilium-w44bs" Feb 9 19:43:56.773786 kubelet[2124]: I0209 19:43:56.773645 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-host-proc-sys-kernel\") pod \"cilium-w44bs\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " pod="kube-system/cilium-w44bs" Feb 9 19:43:56.773786 kubelet[2124]: I0209 19:43:56.773673 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-etc-cni-netd\") pod \"cilium-w44bs\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " pod="kube-system/cilium-w44bs" Feb 9 19:43:56.773927 kubelet[2124]: I0209 19:43:56.773698 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07902f82-ebb8-4f91-82de-20a3499783fe-lib-modules\") pod \"kube-proxy-nwrg8\" (UID: \"07902f82-ebb8-4f91-82de-20a3499783fe\") " pod="kube-system/kube-proxy-nwrg8" Feb 9 19:43:56.773927 kubelet[2124]: I0209 19:43:56.773718 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-cni-path\") pod \"cilium-w44bs\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " pod="kube-system/cilium-w44bs" Feb 9 19:43:56.773927 kubelet[2124]: I0209 19:43:56.773735 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-host-proc-sys-net\") pod \"cilium-w44bs\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " 
pod="kube-system/cilium-w44bs" Feb 9 19:43:56.773927 kubelet[2124]: I0209 19:43:56.773751 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-xtables-lock\") pod \"cilium-w44bs\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " pod="kube-system/cilium-w44bs" Feb 9 19:43:57.012989 kubelet[2124]: I0209 19:43:57.012685 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:43:57.024388 kubelet[2124]: E0209 19:43:57.024344 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:57.025036 env[1198]: time="2024-02-09T19:43:57.024983849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nwrg8,Uid:07902f82-ebb8-4f91-82de-20a3499783fe,Namespace:kube-system,Attempt:0,}" Feb 9 19:43:57.030182 kubelet[2124]: E0209 19:43:57.030163 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:57.030607 env[1198]: time="2024-02-09T19:43:57.030570676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w44bs,Uid:0892f3e6-1c32-4bbd-96da-da7da388feac,Namespace:kube-system,Attempt:0,}" Feb 9 19:43:57.075544 kubelet[2124]: I0209 19:43:57.075489 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32dd2c75-957a-44cd-8607-0eb3f08c2401-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-v8k6k\" (UID: \"32dd2c75-957a-44cd-8607-0eb3f08c2401\") " pod="kube-system/cilium-operator-f59cbd8c6-v8k6k" Feb 9 19:43:57.075544 kubelet[2124]: I0209 19:43:57.075543 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzmr8\" (UniqueName: \"kubernetes.io/projected/32dd2c75-957a-44cd-8607-0eb3f08c2401-kube-api-access-pzmr8\") pod \"cilium-operator-f59cbd8c6-v8k6k\" (UID: \"32dd2c75-957a-44cd-8607-0eb3f08c2401\") " pod="kube-system/cilium-operator-f59cbd8c6-v8k6k" Feb 9 19:43:57.239882 env[1198]: time="2024-02-09T19:43:57.239805191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:43:57.239882 env[1198]: time="2024-02-09T19:43:57.239858552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:43:57.239882 env[1198]: time="2024-02-09T19:43:57.239873450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:43:57.240172 env[1198]: time="2024-02-09T19:43:57.240044344Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/090a56c7d0407505187f0a13d5219ce969ea984437b6517cbfdf8d4661f3abdc pid=2256 runtime=io.containerd.runc.v2 Feb 9 19:43:57.243188 env[1198]: time="2024-02-09T19:43:57.243133206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:43:57.243277 env[1198]: time="2024-02-09T19:43:57.243216433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:43:57.243277 env[1198]: time="2024-02-09T19:43:57.243257852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:43:57.243545 env[1198]: time="2024-02-09T19:43:57.243490703Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490 pid=2275 runtime=io.containerd.runc.v2 Feb 9 19:43:57.284623 env[1198]: time="2024-02-09T19:43:57.284083953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nwrg8,Uid:07902f82-ebb8-4f91-82de-20a3499783fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"090a56c7d0407505187f0a13d5219ce969ea984437b6517cbfdf8d4661f3abdc\"" Feb 9 19:43:57.284963 kubelet[2124]: E0209 19:43:57.284941 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:57.286957 env[1198]: time="2024-02-09T19:43:57.286917712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w44bs,Uid:0892f3e6-1c32-4bbd-96da-da7da388feac,Namespace:kube-system,Attempt:0,} returns sandbox id \"b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490\"" Feb 9 19:43:57.293428 kubelet[2124]: E0209 19:43:57.293395 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:57.294490 env[1198]: time="2024-02-09T19:43:57.294446212Z" level=info msg="CreateContainer within sandbox \"090a56c7d0407505187f0a13d5219ce969ea984437b6517cbfdf8d4661f3abdc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:43:57.295256 env[1198]: time="2024-02-09T19:43:57.295203084Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:43:57.315208 env[1198]: time="2024-02-09T19:43:57.315162396Z" level=info msg="CreateContainer within sandbox \"090a56c7d0407505187f0a13d5219ce969ea984437b6517cbfdf8d4661f3abdc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"972c55348662a19b7833a94d8ef591af7520083ff80154a2e2d88c156e262bb1\"" Feb 9 19:43:57.315866 env[1198]: time="2024-02-09T19:43:57.315830549Z" level=info msg="StartContainer for \"972c55348662a19b7833a94d8ef591af7520083ff80154a2e2d88c156e262bb1\"" Feb 9 19:43:57.364771 env[1198]: time="2024-02-09T19:43:57.364713769Z" level=info msg="StartContainer for \"972c55348662a19b7833a94d8ef591af7520083ff80154a2e2d88c156e262bb1\" returns successfully" Feb 9 19:43:57.615317 kubelet[2124]: E0209 19:43:57.615210 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:57.615739 env[1198]: time="2024-02-09T19:43:57.615689285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-v8k6k,Uid:32dd2c75-957a-44cd-8607-0eb3f08c2401,Namespace:kube-system,Attempt:0,}" Feb 9 19:43:57.635181 env[1198]: time="2024-02-09T19:43:57.635094448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:43:57.635181 env[1198]: time="2024-02-09T19:43:57.635149963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:43:57.635181 env[1198]: time="2024-02-09T19:43:57.635163931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:43:57.635427 env[1198]: time="2024-02-09T19:43:57.635383716Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/83d4f9ef533728196503f37d555b99e6b21979fb5b4f95ce02a15836c1cfa6a8 pid=2478 runtime=io.containerd.runc.v2 Feb 9 19:43:57.683333 env[1198]: time="2024-02-09T19:43:57.683265281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-v8k6k,Uid:32dd2c75-957a-44cd-8607-0eb3f08c2401,Namespace:kube-system,Attempt:0,} returns sandbox id \"83d4f9ef533728196503f37d555b99e6b21979fb5b4f95ce02a15836c1cfa6a8\"" Feb 9 19:43:57.684044 kubelet[2124]: E0209 19:43:57.684002 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:58.254556 kubelet[2124]: E0209 19:43:58.254468 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:43:58.264538 kubelet[2124]: I0209 19:43:58.264507 2124 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nwrg8" podStartSLOduration=2.264467301 pod.CreationTimestamp="2024-02-09 19:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:43:58.264306859 +0000 UTC m=+16.179268055" watchObservedRunningTime="2024-02-09 19:43:58.264467301 +0000 UTC m=+16.179428497" Feb 9 19:43:59.255753 kubelet[2124]: E0209 19:43:59.255721 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:02.533966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3946049678.mount: Deactivated successfully. 
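[Note] The cilium images are pulled with a combined tag-plus-digest reference (v1.12.5@sha256:06ce...). In that form the digest is authoritative: the resolver fetches exactly that manifest and the tag is kept only for readability, which is why the resulting ImageCreate/ImageUpdate events are keyed by the repo@digest name. A sketch of the same pull through containerd's Go client; the socket path and the "k8s.io" namespace are the conventional ones visible elsewhere in this log:

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace, as in the log's
	// /run/containerd/io.containerd.runtime.v2.task/k8s.io/... paths.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled:", img.Name(), "digest:", img.Target().Digest)
}
```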
Feb 9 19:44:07.966232 env[1198]: time="2024-02-09T19:44:07.966169706Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:07.969232 env[1198]: time="2024-02-09T19:44:07.969171821Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:07.972006 env[1198]: time="2024-02-09T19:44:07.971964651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:07.972465 env[1198]: time="2024-02-09T19:44:07.972437111Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 19:44:07.974897 env[1198]: time="2024-02-09T19:44:07.974850627Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:44:07.975809 env[1198]: time="2024-02-09T19:44:07.975780399Z" level=info msg="CreateContainer within sandbox \"b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:44:07.989839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1963111277.mount: Deactivated successfully. Feb 9 19:44:07.990784 env[1198]: time="2024-02-09T19:44:07.990740648Z" level=info msg="CreateContainer within sandbox \"b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fbcbab468f978e6a7dff0b3f5c0930f1c8f425e19926024941168854d8ec0244\"" Feb 9 19:44:07.991346 env[1198]: time="2024-02-09T19:44:07.991323546Z" level=info msg="StartContainer for \"fbcbab468f978e6a7dff0b3f5c0930f1c8f425e19926024941168854d8ec0244\"" Feb 9 19:44:08.034885 env[1198]: time="2024-02-09T19:44:08.034823415Z" level=info msg="StartContainer for \"fbcbab468f978e6a7dff0b3f5c0930f1c8f425e19926024941168854d8ec0244\" returns successfully" Feb 9 19:44:08.272592 kubelet[2124]: E0209 19:44:08.272091 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:08.546838 env[1198]: time="2024-02-09T19:44:08.546673522Z" level=info msg="shim disconnected" id=fbcbab468f978e6a7dff0b3f5c0930f1c8f425e19926024941168854d8ec0244 Feb 9 19:44:08.546838 env[1198]: time="2024-02-09T19:44:08.546729318Z" level=warning msg="cleaning up after shim disconnected" id=fbcbab468f978e6a7dff0b3f5c0930f1c8f425e19926024941168854d8ec0244 namespace=k8s.io Feb 9 19:44:08.546838 env[1198]: time="2024-02-09T19:44:08.546738455Z" level=info msg="cleaning up dead shim" Feb 9 19:44:08.558230 env[1198]: time="2024-02-09T19:44:08.558157678Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:44:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2560 runtime=io.containerd.runc.v2\n" Feb 9 19:44:08.987100 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-fbcbab468f978e6a7dff0b3f5c0930f1c8f425e19926024941168854d8ec0244-rootfs.mount: Deactivated successfully. Feb 9 19:44:09.279800 kubelet[2124]: E0209 19:44:09.279621 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:09.288226 env[1198]: time="2024-02-09T19:44:09.288175520Z" level=info msg="CreateContainer within sandbox \"b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:44:10.190356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2200223226.mount: Deactivated successfully. Feb 9 19:44:10.203502 env[1198]: time="2024-02-09T19:44:10.203448538Z" level=info msg="CreateContainer within sandbox \"b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a320b7ee4d6481975168cfae4cfbbd7efa5c83aac32e22e9bd6295e84ecfe8b9\"" Feb 9 19:44:10.204017 env[1198]: time="2024-02-09T19:44:10.203989506Z" level=info msg="StartContainer for \"a320b7ee4d6481975168cfae4cfbbd7efa5c83aac32e22e9bd6295e84ecfe8b9\"" Feb 9 19:44:10.246634 env[1198]: time="2024-02-09T19:44:10.246566661Z" level=info msg="StartContainer for \"a320b7ee4d6481975168cfae4cfbbd7efa5c83aac32e22e9bd6295e84ecfe8b9\" returns successfully" Feb 9 19:44:10.257231 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:44:10.257620 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:44:10.257842 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:44:10.259912 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:44:10.269703 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 19:44:10.283085 kubelet[2124]: E0209 19:44:10.283048 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:10.299603 env[1198]: time="2024-02-09T19:44:10.298647713Z" level=info msg="shim disconnected" id=a320b7ee4d6481975168cfae4cfbbd7efa5c83aac32e22e9bd6295e84ecfe8b9 Feb 9 19:44:10.299603 env[1198]: time="2024-02-09T19:44:10.298704821Z" level=warning msg="cleaning up after shim disconnected" id=a320b7ee4d6481975168cfae4cfbbd7efa5c83aac32e22e9bd6295e84ecfe8b9 namespace=k8s.io Feb 9 19:44:10.299603 env[1198]: time="2024-02-09T19:44:10.298718357Z" level=info msg="cleaning up dead shim" Feb 9 19:44:10.306259 env[1198]: time="2024-02-09T19:44:10.306203444Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:44:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2625 runtime=io.containerd.runc.v2\n" Feb 9 19:44:10.880635 env[1198]: time="2024-02-09T19:44:10.880563841Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:10.882056 env[1198]: time="2024-02-09T19:44:10.882011005Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:10.883431 env[1198]: time="2024-02-09T19:44:10.883405740Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:10.883825 env[1198]: time="2024-02-09T19:44:10.883788821Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 19:44:10.885630 env[1198]: time="2024-02-09T19:44:10.885586344Z" level=info msg="CreateContainer within sandbox \"83d4f9ef533728196503f37d555b99e6b21979fb5b4f95ce02a15836c1cfa6a8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:44:10.894273 env[1198]: time="2024-02-09T19:44:10.894228018Z" level=info msg="CreateContainer within sandbox \"83d4f9ef533728196503f37d555b99e6b21979fb5b4f95ce02a15836c1cfa6a8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2\"" Feb 9 19:44:10.894631 env[1198]: time="2024-02-09T19:44:10.894599668Z" level=info msg="StartContainer for \"1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2\"" Feb 9 19:44:10.934969 env[1198]: time="2024-02-09T19:44:10.934929964Z" level=info msg="StartContainer for \"1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2\" returns successfully" Feb 9 19:44:11.188110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a320b7ee4d6481975168cfae4cfbbd7efa5c83aac32e22e9bd6295e84ecfe8b9-rootfs.mount: Deactivated successfully. 
Feb 9 19:44:11.285768 kubelet[2124]: E0209 19:44:11.285729 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:11.287379 kubelet[2124]: E0209 19:44:11.287345 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:11.288782 env[1198]: time="2024-02-09T19:44:11.288743180Z" level=info msg="CreateContainer within sandbox \"b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:44:11.339344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2295951091.mount: Deactivated successfully. Feb 9 19:44:11.344295 env[1198]: time="2024-02-09T19:44:11.343010172Z" level=info msg="CreateContainer within sandbox \"b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8d31a1b11ddc7e68cd4aa4b9d4a4e790f364e318dede72484070bbff4b7e83d6\"" Feb 9 19:44:11.345214 env[1198]: time="2024-02-09T19:44:11.345185766Z" level=info msg="StartContainer for \"8d31a1b11ddc7e68cd4aa4b9d4a4e790f364e318dede72484070bbff4b7e83d6\"" Feb 9 19:44:11.409543 kubelet[2124]: I0209 19:44:11.409496 2124 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-v8k6k" podStartSLOduration=-9.223372021445316e+09 pod.CreationTimestamp="2024-02-09 19:43:56 +0000 UTC" firstStartedPulling="2024-02-09 19:43:57.684653878 +0000 UTC m=+15.599615074" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:44:11.359569932 +0000 UTC m=+29.274531128" watchObservedRunningTime="2024-02-09 19:44:11.409458957 +0000 UTC m=+29.324420153" Feb 9 19:44:11.418166 env[1198]: time="2024-02-09T19:44:11.418098974Z" level=info msg="StartContainer for \"8d31a1b11ddc7e68cd4aa4b9d4a4e790f364e318dede72484070bbff4b7e83d6\" returns successfully" Feb 9 19:44:11.652300 env[1198]: time="2024-02-09T19:44:11.652151502Z" level=info msg="shim disconnected" id=8d31a1b11ddc7e68cd4aa4b9d4a4e790f364e318dede72484070bbff4b7e83d6 Feb 9 19:44:11.652300 env[1198]: time="2024-02-09T19:44:11.652211775Z" level=warning msg="cleaning up after shim disconnected" id=8d31a1b11ddc7e68cd4aa4b9d4a4e790f364e318dede72484070bbff4b7e83d6 namespace=k8s.io Feb 9 19:44:11.652300 env[1198]: time="2024-02-09T19:44:11.652221233Z" level=info msg="cleaning up dead shim" Feb 9 19:44:11.666558 env[1198]: time="2024-02-09T19:44:11.666488950Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:44:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2718 runtime=io.containerd.runc.v2\n" Feb 9 19:44:12.187555 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d31a1b11ddc7e68cd4aa4b9d4a4e790f364e318dede72484070bbff4b7e83d6-rootfs.mount: Deactivated successfully. 
Feb 9 19:44:12.291765 kubelet[2124]: E0209 19:44:12.291729 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:12.292150 kubelet[2124]: E0209 19:44:12.291820 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:12.294470 env[1198]: time="2024-02-09T19:44:12.294434328Z" level=info msg="CreateContainer within sandbox \"b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:44:12.315152 env[1198]: time="2024-02-09T19:44:12.315098312Z" level=info msg="CreateContainer within sandbox \"b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"57089121cca5b641775db666b19a739d03b43505f5e26961b7344f6de78ae503\"" Feb 9 19:44:12.315839 env[1198]: time="2024-02-09T19:44:12.315796316Z" level=info msg="StartContainer for \"57089121cca5b641775db666b19a739d03b43505f5e26961b7344f6de78ae503\"" Feb 9 19:44:12.361924 env[1198]: time="2024-02-09T19:44:12.361865042Z" level=info msg="StartContainer for \"57089121cca5b641775db666b19a739d03b43505f5e26961b7344f6de78ae503\" returns successfully" Feb 9 19:44:12.382969 env[1198]: time="2024-02-09T19:44:12.382904693Z" level=info msg="shim disconnected" id=57089121cca5b641775db666b19a739d03b43505f5e26961b7344f6de78ae503 Feb 9 19:44:12.382969 env[1198]: time="2024-02-09T19:44:12.382966108Z" level=warning msg="cleaning up after shim disconnected" id=57089121cca5b641775db666b19a739d03b43505f5e26961b7344f6de78ae503 namespace=k8s.io Feb 9 19:44:12.382969 env[1198]: time="2024-02-09T19:44:12.382974815Z" level=info msg="cleaning up dead shim" Feb 9 19:44:12.394318 env[1198]: time="2024-02-09T19:44:12.394276727Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:44:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2775 runtime=io.containerd.runc.v2\n" Feb 9 19:44:13.187693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57089121cca5b641775db666b19a739d03b43505f5e26961b7344f6de78ae503-rootfs.mount: Deactivated successfully. 
Feb 9 19:44:13.294619 kubelet[2124]: E0209 19:44:13.294591 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:13.296524 env[1198]: time="2024-02-09T19:44:13.296477763Z" level=info msg="CreateContainer within sandbox \"b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:44:13.312702 env[1198]: time="2024-02-09T19:44:13.312647015Z" level=info msg="CreateContainer within sandbox \"b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023\"" Feb 9 19:44:13.313148 env[1198]: time="2024-02-09T19:44:13.313116187Z" level=info msg="StartContainer for \"caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023\"" Feb 9 19:44:13.361274 env[1198]: time="2024-02-09T19:44:13.361210075Z" level=info msg="StartContainer for \"caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023\" returns successfully" Feb 9 19:44:13.489630 kubelet[2124]: I0209 19:44:13.489528 2124 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:44:13.507047 kubelet[2124]: I0209 19:44:13.506984 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:44:13.512156 kubelet[2124]: I0209 19:44:13.512121 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:44:13.596509 kubelet[2124]: I0209 19:44:13.596456 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgqtj\" (UniqueName: \"kubernetes.io/projected/f2c875f4-843f-40a8-8415-deb0875994a9-kube-api-access-rgqtj\") pod \"coredns-787d4945fb-phzzb\" (UID: \"f2c875f4-843f-40a8-8415-deb0875994a9\") " pod="kube-system/coredns-787d4945fb-phzzb" Feb 9 19:44:13.596509 kubelet[2124]: I0209 19:44:13.596523 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6c8b7e5-4122-4e81-92d7-7f62072741a3-config-volume\") pod \"coredns-787d4945fb-4sxqx\" (UID: \"b6c8b7e5-4122-4e81-92d7-7f62072741a3\") " pod="kube-system/coredns-787d4945fb-4sxqx" Feb 9 19:44:13.596828 kubelet[2124]: I0209 19:44:13.596599 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dntgl\" (UniqueName: \"kubernetes.io/projected/b6c8b7e5-4122-4e81-92d7-7f62072741a3-kube-api-access-dntgl\") pod \"coredns-787d4945fb-4sxqx\" (UID: \"b6c8b7e5-4122-4e81-92d7-7f62072741a3\") " pod="kube-system/coredns-787d4945fb-4sxqx" Feb 9 19:44:13.596828 kubelet[2124]: I0209 19:44:13.596700 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2c875f4-843f-40a8-8415-deb0875994a9-config-volume\") pod \"coredns-787d4945fb-phzzb\" (UID: \"f2c875f4-843f-40a8-8415-deb0875994a9\") " pod="kube-system/coredns-787d4945fb-phzzb" Feb 9 19:44:13.817261 kubelet[2124]: E0209 19:44:13.817122 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:13.817261 kubelet[2124]: E0209 19:44:13.817143 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:13.817745 env[1198]: time="2024-02-09T19:44:13.817696018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-4sxqx,Uid:b6c8b7e5-4122-4e81-92d7-7f62072741a3,Namespace:kube-system,Attempt:0,}" Feb 9 19:44:13.818253 env[1198]: time="2024-02-09T19:44:13.817725515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-phzzb,Uid:f2c875f4-843f-40a8-8415-deb0875994a9,Namespace:kube-system,Attempt:0,}" Feb 9 19:44:14.298418 kubelet[2124]: E0209 19:44:14.298377 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:14.309894 kubelet[2124]: I0209 19:44:14.309860 2124 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-w44bs" podStartSLOduration=-9.22337201854498e+09 pod.CreationTimestamp="2024-02-09 19:43:56 +0000 UTC" firstStartedPulling="2024-02-09 19:43:57.294140002 +0000 UTC m=+15.209101198" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:44:14.309767472 +0000 UTC m=+32.224728668" watchObservedRunningTime="2024-02-09 19:44:14.309796036 +0000 UTC m=+32.224757232" Feb 9 19:44:15.301325 kubelet[2124]: E0209 19:44:15.300689 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:15.306259 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 19:44:15.306393 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:44:15.306558 systemd-networkd[1069]: cilium_host: Link UP Feb 9 19:44:15.306706 systemd-networkd[1069]: cilium_net: Link UP Feb 9 19:44:15.306849 systemd-networkd[1069]: cilium_net: Gained carrier Feb 9 19:44:15.306982 systemd-networkd[1069]: cilium_host: Gained carrier Feb 9 19:44:15.308515 systemd-networkd[1069]: cilium_net: Gained IPv6LL Feb 9 19:44:15.309020 systemd-networkd[1069]: cilium_host: Gained IPv6LL Feb 9 19:44:15.374848 systemd-networkd[1069]: cilium_vxlan: Link UP Feb 9 19:44:15.374856 systemd-networkd[1069]: cilium_vxlan: Gained carrier Feb 9 19:44:15.567101 kernel: NET: Registered PF_ALG protocol family Feb 9 19:44:16.082906 systemd-networkd[1069]: lxc_health: Link UP Feb 9 19:44:16.091390 systemd-networkd[1069]: lxc_health: Gained carrier Feb 9 19:44:16.092093 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:44:16.301804 kubelet[2124]: E0209 19:44:16.301774 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:16.390170 systemd-networkd[1069]: lxccc073a3e5f2f: Link UP Feb 9 19:44:16.397108 kernel: eth0: renamed from tmp03d01 Feb 9 19:44:16.406249 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:44:16.406368 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccc073a3e5f2f: link becomes ready Feb 9 19:44:16.406485 systemd-networkd[1069]: lxccc073a3e5f2f: Gained carrier Feb 9 19:44:16.406627 systemd-networkd[1069]: lxc240059c7d8e2: Link UP Feb 9 19:44:16.414206 kernel: eth0: renamed from tmp5cf09 Feb 9 19:44:16.425716 systemd-networkd[1069]: lxc240059c7d8e2: Gained carrier Feb 9 19:44:16.426186 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc240059c7d8e2: 
link becomes ready Feb 9 19:44:16.724437 systemd-networkd[1069]: cilium_vxlan: Gained IPv6LL Feb 9 19:44:17.296255 systemd-networkd[1069]: lxc_health: Gained IPv6LL Feb 9 19:44:17.304353 kubelet[2124]: E0209 19:44:17.304321 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:17.808222 systemd-networkd[1069]: lxccc073a3e5f2f: Gained IPv6LL Feb 9 19:44:18.000221 systemd-networkd[1069]: lxc240059c7d8e2: Gained IPv6LL Feb 9 19:44:19.743217 env[1198]: time="2024-02-09T19:44:19.743134463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:44:19.743217 env[1198]: time="2024-02-09T19:44:19.743212810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:44:19.743641 env[1198]: time="2024-02-09T19:44:19.743241574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:44:19.743641 env[1198]: time="2024-02-09T19:44:19.743411854Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5cf0973e927ab6def6cd023dac1b4238d8b4b4065ea609fa23905a84d16c064d pid=3349 runtime=io.containerd.runc.v2 Feb 9 19:44:19.744354 env[1198]: time="2024-02-09T19:44:19.744169878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:44:19.744354 env[1198]: time="2024-02-09T19:44:19.744212749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:44:19.744354 env[1198]: time="2024-02-09T19:44:19.744222397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:44:19.744550 env[1198]: time="2024-02-09T19:44:19.744476194Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/03d010f1058db5899e25cfab21b72da9f680ef358e65b92038725f693a6c9f9e pid=3358 runtime=io.containerd.runc.v2 Feb 9 19:44:19.767554 systemd-resolved[1126]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:44:19.775203 systemd-resolved[1126]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:44:19.788257 systemd[1]: Started sshd@5-10.0.0.49:22-10.0.0.1:43086.service. 
Feb 9 19:44:19.807530 env[1198]: time="2024-02-09T19:44:19.807460850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-4sxqx,Uid:b6c8b7e5-4122-4e81-92d7-7f62072741a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cf0973e927ab6def6cd023dac1b4238d8b4b4065ea609fa23905a84d16c064d\"" Feb 9 19:44:19.808305 kubelet[2124]: E0209 19:44:19.808276 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:19.808857 env[1198]: time="2024-02-09T19:44:19.808835464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-phzzb,Uid:f2c875f4-843f-40a8-8415-deb0875994a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"03d010f1058db5899e25cfab21b72da9f680ef358e65b92038725f693a6c9f9e\"" Feb 9 19:44:19.810122 kubelet[2124]: E0209 19:44:19.810098 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:19.816285 env[1198]: time="2024-02-09T19:44:19.816260955Z" level=info msg="CreateContainer within sandbox \"5cf0973e927ab6def6cd023dac1b4238d8b4b4065ea609fa23905a84d16c064d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:44:19.816513 env[1198]: time="2024-02-09T19:44:19.816460140Z" level=info msg="CreateContainer within sandbox \"03d010f1058db5899e25cfab21b72da9f680ef358e65b92038725f693a6c9f9e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:44:19.827207 sshd[3406]: Accepted publickey for core from 10.0.0.1 port 43086 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:19.828037 sshd[3406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:19.831805 env[1198]: time="2024-02-09T19:44:19.831776950Z" level=info msg="CreateContainer within sandbox \"5cf0973e927ab6def6cd023dac1b4238d8b4b4065ea609fa23905a84d16c064d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f69c144a3702d99f0d5ad43b4a2343d466c29f2ed20b850900541089455209a0\"" Feb 9 19:44:19.834484 env[1198]: time="2024-02-09T19:44:19.834463598Z" level=info msg="StartContainer for \"f69c144a3702d99f0d5ad43b4a2343d466c29f2ed20b850900541089455209a0\"" Feb 9 19:44:19.836164 systemd-logind[1179]: New session 6 of user core. Feb 9 19:44:19.836899 systemd[1]: Started session-6.scope. Feb 9 19:44:19.839352 env[1198]: time="2024-02-09T19:44:19.839262205Z" level=info msg="CreateContainer within sandbox \"03d010f1058db5899e25cfab21b72da9f680ef358e65b92038725f693a6c9f9e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b093a01aab2db7ff54bb27e79bbf640795dfba211df1456b5d7a2d4026c5cb3\"" Feb 9 19:44:19.842941 env[1198]: time="2024-02-09T19:44:19.842887476Z" level=info msg="StartContainer for \"2b093a01aab2db7ff54bb27e79bbf640795dfba211df1456b5d7a2d4026c5cb3\"" Feb 9 19:44:19.902720 env[1198]: time="2024-02-09T19:44:19.902680896Z" level=info msg="StartContainer for \"2b093a01aab2db7ff54bb27e79bbf640795dfba211df1456b5d7a2d4026c5cb3\" returns successfully" Feb 9 19:44:19.910925 env[1198]: time="2024-02-09T19:44:19.910894129Z" level=info msg="StartContainer for \"f69c144a3702d99f0d5ad43b4a2343d466c29f2ed20b850900541089455209a0\" returns successfully" Feb 9 19:44:19.995377 sshd[3406]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:19.998811 systemd-logind[1179]: Session 6 logged out. 
Waiting for processes to exit. Feb 9 19:44:19.998968 systemd[1]: sshd@5-10.0.0.49:22-10.0.0.1:43086.service: Deactivated successfully. Feb 9 19:44:19.999812 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:44:20.000251 systemd-logind[1179]: Removed session 6. Feb 9 19:44:20.309755 kubelet[2124]: E0209 19:44:20.309111 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:20.310588 kubelet[2124]: E0209 19:44:20.310559 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:20.317318 kubelet[2124]: I0209 19:44:20.317100 2124 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-phzzb" podStartSLOduration=24.317048995 pod.CreationTimestamp="2024-02-09 19:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:44:20.316812591 +0000 UTC m=+38.231773787" watchObservedRunningTime="2024-02-09 19:44:20.317048995 +0000 UTC m=+38.232010191" Feb 9 19:44:20.324727 kubelet[2124]: I0209 19:44:20.324688 2124 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-4sxqx" podStartSLOduration=24.324646359 pod.CreationTimestamp="2024-02-09 19:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:44:20.324274109 +0000 UTC m=+38.239235305" watchObservedRunningTime="2024-02-09 19:44:20.324646359 +0000 UTC m=+38.239607555" Feb 9 19:44:21.311950 kubelet[2124]: E0209 19:44:21.311919 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:21.312412 kubelet[2124]: E0209 19:44:21.312005 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:22.108002 kubelet[2124]: I0209 19:44:22.107955 2124 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 19:44:22.109004 kubelet[2124]: E0209 19:44:22.108917 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:22.313346 kubelet[2124]: E0209 19:44:22.313304 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:22.313815 kubelet[2124]: E0209 19:44:22.313532 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:22.313937 kubelet[2124]: E0209 19:44:22.313916 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:24.998327 systemd[1]: Started sshd@6-10.0.0.49:22-10.0.0.1:43092.service. 
Feb 9 19:44:25.032384 sshd[3572]: Accepted publickey for core from 10.0.0.1 port 43092 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:25.033458 sshd[3572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:25.036866 systemd-logind[1179]: New session 7 of user core. Feb 9 19:44:25.037707 systemd[1]: Started session-7.scope. Feb 9 19:44:25.138755 sshd[3572]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:25.140852 systemd[1]: sshd@6-10.0.0.49:22-10.0.0.1:43092.service: Deactivated successfully. Feb 9 19:44:25.141812 systemd-logind[1179]: Session 7 logged out. Waiting for processes to exit. Feb 9 19:44:25.141857 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:44:25.142573 systemd-logind[1179]: Removed session 7. Feb 9 19:44:30.141626 systemd[1]: Started sshd@7-10.0.0.49:22-10.0.0.1:39962.service. Feb 9 19:44:30.172552 sshd[3589]: Accepted publickey for core from 10.0.0.1 port 39962 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:30.173731 sshd[3589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:30.177392 systemd-logind[1179]: New session 8 of user core. Feb 9 19:44:30.178509 systemd[1]: Started session-8.scope. Feb 9 19:44:30.283576 sshd[3589]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:30.286016 systemd[1]: sshd@7-10.0.0.49:22-10.0.0.1:39962.service: Deactivated successfully. Feb 9 19:44:30.287173 systemd-logind[1179]: Session 8 logged out. Waiting for processes to exit. Feb 9 19:44:30.287218 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 19:44:30.288183 systemd-logind[1179]: Removed session 8. Feb 9 19:44:35.286763 systemd[1]: Started sshd@8-10.0.0.49:22-10.0.0.1:39972.service. Feb 9 19:44:35.317142 sshd[3604]: Accepted publickey for core from 10.0.0.1 port 39972 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:35.318041 sshd[3604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:35.321182 systemd-logind[1179]: New session 9 of user core. Feb 9 19:44:35.322242 systemd[1]: Started session-9.scope. Feb 9 19:44:35.422421 sshd[3604]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:35.424787 systemd[1]: sshd@8-10.0.0.49:22-10.0.0.1:39972.service: Deactivated successfully. Feb 9 19:44:35.425729 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:44:35.425807 systemd-logind[1179]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:44:35.426511 systemd-logind[1179]: Removed session 9. Feb 9 19:44:40.424881 systemd[1]: Started sshd@9-10.0.0.49:22-10.0.0.1:33176.service. Feb 9 19:44:40.456384 sshd[3620]: Accepted publickey for core from 10.0.0.1 port 33176 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:40.457427 sshd[3620]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:40.460449 systemd-logind[1179]: New session 10 of user core. Feb 9 19:44:40.461173 systemd[1]: Started session-10.scope. Feb 9 19:44:40.563009 sshd[3620]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:40.565557 systemd[1]: Started sshd@10-10.0.0.49:22-10.0.0.1:33186.service. Feb 9 19:44:40.566891 systemd[1]: sshd@9-10.0.0.49:22-10.0.0.1:33176.service: Deactivated successfully. Feb 9 19:44:40.567702 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 19:44:40.568116 systemd-logind[1179]: Session 10 logged out. Waiting for processes to exit. 
Feb 9 19:44:40.568766 systemd-logind[1179]: Removed session 10. Feb 9 19:44:40.595869 sshd[3633]: Accepted publickey for core from 10.0.0.1 port 33186 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:40.596915 sshd[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:40.599929 systemd-logind[1179]: New session 11 of user core. Feb 9 19:44:40.600642 systemd[1]: Started session-11.scope. Feb 9 19:44:41.303145 sshd[3633]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:41.305707 systemd[1]: Started sshd@11-10.0.0.49:22-10.0.0.1:33202.service. Feb 9 19:44:41.315711 systemd[1]: sshd@10-10.0.0.49:22-10.0.0.1:33186.service: Deactivated successfully. Feb 9 19:44:41.316602 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 19:44:41.317125 systemd-logind[1179]: Session 11 logged out. Waiting for processes to exit. Feb 9 19:44:41.317897 systemd-logind[1179]: Removed session 11. Feb 9 19:44:41.342547 sshd[3645]: Accepted publickey for core from 10.0.0.1 port 33202 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:41.343568 sshd[3645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:41.346917 systemd-logind[1179]: New session 12 of user core. Feb 9 19:44:41.347648 systemd[1]: Started session-12.scope. Feb 9 19:44:41.452182 sshd[3645]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:41.454172 systemd[1]: sshd@11-10.0.0.49:22-10.0.0.1:33202.service: Deactivated successfully. Feb 9 19:44:41.455166 systemd-logind[1179]: Session 12 logged out. Waiting for processes to exit. Feb 9 19:44:41.455246 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 19:44:41.456055 systemd-logind[1179]: Removed session 12. Feb 9 19:44:46.455573 systemd[1]: Started sshd@12-10.0.0.49:22-10.0.0.1:33204.service. Feb 9 19:44:46.488247 sshd[3664]: Accepted publickey for core from 10.0.0.1 port 33204 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:46.489460 sshd[3664]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:46.492853 systemd-logind[1179]: New session 13 of user core. Feb 9 19:44:46.493801 systemd[1]: Started session-13.scope. Feb 9 19:44:46.603432 sshd[3664]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:46.605611 systemd[1]: sshd@12-10.0.0.49:22-10.0.0.1:33204.service: Deactivated successfully. Feb 9 19:44:46.606744 systemd-logind[1179]: Session 13 logged out. Waiting for processes to exit. Feb 9 19:44:46.606790 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 19:44:46.607672 systemd-logind[1179]: Removed session 13. Feb 9 19:44:51.607292 systemd[1]: Started sshd@13-10.0.0.49:22-10.0.0.1:53742.service. Feb 9 19:44:51.638389 sshd[3678]: Accepted publickey for core from 10.0.0.1 port 53742 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:51.639600 sshd[3678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:51.642736 systemd-logind[1179]: New session 14 of user core. Feb 9 19:44:51.643510 systemd[1]: Started session-14.scope. Feb 9 19:44:51.751832 sshd[3678]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:51.754462 systemd[1]: Started sshd@14-10.0.0.49:22-10.0.0.1:53752.service. Feb 9 19:44:51.754910 systemd[1]: sshd@13-10.0.0.49:22-10.0.0.1:53742.service: Deactivated successfully. Feb 9 19:44:51.755882 systemd-logind[1179]: Session 14 logged out. Waiting for processes to exit. 
Feb 9 19:44:51.756107 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 19:44:51.756853 systemd-logind[1179]: Removed session 14. Feb 9 19:44:51.784103 sshd[3691]: Accepted publickey for core from 10.0.0.1 port 53752 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:51.785141 sshd[3691]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:51.788376 systemd-logind[1179]: New session 15 of user core. Feb 9 19:44:51.789127 systemd[1]: Started session-15.scope. Feb 9 19:44:51.940590 sshd[3691]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:51.943131 systemd[1]: Started sshd@15-10.0.0.49:22-10.0.0.1:53762.service. Feb 9 19:44:51.943600 systemd[1]: sshd@14-10.0.0.49:22-10.0.0.1:53752.service: Deactivated successfully. Feb 9 19:44:51.944622 systemd-logind[1179]: Session 15 logged out. Waiting for processes to exit. Feb 9 19:44:51.944624 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 19:44:51.945459 systemd-logind[1179]: Removed session 15. Feb 9 19:44:51.975382 sshd[3704]: Accepted publickey for core from 10.0.0.1 port 53762 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:51.976404 sshd[3704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:51.979637 systemd-logind[1179]: New session 16 of user core. Feb 9 19:44:51.980411 systemd[1]: Started session-16.scope. Feb 9 19:44:52.781223 sshd[3704]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:52.783260 systemd[1]: Started sshd@16-10.0.0.49:22-10.0.0.1:53766.service. Feb 9 19:44:52.784806 systemd[1]: sshd@15-10.0.0.49:22-10.0.0.1:53762.service: Deactivated successfully. Feb 9 19:44:52.786408 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 19:44:52.787168 systemd-logind[1179]: Session 16 logged out. Waiting for processes to exit. Feb 9 19:44:52.790921 systemd-logind[1179]: Removed session 16. Feb 9 19:44:52.829489 sshd[3734]: Accepted publickey for core from 10.0.0.1 port 53766 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:52.831086 sshd[3734]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:52.834730 systemd-logind[1179]: New session 17 of user core. Feb 9 19:44:52.835797 systemd[1]: Started session-17.scope. Feb 9 19:44:53.031042 sshd[3734]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:53.033681 systemd[1]: Started sshd@17-10.0.0.49:22-10.0.0.1:53780.service. Feb 9 19:44:53.034916 systemd-logind[1179]: Session 17 logged out. Waiting for processes to exit. Feb 9 19:44:53.035831 systemd[1]: sshd@16-10.0.0.49:22-10.0.0.1:53766.service: Deactivated successfully. Feb 9 19:44:53.036647 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:44:53.037741 systemd-logind[1179]: Removed session 17. Feb 9 19:44:53.065191 sshd[3786]: Accepted publickey for core from 10.0.0.1 port 53780 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:53.066169 sshd[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:53.069799 systemd-logind[1179]: New session 18 of user core. Feb 9 19:44:53.070530 systemd[1]: Started session-18.scope. Feb 9 19:44:53.169951 sshd[3786]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:53.172306 systemd[1]: sshd@17-10.0.0.49:22-10.0.0.1:53780.service: Deactivated successfully. Feb 9 19:44:53.173601 systemd-logind[1179]: Session 18 logged out. Waiting for processes to exit. 
Feb 9 19:44:53.173655 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 19:44:53.174919 systemd-logind[1179]: Removed session 18. Feb 9 19:44:58.173675 systemd[1]: Started sshd@18-10.0.0.49:22-10.0.0.1:50328.service. Feb 9 19:44:58.203824 sshd[3804]: Accepted publickey for core from 10.0.0.1 port 50328 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:58.204928 sshd[3804]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:58.208740 systemd-logind[1179]: New session 19 of user core. Feb 9 19:44:58.209778 systemd[1]: Started session-19.scope. Feb 9 19:44:58.223364 kubelet[2124]: E0209 19:44:58.223312 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:58.313751 sshd[3804]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:58.316583 systemd[1]: sshd@18-10.0.0.49:22-10.0.0.1:50328.service: Deactivated successfully. Feb 9 19:44:58.317741 systemd-logind[1179]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:44:58.317809 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:44:58.318734 systemd-logind[1179]: Removed session 19. Feb 9 19:45:01.222406 kubelet[2124]: E0209 19:45:01.222355 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:03.317273 systemd[1]: Started sshd@19-10.0.0.49:22-10.0.0.1:50340.service. Feb 9 19:45:03.346993 sshd[3845]: Accepted publickey for core from 10.0.0.1 port 50340 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:03.348028 sshd[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:03.351411 systemd-logind[1179]: New session 20 of user core. Feb 9 19:45:03.352288 systemd[1]: Started session-20.scope. Feb 9 19:45:03.449227 sshd[3845]: pam_unix(sshd:session): session closed for user core Feb 9 19:45:03.451742 systemd[1]: sshd@19-10.0.0.49:22-10.0.0.1:50340.service: Deactivated successfully. Feb 9 19:45:03.452688 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 19:45:03.453788 systemd-logind[1179]: Session 20 logged out. Waiting for processes to exit. Feb 9 19:45:03.454731 systemd-logind[1179]: Removed session 20. Feb 9 19:45:08.453440 systemd[1]: Started sshd@20-10.0.0.49:22-10.0.0.1:51682.service. Feb 9 19:45:08.486762 sshd[3859]: Accepted publickey for core from 10.0.0.1 port 51682 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:08.488527 sshd[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:08.493858 systemd-logind[1179]: New session 21 of user core. Feb 9 19:45:08.495268 systemd[1]: Started session-21.scope. Feb 9 19:45:08.615485 sshd[3859]: pam_unix(sshd:session): session closed for user core Feb 9 19:45:08.618725 systemd[1]: sshd@20-10.0.0.49:22-10.0.0.1:51682.service: Deactivated successfully. Feb 9 19:45:08.620502 systemd-logind[1179]: Session 21 logged out. Waiting for processes to exit. Feb 9 19:45:08.620734 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 19:45:08.622051 systemd-logind[1179]: Removed session 21. Feb 9 19:45:13.618868 systemd[1]: Started sshd@21-10.0.0.49:22-10.0.0.1:51684.service. 
Feb 9 19:45:13.648727 sshd[3873]: Accepted publickey for core from 10.0.0.1 port 51684 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:13.649686 sshd[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:13.652958 systemd-logind[1179]: New session 22 of user core. Feb 9 19:45:13.653926 systemd[1]: Started session-22.scope. Feb 9 19:45:13.757748 sshd[3873]: pam_unix(sshd:session): session closed for user core Feb 9 19:45:13.760657 systemd[1]: Started sshd@22-10.0.0.49:22-10.0.0.1:51692.service. Feb 9 19:45:13.762761 systemd[1]: sshd@21-10.0.0.49:22-10.0.0.1:51684.service: Deactivated successfully. Feb 9 19:45:13.763857 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 19:45:13.764459 systemd-logind[1179]: Session 22 logged out. Waiting for processes to exit. Feb 9 19:45:13.765342 systemd-logind[1179]: Removed session 22. Feb 9 19:45:13.791077 sshd[3885]: Accepted publickey for core from 10.0.0.1 port 51692 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:13.792167 sshd[3885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:13.795892 systemd-logind[1179]: New session 23 of user core. Feb 9 19:45:13.796953 systemd[1]: Started session-23.scope. Feb 9 19:45:14.223836 kubelet[2124]: E0209 19:45:14.223464 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:15.134758 env[1198]: time="2024-02-09T19:45:15.134404885Z" level=info msg="StopContainer for \"1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2\" with timeout 30 (s)" Feb 9 19:45:15.135565 env[1198]: time="2024-02-09T19:45:15.135537697Z" level=info msg="Stop container \"1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2\" with signal terminated" Feb 9 19:45:15.158879 env[1198]: time="2024-02-09T19:45:15.157235353Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:45:15.164749 env[1198]: time="2024-02-09T19:45:15.164672437Z" level=info msg="StopContainer for \"caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023\" with timeout 1 (s)" Feb 9 19:45:15.165000 env[1198]: time="2024-02-09T19:45:15.164976976Z" level=info msg="Stop container \"caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023\" with signal terminated" Feb 9 19:45:15.172869 systemd-networkd[1069]: lxc_health: Link DOWN Feb 9 19:45:15.172878 systemd-networkd[1069]: lxc_health: Lost carrier Feb 9 19:45:15.173766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2-rootfs.mount: Deactivated successfully. 
Feb 9 19:45:15.186764 env[1198]: time="2024-02-09T19:45:15.186686263Z" level=info msg="shim disconnected" id=1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2 Feb 9 19:45:15.186764 env[1198]: time="2024-02-09T19:45:15.186759121Z" level=warning msg="cleaning up after shim disconnected" id=1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2 namespace=k8s.io Feb 9 19:45:15.186981 env[1198]: time="2024-02-09T19:45:15.186772708Z" level=info msg="cleaning up dead shim" Feb 9 19:45:15.194649 env[1198]: time="2024-02-09T19:45:15.194595574Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3945 runtime=io.containerd.runc.v2\n" Feb 9 19:45:15.199791 env[1198]: time="2024-02-09T19:45:15.199756362Z" level=info msg="StopContainer for \"1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2\" returns successfully" Feb 9 19:45:15.200640 env[1198]: time="2024-02-09T19:45:15.200614824Z" level=info msg="StopPodSandbox for \"83d4f9ef533728196503f37d555b99e6b21979fb5b4f95ce02a15836c1cfa6a8\"" Feb 9 19:45:15.200818 env[1198]: time="2024-02-09T19:45:15.200789716Z" level=info msg="Container to stop \"1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:45:15.203149 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-83d4f9ef533728196503f37d555b99e6b21979fb5b4f95ce02a15836c1cfa6a8-shm.mount: Deactivated successfully. Feb 9 19:45:15.219268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023-rootfs.mount: Deactivated successfully. Feb 9 19:45:15.226968 env[1198]: time="2024-02-09T19:45:15.226897733Z" level=info msg="shim disconnected" id=caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023 Feb 9 19:45:15.226968 env[1198]: time="2024-02-09T19:45:15.226969248Z" level=warning msg="cleaning up after shim disconnected" id=caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023 namespace=k8s.io Feb 9 19:45:15.227236 env[1198]: time="2024-02-09T19:45:15.226984979Z" level=info msg="cleaning up dead shim" Feb 9 19:45:15.231241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83d4f9ef533728196503f37d555b99e6b21979fb5b4f95ce02a15836c1cfa6a8-rootfs.mount: Deactivated successfully. 
Feb 9 19:45:15.236531 env[1198]: time="2024-02-09T19:45:15.236470806Z" level=info msg="shim disconnected" id=83d4f9ef533728196503f37d555b99e6b21979fb5b4f95ce02a15836c1cfa6a8 Feb 9 19:45:15.236765 env[1198]: time="2024-02-09T19:45:15.236544857Z" level=warning msg="cleaning up after shim disconnected" id=83d4f9ef533728196503f37d555b99e6b21979fb5b4f95ce02a15836c1cfa6a8 namespace=k8s.io Feb 9 19:45:15.236765 env[1198]: time="2024-02-09T19:45:15.236561799Z" level=info msg="cleaning up dead shim" Feb 9 19:45:15.239566 env[1198]: time="2024-02-09T19:45:15.239518017Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3992 runtime=io.containerd.runc.v2\n" Feb 9 19:45:15.242275 env[1198]: time="2024-02-09T19:45:15.242222416Z" level=info msg="StopContainer for \"caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023\" returns successfully" Feb 9 19:45:15.242799 env[1198]: time="2024-02-09T19:45:15.242766741Z" level=info msg="StopPodSandbox for \"b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490\"" Feb 9 19:45:15.242873 env[1198]: time="2024-02-09T19:45:15.242842114Z" level=info msg="Container to stop \"57089121cca5b641775db666b19a739d03b43505f5e26961b7344f6de78ae503\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:45:15.242873 env[1198]: time="2024-02-09T19:45:15.242865950Z" level=info msg="Container to stop \"a320b7ee4d6481975168cfae4cfbbd7efa5c83aac32e22e9bd6295e84ecfe8b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:45:15.242951 env[1198]: time="2024-02-09T19:45:15.242880457Z" level=info msg="Container to stop \"8d31a1b11ddc7e68cd4aa4b9d4a4e790f364e318dede72484070bbff4b7e83d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:45:15.242951 env[1198]: time="2024-02-09T19:45:15.242894163Z" level=info msg="Container to stop \"caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:45:15.242951 env[1198]: time="2024-02-09T19:45:15.242910444Z" level=info msg="Container to stop \"fbcbab468f978e6a7dff0b3f5c0930f1c8f425e19926024941168854d8ec0244\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:45:15.245007 env[1198]: time="2024-02-09T19:45:15.244971751Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4005 runtime=io.containerd.runc.v2\n" Feb 9 19:45:15.245310 env[1198]: time="2024-02-09T19:45:15.245277923Z" level=info msg="TearDown network for sandbox \"83d4f9ef533728196503f37d555b99e6b21979fb5b4f95ce02a15836c1cfa6a8\" successfully" Feb 9 19:45:15.245310 env[1198]: time="2024-02-09T19:45:15.245301238Z" level=info msg="StopPodSandbox for \"83d4f9ef533728196503f37d555b99e6b21979fb5b4f95ce02a15836c1cfa6a8\" returns successfully" Feb 9 19:45:15.267499 env[1198]: time="2024-02-09T19:45:15.267442425Z" level=info msg="shim disconnected" id=b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490 Feb 9 19:45:15.267663 env[1198]: time="2024-02-09T19:45:15.267501919Z" level=warning msg="cleaning up after shim disconnected" id=b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490 namespace=k8s.io Feb 9 19:45:15.267663 env[1198]: time="2024-02-09T19:45:15.267512940Z" level=info msg="cleaning up dead shim" Feb 9 19:45:15.274691 env[1198]: time="2024-02-09T19:45:15.274632250Z" level=warning 
msg="cleanup warnings time=\"2024-02-09T19:45:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4038 runtime=io.containerd.runc.v2\n" Feb 9 19:45:15.275038 env[1198]: time="2024-02-09T19:45:15.275000059Z" level=info msg="TearDown network for sandbox \"b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490\" successfully" Feb 9 19:45:15.275146 env[1198]: time="2024-02-09T19:45:15.275040896Z" level=info msg="StopPodSandbox for \"b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490\" returns successfully" Feb 9 19:45:15.320637 kubelet[2124]: I0209 19:45:15.320584 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-etc-cni-netd\") pod \"0892f3e6-1c32-4bbd-96da-da7da388feac\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " Feb 9 19:45:15.320637 kubelet[2124]: I0209 19:45:15.320637 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-lib-modules\") pod \"0892f3e6-1c32-4bbd-96da-da7da388feac\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " Feb 9 19:45:15.321814 kubelet[2124]: I0209 19:45:15.320686 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-bpf-maps\") pod \"0892f3e6-1c32-4bbd-96da-da7da388feac\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " Feb 9 19:45:15.321814 kubelet[2124]: I0209 19:45:15.320723 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0892f3e6-1c32-4bbd-96da-da7da388feac-clustermesh-secrets\") pod \"0892f3e6-1c32-4bbd-96da-da7da388feac\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " Feb 9 19:45:15.321814 kubelet[2124]: I0209 19:45:15.320739 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-cni-path\") pod \"0892f3e6-1c32-4bbd-96da-da7da388feac\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " Feb 9 19:45:15.321814 kubelet[2124]: I0209 19:45:15.320729 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0892f3e6-1c32-4bbd-96da-da7da388feac" (UID: "0892f3e6-1c32-4bbd-96da-da7da388feac"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.321814 kubelet[2124]: I0209 19:45:15.320733 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0892f3e6-1c32-4bbd-96da-da7da388feac" (UID: "0892f3e6-1c32-4bbd-96da-da7da388feac"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.321814 kubelet[2124]: I0209 19:45:15.320761 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-xtables-lock\") pod \"0892f3e6-1c32-4bbd-96da-da7da388feac\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " Feb 9 19:45:15.321977 kubelet[2124]: I0209 19:45:15.320777 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0892f3e6-1c32-4bbd-96da-da7da388feac" (UID: "0892f3e6-1c32-4bbd-96da-da7da388feac"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.321977 kubelet[2124]: I0209 19:45:15.320795 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-cni-path" (OuterVolumeSpecName: "cni-path") pod "0892f3e6-1c32-4bbd-96da-da7da388feac" (UID: "0892f3e6-1c32-4bbd-96da-da7da388feac"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.321977 kubelet[2124]: I0209 19:45:15.320802 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-cilium-cgroup\") pod \"0892f3e6-1c32-4bbd-96da-da7da388feac\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " Feb 9 19:45:15.321977 kubelet[2124]: I0209 19:45:15.320807 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0892f3e6-1c32-4bbd-96da-da7da388feac" (UID: "0892f3e6-1c32-4bbd-96da-da7da388feac"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.321977 kubelet[2124]: I0209 19:45:15.320832 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0892f3e6-1c32-4bbd-96da-da7da388feac-hubble-tls\") pod \"0892f3e6-1c32-4bbd-96da-da7da388feac\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " Feb 9 19:45:15.322225 kubelet[2124]: I0209 19:45:15.320865 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tzpn\" (UniqueName: \"kubernetes.io/projected/0892f3e6-1c32-4bbd-96da-da7da388feac-kube-api-access-4tzpn\") pod \"0892f3e6-1c32-4bbd-96da-da7da388feac\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " Feb 9 19:45:15.322225 kubelet[2124]: I0209 19:45:15.320897 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0892f3e6-1c32-4bbd-96da-da7da388feac-cilium-config-path\") pod \"0892f3e6-1c32-4bbd-96da-da7da388feac\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " Feb 9 19:45:15.322225 kubelet[2124]: I0209 19:45:15.320896 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0892f3e6-1c32-4bbd-96da-da7da388feac" (UID: "0892f3e6-1c32-4bbd-96da-da7da388feac"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.322225 kubelet[2124]: I0209 19:45:15.320932 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-cilium-run\") pod \"0892f3e6-1c32-4bbd-96da-da7da388feac\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " Feb 9 19:45:15.322225 kubelet[2124]: I0209 19:45:15.320969 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32dd2c75-957a-44cd-8607-0eb3f08c2401-cilium-config-path\") pod \"32dd2c75-957a-44cd-8607-0eb3f08c2401\" (UID: \"32dd2c75-957a-44cd-8607-0eb3f08c2401\") " Feb 9 19:45:15.322225 kubelet[2124]: I0209 19:45:15.321017 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-hostproc\") pod \"0892f3e6-1c32-4bbd-96da-da7da388feac\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " Feb 9 19:45:15.322462 kubelet[2124]: I0209 19:45:15.321086 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-host-proc-sys-kernel\") pod \"0892f3e6-1c32-4bbd-96da-da7da388feac\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " Feb 9 19:45:15.322462 kubelet[2124]: I0209 19:45:15.321121 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0892f3e6-1c32-4bbd-96da-da7da388feac" (UID: "0892f3e6-1c32-4bbd-96da-da7da388feac"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.322462 kubelet[2124]: I0209 19:45:15.321135 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-host-proc-sys-net\") pod \"0892f3e6-1c32-4bbd-96da-da7da388feac\" (UID: \"0892f3e6-1c32-4bbd-96da-da7da388feac\") " Feb 9 19:45:15.322462 kubelet[2124]: I0209 19:45:15.321151 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0892f3e6-1c32-4bbd-96da-da7da388feac" (UID: "0892f3e6-1c32-4bbd-96da-da7da388feac"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.322462 kubelet[2124]: I0209 19:45:15.321175 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzmr8\" (UniqueName: \"kubernetes.io/projected/32dd2c75-957a-44cd-8607-0eb3f08c2401-kube-api-access-pzmr8\") pod \"32dd2c75-957a-44cd-8607-0eb3f08c2401\" (UID: \"32dd2c75-957a-44cd-8607-0eb3f08c2401\") " Feb 9 19:45:15.322462 kubelet[2124]: I0209 19:45:15.321270 2124 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:15.322684 kubelet[2124]: I0209 19:45:15.321325 2124 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:15.322684 kubelet[2124]: I0209 19:45:15.321346 2124 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:15.322684 kubelet[2124]: I0209 19:45:15.321354 2124 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:15.322684 kubelet[2124]: W0209 19:45:15.321342 2124 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/32dd2c75-957a-44cd-8607-0eb3f08c2401/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:45:15.322684 kubelet[2124]: I0209 19:45:15.321415 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0892f3e6-1c32-4bbd-96da-da7da388feac" (UID: "0892f3e6-1c32-4bbd-96da-da7da388feac"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.322684 kubelet[2124]: I0209 19:45:15.321409 2124 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:15.322684 kubelet[2124]: I0209 19:45:15.321678 2124 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:15.322684 kubelet[2124]: I0209 19:45:15.321693 2124 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:15.322869 kubelet[2124]: I0209 19:45:15.321725 2124 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:15.322869 kubelet[2124]: I0209 19:45:15.321734 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-hostproc" (OuterVolumeSpecName: "hostproc") pod "0892f3e6-1c32-4bbd-96da-da7da388feac" (UID: "0892f3e6-1c32-4bbd-96da-da7da388feac"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.322869 kubelet[2124]: W0209 19:45:15.321864 2124 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/0892f3e6-1c32-4bbd-96da-da7da388feac/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:45:15.324501 kubelet[2124]: I0209 19:45:15.324043 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0892f3e6-1c32-4bbd-96da-da7da388feac-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0892f3e6-1c32-4bbd-96da-da7da388feac" (UID: "0892f3e6-1c32-4bbd-96da-da7da388feac"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:45:15.324887 kubelet[2124]: I0209 19:45:15.324860 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32dd2c75-957a-44cd-8607-0eb3f08c2401-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "32dd2c75-957a-44cd-8607-0eb3f08c2401" (UID: "32dd2c75-957a-44cd-8607-0eb3f08c2401"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:45:15.325238 kubelet[2124]: I0209 19:45:15.325206 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32dd2c75-957a-44cd-8607-0eb3f08c2401-kube-api-access-pzmr8" (OuterVolumeSpecName: "kube-api-access-pzmr8") pod "32dd2c75-957a-44cd-8607-0eb3f08c2401" (UID: "32dd2c75-957a-44cd-8607-0eb3f08c2401"). InnerVolumeSpecName "kube-api-access-pzmr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:45:15.325451 kubelet[2124]: I0209 19:45:15.325330 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0892f3e6-1c32-4bbd-96da-da7da388feac-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0892f3e6-1c32-4bbd-96da-da7da388feac" (UID: "0892f3e6-1c32-4bbd-96da-da7da388feac"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:45:15.326433 kubelet[2124]: I0209 19:45:15.326385 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0892f3e6-1c32-4bbd-96da-da7da388feac-kube-api-access-4tzpn" (OuterVolumeSpecName: "kube-api-access-4tzpn") pod "0892f3e6-1c32-4bbd-96da-da7da388feac" (UID: "0892f3e6-1c32-4bbd-96da-da7da388feac"). InnerVolumeSpecName "kube-api-access-4tzpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:45:15.327400 kubelet[2124]: I0209 19:45:15.327369 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0892f3e6-1c32-4bbd-96da-da7da388feac-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0892f3e6-1c32-4bbd-96da-da7da388feac" (UID: "0892f3e6-1c32-4bbd-96da-da7da388feac"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:45:15.409816 kubelet[2124]: I0209 19:45:15.409638 2124 scope.go:115] "RemoveContainer" containerID="1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2" Feb 9 19:45:15.411541 env[1198]: time="2024-02-09T19:45:15.411480502Z" level=info msg="RemoveContainer for \"1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2\"" Feb 9 19:45:15.422115 env[1198]: time="2024-02-09T19:45:15.422031043Z" level=info msg="RemoveContainer for \"1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2\" returns successfully" Feb 9 19:45:15.423388 kubelet[2124]: I0209 19:45:15.422516 2124 scope.go:115] "RemoveContainer" containerID="1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2" Feb 9 19:45:15.423388 kubelet[2124]: I0209 19:45:15.422541 2124 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0892f3e6-1c32-4bbd-96da-da7da388feac-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:15.423388 kubelet[2124]: I0209 19:45:15.422580 2124 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0892f3e6-1c32-4bbd-96da-da7da388feac-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:15.423388 kubelet[2124]: I0209 19:45:15.422608 2124 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-4tzpn\" (UniqueName: \"kubernetes.io/projected/0892f3e6-1c32-4bbd-96da-da7da388feac-kube-api-access-4tzpn\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:15.423388 kubelet[2124]: I0209 19:45:15.422626 2124 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0892f3e6-1c32-4bbd-96da-da7da388feac-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:15.423388 kubelet[2124]: I0209 19:45:15.422639 2124 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:15.423388 kubelet[2124]: I0209 19:45:15.422667 2124 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32dd2c75-957a-44cd-8607-0eb3f08c2401-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:15.423388 kubelet[2124]: I0209 19:45:15.422684 2124 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0892f3e6-1c32-4bbd-96da-da7da388feac-host-proc-sys-kernel\") on node 
\"localhost\" DevicePath \"\"" Feb 9 19:45:15.424210 env[1198]: time="2024-02-09T19:45:15.422787750Z" level=error msg="ContainerStatus for \"1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2\": not found" Feb 9 19:45:15.424307 kubelet[2124]: I0209 19:45:15.422727 2124 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-pzmr8\" (UniqueName: \"kubernetes.io/projected/32dd2c75-957a-44cd-8607-0eb3f08c2401-kube-api-access-pzmr8\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:15.424307 kubelet[2124]: E0209 19:45:15.423138 2124 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2\": not found" containerID="1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2" Feb 9 19:45:15.424307 kubelet[2124]: I0209 19:45:15.423218 2124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2} err="failed to get container status \"1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2\": rpc error: code = NotFound desc = an error occurred when try to find container \"1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2\": not found" Feb 9 19:45:15.424307 kubelet[2124]: I0209 19:45:15.423248 2124 scope.go:115] "RemoveContainer" containerID="caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023" Feb 9 19:45:15.426035 env[1198]: time="2024-02-09T19:45:15.425425324Z" level=info msg="RemoveContainer for \"caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023\"" Feb 9 19:45:15.430096 env[1198]: time="2024-02-09T19:45:15.430022831Z" level=info msg="RemoveContainer for \"caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023\" returns successfully" Feb 9 19:45:15.430387 kubelet[2124]: I0209 19:45:15.430356 2124 scope.go:115] "RemoveContainer" containerID="57089121cca5b641775db666b19a739d03b43505f5e26961b7344f6de78ae503" Feb 9 19:45:15.431946 env[1198]: time="2024-02-09T19:45:15.431780762Z" level=info msg="RemoveContainer for \"57089121cca5b641775db666b19a739d03b43505f5e26961b7344f6de78ae503\"" Feb 9 19:45:15.435746 env[1198]: time="2024-02-09T19:45:15.435681855Z" level=info msg="RemoveContainer for \"57089121cca5b641775db666b19a739d03b43505f5e26961b7344f6de78ae503\" returns successfully" Feb 9 19:45:15.435987 kubelet[2124]: I0209 19:45:15.435953 2124 scope.go:115] "RemoveContainer" containerID="8d31a1b11ddc7e68cd4aa4b9d4a4e790f364e318dede72484070bbff4b7e83d6" Feb 9 19:45:15.437328 env[1198]: time="2024-02-09T19:45:15.437283089Z" level=info msg="RemoveContainer for \"8d31a1b11ddc7e68cd4aa4b9d4a4e790f364e318dede72484070bbff4b7e83d6\"" Feb 9 19:45:15.440490 env[1198]: time="2024-02-09T19:45:15.440449887Z" level=info msg="RemoveContainer for \"8d31a1b11ddc7e68cd4aa4b9d4a4e790f364e318dede72484070bbff4b7e83d6\" returns successfully" Feb 9 19:45:15.440665 kubelet[2124]: I0209 19:45:15.440630 2124 scope.go:115] "RemoveContainer" containerID="a320b7ee4d6481975168cfae4cfbbd7efa5c83aac32e22e9bd6295e84ecfe8b9" Feb 9 19:45:15.441652 env[1198]: time="2024-02-09T19:45:15.441610422Z" level=info msg="RemoveContainer for \"a320b7ee4d6481975168cfae4cfbbd7efa5c83aac32e22e9bd6295e84ecfe8b9\"" Feb 9 
19:45:15.446898 env[1198]: time="2024-02-09T19:45:15.446832948Z" level=info msg="RemoveContainer for \"a320b7ee4d6481975168cfae4cfbbd7efa5c83aac32e22e9bd6295e84ecfe8b9\" returns successfully" Feb 9 19:45:15.447291 kubelet[2124]: I0209 19:45:15.447232 2124 scope.go:115] "RemoveContainer" containerID="fbcbab468f978e6a7dff0b3f5c0930f1c8f425e19926024941168854d8ec0244" Feb 9 19:45:15.448915 env[1198]: time="2024-02-09T19:45:15.448881521Z" level=info msg="RemoveContainer for \"fbcbab468f978e6a7dff0b3f5c0930f1c8f425e19926024941168854d8ec0244\"" Feb 9 19:45:15.454044 env[1198]: time="2024-02-09T19:45:15.453986593Z" level=info msg="RemoveContainer for \"fbcbab468f978e6a7dff0b3f5c0930f1c8f425e19926024941168854d8ec0244\" returns successfully" Feb 9 19:45:15.454308 kubelet[2124]: I0209 19:45:15.454273 2124 scope.go:115] "RemoveContainer" containerID="caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023" Feb 9 19:45:15.455388 env[1198]: time="2024-02-09T19:45:15.455325537Z" level=error msg="ContainerStatus for \"caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023\": not found" Feb 9 19:45:15.455550 kubelet[2124]: E0209 19:45:15.455529 2124 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023\": not found" containerID="caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023" Feb 9 19:45:15.455615 kubelet[2124]: I0209 19:45:15.455570 2124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023} err="failed to get container status \"caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023\": rpc error: code = NotFound desc = an error occurred when try to find container \"caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023\": not found" Feb 9 19:45:15.455615 kubelet[2124]: I0209 19:45:15.455583 2124 scope.go:115] "RemoveContainer" containerID="57089121cca5b641775db666b19a739d03b43505f5e26961b7344f6de78ae503" Feb 9 19:45:15.455885 env[1198]: time="2024-02-09T19:45:15.455819697Z" level=error msg="ContainerStatus for \"57089121cca5b641775db666b19a739d03b43505f5e26961b7344f6de78ae503\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57089121cca5b641775db666b19a739d03b43505f5e26961b7344f6de78ae503\": not found" Feb 9 19:45:15.456034 kubelet[2124]: E0209 19:45:15.455997 2124 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57089121cca5b641775db666b19a739d03b43505f5e26961b7344f6de78ae503\": not found" containerID="57089121cca5b641775db666b19a739d03b43505f5e26961b7344f6de78ae503" Feb 9 19:45:15.456034 kubelet[2124]: I0209 19:45:15.456033 2124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:57089121cca5b641775db666b19a739d03b43505f5e26961b7344f6de78ae503} err="failed to get container status \"57089121cca5b641775db666b19a739d03b43505f5e26961b7344f6de78ae503\": rpc error: code = NotFound desc = an error occurred when try to find container \"57089121cca5b641775db666b19a739d03b43505f5e26961b7344f6de78ae503\": not found" Feb 9 19:45:15.456204 
kubelet[2124]: I0209 19:45:15.456052 2124 scope.go:115] "RemoveContainer" containerID="8d31a1b11ddc7e68cd4aa4b9d4a4e790f364e318dede72484070bbff4b7e83d6" Feb 9 19:45:15.456384 env[1198]: time="2024-02-09T19:45:15.456304237Z" level=error msg="ContainerStatus for \"8d31a1b11ddc7e68cd4aa4b9d4a4e790f364e318dede72484070bbff4b7e83d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d31a1b11ddc7e68cd4aa4b9d4a4e790f364e318dede72484070bbff4b7e83d6\": not found" Feb 9 19:45:15.456547 kubelet[2124]: E0209 19:45:15.456531 2124 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d31a1b11ddc7e68cd4aa4b9d4a4e790f364e318dede72484070bbff4b7e83d6\": not found" containerID="8d31a1b11ddc7e68cd4aa4b9d4a4e790f364e318dede72484070bbff4b7e83d6" Feb 9 19:45:15.456588 kubelet[2124]: I0209 19:45:15.456561 2124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8d31a1b11ddc7e68cd4aa4b9d4a4e790f364e318dede72484070bbff4b7e83d6} err="failed to get container status \"8d31a1b11ddc7e68cd4aa4b9d4a4e790f364e318dede72484070bbff4b7e83d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d31a1b11ddc7e68cd4aa4b9d4a4e790f364e318dede72484070bbff4b7e83d6\": not found" Feb 9 19:45:15.456588 kubelet[2124]: I0209 19:45:15.456572 2124 scope.go:115] "RemoveContainer" containerID="a320b7ee4d6481975168cfae4cfbbd7efa5c83aac32e22e9bd6295e84ecfe8b9" Feb 9 19:45:15.456817 env[1198]: time="2024-02-09T19:45:15.456741548Z" level=error msg="ContainerStatus for \"a320b7ee4d6481975168cfae4cfbbd7efa5c83aac32e22e9bd6295e84ecfe8b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a320b7ee4d6481975168cfae4cfbbd7efa5c83aac32e22e9bd6295e84ecfe8b9\": not found" Feb 9 19:45:15.456931 kubelet[2124]: E0209 19:45:15.456913 2124 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a320b7ee4d6481975168cfae4cfbbd7efa5c83aac32e22e9bd6295e84ecfe8b9\": not found" containerID="a320b7ee4d6481975168cfae4cfbbd7efa5c83aac32e22e9bd6295e84ecfe8b9" Feb 9 19:45:15.456986 kubelet[2124]: I0209 19:45:15.456948 2124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a320b7ee4d6481975168cfae4cfbbd7efa5c83aac32e22e9bd6295e84ecfe8b9} err="failed to get container status \"a320b7ee4d6481975168cfae4cfbbd7efa5c83aac32e22e9bd6295e84ecfe8b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"a320b7ee4d6481975168cfae4cfbbd7efa5c83aac32e22e9bd6295e84ecfe8b9\": not found" Feb 9 19:45:15.456986 kubelet[2124]: I0209 19:45:15.456966 2124 scope.go:115] "RemoveContainer" containerID="fbcbab468f978e6a7dff0b3f5c0930f1c8f425e19926024941168854d8ec0244" Feb 9 19:45:15.457250 env[1198]: time="2024-02-09T19:45:15.457183418Z" level=error msg="ContainerStatus for \"fbcbab468f978e6a7dff0b3f5c0930f1c8f425e19926024941168854d8ec0244\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fbcbab468f978e6a7dff0b3f5c0930f1c8f425e19926024941168854d8ec0244\": not found" Feb 9 19:45:15.457389 kubelet[2124]: E0209 19:45:15.457366 2124 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"fbcbab468f978e6a7dff0b3f5c0930f1c8f425e19926024941168854d8ec0244\": not found" containerID="fbcbab468f978e6a7dff0b3f5c0930f1c8f425e19926024941168854d8ec0244" Feb 9 19:45:15.457438 kubelet[2124]: I0209 19:45:15.457403 2124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:fbcbab468f978e6a7dff0b3f5c0930f1c8f425e19926024941168854d8ec0244} err="failed to get container status \"fbcbab468f978e6a7dff0b3f5c0930f1c8f425e19926024941168854d8ec0244\": rpc error: code = NotFound desc = an error occurred when try to find container \"fbcbab468f978e6a7dff0b3f5c0930f1c8f425e19926024941168854d8ec0244\": not found" Feb 9 19:45:16.141883 systemd[1]: var-lib-kubelet-pods-32dd2c75\x2d957a\x2d44cd\x2d8607\x2d0eb3f08c2401-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpzmr8.mount: Deactivated successfully. Feb 9 19:45:16.142074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490-rootfs.mount: Deactivated successfully. Feb 9 19:45:16.142212 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490-shm.mount: Deactivated successfully. Feb 9 19:45:16.142320 systemd[1]: var-lib-kubelet-pods-0892f3e6\x2d1c32\x2d4bbd\x2d96da\x2dda7da388feac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4tzpn.mount: Deactivated successfully. Feb 9 19:45:16.142458 systemd[1]: var-lib-kubelet-pods-0892f3e6\x2d1c32\x2d4bbd\x2d96da\x2dda7da388feac-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:45:16.142612 systemd[1]: var-lib-kubelet-pods-0892f3e6\x2d1c32\x2d4bbd\x2d96da\x2dda7da388feac-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 19:45:16.223570 env[1198]: time="2024-02-09T19:45:16.223473316Z" level=info msg="StopContainer for \"1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2\" with timeout 1 (s)" Feb 9 19:45:16.224083 env[1198]: time="2024-02-09T19:45:16.223550933Z" level=error msg="StopContainer for \"1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2\": not found" Feb 9 19:45:16.224083 env[1198]: time="2024-02-09T19:45:16.223471472Z" level=info msg="StopContainer for \"caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023\" with timeout 1 (s)" Feb 9 19:45:16.224083 env[1198]: time="2024-02-09T19:45:16.223758197Z" level=error msg="StopContainer for \"caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023\": not found" Feb 9 19:45:16.224233 kubelet[2124]: E0209 19:45:16.224158 2124 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2\": not found" containerID="1538d578a583fa91f578bf37a33d2e3d442ff0a1a9d15cf13960936151610fc2" Feb 9 19:45:16.224581 kubelet[2124]: E0209 19:45:16.224373 2124 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023\": not found" containerID="caba6169d5257e9c28ebc958a24c5c7f7d51227e77cbfce171ec1b44c5a68023" Feb 9 19:45:16.224652 env[1198]: time="2024-02-09T19:45:16.224402841Z" level=info msg="StopPodSandbox for \"83d4f9ef533728196503f37d555b99e6b21979fb5b4f95ce02a15836c1cfa6a8\"" Feb 9 19:45:16.224652 env[1198]: time="2024-02-09T19:45:16.224472694Z" level=info msg="TearDown network for sandbox \"83d4f9ef533728196503f37d555b99e6b21979fb5b4f95ce02a15836c1cfa6a8\" successfully" Feb 9 19:45:16.224652 env[1198]: time="2024-02-09T19:45:16.224518221Z" level=info msg="StopPodSandbox for \"83d4f9ef533728196503f37d555b99e6b21979fb5b4f95ce02a15836c1cfa6a8\" returns successfully" Feb 9 19:45:16.224872 kubelet[2124]: I0209 19:45:16.224841 2124 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0892f3e6-1c32-4bbd-96da-da7da388feac path="/var/lib/kubelet/pods/0892f3e6-1c32-4bbd-96da-da7da388feac/volumes" Feb 9 19:45:16.224994 env[1198]: time="2024-02-09T19:45:16.224895798Z" level=info msg="StopPodSandbox for \"b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490\"" Feb 9 19:45:16.225053 env[1198]: time="2024-02-09T19:45:16.224987101Z" level=info msg="TearDown network for sandbox \"b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490\" successfully" Feb 9 19:45:16.225053 env[1198]: time="2024-02-09T19:45:16.225011117Z" level=info msg="StopPodSandbox for \"b784746f057d5a702265547a2348214a9e152d7bc852d6946172ffbf951bd490\" returns successfully" Feb 9 19:45:16.226304 kubelet[2124]: I0209 19:45:16.226265 2124 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=32dd2c75-957a-44cd-8607-0eb3f08c2401 path="/var/lib/kubelet/pods/32dd2c75-957a-44cd-8607-0eb3f08c2401/volumes" Feb 9 19:45:17.091642 sshd[3885]: pam_unix(sshd:session): session closed for user 
core Feb 9 19:45:17.094182 systemd[1]: Started sshd@23-10.0.0.49:22-10.0.0.1:51700.service. Feb 9 19:45:17.094672 systemd[1]: sshd@22-10.0.0.49:22-10.0.0.1:51692.service: Deactivated successfully. Feb 9 19:45:17.095724 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 19:45:17.096228 systemd-logind[1179]: Session 23 logged out. Waiting for processes to exit. Feb 9 19:45:17.097231 systemd-logind[1179]: Removed session 23. Feb 9 19:45:17.126870 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 51700 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:17.127716 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:17.131340 systemd-logind[1179]: New session 24 of user core. Feb 9 19:45:17.132233 systemd[1]: Started session-24.scope. Feb 9 19:45:17.268612 kubelet[2124]: E0209 19:45:17.268571 2124 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:45:17.639515 kubelet[2124]: I0209 19:45:17.639453 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:17.639729 kubelet[2124]: E0209 19:45:17.639550 2124 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0892f3e6-1c32-4bbd-96da-da7da388feac" containerName="mount-cgroup" Feb 9 19:45:17.639729 kubelet[2124]: E0209 19:45:17.639564 2124 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0892f3e6-1c32-4bbd-96da-da7da388feac" containerName="clean-cilium-state" Feb 9 19:45:17.639729 kubelet[2124]: E0209 19:45:17.639575 2124 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0892f3e6-1c32-4bbd-96da-da7da388feac" containerName="apply-sysctl-overwrites" Feb 9 19:45:17.639729 kubelet[2124]: E0209 19:45:17.639584 2124 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="32dd2c75-957a-44cd-8607-0eb3f08c2401" containerName="cilium-operator" Feb 9 19:45:17.639729 kubelet[2124]: E0209 19:45:17.639592 2124 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0892f3e6-1c32-4bbd-96da-da7da388feac" containerName="mount-bpf-fs" Feb 9 19:45:17.639729 kubelet[2124]: E0209 19:45:17.639601 2124 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0892f3e6-1c32-4bbd-96da-da7da388feac" containerName="cilium-agent" Feb 9 19:45:17.639729 kubelet[2124]: I0209 19:45:17.639644 2124 memory_manager.go:346] "RemoveStaleState removing state" podUID="32dd2c75-957a-44cd-8607-0eb3f08c2401" containerName="cilium-operator" Feb 9 19:45:17.639729 kubelet[2124]: I0209 19:45:17.639654 2124 memory_manager.go:346] "RemoveStaleState removing state" podUID="0892f3e6-1c32-4bbd-96da-da7da388feac" containerName="cilium-agent" Feb 9 19:45:17.643204 sshd[4054]: pam_unix(sshd:session): session closed for user core Feb 9 19:45:17.646680 systemd[1]: Started sshd@24-10.0.0.49:22-10.0.0.1:51710.service. Feb 9 19:45:17.655269 systemd[1]: sshd@23-10.0.0.49:22-10.0.0.1:51700.service: Deactivated successfully. Feb 9 19:45:17.656418 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 19:45:17.666351 systemd-logind[1179]: Session 24 logged out. Waiting for processes to exit. Feb 9 19:45:17.670521 systemd-logind[1179]: Removed session 24. 
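The burst of rpc error: code = NotFound messages between 19:45:15.42 and 19:45:16.22 is a benign race rather than a failure: the containers were removed successfully (the "RemoveContainer ... returns successfully" lines), and the follow-up ContainerStatus and StopContainer calls simply refer to IDs the runtime no longer knows. A client can treat that gRPC code as "already gone"; a minimal sketch against the v1 CRI client (containerGone is a hypothetical helper, not kubelet code, and the kubelet in this log may be on an older CRI revision):

    // A sketch showing how a CRI client can classify the NotFound
    // responses seen above as "already removed".
    package crinotes

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // containerGone asks the runtime for a container's status and reports
    // whether the container has already been deleted, mirroring the
    // "ContainerStatus ... failed" / "DeleteContainer returned error"
    // pairs in the log.
    func containerGone(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) (bool, error) {
    	_, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
    	if err == nil {
    		return false, nil // still known to the runtime
    	}
    	if status.Code(err) == codes.NotFound {
    		return true, nil // removal already happened; nothing left to stop
    	}
    	return false, fmt.Errorf("ContainerStatus %s: %w", id, err)
    }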
Feb 9 19:45:17.714108 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 51710 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:17.715659 sshd[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:17.720180 systemd-logind[1179]: New session 25 of user core. Feb 9 19:45:17.720947 systemd[1]: Started session-25.scope. Feb 9 19:45:17.735508 kubelet[2124]: I0209 19:45:17.735463 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-host-proc-sys-net\") pod \"cilium-k8288\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " pod="kube-system/cilium-k8288" Feb 9 19:45:17.735680 kubelet[2124]: I0209 19:45:17.735531 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-cilium-cgroup\") pod \"cilium-k8288\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " pod="kube-system/cilium-k8288" Feb 9 19:45:17.735680 kubelet[2124]: I0209 19:45:17.735578 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-cni-path\") pod \"cilium-k8288\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " pod="kube-system/cilium-k8288" Feb 9 19:45:17.735752 kubelet[2124]: I0209 19:45:17.735685 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-xtables-lock\") pod \"cilium-k8288\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " pod="kube-system/cilium-k8288" Feb 9 19:45:17.735819 kubelet[2124]: I0209 19:45:17.735804 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-bpf-maps\") pod \"cilium-k8288\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " pod="kube-system/cilium-k8288" Feb 9 19:45:17.735852 kubelet[2124]: I0209 19:45:17.735827 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cfb59561-6930-4734-8568-58dd71b0d378-hubble-tls\") pod \"cilium-k8288\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " pod="kube-system/cilium-k8288" Feb 9 19:45:17.735876 kubelet[2124]: I0209 19:45:17.735870 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-lib-modules\") pod \"cilium-k8288\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " pod="kube-system/cilium-k8288" Feb 9 19:45:17.735903 kubelet[2124]: I0209 19:45:17.735894 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cfb59561-6930-4734-8568-58dd71b0d378-clustermesh-secrets\") pod \"cilium-k8288\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " pod="kube-system/cilium-k8288" Feb 9 19:45:17.735946 kubelet[2124]: I0209 19:45:17.735928 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-etc-cni-netd\") pod \"cilium-k8288\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " pod="kube-system/cilium-k8288" Feb 9 19:45:17.735999 kubelet[2124]: I0209 19:45:17.735980 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cfb59561-6930-4734-8568-58dd71b0d378-cilium-ipsec-secrets\") pod \"cilium-k8288\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " pod="kube-system/cilium-k8288" Feb 9 19:45:17.736039 kubelet[2124]: I0209 19:45:17.736025 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-cilium-run\") pod \"cilium-k8288\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " pod="kube-system/cilium-k8288" Feb 9 19:45:17.736116 kubelet[2124]: I0209 19:45:17.736092 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cfb59561-6930-4734-8568-58dd71b0d378-cilium-config-path\") pod \"cilium-k8288\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " pod="kube-system/cilium-k8288" Feb 9 19:45:17.736296 kubelet[2124]: I0209 19:45:17.736137 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-host-proc-sys-kernel\") pod \"cilium-k8288\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " pod="kube-system/cilium-k8288" Feb 9 19:45:17.736296 kubelet[2124]: I0209 19:45:17.736176 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt2fw\" (UniqueName: \"kubernetes.io/projected/cfb59561-6930-4734-8568-58dd71b0d378-kube-api-access-zt2fw\") pod \"cilium-k8288\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " pod="kube-system/cilium-k8288" Feb 9 19:45:17.736296 kubelet[2124]: I0209 19:45:17.736232 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-hostproc\") pod \"cilium-k8288\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " pod="kube-system/cilium-k8288" Feb 9 19:45:17.847423 sshd[4067]: pam_unix(sshd:session): session closed for user core Feb 9 19:45:17.852598 systemd[1]: Started sshd@25-10.0.0.49:22-10.0.0.1:51714.service. Feb 9 19:45:17.853530 systemd[1]: sshd@24-10.0.0.49:22-10.0.0.1:51710.service: Deactivated successfully. Feb 9 19:45:17.854589 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 19:45:17.854788 systemd-logind[1179]: Session 25 logged out. Waiting for processes to exit. Feb 9 19:45:17.855811 systemd-logind[1179]: Removed session 25. Feb 9 19:45:17.882432 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 51714 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:17.883466 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:17.886765 systemd-logind[1179]: New session 26 of user core. Feb 9 19:45:17.887500 systemd[1]: Started session-26.scope. 
Feb 9 19:45:18.245480 kubelet[2124]: E0209 19:45:18.245429 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:18.245973 env[1198]: time="2024-02-09T19:45:18.245920499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k8288,Uid:cfb59561-6930-4734-8568-58dd71b0d378,Namespace:kube-system,Attempt:0,}" Feb 9 19:45:18.258884 env[1198]: time="2024-02-09T19:45:18.258800685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:45:18.258884 env[1198]: time="2024-02-09T19:45:18.258837715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:45:18.258884 env[1198]: time="2024-02-09T19:45:18.258848365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:45:18.259159 env[1198]: time="2024-02-09T19:45:18.259055328Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e8823d120b774f54ef663ae3ada23c86fbf369c6634ffc3292de9ee19cf4e6a pid=4107 runtime=io.containerd.runc.v2 Feb 9 19:45:18.292152 env[1198]: time="2024-02-09T19:45:18.292102304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k8288,Uid:cfb59561-6930-4734-8568-58dd71b0d378,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e8823d120b774f54ef663ae3ada23c86fbf369c6634ffc3292de9ee19cf4e6a\"" Feb 9 19:45:18.292687 kubelet[2124]: E0209 19:45:18.292640 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:18.294735 env[1198]: time="2024-02-09T19:45:18.294674638Z" level=info msg="CreateContainer within sandbox \"8e8823d120b774f54ef663ae3ada23c86fbf369c6634ffc3292de9ee19cf4e6a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:45:18.307610 env[1198]: time="2024-02-09T19:45:18.307549272Z" level=info msg="CreateContainer within sandbox \"8e8823d120b774f54ef663ae3ada23c86fbf369c6634ffc3292de9ee19cf4e6a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5d970df92b456d9e6d77d26147e76106b1165af17251ca0bccad7aad04c659c8\"" Feb 9 19:45:18.308207 env[1198]: time="2024-02-09T19:45:18.308140465Z" level=info msg="StartContainer for \"5d970df92b456d9e6d77d26147e76106b1165af17251ca0bccad7aad04c659c8\"" Feb 9 19:45:18.353062 env[1198]: time="2024-02-09T19:45:18.352993326Z" level=info msg="StartContainer for \"5d970df92b456d9e6d77d26147e76106b1165af17251ca0bccad7aad04c659c8\" returns successfully" Feb 9 19:45:18.392512 env[1198]: time="2024-02-09T19:45:18.392456824Z" level=info msg="shim disconnected" id=5d970df92b456d9e6d77d26147e76106b1165af17251ca0bccad7aad04c659c8 Feb 9 19:45:18.392512 env[1198]: time="2024-02-09T19:45:18.392507660Z" level=warning msg="cleaning up after shim disconnected" id=5d970df92b456d9e6d77d26147e76106b1165af17251ca0bccad7aad04c659c8 namespace=k8s.io Feb 9 19:45:18.392512 env[1198]: time="2024-02-09T19:45:18.392517018Z" level=info msg="cleaning up dead shim" Feb 9 19:45:18.400115 env[1198]: time="2024-02-09T19:45:18.400078996Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4191 
runtime=io.containerd.runc.v2\n" Feb 9 19:45:18.425781 env[1198]: time="2024-02-09T19:45:18.425703418Z" level=info msg="StopPodSandbox for \"8e8823d120b774f54ef663ae3ada23c86fbf369c6634ffc3292de9ee19cf4e6a\"" Feb 9 19:45:18.426023 env[1198]: time="2024-02-09T19:45:18.425788590Z" level=info msg="Container to stop \"5d970df92b456d9e6d77d26147e76106b1165af17251ca0bccad7aad04c659c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:45:18.544552 env[1198]: time="2024-02-09T19:45:18.543859669Z" level=info msg="shim disconnected" id=8e8823d120b774f54ef663ae3ada23c86fbf369c6634ffc3292de9ee19cf4e6a Feb 9 19:45:18.544552 env[1198]: time="2024-02-09T19:45:18.543925523Z" level=warning msg="cleaning up after shim disconnected" id=8e8823d120b774f54ef663ae3ada23c86fbf369c6634ffc3292de9ee19cf4e6a namespace=k8s.io Feb 9 19:45:18.544552 env[1198]: time="2024-02-09T19:45:18.543937967Z" level=info msg="cleaning up dead shim" Feb 9 19:45:18.552047 env[1198]: time="2024-02-09T19:45:18.551986528Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4223 runtime=io.containerd.runc.v2\n" Feb 9 19:45:18.552404 env[1198]: time="2024-02-09T19:45:18.552369206Z" level=info msg="TearDown network for sandbox \"8e8823d120b774f54ef663ae3ada23c86fbf369c6634ffc3292de9ee19cf4e6a\" successfully" Feb 9 19:45:18.552404 env[1198]: time="2024-02-09T19:45:18.552395876Z" level=info msg="StopPodSandbox for \"8e8823d120b774f54ef663ae3ada23c86fbf369c6634ffc3292de9ee19cf4e6a\" returns successfully" Feb 9 19:45:18.645978 kubelet[2124]: I0209 19:45:18.645905 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-lib-modules\") pod \"cfb59561-6930-4734-8568-58dd71b0d378\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " Feb 9 19:45:18.645978 kubelet[2124]: I0209 19:45:18.645973 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-hostproc\") pod \"cfb59561-6930-4734-8568-58dd71b0d378\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " Feb 9 19:45:18.645978 kubelet[2124]: I0209 19:45:18.645991 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-cni-path\") pod \"cfb59561-6930-4734-8568-58dd71b0d378\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " Feb 9 19:45:18.646293 kubelet[2124]: I0209 19:45:18.646006 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-xtables-lock\") pod \"cfb59561-6930-4734-8568-58dd71b0d378\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " Feb 9 19:45:18.646293 kubelet[2124]: I0209 19:45:18.646027 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-etc-cni-netd\") pod \"cfb59561-6930-4734-8568-58dd71b0d378\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " Feb 9 19:45:18.646293 kubelet[2124]: I0209 19:45:18.646056 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/cfb59561-6930-4734-8568-58dd71b0d378-cilium-ipsec-secrets\") pod \"cfb59561-6930-4734-8568-58dd71b0d378\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " Feb 9 19:45:18.646293 kubelet[2124]: I0209 19:45:18.646107 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-host-proc-sys-net\") pod \"cfb59561-6930-4734-8568-58dd71b0d378\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " Feb 9 19:45:18.646293 kubelet[2124]: I0209 19:45:18.646043 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cfb59561-6930-4734-8568-58dd71b0d378" (UID: "cfb59561-6930-4734-8568-58dd71b0d378"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:18.646293 kubelet[2124]: I0209 19:45:18.646128 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cfb59561-6930-4734-8568-58dd71b0d378-hubble-tls\") pod \"cfb59561-6930-4734-8568-58dd71b0d378\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " Feb 9 19:45:18.646451 kubelet[2124]: I0209 19:45:18.646099 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-hostproc" (OuterVolumeSpecName: "hostproc") pod "cfb59561-6930-4734-8568-58dd71b0d378" (UID: "cfb59561-6930-4734-8568-58dd71b0d378"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:18.646451 kubelet[2124]: I0209 19:45:18.646145 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-cilium-run\") pod \"cfb59561-6930-4734-8568-58dd71b0d378\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " Feb 9 19:45:18.646451 kubelet[2124]: I0209 19:45:18.646184 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cfb59561-6930-4734-8568-58dd71b0d378" (UID: "cfb59561-6930-4734-8568-58dd71b0d378"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:18.646451 kubelet[2124]: I0209 19:45:18.646209 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-host-proc-sys-kernel\") pod \"cfb59561-6930-4734-8568-58dd71b0d378\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " Feb 9 19:45:18.646451 kubelet[2124]: I0209 19:45:18.646213 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cfb59561-6930-4734-8568-58dd71b0d378" (UID: "cfb59561-6930-4734-8568-58dd71b0d378"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:18.646575 kubelet[2124]: I0209 19:45:18.646242 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-cilium-cgroup\") pod \"cfb59561-6930-4734-8568-58dd71b0d378\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " Feb 9 19:45:18.646575 kubelet[2124]: I0209 19:45:18.646267 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt2fw\" (UniqueName: \"kubernetes.io/projected/cfb59561-6930-4734-8568-58dd71b0d378-kube-api-access-zt2fw\") pod \"cfb59561-6930-4734-8568-58dd71b0d378\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " Feb 9 19:45:18.646575 kubelet[2124]: I0209 19:45:18.646284 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-bpf-maps\") pod \"cfb59561-6930-4734-8568-58dd71b0d378\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " Feb 9 19:45:18.646575 kubelet[2124]: I0209 19:45:18.646309 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cfb59561-6930-4734-8568-58dd71b0d378-clustermesh-secrets\") pod \"cfb59561-6930-4734-8568-58dd71b0d378\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " Feb 9 19:45:18.646575 kubelet[2124]: I0209 19:45:18.646329 2124 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cfb59561-6930-4734-8568-58dd71b0d378-cilium-config-path\") pod \"cfb59561-6930-4734-8568-58dd71b0d378\" (UID: \"cfb59561-6930-4734-8568-58dd71b0d378\") " Feb 9 19:45:18.646575 kubelet[2124]: I0209 19:45:18.646375 2124 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:18.646575 kubelet[2124]: I0209 19:45:18.646384 2124 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:18.646750 kubelet[2124]: I0209 19:45:18.646395 2124 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:18.646750 kubelet[2124]: I0209 19:45:18.646405 2124 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:18.646750 kubelet[2124]: W0209 19:45:18.646571 2124 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/cfb59561-6930-4734-8568-58dd71b0d378/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:45:18.646907 kubelet[2124]: I0209 19:45:18.646873 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-cni-path" (OuterVolumeSpecName: "cni-path") pod "cfb59561-6930-4734-8568-58dd71b0d378" (UID: "cfb59561-6930-4734-8568-58dd71b0d378"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:18.647013 kubelet[2124]: I0209 19:45:18.646995 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cfb59561-6930-4734-8568-58dd71b0d378" (UID: "cfb59561-6930-4734-8568-58dd71b0d378"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:18.647210 kubelet[2124]: I0209 19:45:18.647189 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cfb59561-6930-4734-8568-58dd71b0d378" (UID: "cfb59561-6930-4734-8568-58dd71b0d378"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:18.648639 kubelet[2124]: I0209 19:45:18.648612 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfb59561-6930-4734-8568-58dd71b0d378-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cfb59561-6930-4734-8568-58dd71b0d378" (UID: "cfb59561-6930-4734-8568-58dd71b0d378"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:45:18.648704 kubelet[2124]: I0209 19:45:18.648665 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cfb59561-6930-4734-8568-58dd71b0d378" (UID: "cfb59561-6930-4734-8568-58dd71b0d378"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:18.648704 kubelet[2124]: I0209 19:45:18.648685 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cfb59561-6930-4734-8568-58dd71b0d378" (UID: "cfb59561-6930-4734-8568-58dd71b0d378"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:18.648786 kubelet[2124]: I0209 19:45:18.648702 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cfb59561-6930-4734-8568-58dd71b0d378" (UID: "cfb59561-6930-4734-8568-58dd71b0d378"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:18.648856 kubelet[2124]: I0209 19:45:18.648819 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfb59561-6930-4734-8568-58dd71b0d378-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "cfb59561-6930-4734-8568-58dd71b0d378" (UID: "cfb59561-6930-4734-8568-58dd71b0d378"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:45:18.649461 kubelet[2124]: I0209 19:45:18.649428 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfb59561-6930-4734-8568-58dd71b0d378-kube-api-access-zt2fw" (OuterVolumeSpecName: "kube-api-access-zt2fw") pod "cfb59561-6930-4734-8568-58dd71b0d378" (UID: "cfb59561-6930-4734-8568-58dd71b0d378"). InnerVolumeSpecName "kube-api-access-zt2fw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:45:18.650360 kubelet[2124]: I0209 19:45:18.650328 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfb59561-6930-4734-8568-58dd71b0d378-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cfb59561-6930-4734-8568-58dd71b0d378" (UID: "cfb59561-6930-4734-8568-58dd71b0d378"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:45:18.650684 kubelet[2124]: I0209 19:45:18.650663 2124 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfb59561-6930-4734-8568-58dd71b0d378-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cfb59561-6930-4734-8568-58dd71b0d378" (UID: "cfb59561-6930-4734-8568-58dd71b0d378"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:45:18.747241 kubelet[2124]: I0209 19:45:18.747177 2124 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:18.747241 kubelet[2124]: I0209 19:45:18.747222 2124 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cfb59561-6930-4734-8568-58dd71b0d378-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:18.747241 kubelet[2124]: I0209 19:45:18.747237 2124 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cfb59561-6930-4734-8568-58dd71b0d378-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:18.747241 kubelet[2124]: I0209 19:45:18.747249 2124 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:18.747241 kubelet[2124]: I0209 19:45:18.747259 2124 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:18.747241 kubelet[2124]: I0209 19:45:18.747267 2124 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:18.747573 kubelet[2124]: I0209 19:45:18.747277 2124 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cfb59561-6930-4734-8568-58dd71b0d378-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:18.747573 kubelet[2124]: I0209 19:45:18.747287 2124 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cfb59561-6930-4734-8568-58dd71b0d378-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:18.747573 kubelet[2124]: I0209 19:45:18.747296 2124 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:18.747573 kubelet[2124]: I0209 19:45:18.747305 2124 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/cfb59561-6930-4734-8568-58dd71b0d378-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:18.747573 kubelet[2124]: I0209 19:45:18.747315 2124 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-zt2fw\" (UniqueName: \"kubernetes.io/projected/cfb59561-6930-4734-8568-58dd71b0d378-kube-api-access-zt2fw\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:18.843271 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e8823d120b774f54ef663ae3ada23c86fbf369c6634ffc3292de9ee19cf4e6a-shm.mount: Deactivated successfully. Feb 9 19:45:18.843435 systemd[1]: var-lib-kubelet-pods-cfb59561\x2d6930\x2d4734\x2d8568\x2d58dd71b0d378-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzt2fw.mount: Deactivated successfully. Feb 9 19:45:18.843520 systemd[1]: var-lib-kubelet-pods-cfb59561\x2d6930\x2d4734\x2d8568\x2d58dd71b0d378-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:45:18.843615 systemd[1]: var-lib-kubelet-pods-cfb59561\x2d6930\x2d4734\x2d8568\x2d58dd71b0d378-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 19:45:18.843717 systemd[1]: var-lib-kubelet-pods-cfb59561\x2d6930\x2d4734\x2d8568\x2d58dd71b0d378-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:45:19.223219 kubelet[2124]: E0209 19:45:19.223152 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:19.428870 kubelet[2124]: I0209 19:45:19.428829 2124 scope.go:115] "RemoveContainer" containerID="5d970df92b456d9e6d77d26147e76106b1165af17251ca0bccad7aad04c659c8" Feb 9 19:45:19.432382 env[1198]: time="2024-02-09T19:45:19.432336987Z" level=info msg="RemoveContainer for \"5d970df92b456d9e6d77d26147e76106b1165af17251ca0bccad7aad04c659c8\"" Feb 9 19:45:19.436255 env[1198]: time="2024-02-09T19:45:19.436194088Z" level=info msg="RemoveContainer for \"5d970df92b456d9e6d77d26147e76106b1165af17251ca0bccad7aad04c659c8\" returns successfully" Feb 9 19:45:19.454596 kubelet[2124]: I0209 19:45:19.454292 2124 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:19.454596 kubelet[2124]: E0209 19:45:19.454376 2124 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cfb59561-6930-4734-8568-58dd71b0d378" containerName="mount-cgroup" Feb 9 19:45:19.454596 kubelet[2124]: I0209 19:45:19.454412 2124 memory_manager.go:346] "RemoveStaleState removing state" podUID="cfb59561-6930-4734-8568-58dd71b0d378" containerName="mount-cgroup" Feb 9 19:45:19.551483 kubelet[2124]: I0209 19:45:19.551326 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9f8bf1a-e67b-429c-a972-3cc9f9237608-lib-modules\") pod \"cilium-6ls8z\" (UID: \"c9f8bf1a-e67b-429c-a972-3cc9f9237608\") " pod="kube-system/cilium-6ls8z" Feb 9 19:45:19.551483 kubelet[2124]: I0209 19:45:19.551374 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9f8bf1a-e67b-429c-a972-3cc9f9237608-xtables-lock\") pod \"cilium-6ls8z\" (UID: \"c9f8bf1a-e67b-429c-a972-3cc9f9237608\") " pod="kube-system/cilium-6ls8z" Feb 9 19:45:19.551483 kubelet[2124]: I0209 19:45:19.551394 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-652q5\" (UniqueName: \"kubernetes.io/projected/c9f8bf1a-e67b-429c-a972-3cc9f9237608-kube-api-access-652q5\") pod \"cilium-6ls8z\" (UID: \"c9f8bf1a-e67b-429c-a972-3cc9f9237608\") " pod="kube-system/cilium-6ls8z" Feb 9 19:45:19.551483 kubelet[2124]: I0209 19:45:19.551411 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c9f8bf1a-e67b-429c-a972-3cc9f9237608-host-proc-sys-net\") pod \"cilium-6ls8z\" (UID: \"c9f8bf1a-e67b-429c-a972-3cc9f9237608\") " pod="kube-system/cilium-6ls8z" Feb 9 19:45:19.551813 kubelet[2124]: I0209 19:45:19.551550 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c9f8bf1a-e67b-429c-a972-3cc9f9237608-host-proc-sys-kernel\") pod \"cilium-6ls8z\" (UID: \"c9f8bf1a-e67b-429c-a972-3cc9f9237608\") " pod="kube-system/cilium-6ls8z" Feb 9 19:45:19.551813 kubelet[2124]: I0209 19:45:19.551675 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c9f8bf1a-e67b-429c-a972-3cc9f9237608-cilium-cgroup\") pod \"cilium-6ls8z\" (UID: \"c9f8bf1a-e67b-429c-a972-3cc9f9237608\") " pod="kube-system/cilium-6ls8z" Feb 9 19:45:19.551813 kubelet[2124]: I0209 19:45:19.551746 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c9f8bf1a-e67b-429c-a972-3cc9f9237608-cni-path\") pod \"cilium-6ls8z\" (UID: \"c9f8bf1a-e67b-429c-a972-3cc9f9237608\") " pod="kube-system/cilium-6ls8z" Feb 9 19:45:19.551813 kubelet[2124]: I0209 19:45:19.551777 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c9f8bf1a-e67b-429c-a972-3cc9f9237608-clustermesh-secrets\") pod \"cilium-6ls8z\" (UID: \"c9f8bf1a-e67b-429c-a972-3cc9f9237608\") " pod="kube-system/cilium-6ls8z" Feb 9 19:45:19.551813 kubelet[2124]: I0209 19:45:19.551816 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c9f8bf1a-e67b-429c-a972-3cc9f9237608-hubble-tls\") pod \"cilium-6ls8z\" (UID: \"c9f8bf1a-e67b-429c-a972-3cc9f9237608\") " pod="kube-system/cilium-6ls8z" Feb 9 19:45:19.551970 kubelet[2124]: I0209 19:45:19.551845 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c9f8bf1a-e67b-429c-a972-3cc9f9237608-bpf-maps\") pod \"cilium-6ls8z\" (UID: \"c9f8bf1a-e67b-429c-a972-3cc9f9237608\") " pod="kube-system/cilium-6ls8z" Feb 9 19:45:19.551970 kubelet[2124]: I0209 19:45:19.551866 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c9f8bf1a-e67b-429c-a972-3cc9f9237608-hostproc\") pod \"cilium-6ls8z\" (UID: \"c9f8bf1a-e67b-429c-a972-3cc9f9237608\") " pod="kube-system/cilium-6ls8z" Feb 9 19:45:19.551970 kubelet[2124]: I0209 19:45:19.551891 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9f8bf1a-e67b-429c-a972-3cc9f9237608-etc-cni-netd\") pod \"cilium-6ls8z\" (UID: \"c9f8bf1a-e67b-429c-a972-3cc9f9237608\") " 
pod="kube-system/cilium-6ls8z" Feb 9 19:45:19.551970 kubelet[2124]: I0209 19:45:19.551969 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c9f8bf1a-e67b-429c-a972-3cc9f9237608-cilium-run\") pod \"cilium-6ls8z\" (UID: \"c9f8bf1a-e67b-429c-a972-3cc9f9237608\") " pod="kube-system/cilium-6ls8z" Feb 9 19:45:19.552127 kubelet[2124]: I0209 19:45:19.552005 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c9f8bf1a-e67b-429c-a972-3cc9f9237608-cilium-config-path\") pod \"cilium-6ls8z\" (UID: \"c9f8bf1a-e67b-429c-a972-3cc9f9237608\") " pod="kube-system/cilium-6ls8z" Feb 9 19:45:19.552127 kubelet[2124]: I0209 19:45:19.552037 2124 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c9f8bf1a-e67b-429c-a972-3cc9f9237608-cilium-ipsec-secrets\") pod \"cilium-6ls8z\" (UID: \"c9f8bf1a-e67b-429c-a972-3cc9f9237608\") " pod="kube-system/cilium-6ls8z" Feb 9 19:45:19.765177 kubelet[2124]: E0209 19:45:19.765111 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:19.765810 env[1198]: time="2024-02-09T19:45:19.765751667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6ls8z,Uid:c9f8bf1a-e67b-429c-a972-3cc9f9237608,Namespace:kube-system,Attempt:0,}" Feb 9 19:45:19.780831 env[1198]: time="2024-02-09T19:45:19.780726922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:45:19.780831 env[1198]: time="2024-02-09T19:45:19.780785153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:45:19.780831 env[1198]: time="2024-02-09T19:45:19.780797536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:45:19.781156 env[1198]: time="2024-02-09T19:45:19.781094641Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/96ddd099aa0d5961a7b5a1b3dc95dade61d114cc1182607a6a53f667edeb1856 pid=4251 runtime=io.containerd.runc.v2 Feb 9 19:45:19.815029 env[1198]: time="2024-02-09T19:45:19.814904115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6ls8z,Uid:c9f8bf1a-e67b-429c-a972-3cc9f9237608,Namespace:kube-system,Attempt:0,} returns sandbox id \"96ddd099aa0d5961a7b5a1b3dc95dade61d114cc1182607a6a53f667edeb1856\"" Feb 9 19:45:19.815555 kubelet[2124]: E0209 19:45:19.815525 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:19.818257 env[1198]: time="2024-02-09T19:45:19.818210771Z" level=info msg="CreateContainer within sandbox \"96ddd099aa0d5961a7b5a1b3dc95dade61d114cc1182607a6a53f667edeb1856\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:45:19.837339 env[1198]: time="2024-02-09T19:45:19.837286459Z" level=info msg="CreateContainer within sandbox \"96ddd099aa0d5961a7b5a1b3dc95dade61d114cc1182607a6a53f667edeb1856\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9945a9b4e815a014e8be0527b939fe72df27730fa9f0c01cd2ab344d200af98a\"" Feb 9 19:45:19.837802 env[1198]: time="2024-02-09T19:45:19.837772472Z" level=info msg="StartContainer for \"9945a9b4e815a014e8be0527b939fe72df27730fa9f0c01cd2ab344d200af98a\"" Feb 9 19:45:19.883440 env[1198]: time="2024-02-09T19:45:19.883372967Z" level=info msg="StartContainer for \"9945a9b4e815a014e8be0527b939fe72df27730fa9f0c01cd2ab344d200af98a\" returns successfully" Feb 9 19:45:19.905493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9945a9b4e815a014e8be0527b939fe72df27730fa9f0c01cd2ab344d200af98a-rootfs.mount: Deactivated successfully. 
Feb 9 19:45:19.916420 env[1198]: time="2024-02-09T19:45:19.916282062Z" level=info msg="shim disconnected" id=9945a9b4e815a014e8be0527b939fe72df27730fa9f0c01cd2ab344d200af98a Feb 9 19:45:19.916420 env[1198]: time="2024-02-09T19:45:19.916414804Z" level=warning msg="cleaning up after shim disconnected" id=9945a9b4e815a014e8be0527b939fe72df27730fa9f0c01cd2ab344d200af98a namespace=k8s.io Feb 9 19:45:19.916420 env[1198]: time="2024-02-09T19:45:19.916425484Z" level=info msg="cleaning up dead shim" Feb 9 19:45:19.925027 env[1198]: time="2024-02-09T19:45:19.924972096Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4335 runtime=io.containerd.runc.v2\n" Feb 9 19:45:20.227500 kubelet[2124]: I0209 19:45:20.227456 2124 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=cfb59561-6930-4734-8568-58dd71b0d378 path="/var/lib/kubelet/pods/cfb59561-6930-4734-8568-58dd71b0d378/volumes" Feb 9 19:45:20.432678 kubelet[2124]: E0209 19:45:20.432648 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:20.434985 env[1198]: time="2024-02-09T19:45:20.434926358Z" level=info msg="CreateContainer within sandbox \"96ddd099aa0d5961a7b5a1b3dc95dade61d114cc1182607a6a53f667edeb1856\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:45:20.490444 env[1198]: time="2024-02-09T19:45:20.490300123Z" level=info msg="CreateContainer within sandbox \"96ddd099aa0d5961a7b5a1b3dc95dade61d114cc1182607a6a53f667edeb1856\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1d8a82ebd8e59600150146c8e71f653b1bf843d1491463fda5eaa74c13bebc07\"" Feb 9 19:45:20.491212 env[1198]: time="2024-02-09T19:45:20.491151870Z" level=info msg="StartContainer for \"1d8a82ebd8e59600150146c8e71f653b1bf843d1491463fda5eaa74c13bebc07\"" Feb 9 19:45:20.535491 env[1198]: time="2024-02-09T19:45:20.535417958Z" level=info msg="StartContainer for \"1d8a82ebd8e59600150146c8e71f653b1bf843d1491463fda5eaa74c13bebc07\" returns successfully" Feb 9 19:45:20.559943 env[1198]: time="2024-02-09T19:45:20.559875750Z" level=info msg="shim disconnected" id=1d8a82ebd8e59600150146c8e71f653b1bf843d1491463fda5eaa74c13bebc07 Feb 9 19:45:20.559943 env[1198]: time="2024-02-09T19:45:20.559932738Z" level=warning msg="cleaning up after shim disconnected" id=1d8a82ebd8e59600150146c8e71f653b1bf843d1491463fda5eaa74c13bebc07 namespace=k8s.io Feb 9 19:45:20.559943 env[1198]: time="2024-02-09T19:45:20.559944850Z" level=info msg="cleaning up dead shim" Feb 9 19:45:20.567768 env[1198]: time="2024-02-09T19:45:20.567705527Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4395 runtime=io.containerd.runc.v2\n" Feb 9 19:45:21.436422 kubelet[2124]: E0209 19:45:21.436387 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:21.439269 env[1198]: time="2024-02-09T19:45:21.439216043Z" level=info msg="CreateContainer within sandbox \"96ddd099aa0d5961a7b5a1b3dc95dade61d114cc1182607a6a53f667edeb1856\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:45:21.706346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3185890166.mount: Deactivated successfully. 
Feb 9 19:45:21.897691 env[1198]: time="2024-02-09T19:45:21.897617628Z" level=info msg="CreateContainer within sandbox \"96ddd099aa0d5961a7b5a1b3dc95dade61d114cc1182607a6a53f667edeb1856\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f049d7bb609d58517fd06c9f1da22152fb88a18696609e4ad0c1d72ba7761820\"" Feb 9 19:45:21.898234 env[1198]: time="2024-02-09T19:45:21.898189794Z" level=info msg="StartContainer for \"f049d7bb609d58517fd06c9f1da22152fb88a18696609e4ad0c1d72ba7761820\"" Feb 9 19:45:21.954174 env[1198]: time="2024-02-09T19:45:21.953277368Z" level=info msg="StartContainer for \"f049d7bb609d58517fd06c9f1da22152fb88a18696609e4ad0c1d72ba7761820\" returns successfully" Feb 9 19:45:21.970967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f049d7bb609d58517fd06c9f1da22152fb88a18696609e4ad0c1d72ba7761820-rootfs.mount: Deactivated successfully. Feb 9 19:45:21.977328 env[1198]: time="2024-02-09T19:45:21.977285253Z" level=info msg="shim disconnected" id=f049d7bb609d58517fd06c9f1da22152fb88a18696609e4ad0c1d72ba7761820 Feb 9 19:45:21.977479 env[1198]: time="2024-02-09T19:45:21.977331070Z" level=warning msg="cleaning up after shim disconnected" id=f049d7bb609d58517fd06c9f1da22152fb88a18696609e4ad0c1d72ba7761820 namespace=k8s.io Feb 9 19:45:21.977479 env[1198]: time="2024-02-09T19:45:21.977339896Z" level=info msg="cleaning up dead shim" Feb 9 19:45:21.984103 env[1198]: time="2024-02-09T19:45:21.984041019Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4452 runtime=io.containerd.runc.v2\n" Feb 9 19:45:22.270428 kubelet[2124]: E0209 19:45:22.270291 2124 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:45:22.441097 kubelet[2124]: E0209 19:45:22.439821 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:22.441652 env[1198]: time="2024-02-09T19:45:22.441361913Z" level=info msg="CreateContainer within sandbox \"96ddd099aa0d5961a7b5a1b3dc95dade61d114cc1182607a6a53f667edeb1856\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:45:22.456440 env[1198]: time="2024-02-09T19:45:22.456376619Z" level=info msg="CreateContainer within sandbox \"96ddd099aa0d5961a7b5a1b3dc95dade61d114cc1182607a6a53f667edeb1856\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d4325c2cb901dd38325eb31a6ed59249c4a5b6e23b429ba2443454869143174c\"" Feb 9 19:45:22.457287 env[1198]: time="2024-02-09T19:45:22.457235979Z" level=info msg="StartContainer for \"d4325c2cb901dd38325eb31a6ed59249c4a5b6e23b429ba2443454869143174c\"" Feb 9 19:45:22.501815 env[1198]: time="2024-02-09T19:45:22.501746004Z" level=info msg="StartContainer for \"d4325c2cb901dd38325eb31a6ed59249c4a5b6e23b429ba2443454869143174c\" returns successfully" Feb 9 19:45:22.526894 env[1198]: time="2024-02-09T19:45:22.526755466Z" level=info msg="shim disconnected" id=d4325c2cb901dd38325eb31a6ed59249c4a5b6e23b429ba2443454869143174c Feb 9 19:45:22.526894 env[1198]: time="2024-02-09T19:45:22.526804679Z" level=warning msg="cleaning up after shim disconnected" id=d4325c2cb901dd38325eb31a6ed59249c4a5b6e23b429ba2443454869143174c namespace=k8s.io Feb 9 19:45:22.526894 env[1198]: time="2024-02-09T19:45:22.526812796Z" level=info 
msg="cleaning up dead shim" Feb 9 19:45:22.534694 env[1198]: time="2024-02-09T19:45:22.534631644Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4507 runtime=io.containerd.runc.v2\n" Feb 9 19:45:23.443584 kubelet[2124]: E0209 19:45:23.443534 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:23.445969 env[1198]: time="2024-02-09T19:45:23.445916481Z" level=info msg="CreateContainer within sandbox \"96ddd099aa0d5961a7b5a1b3dc95dade61d114cc1182607a6a53f667edeb1856\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:45:23.462408 env[1198]: time="2024-02-09T19:45:23.462303893Z" level=info msg="CreateContainer within sandbox \"96ddd099aa0d5961a7b5a1b3dc95dade61d114cc1182607a6a53f667edeb1856\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1dc90998a57412b3101a3ffc398d50aefe07741fddc2f6f8645bf4be6dfbbfca\"" Feb 9 19:45:23.462935 env[1198]: time="2024-02-09T19:45:23.462904151Z" level=info msg="StartContainer for \"1dc90998a57412b3101a3ffc398d50aefe07741fddc2f6f8645bf4be6dfbbfca\"" Feb 9 19:45:23.503703 env[1198]: time="2024-02-09T19:45:23.503655869Z" level=info msg="StartContainer for \"1dc90998a57412b3101a3ffc398d50aefe07741fddc2f6f8645bf4be6dfbbfca\" returns successfully" Feb 9 19:45:23.750105 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 19:45:24.448633 kubelet[2124]: E0209 19:45:24.448591 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:24.460176 kubelet[2124]: I0209 19:45:24.460138 2124 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6ls8z" podStartSLOduration=5.460091932 pod.CreationTimestamp="2024-02-09 19:45:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:45:24.458799011 +0000 UTC m=+102.373760207" watchObservedRunningTime="2024-02-09 19:45:24.460091932 +0000 UTC m=+102.375053128" Feb 9 19:45:25.437090 kubelet[2124]: I0209 19:45:25.437050 2124 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:45:25.436992632 +0000 UTC m=+103.351953818 LastTransitionTime:2024-02-09 19:45:25.436992632 +0000 UTC m=+103.351953818 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 19:45:25.450584 kubelet[2124]: E0209 19:45:25.450557 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:26.132383 systemd[1]: run-containerd-runc-k8s.io-1dc90998a57412b3101a3ffc398d50aefe07741fddc2f6f8645bf4be6dfbbfca-runc.X6zoDh.mount: Deactivated successfully. 
Feb 9 19:45:26.452541 kubelet[2124]: E0209 19:45:26.452494 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:26.461282 systemd-networkd[1069]: lxc_health: Link UP Feb 9 19:45:26.523304 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:45:26.523126 systemd-networkd[1069]: lxc_health: Gained carrier Feb 9 19:45:27.570351 systemd-networkd[1069]: lxc_health: Gained IPv6LL Feb 9 19:45:27.768378 kubelet[2124]: E0209 19:45:27.768337 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:28.226471 systemd[1]: run-containerd-runc-k8s.io-1dc90998a57412b3101a3ffc398d50aefe07741fddc2f6f8645bf4be6dfbbfca-runc.Btdjxa.mount: Deactivated successfully. Feb 9 19:45:28.456300 kubelet[2124]: E0209 19:45:28.456249 2124 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:32.467590 sshd[4086]: pam_unix(sshd:session): session closed for user core Feb 9 19:45:32.470102 systemd[1]: sshd@25-10.0.0.49:22-10.0.0.1:51714.service: Deactivated successfully. Feb 9 19:45:32.471103 systemd-logind[1179]: Session 26 logged out. Waiting for processes to exit. Feb 9 19:45:32.471118 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 19:45:32.472041 systemd-logind[1179]: Removed session 26.