Feb 9 00:41:51.271470 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 9 00:41:51.271516 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 00:41:51.271534 kernel: BIOS-provided physical RAM map:
Feb 9 00:41:51.271542 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 00:41:51.271549 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 9 00:41:51.271556 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 9 00:41:51.271565 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 9 00:41:51.271572 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 9 00:41:51.271579 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 9 00:41:51.271588 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 9 00:41:51.271595 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Feb 9 00:41:51.271603 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb 9 00:41:51.271610 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 9 00:41:51.271618 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 9 00:41:51.271627 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 9 00:41:51.271636 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 9 00:41:51.271644 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 9 00:41:51.271652 kernel: NX (Execute Disable) protection: active
Feb 9 00:41:51.271660 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable
Feb 9 00:41:51.271668 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable
Feb 9 00:41:51.271676 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable
Feb 9 00:41:51.271683 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable
Feb 9 00:41:51.271695 kernel: extended physical RAM map:
Feb 9 00:41:51.271702 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 00:41:51.271710 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 9 00:41:51.271720 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 9 00:41:51.271738 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 9 00:41:51.271747 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 9 00:41:51.271755 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 9 00:41:51.271763 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 9 00:41:51.271771 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b1aa017] usable
Feb 9 00:41:51.271778 kernel: reserve setup_data: [mem 0x000000009b1aa018-0x000000009b1e6e57] usable
Feb 9 00:41:51.271786 kernel: reserve setup_data: [mem 0x000000009b1e6e58-0x000000009b3f7017] usable
Feb 9 00:41:51.271794 kernel: reserve setup_data: [mem 0x000000009b3f7018-0x000000009b400c57] usable
Feb 9 00:41:51.271802 kernel: reserve setup_data: [mem 0x000000009b400c58-0x000000009c8eefff] usable
Feb 9 00:41:51.271810 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb 9 00:41:51.271819 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 9 00:41:51.271827 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 9 00:41:51.271835 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 9 00:41:51.271843 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 9 00:41:51.271854 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 9 00:41:51.271863 kernel: efi: EFI v2.70 by EDK II
Feb 9 00:41:51.271871 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018
Feb 9 00:41:51.271881 kernel: random: crng init done
Feb 9 00:41:51.271889 kernel: SMBIOS 2.8 present.
Feb 9 00:41:51.271897 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Feb 9 00:41:51.271914 kernel: Hypervisor detected: KVM
Feb 9 00:41:51.271922 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 9 00:41:51.271931 kernel: kvm-clock: cpu 0, msr 37faa001, primary cpu clock
Feb 9 00:41:51.271939 kernel: kvm-clock: using sched offset of 5144743339 cycles
Feb 9 00:41:51.271948 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 9 00:41:51.271957 kernel: tsc: Detected 2794.750 MHz processor
Feb 9 00:41:51.271971 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 00:41:51.271981 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 00:41:51.271990 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Feb 9 00:41:51.271999 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 00:41:51.272008 kernel: Using GB pages for direct mapping
Feb 9 00:41:51.272017 kernel: Secure boot disabled
Feb 9 00:41:51.272026 kernel: ACPI: Early table checksum verification disabled
Feb 9 00:41:51.272035 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 9 00:41:51.272043 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Feb 9 00:41:51.272054 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:41:51.272062 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:41:51.272089 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 9 00:41:51.272100 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:41:51.272109 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:41:51.272118 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:41:51.272128 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 9 00:41:51.272137 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Feb 9 00:41:51.272149 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Feb 9 00:41:51.272160 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 9 00:41:51.272168 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Feb 9 00:41:51.272177 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Feb 9 00:41:51.272187 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Feb 9 00:41:51.272196 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Feb 9 00:41:51.272206 kernel: No NUMA configuration found
Feb 9 00:41:51.272215 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Feb 9 00:41:51.272225 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Feb 9 00:41:51.272234 kernel: Zone ranges:
Feb 9 00:41:51.272256 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 00:41:51.272265 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Feb 9 00:41:51.272274 kernel: Normal empty
Feb 9 00:41:51.272283 kernel: Movable zone start for each node
Feb 9 00:41:51.272292 kernel: Early memory node ranges
Feb 9 00:41:51.272305 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 9 00:41:51.272314 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 9 00:41:51.272324 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 9 00:41:51.272333 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Feb 9 00:41:51.272345 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Feb 9 00:41:51.272355 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Feb 9 00:41:51.272364 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Feb 9 00:41:51.272374 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 00:41:51.272382 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 9 00:41:51.272392 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 9 00:41:51.272401 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 00:41:51.272410 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Feb 9 00:41:51.272419 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Feb 9 00:41:51.272430 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Feb 9 00:41:51.272439 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 9 00:41:51.272449 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 9 00:41:51.272458 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 9 00:41:51.272467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 00:41:51.272477 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 9 00:41:51.272486 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 00:41:51.272495 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 9 00:41:51.272505 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 9 00:41:51.272515 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 00:41:51.272524 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 00:41:51.272534 kernel: TSC deadline timer available
Feb 9 00:41:51.272543 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 9 00:41:51.272552 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 9 00:41:51.272561 kernel: kvm-guest: setup PV sched yield
Feb 9 00:41:51.272574 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Feb 9 00:41:51.272583 kernel: Booting paravirtualized kernel on KVM
Feb 9 00:41:51.272593 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 00:41:51.272602 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb 9 00:41:51.272617 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb 9 00:41:51.272627 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb 9 00:41:51.272642 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 9 00:41:51.272653 kernel: kvm-guest: setup async PF for cpu 0
Feb 9 00:41:51.272663 kernel: kvm-guest: stealtime: cpu 0, msr 9ae1c0c0
Feb 9 00:41:51.272672 kernel: kvm-guest: PV spinlocks enabled
Feb 9 00:41:51.272682 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 00:41:51.272691 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Feb 9 00:41:51.272701 kernel: Policy zone: DMA32
Feb 9 00:41:51.272711 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 00:41:51.272722 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
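The BIOS-e820 and reserve setup_data entries above follow a fixed `[mem 0xSTART-0xEND] TYPE` layout, so region boundaries and sizes can be extracted mechanically. A minimal sketch (the `parse_e820` helper is hypothetical, not part of the kernel or this log; the sample line is copied from the map above):

```python
import re

# Matches the "[mem 0xSTART-0xEND] TYPE" tail of a BIOS-e820 / setup_data line.
E820_RE = re.compile(
    r"\[mem 0x(?P<start>[0-9a-f]+)-0x(?P<end>[0-9a-f]+)\] (?P<type>.+)$"
)

def parse_e820(line):
    """Return (start, end, type) for one e820 map line, or None if it doesn't match."""
    m = E820_RE.search(line)
    if m is None:
        return None
    return int(m.group("start"), 16), int(m.group("end"), 16), m.group("type")

start, end, kind = parse_e820(
    "BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable"
)
size_kib = (end - start + 1) // 1024  # ranges are inclusive, hence the +1
```

For the sample line this yields a 7168 KiB (7 MiB) usable region, which agrees with the hex bounds in the map.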
Feb 9 00:41:51.272734 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 00:41:51.272743 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 00:41:51.272753 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 00:41:51.272763 kernel: Memory: 2400436K/2567000K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 166304K reserved, 0K cma-reserved)
Feb 9 00:41:51.272775 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 00:41:51.272784 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 00:41:51.272794 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 00:41:51.272804 kernel: rcu: Hierarchical RCU implementation.
Feb 9 00:41:51.272814 kernel: rcu: RCU event tracing is enabled.
Feb 9 00:41:51.272824 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 00:41:51.272834 kernel: Rude variant of Tasks RCU enabled.
Feb 9 00:41:51.272844 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 00:41:51.272854 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 00:41:51.272866 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 00:41:51.272876 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 9 00:41:51.272886 kernel: Console: colour dummy device 80x25
Feb 9 00:41:51.272896 kernel: printk: console [ttyS0] enabled
Feb 9 00:41:51.272912 kernel: ACPI: Core revision 20210730
Feb 9 00:41:51.272922 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 9 00:41:51.272932 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 00:41:51.272941 kernel: x2apic enabled
Feb 9 00:41:51.272951 kernel: Switched APIC routing to physical x2apic.
Feb 9 00:41:51.272961 kernel: kvm-guest: setup PV IPIs
Feb 9 00:41:51.272973 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 00:41:51.272983 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 9 00:41:51.272993 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 9 00:41:51.273003 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 9 00:41:51.273013 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 9 00:41:51.273022 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 9 00:41:51.273032 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 00:41:51.273042 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 00:41:51.273053 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 00:41:51.273063 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 00:41:51.273085 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 9 00:41:51.273095 kernel: RETBleed: Mitigation: untrained return thunk
Feb 9 00:41:51.273108 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 00:41:51.273118 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 00:41:51.273128 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 00:41:51.273138 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 00:41:51.273151 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 00:41:51.273163 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 00:41:51.273173 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
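Every record in this log begins with a `Mon D HH:MM:SS.ffffff` journal timestamp, which makes it possible to re-split entries that a capture tool has run together into one long line, as happened to the raw text of this boot. A minimal sketch assuming that timestamp layout (`split_entries` is a hypothetical helper; the sample entries are taken from the log above):

```python
import re

# Zero-width split just before each "Mon D HH:MM:SS.ffffff " journal timestamp.
STAMP = re.compile(r"(?=[A-Z][a-z]{2} \d+ \d{2}:\d{2}:\d{2}\.\d{6} )")

def split_entries(blob):
    """Split a run of concatenated journal entries back into one entry per element."""
    return [entry.strip() for entry in STAMP.split(blob) if entry.strip()]

entries = split_entries(
    "Feb 9 00:41:51.272941 kernel: x2apic enabled "
    "Feb 9 00:41:51.272961 kernel: kvm-guest: setup PV IPIs"
)
```

Splitting on a lookahead keeps each timestamp attached to its own entry instead of consuming it as a delimiter.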
Feb 9 00:41:51.273182 kernel: Freeing SMP alternatives memory: 32K
Feb 9 00:41:51.273192 kernel: pid_max: default: 32768 minimum: 301
Feb 9 00:41:51.273202 kernel: LSM: Security Framework initializing
Feb 9 00:41:51.273211 kernel: SELinux: Initializing.
Feb 9 00:41:51.273222 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 00:41:51.273232 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 00:41:51.273242 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 9 00:41:51.273253 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 9 00:41:51.273263 kernel: ... version: 0
Feb 9 00:41:51.273273 kernel: ... bit width: 48
Feb 9 00:41:51.273282 kernel: ... generic registers: 6
Feb 9 00:41:51.273292 kernel: ... value mask: 0000ffffffffffff
Feb 9 00:41:51.273302 kernel: ... max period: 00007fffffffffff
Feb 9 00:41:51.273311 kernel: ... fixed-purpose events: 0
Feb 9 00:41:51.273321 kernel: ... event mask: 000000000000003f
Feb 9 00:41:51.273331 kernel: signal: max sigframe size: 1776
Feb 9 00:41:51.273342 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 00:41:51.273352 kernel: smp: Bringing up secondary CPUs ...
Feb 9 00:41:51.273361 kernel: x86: Booting SMP configuration:
Feb 9 00:41:51.273371 kernel: .... node #0, CPUs: #1
Feb 9 00:41:51.273381 kernel: kvm-clock: cpu 1, msr 37faa041, secondary cpu clock
Feb 9 00:41:51.273391 kernel: kvm-guest: setup async PF for cpu 1
Feb 9 00:41:51.273401 kernel: kvm-guest: stealtime: cpu 1, msr 9ae9c0c0
Feb 9 00:41:51.273411 kernel: #2
Feb 9 00:41:51.273421 kernel: kvm-clock: cpu 2, msr 37faa081, secondary cpu clock
Feb 9 00:41:51.273431 kernel: kvm-guest: setup async PF for cpu 2
Feb 9 00:41:51.273443 kernel: kvm-guest: stealtime: cpu 2, msr 9af1c0c0
Feb 9 00:41:51.273453 kernel: #3
Feb 9 00:41:51.273462 kernel: kvm-clock: cpu 3, msr 37faa0c1, secondary cpu clock
Feb 9 00:41:51.273472 kernel: kvm-guest: setup async PF for cpu 3
Feb 9 00:41:51.273482 kernel: kvm-guest: stealtime: cpu 3, msr 9af9c0c0
Feb 9 00:41:51.273491 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 00:41:51.273501 kernel: smpboot: Max logical packages: 1
Feb 9 00:41:51.273510 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 9 00:41:51.273519 kernel: devtmpfs: initialized
Feb 9 00:41:51.273531 kernel: x86/mm: Memory block size: 128MB
Feb 9 00:41:51.273541 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 9 00:41:51.273551 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 9 00:41:51.273561 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Feb 9 00:41:51.273571 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 9 00:41:51.273586 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 9 00:41:51.273596 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 00:41:51.273606 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 00:41:51.273616 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 00:41:51.273628 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 00:41:51.273638 kernel: audit: initializing netlink subsys (disabled)
Feb 9 00:41:51.273648 kernel: audit: type=2000 audit(1707439309.092:1): state=initialized audit_enabled=0 res=1
Feb 9 00:41:51.273658 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 00:41:51.273667 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 00:41:51.273677 kernel: cpuidle: using governor menu
Feb 9 00:41:51.273687 kernel: ACPI: bus type PCI registered
Feb 9 00:41:51.273697 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 00:41:51.273707 kernel: dca service started, version 1.12.1
Feb 9 00:41:51.273719 kernel: PCI: Using configuration type 1 for base access
Feb 9 00:41:51.273729 kernel: PCI: Using configuration type 1 for extended access
Feb 9 00:41:51.273738 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 00:41:51.273748 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 00:41:51.273758 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 00:41:51.273768 kernel: ACPI: Added _OSI(Module Device)
Feb 9 00:41:51.273778 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 00:41:51.273787 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 00:41:51.273797 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 00:41:51.273809 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 00:41:51.273819 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 00:41:51.273829 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 00:41:51.273839 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 00:41:51.273849 kernel: ACPI: Interpreter enabled
Feb 9 00:41:51.273859 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 9 00:41:51.273869 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 00:41:51.273879 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 00:41:51.273889 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 9 00:41:51.273901 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 00:41:51.277892 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 00:41:51.277920 kernel: acpiphp: Slot [3] registered
Feb 9 00:41:51.277931 kernel: acpiphp: Slot [4] registered
Feb 9 00:41:51.277940 kernel: acpiphp: Slot [5] registered
Feb 9 00:41:51.277950 kernel: acpiphp: Slot [6] registered
Feb 9 00:41:51.277960 kernel: acpiphp: Slot [7] registered
Feb 9 00:41:51.277970 kernel: acpiphp: Slot [8] registered
Feb 9 00:41:51.277985 kernel: acpiphp: Slot [9] registered
Feb 9 00:41:51.277994 kernel: acpiphp: Slot [10] registered
Feb 9 00:41:51.278003 kernel: acpiphp: Slot [11] registered
Feb 9 00:41:51.278011 kernel: acpiphp: Slot [12] registered
Feb 9 00:41:51.278020 kernel: acpiphp: Slot [13] registered
Feb 9 00:41:51.278029 kernel: acpiphp: Slot [14] registered
Feb 9 00:41:51.278038 kernel: acpiphp: Slot [15] registered
Feb 9 00:41:51.278046 kernel: acpiphp: Slot [16] registered
Feb 9 00:41:51.278055 kernel: acpiphp: Slot [17] registered
Feb 9 00:41:51.278064 kernel: acpiphp: Slot [18] registered
Feb 9 00:41:51.278088 kernel: acpiphp: Slot [19] registered
Feb 9 00:41:51.278097 kernel: acpiphp: Slot [20] registered
Feb 9 00:41:51.278106 kernel: acpiphp: Slot [21] registered
Feb 9 00:41:51.278115 kernel: acpiphp: Slot [22] registered
Feb 9 00:41:51.278124 kernel: acpiphp: Slot [23] registered
Feb 9 00:41:51.278134 kernel: acpiphp: Slot [24] registered
Feb 9 00:41:51.278143 kernel: acpiphp: Slot [25] registered
Feb 9 00:41:51.278152 kernel: acpiphp: Slot [26] registered
Feb 9 00:41:51.278161 kernel: acpiphp: Slot [27] registered
Feb 9 00:41:51.278170 kernel: acpiphp: Slot [28] registered
Feb 9 00:41:51.278182 kernel: acpiphp: Slot [29] registered
Feb 9 00:41:51.278191 kernel: acpiphp: Slot [30] registered
Feb 9 00:41:51.278200 kernel: acpiphp: Slot [31] registered
Feb 9 00:41:51.278210 kernel: PCI host bridge to bus 0000:00
Feb 9 00:41:51.278361 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 00:41:51.278458 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 00:41:51.278561 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 00:41:51.278644 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb 9 00:41:51.278725 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Feb 9 00:41:51.278803 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 00:41:51.278947 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 9 00:41:51.279106 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 9 00:41:51.279263 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 9 00:41:51.279363 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Feb 9 00:41:51.279465 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 9 00:41:51.279563 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 9 00:41:51.279660 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 9 00:41:51.279759 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 9 00:41:51.279878 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 9 00:41:51.280007 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 9 00:41:51.280125 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 9 00:41:51.280270 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb 9 00:41:51.280381 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 9 00:41:51.280480 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Feb 9 00:41:51.280577 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 9 00:41:51.280667 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Feb 9 00:41:51.280763 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 00:41:51.280880 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 00:41:51.281033 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf]
Feb 9 00:41:51.281163 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 9 00:41:51.281265 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Feb 9 00:41:51.281379 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 9 00:41:51.281478 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 9 00:41:51.281577 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 9 00:41:51.281679 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Feb 9 00:41:51.281807 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb 9 00:41:51.281916 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 9 00:41:51.282017 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Feb 9 00:41:51.282130 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Feb 9 00:41:51.282246 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 9 00:41:51.282264 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 9 00:41:51.282274 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 9 00:41:51.282287 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 9 00:41:51.282297 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 9 00:41:51.282307 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 9 00:41:51.282317 kernel: iommu: Default domain type: Translated
Feb 9 00:41:51.282327 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 00:41:51.282425 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 9 00:41:51.282522 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 00:41:51.282618 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 9 00:41:51.282632 kernel: vgaarb: loaded
Feb 9 00:41:51.282645 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 00:41:51.282655 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 00:41:51.282665 kernel: PTP clock support registered
Feb 9 00:41:51.282674 kernel: Registered efivars operations
Feb 9 00:41:51.282684 kernel: PCI: Using ACPI for IRQ routing
Feb 9 00:41:51.282694 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 00:41:51.282704 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 9 00:41:51.282713 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Feb 9 00:41:51.282723 kernel: e820: reserve RAM buffer [mem 0x9b1aa018-0x9bffffff]
Feb 9 00:41:51.282735 kernel: e820: reserve RAM buffer [mem 0x9b3f7018-0x9bffffff]
Feb 9 00:41:51.282744 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Feb 9 00:41:51.282754 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Feb 9 00:41:51.282763 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 9 00:41:51.282773 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 9 00:41:51.282782 kernel: clocksource: Switched to clocksource kvm-clock
Feb 9 00:41:51.282792 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 00:41:51.282802 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 00:41:51.282813 kernel: pnp: PnP ACPI init
Feb 9 00:41:51.282960 kernel: pnp 00:02: [dma 2]
Feb 9 00:41:51.282975 kernel: pnp: PnP ACPI: found 6 devices
Feb 9 00:41:51.282985 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 00:41:51.282994 kernel: NET: Registered PF_INET protocol family
Feb 9 00:41:51.283004 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 00:41:51.283014 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 00:41:51.283023 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 00:41:51.283035 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 00:41:51.283045 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 00:41:51.283055 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 00:41:51.283065 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 00:41:51.283088 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 00:41:51.283098 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 00:41:51.283108 kernel: NET: Registered PF_XDP protocol family
Feb 9 00:41:51.283224 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 9 00:41:51.283349 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 9 00:41:51.283442 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 00:41:51.283545 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 00:41:51.283631 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 00:41:51.283713 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb 9 00:41:51.283794 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Feb 9 00:41:51.283887 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 9 00:41:51.283987 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 9 00:41:51.284097 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 9 00:41:51.284111 kernel: PCI: CLS 0 bytes, default 64
Feb 9 00:41:51.284122 kernel: Initialise system trusted keyrings
Feb 9 00:41:51.284132 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 00:41:51.284143 kernel: Key type asymmetric registered
Feb 9 00:41:51.284153 kernel: Asymmetric key parser 'x509' registered
Feb 9 00:41:51.284163 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 00:41:51.284174 kernel: io scheduler mq-deadline registered
Feb 9 00:41:51.284184 kernel: io scheduler kyber registered
Feb 9 00:41:51.284196 kernel: io scheduler bfq registered
Feb 9 00:41:51.284213 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 00:41:51.284224 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 9 00:41:51.284234 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 9 00:41:51.284248 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 9 00:41:51.284261 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 00:41:51.284270 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 00:41:51.284279 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 9 00:41:51.284289 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 9 00:41:51.284301 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 9 00:41:51.284311 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 9 00:41:51.284447 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb 9 00:41:51.284543 kernel: rtc_cmos 00:05: registered as rtc0
Feb 9 00:41:51.284627 kernel: rtc_cmos 00:05: setting system clock to 2024-02-09T00:41:48 UTC (1707439308)
Feb 9 00:41:51.284713 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 9 00:41:51.284726 kernel: efifb: probing for efifb
Feb 9 00:41:51.284737 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 9 00:41:51.284748 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 9 00:41:51.284758 kernel: efifb: scrolling: redraw
Feb 9 00:41:51.284769 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 00:41:51.284779 kernel: Console: switching to colour frame buffer device 160x50
Feb 9 00:41:51.284790 kernel: fb0: EFI VGA frame buffer device
Feb 9 00:41:51.284800 kernel: pstore: Registered efi as persistent store backend
Feb 9 00:41:51.284812 kernel: NET: Registered PF_INET6 protocol family
Feb 9 00:41:51.284822 kernel: Segment Routing with IPv6
Feb 9 00:41:51.284832 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 00:41:51.284843 kernel: NET: Registered PF_PACKET protocol family
Feb 9 00:41:51.284853 kernel: Key type dns_resolver registered
Feb 9 00:41:51.284863 kernel: IPI shorthand broadcast: enabled
Feb 9 00:41:51.284873 kernel: sched_clock: Marking stable (813002429, 114075656)->(973438957, -46360872)
Feb 9 00:41:51.284883 kernel: registered taskstats version 1
Feb 9 00:41:51.284893 kernel: Loading compiled-in X.509 certificates
Feb 9 00:41:51.286816 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 9 00:41:51.286836 kernel: Key type .fscrypt registered
Feb 9 00:41:51.286846 kernel: Key type fscrypt-provisioning registered
Feb 9 00:41:51.286856 kernel: pstore: Using crash dump compression: deflate
Feb 9 00:41:51.286867 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 00:41:51.286877 kernel: ima: Allocated hash algorithm: sha1 Feb 9 00:41:51.286887 kernel: ima: No architecture policies found Feb 9 00:41:51.286897 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 00:41:51.286918 kernel: Write protecting the kernel read-only data: 28672k Feb 9 00:41:51.286934 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 00:41:51.286946 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 00:41:51.286957 kernel: Run /init as init process Feb 9 00:41:51.286967 kernel: with arguments: Feb 9 00:41:51.286977 kernel: /init Feb 9 00:41:51.286987 kernel: with environment: Feb 9 00:41:51.286997 kernel: HOME=/ Feb 9 00:41:51.287006 kernel: TERM=linux Feb 9 00:41:51.287016 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 00:41:51.287031 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 00:41:51.287044 systemd[1]: Detected virtualization kvm. Feb 9 00:41:51.287055 systemd[1]: Detected architecture x86-64. Feb 9 00:41:51.287065 systemd[1]: Running in initrd. Feb 9 00:41:51.287087 systemd[1]: No hostname configured, using default hostname. Feb 9 00:41:51.287097 systemd[1]: Hostname set to <localhost>. Feb 9 00:41:51.287108 systemd[1]: Initializing machine ID from VM UUID. Feb 9 00:41:51.287120 systemd[1]: Queued start job for default target initrd.target. Feb 9 00:41:51.287130 systemd[1]: Started systemd-ask-password-console.path. Feb 9 00:41:51.287140 systemd[1]: Reached target cryptsetup.target. Feb 9 00:41:51.287150 systemd[1]: Reached target paths.target. Feb 9 00:41:51.287159 systemd[1]: Reached target slices.target. Feb 9 00:41:51.287168 systemd[1]: Reached target swap.target. 
Feb 9 00:41:51.287178 systemd[1]: Reached target timers.target. Feb 9 00:41:51.287188 systemd[1]: Listening on iscsid.socket. Feb 9 00:41:51.287200 systemd[1]: Listening on iscsiuio.socket. Feb 9 00:41:51.287210 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 00:41:51.287220 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 00:41:51.287230 systemd[1]: Listening on systemd-journald.socket. Feb 9 00:41:51.287239 systemd[1]: Listening on systemd-networkd.socket. Feb 9 00:41:51.287249 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 00:41:51.287259 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 00:41:51.287269 systemd[1]: Reached target sockets.target. Feb 9 00:41:51.287282 systemd[1]: Starting kmod-static-nodes.service... Feb 9 00:41:51.287291 systemd[1]: Finished network-cleanup.service. Feb 9 00:41:51.287301 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 00:41:51.287313 systemd[1]: Starting systemd-journald.service... Feb 9 00:41:51.287323 systemd[1]: Starting systemd-modules-load.service... Feb 9 00:41:51.287333 systemd[1]: Starting systemd-resolved.service... Feb 9 00:41:51.287342 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 00:41:51.287353 systemd[1]: Finished kmod-static-nodes.service. Feb 9 00:41:51.287364 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 00:41:51.287376 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 00:41:51.287387 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 00:41:51.287398 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 00:41:51.287411 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 00:41:51.287422 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 00:41:51.287433 systemd[1]: Starting dracut-cmdline.service... 
Feb 9 00:41:51.287455 systemd-journald[198]: Journal started Feb 9 00:41:51.287553 systemd-journald[198]: Runtime Journal (/run/log/journal/b7160d68dd464a7685582b2f6c63e54a) is 6.0M, max 48.4M, 42.4M free. Feb 9 00:41:50.215794 systemd-modules-load[199]: Inserted module 'overlay' Feb 9 00:41:51.291321 dracut-cmdline[217]: dracut-dracut-053 Feb 9 00:41:51.291321 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 9 00:41:51.291321 dracut-cmdline[217]: BEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 00:41:51.309150 systemd[1]: Started systemd-journald.service. Feb 9 00:41:51.309202 kernel: audit: type=1130 audit(1707439311.303:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:51.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:51.374780 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 00:41:51.393264 kernel: Bridge firewalling registered Feb 9 00:41:51.395437 systemd-modules-load[199]: Inserted module 'br_netfilter' Feb 9 00:41:51.435350 kernel: SCSI subsystem initialized Feb 9 00:41:51.450785 systemd-resolved[200]: Positive Trust Anchors: Feb 9 00:41:51.454615 kernel: Loading iSCSI transport class v2.0-870. Feb 9 00:41:51.450798 systemd-resolved[200]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 00:41:51.463173 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 00:41:51.463205 kernel: device-mapper: uevent: version 1.0.3 Feb 9 00:41:51.463217 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 00:41:51.450830 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 00:41:51.454085 systemd-resolved[200]: Defaulting to hostname 'linux'. Feb 9 00:41:51.455048 systemd[1]: Started systemd-resolved.service. Feb 9 00:41:51.472011 systemd[1]: Reached target nss-lookup.target. Feb 9 00:41:51.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:51.477952 kernel: audit: type=1130 audit(1707439311.471:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:51.496198 systemd-modules-load[199]: Inserted module 'dm_multipath' Feb 9 00:41:51.503731 systemd[1]: Finished systemd-modules-load.service. Feb 9 00:41:51.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:41:51.518063 kernel: audit: type=1130 audit(1707439311.512:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:51.516150 systemd[1]: Starting systemd-sysctl.service... Feb 9 00:41:51.535745 kernel: iscsi: registered transport (tcp) Feb 9 00:41:51.536386 systemd[1]: Finished systemd-sysctl.service. Feb 9 00:41:51.542264 kernel: audit: type=1130 audit(1707439311.537:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:51.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:51.591634 kernel: iscsi: registered transport (qla4xxx) Feb 9 00:41:51.591723 kernel: QLogic iSCSI HBA Driver Feb 9 00:41:51.701723 systemd[1]: Finished dracut-cmdline.service. Feb 9 00:41:51.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:51.703731 systemd[1]: Starting dracut-pre-udev.service... Feb 9 00:41:51.719233 kernel: audit: type=1130 audit(1707439311.701:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:41:51.810186 kernel: raid6: avx2x4 gen() 18928 MB/s Feb 9 00:41:51.832103 kernel: raid6: avx2x4 xor() 4051 MB/s Feb 9 00:41:51.861192 kernel: raid6: avx2x2 gen() 17926 MB/s Feb 9 00:41:51.887201 kernel: raid6: avx2x2 xor() 12156 MB/s Feb 9 00:41:51.909028 kernel: raid6: avx2x1 gen() 15029 MB/s Feb 9 00:41:51.937767 kernel: raid6: avx2x1 xor() 9186 MB/s Feb 9 00:41:51.968163 kernel: raid6: sse2x4 gen() 4643 MB/s Feb 9 00:41:51.993165 kernel: raid6: sse2x4 xor() 4183 MB/s Feb 9 00:41:52.016165 kernel: raid6: sse2x2 gen() 9155 MB/s Feb 9 00:41:52.041048 kernel: raid6: sse2x2 xor() 6179 MB/s Feb 9 00:41:52.064376 kernel: raid6: sse2x1 gen() 7059 MB/s Feb 9 00:41:52.085895 kernel: raid6: sse2x1 xor() 5686 MB/s Feb 9 00:41:52.085982 kernel: raid6: using algorithm avx2x4 gen() 18928 MB/s Feb 9 00:41:52.085994 kernel: raid6: .... xor() 4051 MB/s, rmw enabled Feb 9 00:41:52.086005 kernel: raid6: using avx2x2 recovery algorithm Feb 9 00:41:52.125363 kernel: xor: automatically using best checksumming function avx Feb 9 00:41:52.451392 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 00:41:52.488438 systemd[1]: Finished dracut-pre-udev.service. Feb 9 00:41:52.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:52.495087 kernel: audit: type=1130 audit(1707439312.489:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:52.496000 audit: BPF prog-id=7 op=LOAD Feb 9 00:41:52.498611 kernel: audit: type=1334 audit(1707439312.496:8): prog-id=7 op=LOAD Feb 9 00:41:52.498000 audit: BPF prog-id=8 op=LOAD Feb 9 00:41:52.505507 systemd[1]: Starting systemd-udevd.service... 
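[Editor's note: the raid6 lines above show the kernel benchmarking each gen()/xor() implementation and keeping the fastest — here avx2x4 at 18928 MB/s. A minimal sketch of that same selection, run over the throughput figures copied from this log (the awk pass is illustrative, not part of the kernel):]

```shell
# Pick the fastest raid6 gen() implementation from the benchmark results,
# mirroring the kernel's choice; the figures are copied from the log above.
best="$(printf '%s\n' \
  'avx2x4 18928' 'avx2x2 17926' 'avx2x1 15029' \
  'sse2x4 4643'  'sse2x2 9155'  'sse2x1 7059' |
  awk '$2 > max { max = $2; algo = $1 } END { print algo, max }')"
echo "raid6: using algorithm $best MB/s"
# → raid6: using algorithm avx2x4 18928 MB/s
```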
Feb 9 00:41:52.513055 kernel: audit: type=1334 audit(1707439312.498:9): prog-id=8 op=LOAD Feb 9 00:41:52.542648 systemd-udevd[400]: Using default interface naming scheme 'v252'. Feb 9 00:41:52.567746 systemd[1]: Started systemd-udevd.service. Feb 9 00:41:52.585191 kernel: audit: type=1130 audit(1707439312.581:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:52.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:52.604264 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 00:41:52.630388 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Feb 9 00:41:52.682345 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 00:41:52.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:52.685511 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 00:41:52.784205 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 00:41:52.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:52.926095 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 00:41:52.940419 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 9 00:41:52.946098 kernel: libata version 3.00 loaded. Feb 9 00:41:52.970440 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 00:41:52.970506 kernel: GPT:9289727 != 19775487 Feb 9 00:41:52.970519 kernel: GPT:Alternate GPT header not at the end of the disk. 
Feb 9 00:41:52.970530 kernel: GPT:9289727 != 19775487 Feb 9 00:41:52.970553 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 00:41:52.970565 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 00:41:52.986956 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 9 00:41:52.987269 kernel: scsi host0: ata_piix Feb 9 00:41:52.987405 kernel: scsi host1: ata_piix Feb 9 00:41:52.987523 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 9 00:41:52.987535 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 9 00:41:53.033922 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 00:41:53.062036 kernel: AES CTR mode by8 optimization enabled Feb 9 00:41:53.150175 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 9 00:41:53.153826 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 9 00:41:53.276187 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 9 00:41:53.281435 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 00:41:53.292959 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 00:41:53.306146 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 00:41:53.336134 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457) Feb 9 00:41:53.336167 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 9 00:41:53.334106 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 00:41:53.351108 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 00:41:53.355668 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 00:41:53.361929 systemd[1]: Starting disk-uuid.service... Feb 9 00:41:53.380259 disk-uuid[535]: Primary Header is updated. Feb 9 00:41:53.380259 disk-uuid[535]: Secondary Entries is updated. Feb 9 00:41:53.380259 disk-uuid[535]: Secondary Header is updated. 
Feb 9 00:41:53.418918 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 00:41:53.446836 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 00:41:54.497280 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 00:41:54.507318 disk-uuid[536]: The operation has completed successfully. Feb 9 00:41:54.629469 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 00:41:54.653992 kernel: kauditd_printk_skb: 2 callbacks suppressed Feb 9 00:41:54.654029 kernel: audit: type=1130 audit(1707439314.629:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:54.654044 kernel: audit: type=1131 audit(1707439314.629:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:54.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:54.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:54.629585 systemd[1]: Finished disk-uuid.service. Feb 9 00:41:54.641190 systemd[1]: Starting verity-setup.service... Feb 9 00:41:54.724869 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 9 00:41:55.185311 systemd[1]: Found device dev-mapper-usr.device. Feb 9 00:41:55.189218 systemd[1]: Mounting sysusr-usr.mount... Feb 9 00:41:55.228540 systemd[1]: Finished verity-setup.service. Feb 9 00:41:55.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:41:55.235275 kernel: audit: type=1130 audit(1707439315.230:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:55.554121 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 00:41:55.555605 systemd[1]: Mounted sysusr-usr.mount. Feb 9 00:41:55.556523 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 00:41:55.557635 systemd[1]: Starting ignition-setup.service... Feb 9 00:41:55.560257 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 00:41:55.683271 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 00:41:55.683367 kernel: BTRFS info (device vda6): using free space tree Feb 9 00:41:55.683387 kernel: BTRFS info (device vda6): has skinny extents Feb 9 00:41:55.754199 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 00:41:55.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:55.760516 kernel: audit: type=1130 audit(1707439315.753:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:55.764000 audit: BPF prog-id=9 op=LOAD Feb 9 00:41:55.774236 kernel: audit: type=1334 audit(1707439315.764:17): prog-id=9 op=LOAD Feb 9 00:41:55.769918 systemd[1]: Starting systemd-networkd.service... Feb 9 00:41:55.784303 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Feb 9 00:41:55.833249 systemd-networkd[707]: lo: Link UP Feb 9 00:41:55.833261 systemd-networkd[707]: lo: Gained carrier Feb 9 00:41:55.833745 systemd-networkd[707]: Enumeration completed Feb 9 00:41:55.834036 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 00:41:55.834802 systemd[1]: Started systemd-networkd.service. Feb 9 00:41:55.838985 systemd-networkd[707]: eth0: Link UP Feb 9 00:41:55.839007 systemd-networkd[707]: eth0: Gained carrier Feb 9 00:41:55.852462 kernel: audit: type=1130 audit(1707439315.847:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:55.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:55.848515 systemd[1]: Reached target network.target. Feb 9 00:41:55.860009 systemd[1]: Starting iscsiuio.service... Feb 9 00:41:55.895702 systemd[1]: Finished ignition-setup.service. Feb 9 00:41:55.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:55.905346 kernel: audit: type=1130 audit(1707439315.894:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:55.906377 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 00:41:55.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:55.960868 systemd[1]: Started iscsiuio.service. 
Feb 9 00:41:55.969762 kernel: audit: type=1130 audit(1707439315.960:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:55.967995 systemd[1]: Starting iscsid.service... Feb 9 00:41:55.969182 systemd-networkd[707]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 00:41:55.986919 iscsid[714]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 00:41:55.986919 iscsid[714]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 00:41:55.986919 iscsid[714]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 00:41:55.986919 iscsid[714]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 00:41:55.986919 iscsid[714]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 00:41:55.986919 iscsid[714]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 00:41:56.055997 kernel: audit: type=1130 audit(1707439316.002:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:56.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:55.987058 systemd[1]: Started iscsid.service. Feb 9 00:41:56.009792 systemd[1]: Starting dracut-initqueue.service... 
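[Editor's note: the iscsid warnings above are resolved by creating /etc/iscsi/initiatorname.iscsi containing a single InitiatorName= line in iqn format. A sketch of that fix, written to a temp file so it is safe to run anywhere; the iqn value below is a hypothetical example, not this host's:]

```shell
# Create the InitiatorName config that iscsid complained about.
# Real target path: /etc/iscsi/initiatorname.iscsi (needs root); a temp
# file stands in here, and the iqn is a made-up example value.
conf="$(mktemp)"
printf 'InitiatorName=iqn.2024-02.io.example:node1\n' > "$conf"
# Sanity-check the iqn.yyyy-mm. prefix the warning message describes:
grep -Eq '^InitiatorName=iqn\.[0-9]{4}-[0-9]{2}\.' "$conf" && echo "initiatorname ok"
# → initiatorname ok
```

[On a real host the same line goes to /etc/iscsi/initiatorname.iscsi; open-iscsi also ships the iscsi-iname tool to generate a unique iqn.]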
Feb 9 00:41:56.058754 systemd[1]: Finished dracut-initqueue.service. Feb 9 00:41:56.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:56.068511 systemd[1]: Reached target remote-fs-pre.target. Feb 9 00:41:56.072774 kernel: audit: type=1130 audit(1707439316.068:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:56.072712 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 00:41:56.076426 systemd[1]: Reached target remote-fs.target. Feb 9 00:41:56.078447 systemd[1]: Starting dracut-pre-mount.service... Feb 9 00:41:56.119337 systemd[1]: Finished dracut-pre-mount.service. Feb 9 00:41:56.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:41:56.490519 ignition[712]: Ignition 2.14.0 Feb 9 00:41:56.490545 ignition[712]: Stage: fetch-offline Feb 9 00:41:56.490965 ignition[712]: no configs at "/usr/lib/ignition/base.d" Feb 9 00:41:56.490980 ignition[712]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 00:41:56.491171 ignition[712]: parsed url from cmdline: "" Feb 9 00:41:56.491176 ignition[712]: no config URL provided Feb 9 00:41:56.491184 ignition[712]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 00:41:56.491194 ignition[712]: no config at "/usr/lib/ignition/user.ign" Feb 9 00:41:56.491217 ignition[712]: op(1): [started] loading QEMU firmware config module Feb 9 00:41:56.491223 ignition[712]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 9 00:41:56.532208 ignition[712]: op(1): [finished] loading QEMU firmware config module Feb 9 00:41:56.646827 ignition[712]: parsing config with SHA512: 73a5f59d7d755f9c34f6d596d3e49b657ecb946f4202c8159386a0837b5979a9fdaca5eb8c7bdaeb7ca666693313639edfdea0bcd43d238a6a5bdc949011ae03 Feb 9 00:41:57.307320 unknown[712]: fetched base config from "system" Feb 9 00:41:57.313208 unknown[712]: fetched user config from "qemu" Feb 9 00:41:57.318589 ignition[712]: fetch-offline: fetch-offline passed Feb 9 00:41:57.318703 ignition[712]: Ignition finished successfully Feb 9 00:41:57.329482 systemd-networkd[707]: eth0: Gained IPv6LL Feb 9 00:41:57.342325 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 00:41:57.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:57.359562 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 00:41:57.360765 systemd[1]: Starting ignition-kargs.service... 
Feb 9 00:41:57.521396 ignition[737]: Ignition 2.14.0 Feb 9 00:41:57.521410 ignition[737]: Stage: kargs Feb 9 00:41:57.521551 ignition[737]: no configs at "/usr/lib/ignition/base.d" Feb 9 00:41:57.521564 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 00:41:57.523272 ignition[737]: kargs: kargs passed Feb 9 00:41:57.523329 ignition[737]: Ignition finished successfully Feb 9 00:41:57.550415 systemd[1]: Finished ignition-kargs.service. Feb 9 00:41:57.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:57.555487 systemd[1]: Starting ignition-disks.service... Feb 9 00:41:57.569460 ignition[743]: Ignition 2.14.0 Feb 9 00:41:57.569477 ignition[743]: Stage: disks Feb 9 00:41:57.569630 ignition[743]: no configs at "/usr/lib/ignition/base.d" Feb 9 00:41:57.569643 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 00:41:57.571298 ignition[743]: disks: disks passed Feb 9 00:41:57.571353 ignition[743]: Ignition finished successfully Feb 9 00:41:57.580666 systemd[1]: Finished ignition-disks.service. Feb 9 00:41:57.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:57.586225 systemd[1]: Reached target initrd-root-device.target. Feb 9 00:41:57.590991 systemd[1]: Reached target local-fs-pre.target. Feb 9 00:41:57.594985 systemd[1]: Reached target local-fs.target. Feb 9 00:41:57.607279 systemd[1]: Reached target sysinit.target. Feb 9 00:41:57.611188 systemd[1]: Reached target basic.target. Feb 9 00:41:57.622096 systemd[1]: Starting systemd-fsck-root.service... 
Feb 9 00:41:57.722571 systemd-fsck[751]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 00:41:57.783223 systemd[1]: Finished systemd-fsck-root.service. Feb 9 00:41:57.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:57.788437 systemd[1]: Mounting sysroot.mount... Feb 9 00:41:57.877390 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 00:41:57.879065 systemd[1]: Mounted sysroot.mount. Feb 9 00:41:57.882395 systemd[1]: Reached target initrd-root-fs.target. Feb 9 00:41:57.904347 systemd[1]: Mounting sysroot-usr.mount... Feb 9 00:41:57.907806 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 00:41:57.907857 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 00:41:57.907889 systemd[1]: Reached target ignition-diskful.target. Feb 9 00:41:57.910557 systemd[1]: Mounted sysroot-usr.mount. Feb 9 00:41:57.930715 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 00:41:57.940082 systemd[1]: Starting initrd-setup-root.service... 
Feb 9 00:41:57.955804 initrd-setup-root[762]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 00:41:57.985053 initrd-setup-root[770]: cut: /sysroot/etc/group: No such file or directory Feb 9 00:41:57.992457 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (757) Feb 9 00:41:58.004647 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 00:41:58.004749 kernel: BTRFS info (device vda6): using free space tree Feb 9 00:41:58.004764 kernel: BTRFS info (device vda6): has skinny extents Feb 9 00:41:58.022443 initrd-setup-root[782]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 00:41:58.047489 initrd-setup-root[795]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 00:41:58.088364 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 00:41:58.341171 systemd[1]: Finished initrd-setup-root.service. Feb 9 00:41:58.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:58.356401 systemd[1]: Starting ignition-mount.service... Feb 9 00:41:58.369630 systemd[1]: Starting sysroot-boot.service... Feb 9 00:41:58.373297 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 00:41:58.373398 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 00:41:58.430977 ignition[822]: INFO : Ignition 2.14.0 Feb 9 00:41:58.430977 ignition[822]: INFO : Stage: mount Feb 9 00:41:58.436356 ignition[822]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 00:41:58.436356 ignition[822]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 00:41:58.452361 ignition[822]: INFO : mount: mount passed Feb 9 00:41:58.453610 ignition[822]: INFO : Ignition finished successfully Feb 9 00:41:58.456546 systemd[1]: Finished ignition-mount.service. 
Feb 9 00:41:58.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:58.459673 systemd[1]: Starting ignition-files.service... Feb 9 00:41:58.478791 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 00:41:58.516006 systemd[1]: Finished sysroot-boot.service. Feb 9 00:41:58.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:41:58.532600 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (831) Feb 9 00:41:58.535300 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 00:41:58.535372 kernel: BTRFS info (device vda6): using free space tree Feb 9 00:41:58.535385 kernel: BTRFS info (device vda6): has skinny extents Feb 9 00:41:58.549686 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 00:41:58.605502 ignition[851]: INFO : Ignition 2.14.0 Feb 9 00:41:58.605502 ignition[851]: INFO : Stage: files Feb 9 00:41:58.605502 ignition[851]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 00:41:58.605502 ignition[851]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 00:41:58.641894 ignition[851]: DEBUG : files: compiled without relabeling support, skipping Feb 9 00:41:58.641894 ignition[851]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 00:41:58.641894 ignition[851]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 00:41:58.641894 ignition[851]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 00:41:58.641894 ignition[851]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 00:41:58.641894 ignition[851]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 00:41:58.641894 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 00:41:58.641894 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 00:41:58.636363 unknown[851]: wrote ssh authorized keys file for user: core Feb 9 00:41:58.856827 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 00:41:59.307818 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 00:41:59.307818 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 00:41:59.307818 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 00:41:59.828113 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 00:42:00.433725 ignition[851]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 00:42:00.433725 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 00:42:00.433725 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 00:42:00.433725 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 00:42:00.820090 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 00:42:01.624122 ignition[851]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 00:42:01.624122 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 00:42:01.635910 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 00:42:01.635910 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 00:42:01.635910 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file 
"/sysroot/opt/bin/kubectl" Feb 9 00:42:01.635910 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 9 00:42:01.810392 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 00:42:03.177261 ignition[851]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 9 00:42:03.188306 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 00:42:03.188306 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 00:42:03.188306 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 00:42:03.236968 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 00:42:06.087407 ignition[851]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 00:42:06.087407 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 00:42:06.087407 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 00:42:06.087407 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 00:42:06.145954 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 9 00:42:08.056508 ignition[851]: DEBUG 
: files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 00:42:08.056508 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 00:42:08.067174 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 00:42:08.067174 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 9 00:42:08.529535 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 9 00:42:08.762259 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 00:42:08.762259 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 9 00:42:08.773369 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 00:42:08.773369 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 00:42:08.773369 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 00:42:08.773369 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 00:42:08.773369 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 00:42:08.773369 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Feb 9 00:42:08.773369 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 00:42:08.787768 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 00:42:08.787768 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 00:42:08.787768 ignition[851]: INFO : files: op(10): [started] processing unit "prepare-helm.service" Feb 9 00:42:08.787768 ignition[851]: INFO : files: op(10): op(11): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 00:42:08.787768 ignition[851]: INFO : files: op(10): op(11): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 00:42:08.787768 ignition[851]: INFO : files: op(10): [finished] processing unit "prepare-helm.service" Feb 9 00:42:08.787768 ignition[851]: INFO : files: op(12): [started] processing unit "coreos-metadata.service" Feb 9 00:42:08.787768 ignition[851]: INFO : files: op(12): op(13): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 00:42:08.787768 ignition[851]: INFO : files: op(12): op(13): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 00:42:08.787768 ignition[851]: INFO : files: op(12): [finished] processing unit "coreos-metadata.service" Feb 9 00:42:08.787768 ignition[851]: INFO : files: op(14): [started] processing unit "prepare-cni-plugins.service" Feb 9 00:42:08.787768 ignition[851]: INFO : files: op(14): op(15): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 00:42:08.787768 ignition[851]: INFO : files: op(14): op(15): [finished] writing 
unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 00:42:08.787768 ignition[851]: INFO : files: op(14): [finished] processing unit "prepare-cni-plugins.service" Feb 9 00:42:08.787768 ignition[851]: INFO : files: op(16): [started] processing unit "prepare-critools.service" Feb 9 00:42:08.787768 ignition[851]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 00:42:08.787768 ignition[851]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 00:42:08.787768 ignition[851]: INFO : files: op(16): [finished] processing unit "prepare-critools.service" Feb 9 00:42:08.834804 ignition[851]: INFO : files: op(18): [started] setting preset to disabled for "coreos-metadata.service" Feb 9 00:42:08.834804 ignition[851]: INFO : files: op(18): op(19): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 00:42:08.999230 ignition[851]: INFO : files: op(18): op(19): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 00:42:08.999230 ignition[851]: INFO : files: op(18): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 00:42:09.031880 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 9 00:42:09.031919 kernel: audit: type=1130 audit(1707439329.012:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.005714 systemd[1]: Finished ignition-files.service. 
Feb 9 00:42:09.032984 ignition[851]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 00:42:09.032984 ignition[851]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 00:42:09.032984 ignition[851]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-critools.service" Feb 9 00:42:09.032984 ignition[851]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 00:42:09.032984 ignition[851]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service" Feb 9 00:42:09.032984 ignition[851]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 00:42:09.032984 ignition[851]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 00:42:09.032984 ignition[851]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 00:42:09.032984 ignition[851]: INFO : files: files passed Feb 9 00:42:09.032984 ignition[851]: INFO : Ignition finished successfully Feb 9 00:42:09.138609 kernel: audit: type=1130 audit(1707439329.059:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.138655 kernel: audit: type=1130 audit(1707439329.067:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.138674 kernel: audit: type=1131 audit(1707439329.067:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:42:09.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.018281 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 00:42:09.022816 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 00:42:09.150601 initrd-setup-root-after-ignition[876]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 00:42:09.030758 systemd[1]: Starting ignition-quench.service... Feb 9 00:42:09.162344 initrd-setup-root-after-ignition[878]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 00:42:09.038592 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 00:42:09.063318 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 00:42:09.063435 systemd[1]: Finished ignition-quench.service. Feb 9 00:42:09.068914 systemd[1]: Reached target ignition-complete.target. Feb 9 00:42:09.138482 systemd[1]: Starting initrd-parse-etc.service... Feb 9 00:42:09.228361 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 00:42:09.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:42:09.228501 systemd[1]: Finished initrd-parse-etc.service. Feb 9 00:42:09.233547 systemd[1]: Reached target initrd-fs.target. Feb 9 00:42:09.236840 kernel: audit: type=1130 audit(1707439329.228:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.236931 kernel: audit: type=1131 audit(1707439329.232:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.239948 systemd[1]: Reached target initrd.target. Feb 9 00:42:09.243826 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 00:42:09.260179 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 00:42:09.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.280807 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 00:42:09.289705 kernel: audit: type=1130 audit(1707439329.280:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.284490 systemd[1]: Starting initrd-cleanup.service... Feb 9 00:42:09.310647 systemd[1]: Stopped target network.target. Feb 9 00:42:09.321826 systemd[1]: Stopped target nss-lookup.target. Feb 9 00:42:09.323786 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 00:42:09.333876 systemd[1]: Stopped target timers.target. 
Feb 9 00:42:09.336841 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 00:42:09.337007 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 00:42:09.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.350135 kernel: audit: type=1131 audit(1707439329.345:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.351597 systemd[1]: Stopped target initrd.target. Feb 9 00:42:09.353774 systemd[1]: Stopped target basic.target. Feb 9 00:42:09.378229 systemd[1]: Stopped target ignition-complete.target. Feb 9 00:42:09.388523 systemd[1]: Stopped target ignition-diskful.target. Feb 9 00:42:09.404818 systemd[1]: Stopped target initrd-root-device.target. Feb 9 00:42:09.412420 systemd[1]: Stopped target remote-fs.target. Feb 9 00:42:09.429726 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 00:42:09.437279 systemd[1]: Stopped target sysinit.target. Feb 9 00:42:09.457368 kernel: audit: type=1131 audit(1707439329.442:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.438126 systemd[1]: Stopped target local-fs.target. Feb 9 00:42:09.438980 systemd[1]: Stopped target local-fs-pre.target. Feb 9 00:42:09.439850 systemd[1]: Stopped target swap.target. Feb 9 00:42:09.442142 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 00:42:09.442288 systemd[1]: Stopped dracut-pre-mount.service. 
Feb 9 00:42:09.458467 systemd[1]: Stopped target cryptsetup.target. Feb 9 00:42:09.482634 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 00:42:09.485006 systemd[1]: Stopped dracut-initqueue.service. Feb 9 00:42:09.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.526523 kernel: audit: type=1131 audit(1707439329.498:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.503022 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 00:42:09.509873 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 00:42:09.520754 systemd[1]: Stopped target paths.target. Feb 9 00:42:09.538416 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 00:42:09.554189 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 00:42:09.568941 systemd[1]: Stopped target slices.target. Feb 9 00:42:09.585423 systemd[1]: Stopped target sockets.target. Feb 9 00:42:09.594575 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 00:42:09.597650 systemd[1]: Closed iscsid.socket. Feb 9 00:42:09.614210 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 00:42:09.614338 systemd[1]: Closed iscsiuio.socket. Feb 9 00:42:09.621237 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 00:42:09.621380 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Feb 9 00:42:09.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.633981 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 00:42:09.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.635661 systemd[1]: Stopped ignition-files.service. Feb 9 00:42:09.653548 systemd[1]: Stopping ignition-mount.service... Feb 9 00:42:09.663706 ignition[892]: INFO : Ignition 2.14.0 Feb 9 00:42:09.663706 ignition[892]: INFO : Stage: umount Feb 9 00:42:09.663706 ignition[892]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 00:42:09.663706 ignition[892]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 00:42:09.663706 ignition[892]: INFO : umount: umount passed Feb 9 00:42:09.663706 ignition[892]: INFO : Ignition finished successfully Feb 9 00:42:09.671234 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 00:42:09.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.671439 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 00:42:09.673971 systemd[1]: Stopping sysroot-boot.service... Feb 9 00:42:09.690177 systemd[1]: Stopping systemd-networkd.service... Feb 9 00:42:09.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.697333 systemd[1]: Stopping systemd-resolved.service... 
Feb 9 00:42:09.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.698268 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 00:42:09.698441 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 00:42:09.703468 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 00:42:09.703628 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 00:42:09.718601 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 00:42:09.719515 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 00:42:09.720039 systemd-networkd[707]: eth0: DHCPv6 lease lost Feb 9 00:42:09.721040 systemd[1]: Stopped systemd-resolved.service. Feb 9 00:42:09.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.731991 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 00:42:09.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.732144 systemd[1]: Stopped systemd-networkd.service. Feb 9 00:42:09.740918 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 00:42:09.741031 systemd[1]: Stopped ignition-mount.service. 
Feb 9 00:42:09.744631 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 00:42:09.744737 systemd[1]: Stopped sysroot-boot.service. Feb 9 00:42:09.748482 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 00:42:09.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.748563 systemd[1]: Closed systemd-networkd.socket. Feb 9 00:42:09.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.751884 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 00:42:09.752618 systemd[1]: Stopped ignition-disks.service. Feb 9 00:42:09.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.755524 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 00:42:09.756095 systemd[1]: Stopped ignition-kargs.service. Feb 9 00:42:09.757535 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 00:42:09.757581 systemd[1]: Stopped ignition-setup.service. Feb 9 00:42:09.762912 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 00:42:09.762987 systemd[1]: Stopped initrd-setup-root.service. Feb 9 00:42:09.802958 systemd[1]: Stopping network-cleanup.service... 
Feb 9 00:42:09.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.829615 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 00:42:09.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.829737 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 00:42:09.849561 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 00:42:09.849718 systemd[1]: Stopped systemd-sysctl.service. Feb 9 00:42:09.855038 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 00:42:09.855127 systemd[1]: Stopped systemd-modules-load.service. Feb 9 00:42:09.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.859907 systemd[1]: Stopping systemd-udevd.service... Feb 9 00:42:09.862497 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 00:42:09.862000 audit: BPF prog-id=6 op=UNLOAD Feb 9 00:42:09.863230 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 00:42:09.864659 systemd[1]: Finished initrd-cleanup.service. Feb 9 00:42:09.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:42:09.869801 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 00:42:09.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.869971 systemd[1]: Stopped systemd-udevd.service. Feb 9 00:42:09.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.884000 audit: BPF prog-id=9 op=UNLOAD Feb 9 00:42:09.873596 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 00:42:09.873711 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 00:42:09.879903 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 00:42:09.879949 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 00:42:09.880691 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 00:42:09.880737 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 00:42:09.881493 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 00:42:09.881530 systemd[1]: Stopped dracut-cmdline.service. Feb 9 00:42:09.882234 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 00:42:09.882270 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 00:42:09.894847 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
Feb 9 00:42:09.942551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 00:42:09.942655 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 00:42:09.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.963179 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 00:42:09.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:09.963323 systemd[1]: Stopped network-cleanup.service. Feb 9 00:42:09.972931 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 00:42:09.973028 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 00:42:09.974247 systemd[1]: Reached target initrd-switch-root.target. Feb 9 00:42:09.993938 systemd[1]: Starting initrd-switch-root.service... Feb 9 00:42:10.008947 systemd[1]: Switching root. Feb 9 00:42:10.041807 iscsid[714]: iscsid shutting down. Feb 9 00:42:10.043197 systemd-journald[198]: Received SIGTERM from PID 1 (n/a). Feb 9 00:42:10.043260 systemd-journald[198]: Journal stopped Feb 9 00:42:22.061255 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 00:42:22.061313 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 00:42:22.061329 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 00:42:22.061344 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 00:42:22.061363 kernel: SELinux: policy capability open_perms=1 Feb 9 00:42:22.061378 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 00:42:22.061392 kernel: SELinux: policy capability always_check_network=0 Feb 9 00:42:22.061405 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 00:42:22.061419 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 00:42:22.061436 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 00:42:22.061450 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 00:42:22.061465 systemd[1]: Successfully loaded SELinux policy in 49.443ms. Feb 9 00:42:22.061490 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.390ms. Feb 9 00:42:22.061508 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 00:42:22.061526 systemd[1]: Detected virtualization kvm. Feb 9 00:42:22.061541 systemd[1]: Detected architecture x86-64. Feb 9 00:42:22.061556 systemd[1]: Detected first boot. Feb 9 00:42:22.061572 systemd[1]: Initializing machine ID from VM UUID. Feb 9 00:42:22.061587 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 00:42:22.061601 systemd[1]: Populated /etc with preset unit settings. Feb 9 00:42:22.061618 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 00:42:22.061646 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 00:42:22.061663 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 00:42:22.061680 kernel: kauditd_printk_skb: 45 callbacks suppressed Feb 9 00:42:22.061694 kernel: audit: type=1334 audit(1707439341.418:79): prog-id=12 op=LOAD Feb 9 00:42:22.061708 kernel: audit: type=1334 audit(1707439341.422:80): prog-id=3 op=UNLOAD Feb 9 00:42:22.061721 kernel: audit: type=1334 audit(1707439341.425:81): prog-id=13 op=LOAD Feb 9 00:42:22.061738 kernel: audit: type=1334 audit(1707439341.435:82): prog-id=14 op=LOAD Feb 9 00:42:22.061753 kernel: audit: type=1334 audit(1707439341.435:83): prog-id=4 op=UNLOAD Feb 9 00:42:22.061767 kernel: audit: type=1334 audit(1707439341.435:84): prog-id=5 op=UNLOAD Feb 9 00:42:22.061781 kernel: audit: type=1131 audit(1707439341.441:85): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.061795 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 00:42:22.061810 systemd[1]: Stopped iscsiuio.service. Feb 9 00:42:22.061825 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 00:42:22.061840 kernel: audit: type=1131 audit(1707439341.455:86): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.061855 systemd[1]: Stopped iscsid.service. 
Feb 9 00:42:22.061872 kernel: audit: type=1131 audit(1707439341.460:87): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.061887 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 00:42:22.061903 systemd[1]: Stopped initrd-switch-root.service. Feb 9 00:42:22.061922 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 00:42:22.061941 kernel: audit: type=1130 audit(1707439341.469:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.061959 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 00:42:22.061988 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 00:42:22.062010 systemd[1]: Created slice system-getty.slice. Feb 9 00:42:22.062027 systemd[1]: Created slice system-modprobe.slice. Feb 9 00:42:22.062045 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 00:42:22.062064 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 00:42:22.062102 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 00:42:22.062127 systemd[1]: Created slice user.slice. Feb 9 00:42:22.062144 systemd[1]: Started systemd-ask-password-console.path. Feb 9 00:42:22.062159 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 00:42:22.062174 systemd[1]: Set up automount boot.automount. Feb 9 00:42:22.062192 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 00:42:22.062208 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 00:42:22.062224 systemd[1]: Stopped target initrd-fs.target. Feb 9 00:42:22.062238 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 00:42:22.062253 systemd[1]: Reached target integritysetup.target. 
Feb 9 00:42:22.062268 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 00:42:22.062282 systemd[1]: Reached target remote-fs.target. Feb 9 00:42:22.062297 systemd[1]: Reached target slices.target. Feb 9 00:42:22.062312 systemd[1]: Reached target swap.target. Feb 9 00:42:22.062330 systemd[1]: Reached target torcx.target. Feb 9 00:42:22.062346 systemd[1]: Reached target veritysetup.target. Feb 9 00:42:22.062369 systemd[1]: Listening on systemd-coredump.socket. Feb 9 00:42:22.062390 systemd[1]: Listening on systemd-initctl.socket. Feb 9 00:42:22.062408 systemd[1]: Listening on systemd-networkd.socket. Feb 9 00:42:22.062425 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 00:42:22.062443 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 00:42:22.062458 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 00:42:22.062474 systemd[1]: Mounting dev-hugepages.mount... Feb 9 00:42:22.062489 systemd[1]: Mounting dev-mqueue.mount... Feb 9 00:42:22.062504 systemd[1]: Mounting media.mount... Feb 9 00:42:22.062519 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 00:42:22.062534 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 00:42:22.062549 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 00:42:22.062567 systemd[1]: Mounting tmp.mount... Feb 9 00:42:22.062582 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 00:42:22.062596 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 00:42:22.062611 systemd[1]: Starting kmod-static-nodes.service... Feb 9 00:42:22.062626 systemd[1]: Starting modprobe@configfs.service... Feb 9 00:42:22.062641 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 00:42:22.062657 systemd[1]: Starting modprobe@drm.service... Feb 9 00:42:22.062672 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 00:42:22.062686 systemd[1]: Starting modprobe@fuse.service... 
Feb 9 00:42:22.062708 systemd[1]: Starting modprobe@loop.service... Feb 9 00:42:22.062727 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 00:42:22.062742 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 00:42:22.062756 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 00:42:22.062771 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 00:42:22.062786 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 00:42:22.062801 systemd[1]: Stopped systemd-journald.service. Feb 9 00:42:22.062816 kernel: fuse: init (API version 7.34) Feb 9 00:42:22.062832 systemd[1]: systemd-journald.service: Consumed 1.453s CPU time. Feb 9 00:42:22.062849 kernel: loop: module loaded Feb 9 00:42:22.062864 systemd[1]: Starting systemd-journald.service... Feb 9 00:42:22.062880 systemd[1]: Starting systemd-modules-load.service... Feb 9 00:42:22.062895 systemd[1]: Starting systemd-network-generator.service... Feb 9 00:42:22.062913 systemd[1]: Starting systemd-remount-fs.service... Feb 9 00:42:22.062930 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 00:42:22.062949 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 00:42:22.062968 systemd[1]: Stopped verity-setup.service. Feb 9 00:42:22.062988 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 00:42:22.063007 systemd[1]: Mounted dev-hugepages.mount. Feb 9 00:42:22.063028 systemd[1]: Mounted dev-mqueue.mount. Feb 9 00:42:22.063048 systemd[1]: Mounted media.mount. Feb 9 00:42:22.063083 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 00:42:22.063100 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 00:42:22.063123 systemd[1]: Mounted tmp.mount. Feb 9 00:42:22.063138 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 00:42:22.063152 systemd[1]: Finished kmod-static-nodes.service. 
Feb 9 00:42:22.063168 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 00:42:22.063183 systemd[1]: Finished modprobe@configfs.service. Feb 9 00:42:22.063198 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 00:42:22.063213 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 00:42:22.063228 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 00:42:22.063243 systemd[1]: Finished modprobe@drm.service. Feb 9 00:42:22.063260 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 00:42:22.063275 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 00:42:22.063291 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 00:42:22.063306 systemd[1]: Finished modprobe@fuse.service. Feb 9 00:42:22.063321 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 00:42:22.063338 systemd[1]: Finished modprobe@loop.service. Feb 9 00:42:22.063351 systemd[1]: Finished systemd-modules-load.service. Feb 9 00:42:22.063365 systemd[1]: Finished systemd-network-generator.service. Feb 9 00:42:22.063384 systemd-journald[1054]: Journal started Feb 9 00:42:22.063452 systemd-journald[1054]: Runtime Journal (/run/log/journal/b7160d68dd464a7685582b2f6c63e54a) is 6.0M, max 48.4M, 42.4M free. 
Feb 9 00:42:10.202000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 00:42:10.632000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 00:42:10.639000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 00:42:10.642000 audit: BPF prog-id=10 op=LOAD Feb 9 00:42:10.644000 audit: BPF prog-id=10 op=UNLOAD Feb 9 00:42:10.644000 audit: BPF prog-id=11 op=LOAD Feb 9 00:42:10.644000 audit: BPF prog-id=11 op=UNLOAD Feb 9 00:42:10.737000 audit[965]: AVC avc: denied { associate } for pid=965 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 00:42:10.737000 audit[965]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001c58b2 a1=c000146de0 a2=c00014f0c0 a3=32 items=0 ppid=948 pid=965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 00:42:10.737000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 00:42:10.738000 audit[965]: AVC avc: denied { associate } for pid=965 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 00:42:10.738000 audit[965]: SYSCALL arch=c000003e 
syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001c5989 a2=1ed a3=0 items=2 ppid=948 pid=965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 00:42:10.738000 audit: CWD cwd="/" Feb 9 00:42:10.738000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:42:10.738000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:42:10.738000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 00:42:21.418000 audit: BPF prog-id=12 op=LOAD Feb 9 00:42:21.422000 audit: BPF prog-id=3 op=UNLOAD Feb 9 00:42:21.425000 audit: BPF prog-id=13 op=LOAD Feb 9 00:42:21.435000 audit: BPF prog-id=14 op=LOAD Feb 9 00:42:21.435000 audit: BPF prog-id=4 op=UNLOAD Feb 9 00:42:21.435000 audit: BPF prog-id=5 op=UNLOAD Feb 9 00:42:21.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:21.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:42:21.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:21.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:21.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:21.510000 audit: BPF prog-id=12 op=UNLOAD Feb 9 00:42:21.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:21.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:21.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:21.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:42:21.899000 audit: BPF prog-id=15 op=LOAD Feb 9 00:42:21.909000 audit: BPF prog-id=16 op=LOAD Feb 9 00:42:21.910000 audit: BPF prog-id=17 op=LOAD Feb 9 00:42:21.910000 audit: BPF prog-id=13 op=UNLOAD Feb 9 00:42:21.910000 audit: BPF prog-id=14 op=UNLOAD Feb 9 00:42:21.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:42:22.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.069357 systemd[1]: Started systemd-journald.service. Feb 9 00:42:22.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:42:22.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.059000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 00:42:22.059000 audit[1054]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe81b56350 a2=4000 a3=7ffe81b563ec items=0 ppid=1 pid=1054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 00:42:22.059000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 00:42:22.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:10.735024 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 00:42:21.408678 systemd[1]: Queued start job for default target multi-user.target. 
Feb 9 00:42:10.735398 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:10Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 00:42:21.408700 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 00:42:10.735437 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:10Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 00:42:21.438765 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 00:42:10.735472 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:10Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 00:42:21.441962 systemd[1]: systemd-journald.service: Consumed 1.453s CPU time. Feb 9 00:42:10.735483 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:10Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 00:42:22.068472 systemd[1]: Finished systemd-remount-fs.service. 
Feb 9 00:42:10.735515 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:10Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 00:42:10.735528 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:10Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 00:42:10.735751 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:10Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 00:42:10.735790 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:10Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 00:42:10.735804 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:10Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 00:42:10.736248 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:10Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 00:42:10.736285 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:10Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 00:42:10.736305 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:10Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 00:42:10.736320 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:10Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" 
path=/usr/share/oem/torcx/store Feb 9 00:42:10.736336 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:10Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 00:42:10.736351 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:10Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 00:42:20.110399 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:20Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 00:42:20.110781 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:20Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 00:42:20.110943 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:20Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 00:42:20.111215 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:20Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 00:42:20.111286 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:20Z" level=debug msg="profile applied" 
sealed profile=/run/torcx/profile.json upper profile= Feb 9 00:42:20.111380 /usr/lib/systemd/system-generators/torcx-generator[965]: time="2024-02-09T00:42:20Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 00:42:22.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.072978 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 00:42:22.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.076552 systemd[1]: Reached target network-pre.target. Feb 9 00:42:22.082369 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 00:42:22.086871 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 00:42:22.087766 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 00:42:22.103814 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 00:42:22.121840 systemd[1]: Starting systemd-journal-flush.service... Feb 9 00:42:22.126519 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 00:42:22.128534 systemd[1]: Starting systemd-random-seed.service... Feb 9 00:42:22.133641 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 00:42:22.139404 systemd[1]: Starting systemd-sysctl.service... Feb 9 00:42:22.153935 systemd[1]: Starting systemd-sysusers.service... 
Feb 9 00:42:22.171462 systemd-journald[1054]: Time spent on flushing to /var/log/journal/b7160d68dd464a7685582b2f6c63e54a is 36.925ms for 1216 entries. Feb 9 00:42:22.171462 systemd-journald[1054]: System Journal (/var/log/journal/b7160d68dd464a7685582b2f6c63e54a) is 8.0M, max 195.6M, 187.6M free. Feb 9 00:42:22.242343 systemd-journald[1054]: Received client request to flush runtime journal. Feb 9 00:42:22.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.160336 systemd[1]: Starting systemd-udev-settle.service... Feb 9 00:42:22.163588 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 00:42:22.243545 udevadm[1068]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 00:42:22.164608 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 00:42:22.195381 systemd[1]: Finished systemd-random-seed.service. Feb 9 00:42:22.196610 systemd[1]: Reached target first-boot-complete.target. Feb 9 00:42:22.222504 systemd[1]: Finished systemd-sysctl.service. Feb 9 00:42:22.252663 systemd[1]: Finished systemd-journal-flush.service. Feb 9 00:42:22.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:22.477611 systemd[1]: Finished systemd-sysusers.service. 
Feb 9 00:42:22.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:23.466576 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 00:42:23.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:23.471000 audit: BPF prog-id=18 op=LOAD Feb 9 00:42:23.474000 audit: BPF prog-id=19 op=LOAD Feb 9 00:42:23.475000 audit: BPF prog-id=7 op=UNLOAD Feb 9 00:42:23.477000 audit: BPF prog-id=8 op=UNLOAD Feb 9 00:42:23.489048 systemd[1]: Starting systemd-udevd.service... Feb 9 00:42:23.532643 systemd-udevd[1071]: Using default interface naming scheme 'v252'. Feb 9 00:42:23.753730 systemd[1]: Started systemd-udevd.service. Feb 9 00:42:23.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:23.799000 audit: BPF prog-id=20 op=LOAD Feb 9 00:42:23.804384 systemd[1]: Starting systemd-networkd.service... Feb 9 00:42:23.815718 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 9 00:42:23.834000 audit: BPF prog-id=21 op=LOAD Feb 9 00:42:23.838000 audit: BPF prog-id=22 op=LOAD Feb 9 00:42:23.841000 audit: BPF prog-id=23 op=LOAD Feb 9 00:42:23.848824 systemd[1]: Starting systemd-userdbd.service... Feb 9 00:42:23.918895 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 9 00:42:23.982376 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 9 00:42:23.987135 kernel: ACPI: button: Power Button [PWRF]
Feb 9 00:42:23.902000 audit[1075]: AVC avc: denied { confidentiality } for pid=1075 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 00:42:23.902000 audit[1075]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556de6dc8b10 a1=32194 a2=7f0c8f12bbc5 a3=5 items=108 ppid=1071 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 00:42:23.902000 audit: CWD cwd="/"
Feb 9 00:42:23.902000 audit: PATH item=0 name=(null) inode=2063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=1 name=(null) inode=14627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=2 name=(null) inode=14627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=3 name=(null) inode=14628 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=4 name=(null) inode=14627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=5 name=(null) inode=14629 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=6 name=(null) inode=14627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=7 name=(null) inode=14630 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=8 name=(null) inode=14630 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=9 name=(null) inode=14631 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=10 name=(null) inode=14630 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=11 name=(null) inode=14632 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=12 name=(null) inode=14630 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=13 name=(null) inode=14633 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=14 name=(null) inode=14630 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=15 name=(null) inode=14634 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=16 name=(null) inode=14630 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=17 name=(null) inode=14635 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=18 name=(null) inode=14627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=19 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=20 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=21 name=(null) inode=14637 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=22 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=23 name=(null) inode=14638 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=24 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=25 name=(null) inode=14639 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=26 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=27 name=(null) inode=14640 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=28 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=29 name=(null) inode=14641 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=30 name=(null) inode=14627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=31 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=32 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=33 name=(null) inode=14643 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=34 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=35 name=(null) inode=14644 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=36 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=37 name=(null) inode=14645 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=38 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=39 name=(null) inode=14646 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=40 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=41 name=(null) inode=14647 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=42 name=(null) inode=14627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=43 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=44 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=45 name=(null) inode=14649 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=46 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=47 name=(null) inode=14650 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=48 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=49 name=(null) inode=14651 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=50 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=51 name=(null) inode=14652 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=52 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=53 name=(null) inode=14653 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=54 name=(null) inode=2063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=55 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=56 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=57 name=(null) inode=14655 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=58 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=59 name=(null) inode=14656 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=60 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=61 name=(null) inode=14657 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=62 name=(null) inode=14657 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=63 name=(null) inode=14658 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=64 name=(null) inode=14657 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=65 name=(null) inode=14659 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=66 name=(null) inode=14657 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=67 name=(null) inode=14660 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=68 name=(null) inode=14657 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=69 name=(null) inode=14661 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=70 name=(null) inode=14657 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=71 name=(null) inode=14662 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=72 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=73 name=(null) inode=14663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=74 name=(null) inode=14663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=75 name=(null) inode=14664 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=76 name=(null) inode=14663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=77 name=(null) inode=14665 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=78 name=(null) inode=14663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=79 name=(null) inode=14666 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=80 name=(null) inode=14663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=81 name=(null) inode=14667 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=82 name=(null) inode=14663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=83 name=(null) inode=14668 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=84 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=85 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=86 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=87 name=(null) inode=14670 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=88 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=89 name=(null) inode=14671 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=90 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=91 name=(null) inode=14672 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=92 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=93 name=(null) inode=14673 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=94 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=95 name=(null) inode=14674 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=96 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=97 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=98 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=99 name=(null) inode=14676 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=100 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=101 name=(null) inode=14677 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=102 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=103 name=(null) inode=14678 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=104 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=105 name=(null) inode=14679 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=106 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PATH item=107 name=(null) inode=14680 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:42:23.902000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 00:42:24.050111 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 9 00:42:24.053175 systemd[1]: Started systemd-userdbd.service.
Feb 9 00:42:24.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:42:24.068112 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0
Feb 9 00:42:24.074105 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 00:42:24.133225 kernel: kvm: Nested Virtualization enabled
Feb 9 00:42:24.133374 kernel: SVM: kvm: Nested Paging enabled
Feb 9 00:42:24.133406 kernel: SVM: Virtual VMLOAD VMSAVE supported
Feb 9 00:42:24.134836 kernel: SVM: Virtual GIF supported
Feb 9 00:42:24.158621 systemd-networkd[1090]: lo: Link UP
Feb 9 00:42:24.158628 systemd-networkd[1090]: lo: Gained carrier
Feb 9 00:42:24.159163 systemd-networkd[1090]: Enumeration completed
Feb 9 00:42:24.159287 systemd-networkd[1090]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 00:42:24.160535 systemd[1]: Started systemd-networkd.service.
Feb 9 00:42:24.160559 systemd-networkd[1090]: eth0: Link UP
Feb 9 00:42:24.160563 systemd-networkd[1090]: eth0: Gained carrier
Feb 9 00:42:24.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:42:24.366472 systemd-networkd[1090]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 00:42:24.381247 kernel: EDAC MC: Ver: 3.0.0
Feb 9 00:42:24.420676 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 00:42:24.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:42:24.424941 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 00:42:24.438653 lvm[1107]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 00:42:24.471804 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 00:42:24.473816 systemd[1]: Reached target cryptsetup.target.
Feb 9 00:42:24.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:42:24.479127 systemd[1]: Starting lvm2-activation.service...
Feb 9 00:42:24.485572 lvm[1108]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 00:42:24.534984 systemd[1]: Finished lvm2-activation.service.
Feb 9 00:42:24.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:42:24.537789 systemd[1]: Reached target local-fs-pre.target.
Feb 9 00:42:24.539526 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 00:42:24.539578 systemd[1]: Reached target local-fs.target.
Feb 9 00:42:24.540668 systemd[1]: Reached target machines.target.
Feb 9 00:42:24.546302 systemd[1]: Starting ldconfig.service...
Feb 9 00:42:24.547609 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 00:42:24.547771 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 00:42:24.554603 systemd[1]: Starting systemd-boot-update.service...
Feb 9 00:42:24.565871 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 00:42:24.601192 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 00:42:24.601717 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 00:42:24.601792 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 00:42:24.614905 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 00:42:24.618445 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 00:42:24.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:42:24.620171 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1110 (bootctl)
Feb 9 00:42:24.623770 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 00:42:24.760312 systemd-tmpfiles[1114]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 00:42:24.864754 systemd-tmpfiles[1114]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 00:42:25.681342 systemd-networkd[1090]: eth0: Gained IPv6LL
Feb 9 00:42:25.688346 systemd-tmpfiles[1114]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 00:42:25.841724 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 00:42:25.842548 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 00:42:25.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:42:25.883137 systemd-fsck[1118]: fsck.fat 4.2 (2021-01-31)
Feb 9 00:42:25.883137 systemd-fsck[1118]: /dev/vda1: 790 files, 115355/258078 clusters
Feb 9 00:42:25.886560 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 00:42:25.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:42:25.892813 systemd[1]: Mounting boot.mount...
Feb 9 00:42:25.939247 systemd[1]: Mounted boot.mount.
Feb 9 00:42:25.986381 systemd[1]: Finished systemd-boot-update.service.
Feb 9 00:42:25.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:42:26.252474 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 00:42:26.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:42:26.281366 systemd[1]: Starting audit-rules.service...
Feb 9 00:42:26.297685 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 00:42:26.310841 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 00:42:26.320000 audit: BPF prog-id=24 op=LOAD
Feb 9 00:42:26.324332 systemd[1]: Starting systemd-resolved.service...
Feb 9 00:42:26.330000 audit: BPF prog-id=25 op=LOAD
Feb 9 00:42:26.332411 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 00:42:26.336015 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 00:42:26.337876 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 00:42:26.339309 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 00:42:26.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:42:26.343000 audit[1134]: SYSTEM_BOOT pid=1134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 00:42:26.349752 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 00:42:26.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:42:26.408971 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 00:42:26.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:42:26.427252 augenrules[1142]: No rules
Feb 9 00:42:26.427000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 00:42:26.429446 kernel: kauditd_printk_skb: 176 callbacks suppressed
Feb 9 00:42:26.429520 kernel: audit: type=1305 audit(1707439346.427:152): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 00:42:26.427000 audit[1142]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd0b35020 a2=420 a3=0 items=0 ppid=1122 pid=1142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 00:42:26.435025 systemd[1]: Finished audit-rules.service.
Feb 9 00:42:26.440205 kernel: audit: type=1300 audit(1707439346.427:152): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd0b35020 a2=420 a3=0 items=0 ppid=1122 pid=1142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 00:42:26.440335 kernel: audit: type=1327 audit(1707439346.427:152): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 00:42:26.427000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 00:42:26.445724 systemd-resolved[1131]: Positive Trust Anchors:
Feb 9 00:42:26.446746 systemd-resolved[1131]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 00:42:26.446850 systemd-resolved[1131]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 00:42:26.574513 systemd[1]: Started systemd-timesyncd.service.
Feb 9 00:42:26.576918 systemd-resolved[1131]: Defaulting to hostname 'linux'.
Feb 9 00:42:26.576946 systemd[1]: Reached target time-set.target.
Feb 9 00:42:26.580900 systemd[1]: Started systemd-resolved.service.
Feb 9 00:42:26.582099 systemd[1]: Reached target network.target.
Feb 9 00:42:26.583065 systemd[1]: Reached target nss-lookup.target.
Feb 9 00:42:27.955046 systemd-resolved[1131]: Clock change detected. Flushing caches.
Feb 9 00:42:27.955319 systemd-timesyncd[1133]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 9 00:42:27.955701 systemd-timesyncd[1133]: Initial clock synchronization to Fri 2024-02-09 00:42:27.954983 UTC.
Feb 9 00:42:28.921185 ldconfig[1109]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 00:42:29.024967 systemd[1]: Finished ldconfig.service.
Feb 9 00:42:29.044033 systemd[1]: Starting systemd-update-done.service...
Feb 9 00:42:29.093536 systemd[1]: Finished systemd-update-done.service.
Feb 9 00:42:29.096593 systemd[1]: Reached target sysinit.target.
Feb 9 00:42:29.097897 systemd[1]: Started motdgen.path.
Feb 9 00:42:29.099167 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 00:42:29.101338 systemd[1]: Started logrotate.timer.
Feb 9 00:42:29.114320 systemd[1]: Started mdadm.timer.
Feb 9 00:42:29.122199 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 00:42:29.128299 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 00:42:29.131754 systemd[1]: Reached target paths.target. Feb 9 00:42:29.138281 systemd[1]: Reached target timers.target. Feb 9 00:42:29.139971 systemd[1]: Listening on dbus.socket. Feb 9 00:42:29.147075 systemd[1]: Starting docker.socket... Feb 9 00:42:29.155240 systemd[1]: Listening on sshd.socket. Feb 9 00:42:29.156532 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 00:42:29.157166 systemd[1]: Listening on docker.socket. Feb 9 00:42:29.158278 systemd[1]: Reached target sockets.target. Feb 9 00:42:29.159311 systemd[1]: Reached target basic.target. Feb 9 00:42:29.162262 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 00:42:29.165216 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 00:42:29.180171 systemd[1]: Starting containerd.service... Feb 9 00:42:29.184799 systemd[1]: Starting dbus.service... Feb 9 00:42:29.189405 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 00:42:29.201584 systemd[1]: Starting extend-filesystems.service... Feb 9 00:42:29.205886 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 00:42:29.218675 jq[1153]: false Feb 9 00:42:29.219321 systemd[1]: Starting motdgen.service... Feb 9 00:42:29.227691 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 00:42:29.237022 systemd[1]: Starting prepare-critools.service... 
Feb 9 00:42:29.238983 extend-filesystems[1154]: Found sr0 Feb 9 00:42:29.250931 extend-filesystems[1154]: Found vda Feb 9 00:42:29.250931 extend-filesystems[1154]: Found vda1 Feb 9 00:42:29.250931 extend-filesystems[1154]: Found vda2 Feb 9 00:42:29.250931 extend-filesystems[1154]: Found vda3 Feb 9 00:42:29.250931 extend-filesystems[1154]: Found usr Feb 9 00:42:29.250931 extend-filesystems[1154]: Found vda4 Feb 9 00:42:29.250931 extend-filesystems[1154]: Found vda6 Feb 9 00:42:29.250931 extend-filesystems[1154]: Found vda7 Feb 9 00:42:29.250931 extend-filesystems[1154]: Found vda9 Feb 9 00:42:29.250931 extend-filesystems[1154]: Checking size of /dev/vda9 Feb 9 00:42:29.241979 systemd[1]: Starting prepare-helm.service... Feb 9 00:42:29.306746 extend-filesystems[1154]: Resized partition /dev/vda9 Feb 9 00:42:29.268840 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 00:42:29.311958 systemd[1]: Starting sshd-keygen.service... Feb 9 00:42:29.313737 extend-filesystems[1171]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 00:42:29.344838 systemd[1]: Starting systemd-logind.service... Feb 9 00:42:29.345680 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 00:42:29.345782 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 00:42:29.346394 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 00:42:29.347545 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 00:42:29.353199 systemd[1]: Starting update-engine.service... Feb 9 00:42:29.355913 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 00:42:29.359482 jq[1180]: true Feb 9 00:42:29.363040 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Feb 9 00:42:29.363241 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 00:42:29.363667 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 00:42:29.363901 systemd[1]: Finished motdgen.service. Feb 9 00:42:29.384993 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 00:42:29.385220 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 00:42:29.416676 jq[1185]: true Feb 9 00:42:29.423778 dbus-daemon[1152]: [system] SELinux support is enabled Feb 9 00:42:29.427222 systemd[1]: Started dbus.service. Feb 9 00:42:29.432643 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 00:42:29.432683 systemd[1]: Reached target system-config.target. Feb 9 00:42:29.438434 tar[1182]: ./ Feb 9 00:42:29.438434 tar[1182]: ./macvlan Feb 9 00:42:29.438222 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 00:42:29.438247 systemd[1]: Reached target user-config.target. Feb 9 00:42:29.445090 tar[1183]: crictl Feb 9 00:42:29.450925 tar[1184]: linux-amd64/helm Feb 9 00:42:29.483440 env[1186]: time="2024-02-09T00:42:29.483370287Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 00:42:29.575759 tar[1182]: ./static Feb 9 00:42:29.577599 env[1186]: time="2024-02-09T00:42:29.577534294Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 00:42:29.577871 env[1186]: time="2024-02-09T00:42:29.577848203Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 00:42:29.596894 env[1186]: time="2024-02-09T00:42:29.585945856Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 00:42:29.596894 env[1186]: time="2024-02-09T00:42:29.586003214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 00:42:29.596894 env[1186]: time="2024-02-09T00:42:29.586304068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 00:42:29.596894 env[1186]: time="2024-02-09T00:42:29.586328343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 00:42:29.596894 env[1186]: time="2024-02-09T00:42:29.586347058Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 00:42:29.596894 env[1186]: time="2024-02-09T00:42:29.586359722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 00:42:29.596894 env[1186]: time="2024-02-09T00:42:29.586449581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 00:42:29.596894 env[1186]: time="2024-02-09T00:42:29.586792263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 00:42:29.596894 env[1186]: time="2024-02-09T00:42:29.586947905Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 00:42:29.596894 env[1186]: time="2024-02-09T00:42:29.586972050Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 00:42:29.597772 env[1186]: time="2024-02-09T00:42:29.587032333Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 00:42:29.597772 env[1186]: time="2024-02-09T00:42:29.587049946Z" level=info msg="metadata content store policy set" policy=shared Feb 9 00:42:29.600735 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 00:42:29.895691 update_engine[1179]: I0209 00:42:29.766324 1179 main.cc:92] Flatcar Update Engine starting Feb 9 00:42:29.895691 update_engine[1179]: I0209 00:42:29.798830 1179 update_check_scheduler.cc:74] Next update check in 9m56s Feb 9 00:42:29.796640 systemd[1]: Started update-engine.service. Feb 9 00:42:29.816930 systemd[1]: Started locksmithd.service. Feb 9 00:42:29.896455 systemd-logind[1177]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 00:42:29.896478 systemd-logind[1177]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 00:42:29.897617 systemd-logind[1177]: New seat seat0. Feb 9 00:42:29.899899 systemd[1]: Started systemd-logind.service. Feb 9 00:42:29.916771 extend-filesystems[1171]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 00:42:29.916771 extend-filesystems[1171]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 00:42:29.916771 extend-filesystems[1171]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Feb 9 00:42:29.945818 extend-filesystems[1154]: Resized filesystem in /dev/vda9 Feb 9 00:42:29.950844 sshd_keygen[1178]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 00:42:29.917753 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 00:42:29.917969 systemd[1]: Finished extend-filesystems.service. Feb 9 00:42:29.954266 tar[1182]: ./vlan Feb 9 00:42:29.983392 systemd[1]: Finished sshd-keygen.service. Feb 9 00:42:29.988781 systemd[1]: Starting issuegen.service... Feb 9 00:42:29.998069 tar[1182]: ./portmap Feb 9 00:42:30.015810 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 00:42:30.016044 systemd[1]: Finished issuegen.service. Feb 9 00:42:30.031935 systemd[1]: Starting systemd-user-sessions.service... Feb 9 00:42:30.051512 systemd[1]: Finished systemd-user-sessions.service. Feb 9 00:42:30.071776 env[1186]: time="2024-02-09T00:42:30.065785555Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 00:42:30.071776 env[1186]: time="2024-02-09T00:42:30.065860897Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 00:42:30.071776 env[1186]: time="2024-02-09T00:42:30.065877468Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 00:42:30.071776 env[1186]: time="2024-02-09T00:42:30.065945365Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 00:42:30.071776 env[1186]: time="2024-02-09T00:42:30.065978337Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 00:42:30.071776 env[1186]: time="2024-02-09T00:42:30.065994848Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 9 00:42:30.071776 env[1186]: time="2024-02-09T00:42:30.066010537Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 00:42:30.071776 env[1186]: time="2024-02-09T00:42:30.066057776Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 00:42:30.071776 env[1186]: time="2024-02-09T00:42:30.066078074Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 00:42:30.071776 env[1186]: time="2024-02-09T00:42:30.066101317Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 00:42:30.071776 env[1186]: time="2024-02-09T00:42:30.066137846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 00:42:30.071776 env[1186]: time="2024-02-09T00:42:30.066154437Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 00:42:30.071776 env[1186]: time="2024-02-09T00:42:30.066482602Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 00:42:30.071776 env[1186]: time="2024-02-09T00:42:30.066611925Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 00:42:30.071663 systemd[1]: Started getty@tty1.service. Feb 9 00:42:30.072297 env[1186]: time="2024-02-09T00:42:30.066998349Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 00:42:30.072297 env[1186]: time="2024-02-09T00:42:30.067048443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Feb 9 00:42:30.072297 env[1186]: time="2024-02-09T00:42:30.067066487Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 00:42:30.072297 env[1186]: time="2024-02-09T00:42:30.067158460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 00:42:30.072297 env[1186]: time="2024-02-09T00:42:30.067194347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 00:42:30.072297 env[1186]: time="2024-02-09T00:42:30.067211980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 00:42:30.072297 env[1186]: time="2024-02-09T00:42:30.067227168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 00:42:30.072297 env[1186]: time="2024-02-09T00:42:30.067258056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 00:42:30.072297 env[1186]: time="2024-02-09T00:42:30.067273996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 00:42:30.072297 env[1186]: time="2024-02-09T00:42:30.067288353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 00:42:30.072297 env[1186]: time="2024-02-09T00:42:30.067303391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 00:42:30.072297 env[1186]: time="2024-02-09T00:42:30.067335952Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 00:42:30.072297 env[1186]: time="2024-02-09T00:42:30.067487436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Feb 9 00:42:30.072297 env[1186]: time="2024-02-09T00:42:30.067504739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 00:42:30.072297 env[1186]: time="2024-02-09T00:42:30.067519306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 00:42:30.072682 env[1186]: time="2024-02-09T00:42:30.067533803Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 00:42:30.072682 env[1186]: time="2024-02-09T00:42:30.067568919Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 00:42:30.072682 env[1186]: time="2024-02-09T00:42:30.067592734Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 00:42:30.072682 env[1186]: time="2024-02-09T00:42:30.067618713Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 00:42:30.072682 env[1186]: time="2024-02-09T00:42:30.067670269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 00:42:30.073566 env[1186]: time="2024-02-09T00:42:30.068029142Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 00:42:30.073566 env[1186]: time="2024-02-09T00:42:30.068107559Z" level=info msg="Connect containerd service" Feb 9 00:42:30.073566 env[1186]: time="2024-02-09T00:42:30.068149919Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 00:42:30.073566 env[1186]: time="2024-02-09T00:42:30.068956711Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 00:42:30.073566 env[1186]: time="2024-02-09T00:42:30.069097405Z" level=info msg="Start subscribing containerd event" Feb 9 00:42:30.073566 env[1186]: time="2024-02-09T00:42:30.069156065Z" level=info msg="Start recovering state" Feb 9 00:42:30.073566 env[1186]: time="2024-02-09T00:42:30.069218522Z" level=info msg="Start event monitor" Feb 9 00:42:30.073566 env[1186]: time="2024-02-09T00:42:30.069258056Z" level=info msg="Start snapshots syncer" Feb 9 00:42:30.073566 env[1186]: time="2024-02-09T00:42:30.069285678Z" level=info msg="Start cni network conf syncer for default" Feb 9 00:42:30.073566 env[1186]: time="2024-02-09T00:42:30.069296228Z" level=info msg="Start streaming server" Feb 9 00:42:30.073566 env[1186]: time="2024-02-09T00:42:30.069504027Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 00:42:30.106221 env[1186]: time="2024-02-09T00:42:30.103862680Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 00:42:30.091384 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 00:42:30.106343 tar[1182]: ./host-local Feb 9 00:42:30.106384 bash[1209]: Updated "/home/core/.ssh/authorized_keys" Feb 9 00:42:30.092642 systemd[1]: Reached target getty.target. 
Feb 9 00:42:30.094974 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 00:42:30.115092 systemd[1]: Started containerd.service. Feb 9 00:42:30.122088 tar[1182]: ./vrf Feb 9 00:42:30.122129 env[1186]: time="2024-02-09T00:42:30.116766194Z" level=info msg="containerd successfully booted in 0.634429s" Feb 9 00:42:30.155165 tar[1182]: ./bridge Feb 9 00:42:30.197160 tar[1182]: ./tuning Feb 9 00:42:30.232035 tar[1182]: ./firewall Feb 9 00:42:30.277193 tar[1182]: ./host-device Feb 9 00:42:30.317212 tar[1182]: ./sbr Feb 9 00:42:30.353215 tar[1182]: ./loopback Feb 9 00:42:30.386550 tar[1182]: ./dhcp Feb 9 00:42:30.485318 tar[1182]: ./ptp Feb 9 00:42:30.486872 tar[1184]: linux-amd64/LICENSE Feb 9 00:42:30.487242 tar[1184]: linux-amd64/README.md Feb 9 00:42:30.487263 systemd[1]: Finished prepare-critools.service. Feb 9 00:42:30.496252 systemd[1]: Finished prepare-helm.service. Feb 9 00:42:30.551421 locksmithd[1215]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 00:42:30.559649 tar[1182]: ./ipvlan Feb 9 00:42:30.665634 tar[1182]: ./bandwidth Feb 9 00:42:30.887285 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 00:42:30.891929 systemd[1]: Reached target multi-user.target. Feb 9 00:42:30.902479 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 00:42:30.919918 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 00:42:30.920150 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 00:42:30.924188 systemd[1]: Startup finished in 1.925s (kernel) + 20.469s (initrd) + 19.403s (userspace) = 41.798s. Feb 9 00:42:36.099662 systemd[1]: Created slice system-sshd.slice. Feb 9 00:42:36.103956 systemd[1]: Started sshd@0-10.0.0.24:22-10.0.0.1:37420.service. 
Feb 9 00:42:36.223657 sshd[1240]: Accepted publickey for core from 10.0.0.1 port 37420 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:42:36.227296 sshd[1240]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:42:36.241821 systemd-logind[1177]: New session 1 of user core. Feb 9 00:42:36.243219 systemd[1]: Created slice user-500.slice. Feb 9 00:42:36.244820 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 00:42:36.255758 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 00:42:36.257549 systemd[1]: Starting user@500.service... Feb 9 00:42:36.260951 (systemd)[1243]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:42:36.365859 systemd[1243]: Queued start job for default target default.target. Feb 9 00:42:36.366579 systemd[1243]: Reached target paths.target. Feb 9 00:42:36.366612 systemd[1243]: Reached target sockets.target. Feb 9 00:42:36.366630 systemd[1243]: Reached target timers.target. Feb 9 00:42:36.366646 systemd[1243]: Reached target basic.target. Feb 9 00:42:36.366699 systemd[1243]: Reached target default.target. Feb 9 00:42:36.366744 systemd[1243]: Startup finished in 97ms. Feb 9 00:42:36.367062 systemd[1]: Started user@500.service. Feb 9 00:42:36.368263 systemd[1]: Started session-1.scope. Feb 9 00:42:36.427407 systemd[1]: Started sshd@1-10.0.0.24:22-10.0.0.1:37434.service. Feb 9 00:42:36.464640 sshd[1252]: Accepted publickey for core from 10.0.0.1 port 37434 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:42:36.466291 sshd[1252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:42:36.471697 systemd-logind[1177]: New session 2 of user core. Feb 9 00:42:36.472483 systemd[1]: Started session-2.scope. Feb 9 00:42:36.531758 sshd[1252]: pam_unix(sshd:session): session closed for user core Feb 9 00:42:36.535695 systemd[1]: sshd@1-10.0.0.24:22-10.0.0.1:37434.service: Deactivated successfully. 
Feb 9 00:42:36.536484 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 00:42:36.537156 systemd-logind[1177]: Session 2 logged out. Waiting for processes to exit. Feb 9 00:42:36.539004 systemd[1]: Started sshd@2-10.0.0.24:22-10.0.0.1:37446.service. Feb 9 00:42:36.539900 systemd-logind[1177]: Removed session 2. Feb 9 00:42:36.568296 sshd[1258]: Accepted publickey for core from 10.0.0.1 port 37446 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:42:36.569759 sshd[1258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:42:36.573404 systemd-logind[1177]: New session 3 of user core. Feb 9 00:42:36.574357 systemd[1]: Started session-3.scope. Feb 9 00:42:36.626104 sshd[1258]: pam_unix(sshd:session): session closed for user core Feb 9 00:42:36.630156 systemd[1]: sshd@2-10.0.0.24:22-10.0.0.1:37446.service: Deactivated successfully. Feb 9 00:42:36.630685 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 00:42:36.631274 systemd-logind[1177]: Session 3 logged out. Waiting for processes to exit. Feb 9 00:42:36.632450 systemd[1]: Started sshd@3-10.0.0.24:22-10.0.0.1:37462.service. Feb 9 00:42:36.633330 systemd-logind[1177]: Removed session 3. Feb 9 00:42:36.666958 sshd[1265]: Accepted publickey for core from 10.0.0.1 port 37462 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:42:36.669115 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:42:36.673081 systemd-logind[1177]: New session 4 of user core. Feb 9 00:42:36.674089 systemd[1]: Started session-4.scope. Feb 9 00:42:36.729276 sshd[1265]: pam_unix(sshd:session): session closed for user core Feb 9 00:42:36.732050 systemd[1]: sshd@3-10.0.0.24:22-10.0.0.1:37462.service: Deactivated successfully. Feb 9 00:42:36.732617 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 00:42:36.733184 systemd-logind[1177]: Session 4 logged out. Waiting for processes to exit. 
Feb 9 00:42:36.734072 systemd[1]: Started sshd@4-10.0.0.24:22-10.0.0.1:37468.service.
Feb 9 00:42:36.734939 systemd-logind[1177]: Removed session 4.
Feb 9 00:42:36.763769 sshd[1271]: Accepted publickey for core from 10.0.0.1 port 37468 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:42:36.765016 sshd[1271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:42:36.769032 systemd-logind[1177]: New session 5 of user core.
Feb 9 00:42:36.770165 systemd[1]: Started session-5.scope.
Feb 9 00:42:36.828156 sudo[1274]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 00:42:36.828387 sudo[1274]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 00:42:37.409098 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 00:42:37.414289 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 00:42:37.414591 systemd[1]: Reached target network-online.target.
Feb 9 00:42:37.416097 systemd[1]: Starting docker.service...
Feb 9 00:42:37.454410 env[1292]: time="2024-02-09T00:42:37.454323863Z" level=info msg="Starting up"
Feb 9 00:42:37.456437 env[1292]: time="2024-02-09T00:42:37.456395196Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 00:42:37.456437 env[1292]: time="2024-02-09T00:42:37.456430763Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 00:42:37.456521 env[1292]: time="2024-02-09T00:42:37.456464326Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 00:42:37.456521 env[1292]: time="2024-02-09T00:42:37.456477821Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 00:42:37.458726 env[1292]: time="2024-02-09T00:42:37.458678578Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 00:42:37.458726 env[1292]: time="2024-02-09T00:42:37.458705238Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 00:42:37.458821 env[1292]: time="2024-02-09T00:42:37.458731918Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 00:42:37.458821 env[1292]: time="2024-02-09T00:42:37.458742818Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 00:42:37.821165 env[1292]: time="2024-02-09T00:42:37.821027672Z" level=info msg="Loading containers: start."
Feb 9 00:42:37.952752 kernel: Initializing XFRM netlink socket
Feb 9 00:42:37.987427 env[1292]: time="2024-02-09T00:42:37.987361638Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 9 00:42:38.053813 systemd-networkd[1090]: docker0: Link UP
Feb 9 00:42:38.066170 env[1292]: time="2024-02-09T00:42:38.066115151Z" level=info msg="Loading containers: done."
Feb 9 00:42:38.078139 env[1292]: time="2024-02-09T00:42:38.078032376Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 9 00:42:38.078315 env[1292]: time="2024-02-09T00:42:38.078265493Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 9 00:42:38.078393 env[1292]: time="2024-02-09T00:42:38.078363717Z" level=info msg="Daemon has completed initialization"
Feb 9 00:42:38.097599 systemd[1]: Started docker.service.
Feb 9 00:42:38.102924 env[1292]: time="2024-02-09T00:42:38.102848106Z" level=info msg="API listen on /run/docker.sock"
Feb 9 00:42:38.122924 systemd[1]: Reloading.
Feb 9 00:42:38.188145 /usr/lib/systemd/system-generators/torcx-generator[1434]: time="2024-02-09T00:42:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 00:42:38.188173 /usr/lib/systemd/system-generators/torcx-generator[1434]: time="2024-02-09T00:42:38Z" level=info msg="torcx already run"
Feb 9 00:42:38.256795 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 00:42:38.256818 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 00:42:38.274220 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 00:42:38.348949 systemd[1]: Started kubelet.service.
Feb 9 00:42:38.426582 kubelet[1475]: E0209 00:42:38.426485 1475 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 00:42:38.429581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 00:42:38.429708 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 00:42:38.769237 env[1186]: time="2024-02-09T00:42:38.769085280Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\""
Feb 9 00:42:39.459786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2388027673.mount: Deactivated successfully.
Feb 9 00:42:44.583399 env[1186]: time="2024-02-09T00:42:44.583318426Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:44.646593 env[1186]: time="2024-02-09T00:42:44.646516252Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:44.677342 env[1186]: time="2024-02-09T00:42:44.677307655Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:44.720089 env[1186]: time="2024-02-09T00:42:44.720045238Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:44.720794 env[1186]: time="2024-02-09T00:42:44.720763776Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\""
Feb 9 00:42:44.733400 env[1186]: time="2024-02-09T00:42:44.733354864Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\""
Feb 9 00:42:48.063553 env[1186]: time="2024-02-09T00:42:48.063478044Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:48.065494 env[1186]: time="2024-02-09T00:42:48.065439722Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:48.067003 env[1186]: time="2024-02-09T00:42:48.066970482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:48.068579 env[1186]: time="2024-02-09T00:42:48.068545526Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:48.069319 env[1186]: time="2024-02-09T00:42:48.069278760Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\""
Feb 9 00:42:48.081561 env[1186]: time="2024-02-09T00:42:48.081512719Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\""
Feb 9 00:42:48.529595 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 9 00:42:48.529792 systemd[1]: Stopped kubelet.service.
Feb 9 00:42:48.531301 systemd[1]: Started kubelet.service.
Feb 9 00:42:48.619249 kubelet[1512]: E0209 00:42:48.619157 1512 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 00:42:48.623370 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 00:42:48.623485 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 00:42:50.229259 env[1186]: time="2024-02-09T00:42:50.229175566Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:50.230929 env[1186]: time="2024-02-09T00:42:50.230893417Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:50.233044 env[1186]: time="2024-02-09T00:42:50.232862439Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:50.236212 env[1186]: time="2024-02-09T00:42:50.236182514Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:50.236950 env[1186]: time="2024-02-09T00:42:50.236908455Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\""
Feb 9 00:42:50.248548 env[1186]: time="2024-02-09T00:42:50.248501563Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb 9 00:42:51.699850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2545913195.mount: Deactivated successfully.
Feb 9 00:42:52.809116 env[1186]: time="2024-02-09T00:42:52.809048397Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:52.811515 env[1186]: time="2024-02-09T00:42:52.811480157Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:52.813193 env[1186]: time="2024-02-09T00:42:52.813151060Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:52.814684 env[1186]: time="2024-02-09T00:42:52.814627799Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:52.815055 env[1186]: time="2024-02-09T00:42:52.815023260Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\""
Feb 9 00:42:52.825219 env[1186]: time="2024-02-09T00:42:52.825178461Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 9 00:42:53.447614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2658872493.mount: Deactivated successfully.
Feb 9 00:42:53.452778 env[1186]: time="2024-02-09T00:42:53.452728577Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:53.454331 env[1186]: time="2024-02-09T00:42:53.454305905Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:53.455947 env[1186]: time="2024-02-09T00:42:53.455906736Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:53.457199 env[1186]: time="2024-02-09T00:42:53.457141321Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:42:53.457746 env[1186]: time="2024-02-09T00:42:53.457709717Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 9 00:42:53.466149 env[1186]: time="2024-02-09T00:42:53.466107543Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\""
Feb 9 00:42:54.421148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2226506168.mount: Deactivated successfully.
Feb 9 00:42:58.779560 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 9 00:42:58.779760 systemd[1]: Stopped kubelet.service.
Feb 9 00:42:58.781914 systemd[1]: Started kubelet.service.
Feb 9 00:42:58.896260 kubelet[1537]: E0209 00:42:58.896131 1537 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 00:42:58.900858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 00:42:58.900981 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 00:43:00.263155 env[1186]: time="2024-02-09T00:43:00.263085875Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:43:00.268358 env[1186]: time="2024-02-09T00:43:00.268317375Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:43:00.272080 env[1186]: time="2024-02-09T00:43:00.272021149Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:43:00.275989 env[1186]: time="2024-02-09T00:43:00.275932202Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:43:00.276567 env[1186]: time="2024-02-09T00:43:00.276531676Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\""
Feb 9 00:43:00.304590 env[1186]: time="2024-02-09T00:43:00.304542988Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\""
Feb 9 00:43:01.168384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount11988767.mount: Deactivated successfully.
Feb 9 00:43:02.309360 env[1186]: time="2024-02-09T00:43:02.309289685Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:43:02.347857 env[1186]: time="2024-02-09T00:43:02.347813787Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:43:02.371276 env[1186]: time="2024-02-09T00:43:02.371240434Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:43:02.390006 env[1186]: time="2024-02-09T00:43:02.389954258Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:43:02.390297 env[1186]: time="2024-02-09T00:43:02.390263576Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\""
Feb 9 00:43:04.381574 systemd[1]: Stopped kubelet.service.
Feb 9 00:43:04.393205 systemd[1]: Reloading.
Feb 9 00:43:04.479908 /usr/lib/systemd/system-generators/torcx-generator[1645]: time="2024-02-09T00:43:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 00:43:04.480296 /usr/lib/systemd/system-generators/torcx-generator[1645]: time="2024-02-09T00:43:04Z" level=info msg="torcx already run"
Feb 9 00:43:04.691933 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 00:43:04.691952 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 00:43:04.709478 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 00:43:04.783123 systemd[1]: Started kubelet.service.
Feb 9 00:43:04.827902 kubelet[1687]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 00:43:04.827902 kubelet[1687]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 00:43:04.828256 kubelet[1687]: I0209 00:43:04.827952 1687 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 00:43:04.830096 kubelet[1687]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 00:43:04.830096 kubelet[1687]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 00:43:04.999176 kubelet[1687]: I0209 00:43:04.999108 1687 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 9 00:43:04.999176 kubelet[1687]: I0209 00:43:04.999128 1687 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 00:43:04.999349 kubelet[1687]: I0209 00:43:04.999335 1687 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 9 00:43:05.001821 kubelet[1687]: I0209 00:43:05.001796 1687 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 00:43:05.002928 kubelet[1687]: E0209 00:43:05.002901 1687 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.24:6443: connect: connection refused
Feb 9 00:43:05.006149 kubelet[1687]: I0209 00:43:05.006132 1687 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 00:43:05.006334 kubelet[1687]: I0209 00:43:05.006321 1687 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 00:43:05.006423 kubelet[1687]: I0209 00:43:05.006400 1687 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 00:43:05.006499 kubelet[1687]: I0209 00:43:05.006444 1687 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 00:43:05.006499 kubelet[1687]: I0209 00:43:05.006454 1687 container_manager_linux.go:308] "Creating device plugin manager"
Feb 9 00:43:05.006561 kubelet[1687]: I0209 00:43:05.006552 1687 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 00:43:05.012450 kubelet[1687]: I0209 00:43:05.012424 1687 kubelet.go:398] "Attempting to sync node with API server"
Feb 9 00:43:05.012450 kubelet[1687]: I0209 00:43:05.012445 1687 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 00:43:05.012575 kubelet[1687]: I0209 00:43:05.012531 1687 kubelet.go:297] "Adding apiserver pod source"
Feb 9 00:43:05.012575 kubelet[1687]: I0209 00:43:05.012553 1687 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 00:43:05.013098 kubelet[1687]: I0209 00:43:05.013087 1687 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 00:43:05.013362 kubelet[1687]: W0209 00:43:05.013350 1687 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 00:43:05.013787 kubelet[1687]: I0209 00:43:05.013771 1687 server.go:1186] "Started kubelet"
Feb 9 00:43:05.013934 kubelet[1687]: I0209 00:43:05.013916 1687 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 00:43:05.014745 kubelet[1687]: I0209 00:43:05.014676 1687 server.go:451] "Adding debug handlers to kubelet server"
Feb 9 00:43:05.047318 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 9 00:43:05.047407 kubelet[1687]: E0209 00:43:05.047390 1687 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 00:43:05.047455 kubelet[1687]: I0209 00:43:05.047414 1687 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 00:43:05.047455 kubelet[1687]: E0209 00:43:05.047411 1687 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 00:43:05.047610 kubelet[1687]: W0209 00:43:05.047561 1687 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Feb 9 00:43:05.047744 kubelet[1687]: E0209 00:43:05.047708 1687 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Feb 9 00:43:05.047865 kubelet[1687]: W0209 00:43:05.047839 1687 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Feb 9 00:43:05.047963 kubelet[1687]: E0209 00:43:05.047949 1687 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Feb 9 00:43:05.048359 kubelet[1687]: E0209 00:43:05.048348 1687 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 9 00:43:05.048466 kubelet[1687]: I0209 00:43:05.048453 1687 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 9 00:43:05.048562 kubelet[1687]: E0209 00:43:05.048086 1687 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b20b1303417eba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 0, 43, 5, 13747386, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 0, 43, 5, 13747386, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.24:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.24:6443: connect: connection refused'(may retry after sleeping)
Feb 9 00:43:05.048742 kubelet[1687]: I0209 00:43:05.048711 1687 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 00:43:05.049169 kubelet[1687]: W0209 00:43:05.049147 1687 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Feb 9 00:43:05.049273 kubelet[1687]: E0209 00:43:05.049260 1687 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Feb 9 00:43:05.049636 kubelet[1687]: E0209 00:43:05.049609 1687 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.24:6443: connect: connection refused
Feb 9 00:43:05.065520 kubelet[1687]: I0209 00:43:05.065500 1687 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 00:43:05.065520 kubelet[1687]: I0209 00:43:05.065513 1687 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 00:43:05.065520 kubelet[1687]: I0209 00:43:05.065527 1687 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 00:43:05.149663 kubelet[1687]: I0209 00:43:05.149646 1687 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 9 00:43:05.149994 kubelet[1687]: E0209 00:43:05.149976 1687 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost"
Feb 9 00:43:05.243297 kubelet[1687]: I0209 00:43:05.243271 1687 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 00:43:05.243597 kubelet[1687]: I0209 00:43:05.243570 1687 policy_none.go:49] "None policy: Start"
Feb 9 00:43:05.244188 kubelet[1687]: I0209 00:43:05.244164 1687 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 00:43:05.244188 kubelet[1687]: I0209 00:43:05.244187 1687 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 00:43:05.250233 kubelet[1687]: E0209 00:43:05.250163 1687 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.24:6443: connect: connection refused
Feb 9 00:43:05.315285 kubelet[1687]: I0209 00:43:05.315262 1687 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 00:43:05.315385 kubelet[1687]: I0209 00:43:05.315297 1687 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 9 00:43:05.315385 kubelet[1687]: I0209 00:43:05.315317 1687 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 9 00:43:05.315385 kubelet[1687]: E0209 00:43:05.315364 1687 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 9 00:43:05.315727 kubelet[1687]: W0209 00:43:05.315678 1687 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Feb 9 00:43:05.315789 kubelet[1687]: E0209 00:43:05.315744 1687 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Feb 9 00:43:05.347897 systemd[1]: Created slice kubepods.slice.
Feb 9 00:43:05.351349 kubelet[1687]: I0209 00:43:05.351322 1687 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 9 00:43:05.351656 kubelet[1687]: E0209 00:43:05.351630 1687 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost"
Feb 9 00:43:05.352257 systemd[1]: Created slice kubepods-burstable.slice.
Feb 9 00:43:05.354584 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 9 00:43:05.362325 kubelet[1687]: I0209 00:43:05.362304 1687 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 00:43:05.362541 kubelet[1687]: I0209 00:43:05.362522 1687 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 00:43:05.363059 kubelet[1687]: E0209 00:43:05.363036 1687 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 9 00:43:05.416451 kubelet[1687]: I0209 00:43:05.416412 1687 topology_manager.go:210] "Topology Admit Handler"
Feb 9 00:43:05.417774 kubelet[1687]: I0209 00:43:05.417740 1687 topology_manager.go:210] "Topology Admit Handler"
Feb 9 00:43:05.418716 kubelet[1687]: I0209 00:43:05.418695 1687 topology_manager.go:210] "Topology Admit Handler"
Feb 9 00:43:05.420656 kubelet[1687]: I0209 00:43:05.420623 1687 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.24:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.24:6443: connect: connection refused"
Feb 9 00:43:05.421039 kubelet[1687]: I0209 00:43:05.421021 1687 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.24:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.24:6443: connect: connection refused"
Feb 9 00:43:05.421543 kubelet[1687]: I0209 00:43:05.421527 1687 status_manager.go:698] "Failed to get status for pod" podUID=724db026101850f25c11e959374fe755 pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.24:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.24:6443: connect: connection refused"
Feb 9 00:43:05.424439 systemd[1]: Created slice kubepods-burstable-pod550020dd9f101bcc23e1d3c651841c4d.slice.
Feb 9 00:43:05.432331 systemd[1]: Created slice kubepods-burstable-pod72ae17a74a2eae76daac6d298477aff0.slice.
Feb 9 00:43:05.443260 systemd[1]: Created slice kubepods-burstable-pod724db026101850f25c11e959374fe755.slice.
Feb 9 00:43:05.550570 kubelet[1687]: I0209 00:43:05.550506 1687 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 00:43:05.550570 kubelet[1687]: I0209 00:43:05.550565 1687 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 00:43:05.550570 kubelet[1687]: I0209 00:43:05.550588 1687 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 00:43:05.550876 kubelet[1687]: I0209 00:43:05.550620 1687 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 00:43:05.550876 kubelet[1687]: I0209 00:43:05.550644 1687 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 00:43:05.550876 kubelet[1687]: I0209 00:43:05.550670 1687 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost"
Feb 9 00:43:05.550876 kubelet[1687]: I0209 00:43:05.550693 1687 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/724db026101850f25c11e959374fe755-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"724db026101850f25c11e959374fe755\") " pod="kube-system/kube-apiserver-localhost"
Feb 9 00:43:05.550876 kubelet[1687]: I0209 00:43:05.550714 1687 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/724db026101850f25c11e959374fe755-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"724db026101850f25c11e959374fe755\") " pod="kube-system/kube-apiserver-localhost"
Feb 9 00:43:05.551033 kubelet[1687]: I0209 00:43:05.550783 1687 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/724db026101850f25c11e959374fe755-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"724db026101850f25c11e959374fe755\") " pod="kube-system/kube-apiserver-localhost"
Feb 9 00:43:05.651372 kubelet[1687]: E0209 00:43:05.651323 1687 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.24:6443: connect: connection refused
Feb 9 00:43:05.731767 kubelet[1687]: E0209 00:43:05.731671 1687 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:05.732409 env[1186]: time="2024-02-09T00:43:05.732359846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}"
Feb 9 00:43:05.741589 kubelet[1687]: E0209 00:43:05.741550 1687 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:05.742116 env[1186]: time="2024-02-09T00:43:05.742073564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}"
Feb 9 00:43:05.746340 kubelet[1687]: E0209 00:43:05.746320 1687 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:05.746779 env[1186]: time="2024-02-09T00:43:05.746710692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:724db026101850f25c11e959374fe755,Namespace:kube-system,Attempt:0,}"
Feb 9 00:43:05.753354 kubelet[1687]: I0209 00:43:05.753331 1687 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 9 00:43:05.753777 kubelet[1687]: E0209 00:43:05.753745 1687 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost"
Feb 9
00:43:05.982390 kubelet[1687]: W0209 00:43:05.982173 1687 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 00:43:05.982390 kubelet[1687]: E0209 00:43:05.982307 1687 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 00:43:06.190553 kubelet[1687]: W0209 00:43:06.190467 1687 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 00:43:06.190553 kubelet[1687]: E0209 00:43:06.190535 1687 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 00:43:06.230157 kubelet[1687]: W0209 00:43:06.230088 1687 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 00:43:06.230157 kubelet[1687]: E0209 00:43:06.230150 1687 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 00:43:06.324915 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount95772624.mount: Deactivated successfully. Feb 9 00:43:06.329706 env[1186]: time="2024-02-09T00:43:06.329655719Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:06.332062 env[1186]: time="2024-02-09T00:43:06.332010728Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:06.332891 env[1186]: time="2024-02-09T00:43:06.332850950Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:06.333644 env[1186]: time="2024-02-09T00:43:06.333615457Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:06.335915 env[1186]: time="2024-02-09T00:43:06.335888452Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:06.337019 env[1186]: time="2024-02-09T00:43:06.336992804Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:06.338445 env[1186]: time="2024-02-09T00:43:06.338410078Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:06.342107 env[1186]: time="2024-02-09T00:43:06.342053448Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:06.344752 env[1186]: time="2024-02-09T00:43:06.344689019Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:06.346870 env[1186]: time="2024-02-09T00:43:06.346829954Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:06.348365 env[1186]: time="2024-02-09T00:43:06.348304066Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:06.359207 env[1186]: time="2024-02-09T00:43:06.359165776Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:06.363379 env[1186]: time="2024-02-09T00:43:06.362475564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:43:06.363379 env[1186]: time="2024-02-09T00:43:06.363335113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:43:06.363460 env[1186]: time="2024-02-09T00:43:06.363373294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:43:06.363769 env[1186]: time="2024-02-09T00:43:06.363674615Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/34942f09f6b6379dff8dcc968596d3f72076c7bf2797793e4a5c3a9a04e1d5b8 pid=1765 runtime=io.containerd.runc.v2 Feb 9 00:43:06.402276 env[1186]: time="2024-02-09T00:43:06.402179718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:43:06.402560 env[1186]: time="2024-02-09T00:43:06.402229422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:43:06.402741 env[1186]: time="2024-02-09T00:43:06.402673382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:43:06.403110 env[1186]: time="2024-02-09T00:43:06.403052481Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e030c7db520465abcb0f494f95f9ffc81c23967e5d9ed4dcf62963a43fd8a43 pid=1782 runtime=io.containerd.runc.v2 Feb 9 00:43:06.405478 env[1186]: time="2024-02-09T00:43:06.405390980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:43:06.405478 env[1186]: time="2024-02-09T00:43:06.405435254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:43:06.405668 env[1186]: time="2024-02-09T00:43:06.405612199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:43:06.406002 env[1186]: time="2024-02-09T00:43:06.405942394Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72a55dd7b7f6a140042ea663c6dd8ed6e3568702c82ed5fa491cbdf99236905a pid=1796 runtime=io.containerd.runc.v2 Feb 9 00:43:06.444204 systemd[1]: Started cri-containerd-34942f09f6b6379dff8dcc968596d3f72076c7bf2797793e4a5c3a9a04e1d5b8.scope. Feb 9 00:43:06.448334 systemd[1]: Started cri-containerd-72a55dd7b7f6a140042ea663c6dd8ed6e3568702c82ed5fa491cbdf99236905a.scope. Feb 9 00:43:06.452045 kubelet[1687]: E0209 00:43:06.451899 1687 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 00:43:06.481629 systemd[1]: Started cri-containerd-0e030c7db520465abcb0f494f95f9ffc81c23967e5d9ed4dcf62963a43fd8a43.scope. 
Feb 9 00:43:06.555433 kubelet[1687]: I0209 00:43:06.555396 1687 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 00:43:06.555778 kubelet[1687]: E0209 00:43:06.555762 1687 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Feb 9 00:43:06.556882 env[1186]: time="2024-02-09T00:43:06.556843257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"72a55dd7b7f6a140042ea663c6dd8ed6e3568702c82ed5fa491cbdf99236905a\"" Feb 9 00:43:06.558676 kubelet[1687]: E0209 00:43:06.558657 1687 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:06.561146 env[1186]: time="2024-02-09T00:43:06.561109316Z" level=info msg="CreateContainer within sandbox \"72a55dd7b7f6a140042ea663c6dd8ed6e3568702c82ed5fa491cbdf99236905a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 00:43:06.563762 env[1186]: time="2024-02-09T00:43:06.562974930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:724db026101850f25c11e959374fe755,Namespace:kube-system,Attempt:0,} returns sandbox id \"34942f09f6b6379dff8dcc968596d3f72076c7bf2797793e4a5c3a9a04e1d5b8\"" Feb 9 00:43:06.563968 kubelet[1687]: E0209 00:43:06.563541 1687 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:06.566296 env[1186]: time="2024-02-09T00:43:06.566255213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox 
id \"0e030c7db520465abcb0f494f95f9ffc81c23967e5d9ed4dcf62963a43fd8a43\"" Feb 9 00:43:06.566763 kubelet[1687]: E0209 00:43:06.566742 1687 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:06.566841 env[1186]: time="2024-02-09T00:43:06.566788783Z" level=info msg="CreateContainer within sandbox \"34942f09f6b6379dff8dcc968596d3f72076c7bf2797793e4a5c3a9a04e1d5b8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 00:43:06.568983 env[1186]: time="2024-02-09T00:43:06.568937001Z" level=info msg="CreateContainer within sandbox \"0e030c7db520465abcb0f494f95f9ffc81c23967e5d9ed4dcf62963a43fd8a43\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 00:43:06.676023 kubelet[1687]: W0209 00:43:06.675872 1687 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 00:43:06.676023 kubelet[1687]: E0209 00:43:06.675955 1687 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 00:43:07.142293 kubelet[1687]: E0209 00:43:07.142246 1687 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.24:6443: connect: connection refused Feb 9 00:43:07.632239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2879786066.mount: Deactivated successfully. 
Feb 9 00:43:07.657257 env[1186]: time="2024-02-09T00:43:07.657194453Z" level=info msg="CreateContainer within sandbox \"34942f09f6b6379dff8dcc968596d3f72076c7bf2797793e4a5c3a9a04e1d5b8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5aee001a8680b7c2221f7a58671e584b7f590c6be21d5f454b30499a75ad51d3\"" Feb 9 00:43:07.658087 env[1186]: time="2024-02-09T00:43:07.658052357Z" level=info msg="StartContainer for \"5aee001a8680b7c2221f7a58671e584b7f590c6be21d5f454b30499a75ad51d3\"" Feb 9 00:43:07.663701 env[1186]: time="2024-02-09T00:43:07.663653317Z" level=info msg="CreateContainer within sandbox \"72a55dd7b7f6a140042ea663c6dd8ed6e3568702c82ed5fa491cbdf99236905a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0b3278ca2997916872b83899ddc70a1250a6c34124df2464914fabea5f817d8c\"" Feb 9 00:43:07.664379 env[1186]: time="2024-02-09T00:43:07.664339857Z" level=info msg="StartContainer for \"0b3278ca2997916872b83899ddc70a1250a6c34124df2464914fabea5f817d8c\"" Feb 9 00:43:07.665418 env[1186]: time="2024-02-09T00:43:07.665367001Z" level=info msg="CreateContainer within sandbox \"0e030c7db520465abcb0f494f95f9ffc81c23967e5d9ed4dcf62963a43fd8a43\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c805bacaa2294ec9a3a705af1f6d14b978627490ff5e5fe0896ee07299fbefed\"" Feb 9 00:43:07.666016 env[1186]: time="2024-02-09T00:43:07.665977977Z" level=info msg="StartContainer for \"c805bacaa2294ec9a3a705af1f6d14b978627490ff5e5fe0896ee07299fbefed\"" Feb 9 00:43:07.675752 systemd[1]: Started cri-containerd-5aee001a8680b7c2221f7a58671e584b7f590c6be21d5f454b30499a75ad51d3.scope. Feb 9 00:43:07.684908 systemd[1]: Started cri-containerd-0b3278ca2997916872b83899ddc70a1250a6c34124df2464914fabea5f817d8c.scope. Feb 9 00:43:07.694095 systemd[1]: Started cri-containerd-c805bacaa2294ec9a3a705af1f6d14b978627490ff5e5fe0896ee07299fbefed.scope. 
Feb 9 00:43:07.727705 env[1186]: time="2024-02-09T00:43:07.727555524Z" level=info msg="StartContainer for \"5aee001a8680b7c2221f7a58671e584b7f590c6be21d5f454b30499a75ad51d3\" returns successfully" Feb 9 00:43:07.761936 env[1186]: time="2024-02-09T00:43:07.761826226Z" level=info msg="StartContainer for \"c805bacaa2294ec9a3a705af1f6d14b978627490ff5e5fe0896ee07299fbefed\" returns successfully" Feb 9 00:43:07.762529 env[1186]: time="2024-02-09T00:43:07.762462070Z" level=info msg="StartContainer for \"0b3278ca2997916872b83899ddc70a1250a6c34124df2464914fabea5f817d8c\" returns successfully" Feb 9 00:43:08.157409 kubelet[1687]: I0209 00:43:08.157383 1687 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 00:43:08.328971 kubelet[1687]: E0209 00:43:08.328947 1687 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:08.330901 kubelet[1687]: E0209 00:43:08.330889 1687 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:08.332452 kubelet[1687]: E0209 00:43:08.332439 1687 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:09.333965 kubelet[1687]: E0209 00:43:09.333924 1687 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:09.333965 kubelet[1687]: E0209 00:43:09.333941 1687 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:09.334442 kubelet[1687]: E0209 00:43:09.334426 1687 dns.go:156] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:09.431383 kubelet[1687]: E0209 00:43:09.431338 1687 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 9 00:43:09.519667 kubelet[1687]: I0209 00:43:09.519612 1687 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 00:43:10.017482 kubelet[1687]: I0209 00:43:10.017410 1687 apiserver.go:52] "Watching apiserver" Feb 9 00:43:10.049826 kubelet[1687]: I0209 00:43:10.049699 1687 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 00:43:10.085530 kubelet[1687]: I0209 00:43:10.085466 1687 reconciler.go:41] "Reconciler: start to sync state" Feb 9 00:43:10.338814 kubelet[1687]: E0209 00:43:10.338768 1687 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 9 00:43:10.339233 kubelet[1687]: E0209 00:43:10.339184 1687 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:11.995558 systemd[1]: Reloading. 
Feb 9 00:43:12.063274 /usr/lib/systemd/system-generators/torcx-generator[2017]: time="2024-02-09T00:43:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 00:43:12.063313 /usr/lib/systemd/system-generators/torcx-generator[2017]: time="2024-02-09T00:43:12Z" level=info msg="torcx already run" Feb 9 00:43:12.131745 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 00:43:12.131764 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 00:43:12.148229 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 00:43:12.239692 kubelet[1687]: I0209 00:43:12.239429 1687 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 00:43:12.239480 systemd[1]: Stopping kubelet.service... Feb 9 00:43:12.250737 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 00:43:12.251046 systemd[1]: Stopped kubelet.service. Feb 9 00:43:12.252797 systemd[1]: Started kubelet.service. Feb 9 00:43:12.307867 kubelet[2058]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 00:43:12.307867 kubelet[2058]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 00:43:12.308399 kubelet[2058]: I0209 00:43:12.307890 2058 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 00:43:12.309034 kubelet[2058]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 00:43:12.309034 kubelet[2058]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 00:43:12.312260 kubelet[2058]: I0209 00:43:12.312232 2058 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 00:43:12.312260 kubelet[2058]: I0209 00:43:12.312256 2058 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 00:43:12.312567 kubelet[2058]: I0209 00:43:12.312544 2058 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 00:43:12.314656 kubelet[2058]: I0209 00:43:12.314635 2058 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 00:43:12.315975 kubelet[2058]: I0209 00:43:12.315951 2058 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 00:43:12.320275 kubelet[2058]: I0209 00:43:12.320246 2058 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 00:43:12.320573 kubelet[2058]: I0209 00:43:12.320514 2058 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 00:43:12.320622 kubelet[2058]: I0209 00:43:12.320605 2058 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 00:43:12.320712 kubelet[2058]: I0209 00:43:12.320631 2058 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 00:43:12.320712 kubelet[2058]: I0209 00:43:12.320645 2058 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 00:43:12.320712 kubelet[2058]: I0209 00:43:12.320689 2058 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 00:43:12.324053 kubelet[2058]: I0209 00:43:12.324025 2058 kubelet.go:398] "Attempting to sync node with API server" Feb 9 00:43:12.324053 kubelet[2058]: I0209 00:43:12.324054 2058 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 00:43:12.324132 kubelet[2058]: I0209 00:43:12.324081 2058 kubelet.go:297] "Adding apiserver pod source" Feb 9 00:43:12.324132 kubelet[2058]: I0209 00:43:12.324103 2058 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 00:43:12.325172 kubelet[2058]: I0209 00:43:12.325145 2058 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 00:43:12.325811 kubelet[2058]: I0209 00:43:12.325784 2058 server.go:1186] "Started kubelet" Feb 9 00:43:12.327657 kubelet[2058]: I0209 00:43:12.327644 2058 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 00:43:12.328249 kubelet[2058]: E0209 00:43:12.328210 2058 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 00:43:12.328399 kubelet[2058]: E0209 00:43:12.328373 2058 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 00:43:12.328992 kubelet[2058]: I0209 00:43:12.328978 2058 server.go:451] "Adding debug handlers to kubelet server" Feb 9 00:43:12.331395 kubelet[2058]: I0209 00:43:12.331365 2058 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 00:43:12.334935 kubelet[2058]: I0209 00:43:12.334917 2058 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 00:43:12.335612 kubelet[2058]: I0209 00:43:12.335592 2058 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 00:43:12.359029 kubelet[2058]: I0209 00:43:12.359003 2058 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 00:43:12.374799 kubelet[2058]: I0209 00:43:12.374767 2058 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 00:43:12.374799 kubelet[2058]: I0209 00:43:12.374793 2058 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 00:43:12.374964 kubelet[2058]: I0209 00:43:12.374813 2058 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 00:43:12.374964 kubelet[2058]: E0209 00:43:12.374859 2058 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 00:43:12.388132 kubelet[2058]: I0209 00:43:12.388099 2058 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 00:43:12.388132 kubelet[2058]: I0209 00:43:12.388116 2058 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 00:43:12.388132 kubelet[2058]: I0209 00:43:12.388131 2058 state_mem.go:36] "Initialized new in-memory state store" Feb 9 00:43:12.388320 kubelet[2058]: I0209 00:43:12.388260 2058 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 00:43:12.388320 kubelet[2058]: I0209 00:43:12.388272 2058 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 00:43:12.388320 kubelet[2058]: 
I0209 00:43:12.388277 2058 policy_none.go:49] "None policy: Start" Feb 9 00:43:12.388708 kubelet[2058]: I0209 00:43:12.388683 2058 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 00:43:12.388708 kubelet[2058]: I0209 00:43:12.388713 2058 state_mem.go:35] "Initializing new in-memory state store" Feb 9 00:43:12.388862 kubelet[2058]: I0209 00:43:12.388836 2058 state_mem.go:75] "Updated machine memory state" Feb 9 00:43:12.392661 kubelet[2058]: I0209 00:43:12.392628 2058 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 00:43:12.392877 kubelet[2058]: I0209 00:43:12.392865 2058 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 00:43:12.438989 kubelet[2058]: I0209 00:43:12.438944 2058 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 00:43:12.475333 kubelet[2058]: I0209 00:43:12.475265 2058 topology_manager.go:210] "Topology Admit Handler" Feb 9 00:43:12.475497 kubelet[2058]: I0209 00:43:12.475391 2058 topology_manager.go:210] "Topology Admit Handler" Feb 9 00:43:12.475497 kubelet[2058]: I0209 00:43:12.475422 2058 topology_manager.go:210] "Topology Admit Handler" Feb 9 00:43:12.492782 sudo[2112]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 00:43:12.492961 sudo[2112]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 00:43:12.637353 kubelet[2058]: I0209 00:43:12.637294 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:43:12.637353 kubelet[2058]: I0209 00:43:12.637342 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:43:12.637569 kubelet[2058]: I0209 00:43:12.637377 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:43:12.637569 kubelet[2058]: I0209 00:43:12.637401 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 00:43:12.637569 kubelet[2058]: I0209 00:43:12.637458 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/724db026101850f25c11e959374fe755-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"724db026101850f25c11e959374fe755\") " pod="kube-system/kube-apiserver-localhost" Feb 9 00:43:12.637569 kubelet[2058]: I0209 00:43:12.637476 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/724db026101850f25c11e959374fe755-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"724db026101850f25c11e959374fe755\") " pod="kube-system/kube-apiserver-localhost" Feb 9 00:43:12.637569 kubelet[2058]: I0209 00:43:12.637502 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/724db026101850f25c11e959374fe755-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"724db026101850f25c11e959374fe755\") " pod="kube-system/kube-apiserver-localhost" Feb 9 00:43:12.637703 kubelet[2058]: I0209 00:43:12.637520 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:43:12.637703 kubelet[2058]: I0209 00:43:12.637536 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:43:12.944406 sudo[2112]: pam_unix(sudo:session): session closed for user root Feb 9 00:43:12.977244 kubelet[2058]: E0209 00:43:12.977207 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:12.977458 kubelet[2058]: E0209 00:43:12.977288 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:12.977740 kubelet[2058]: E0209 00:43:12.977610 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:13.084519 kubelet[2058]: I0209 00:43:13.084465 2058 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 
00:43:13.084750 kubelet[2058]: I0209 00:43:13.084577 2058 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 00:43:13.325425 kubelet[2058]: I0209 00:43:13.325385 2058 apiserver.go:52] "Watching apiserver" Feb 9 00:43:13.336851 kubelet[2058]: I0209 00:43:13.336823 2058 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 00:43:13.342888 kubelet[2058]: I0209 00:43:13.342849 2058 reconciler.go:41] "Reconciler: start to sync state" Feb 9 00:43:13.923446 kubelet[2058]: E0209 00:43:13.923409 2058 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 00:43:13.923760 kubelet[2058]: E0209 00:43:13.923738 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:14.011762 kubelet[2058]: E0209 00:43:14.011699 2058 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 00:43:14.012200 kubelet[2058]: E0209 00:43:14.012175 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:14.289605 kubelet[2058]: E0209 00:43:14.289572 2058 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 00:43:14.290205 kubelet[2058]: E0209 00:43:14.290166 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:14.384195 kubelet[2058]: E0209 00:43:14.384151 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:14.384615 kubelet[2058]: E0209 00:43:14.384498 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:14.384832 kubelet[2058]: E0209 00:43:14.384811 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:14.450392 sudo[1274]: pam_unix(sudo:session): session closed for user root Feb 9 00:43:14.454431 sshd[1271]: pam_unix(sshd:session): session closed for user core Feb 9 00:43:14.457308 systemd[1]: sshd@4-10.0.0.24:22-10.0.0.1:37468.service: Deactivated successfully. Feb 9 00:43:14.457972 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 00:43:14.458110 systemd[1]: session-5.scope: Consumed 3.506s CPU time. Feb 9 00:43:14.458505 systemd-logind[1177]: Session 5 logged out. Waiting for processes to exit. Feb 9 00:43:14.459228 systemd-logind[1177]: Removed session 5. 
Feb 9 00:43:14.732561 kubelet[2058]: I0209 00:43:14.732439 2058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.732354741 pod.CreationTimestamp="2024-02-09 00:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:43:14.331814242 +0000 UTC m=+2.074880651" watchObservedRunningTime="2024-02-09 00:43:14.732354741 +0000 UTC m=+2.475421150" Feb 9 00:43:14.732749 kubelet[2058]: I0209 00:43:14.732565 2058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.732541814 pod.CreationTimestamp="2024-02-09 00:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:43:14.732070153 +0000 UTC m=+2.475136573" watchObservedRunningTime="2024-02-09 00:43:14.732541814 +0000 UTC m=+2.475608223" Feb 9 00:43:15.126505 update_engine[1179]: I0209 00:43:15.126454 1179 update_attempter.cc:509] Updating boot flags... 
Feb 9 00:43:15.516045 kubelet[2058]: E0209 00:43:15.515930 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:17.501807 kubelet[2058]: E0209 00:43:17.501765 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:17.514731 kubelet[2058]: I0209 00:43:17.514663 2058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.514612764 pod.CreationTimestamp="2024-02-09 00:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:43:15.132092542 +0000 UTC m=+2.875158951" watchObservedRunningTime="2024-02-09 00:43:17.514612764 +0000 UTC m=+5.257679173" Feb 9 00:43:18.389639 kubelet[2058]: E0209 00:43:18.389590 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:22.926786 kubelet[2058]: E0209 00:43:22.926755 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:23.396279 kubelet[2058]: E0209 00:43:23.396251 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:24.397712 kubelet[2058]: E0209 00:43:24.397676 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:24.928024 kubelet[2058]: I0209 
00:43:24.927992 2058 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 00:43:24.928358 env[1186]: time="2024-02-09T00:43:24.928315647Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 00:43:24.928635 kubelet[2058]: I0209 00:43:24.928505 2058 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 00:43:25.521747 kubelet[2058]: E0209 00:43:25.521704 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:26.277225 kubelet[2058]: I0209 00:43:26.277185 2058 topology_manager.go:210] "Topology Admit Handler" Feb 9 00:43:26.281850 systemd[1]: Created slice kubepods-burstable-podcea384fd_bed8_4dc5_8d33_845ed2d2a2d4.slice. Feb 9 00:43:26.332469 kubelet[2058]: I0209 00:43:26.332415 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-cni-path\") pod \"cilium-vngng\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " pod="kube-system/cilium-vngng" Feb 9 00:43:26.332469 kubelet[2058]: I0209 00:43:26.332465 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-cilium-run\") pod \"cilium-vngng\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " pod="kube-system/cilium-vngng" Feb 9 00:43:26.332673 kubelet[2058]: I0209 00:43:26.332491 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-lib-modules\") pod \"cilium-vngng\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " 
pod="kube-system/cilium-vngng" Feb 9 00:43:26.332673 kubelet[2058]: I0209 00:43:26.332519 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-clustermesh-secrets\") pod \"cilium-vngng\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " pod="kube-system/cilium-vngng" Feb 9 00:43:26.332673 kubelet[2058]: I0209 00:43:26.332615 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-cilium-config-path\") pod \"cilium-vngng\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " pod="kube-system/cilium-vngng" Feb 9 00:43:26.332673 kubelet[2058]: I0209 00:43:26.332658 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-hubble-tls\") pod \"cilium-vngng\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " pod="kube-system/cilium-vngng" Feb 9 00:43:26.332673 kubelet[2058]: I0209 00:43:26.332676 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgd9g\" (UniqueName: \"kubernetes.io/projected/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-kube-api-access-cgd9g\") pod \"cilium-vngng\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " pod="kube-system/cilium-vngng" Feb 9 00:43:26.332816 kubelet[2058]: I0209 00:43:26.332699 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-hostproc\") pod \"cilium-vngng\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " pod="kube-system/cilium-vngng" Feb 9 00:43:26.332816 kubelet[2058]: I0209 00:43:26.332737 2058 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-cilium-cgroup\") pod \"cilium-vngng\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " pod="kube-system/cilium-vngng" Feb 9 00:43:26.332816 kubelet[2058]: I0209 00:43:26.332782 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-etc-cni-netd\") pod \"cilium-vngng\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " pod="kube-system/cilium-vngng" Feb 9 00:43:26.332816 kubelet[2058]: I0209 00:43:26.332807 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-xtables-lock\") pod \"cilium-vngng\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " pod="kube-system/cilium-vngng" Feb 9 00:43:26.332912 kubelet[2058]: I0209 00:43:26.332825 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-host-proc-sys-net\") pod \"cilium-vngng\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " pod="kube-system/cilium-vngng" Feb 9 00:43:26.332912 kubelet[2058]: I0209 00:43:26.332848 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-host-proc-sys-kernel\") pod \"cilium-vngng\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " pod="kube-system/cilium-vngng" Feb 9 00:43:26.332912 kubelet[2058]: I0209 00:43:26.332888 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-bpf-maps\") pod \"cilium-vngng\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " pod="kube-system/cilium-vngng" Feb 9 00:43:26.380313 kubelet[2058]: I0209 00:43:26.380282 2058 topology_manager.go:210] "Topology Admit Handler" Feb 9 00:43:26.386310 systemd[1]: Created slice kubepods-besteffort-pod9e1384d0_4eca_40ea_8312_cf9ee048922f.slice. Feb 9 00:43:26.433776 kubelet[2058]: I0209 00:43:26.433707 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e1384d0-4eca-40ea-8312-cf9ee048922f-xtables-lock\") pod \"kube-proxy-k5dvc\" (UID: \"9e1384d0-4eca-40ea-8312-cf9ee048922f\") " pod="kube-system/kube-proxy-k5dvc" Feb 9 00:43:26.433950 kubelet[2058]: I0209 00:43:26.433877 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9e1384d0-4eca-40ea-8312-cf9ee048922f-kube-proxy\") pod \"kube-proxy-k5dvc\" (UID: \"9e1384d0-4eca-40ea-8312-cf9ee048922f\") " pod="kube-system/kube-proxy-k5dvc" Feb 9 00:43:26.433950 kubelet[2058]: I0209 00:43:26.433922 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf56h\" (UniqueName: \"kubernetes.io/projected/9e1384d0-4eca-40ea-8312-cf9ee048922f-kube-api-access-qf56h\") pod \"kube-proxy-k5dvc\" (UID: \"9e1384d0-4eca-40ea-8312-cf9ee048922f\") " pod="kube-system/kube-proxy-k5dvc" Feb 9 00:43:26.434000 kubelet[2058]: I0209 00:43:26.433959 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e1384d0-4eca-40ea-8312-cf9ee048922f-lib-modules\") pod \"kube-proxy-k5dvc\" (UID: \"9e1384d0-4eca-40ea-8312-cf9ee048922f\") " pod="kube-system/kube-proxy-k5dvc" Feb 9 00:43:26.465681 kubelet[2058]: I0209 00:43:26.465652 2058 
topology_manager.go:210] "Topology Admit Handler" Feb 9 00:43:26.470385 systemd[1]: Created slice kubepods-besteffort-poda70beb70_e305_4d40_8f88_f7152445c18b.slice. Feb 9 00:43:26.534288 kubelet[2058]: I0209 00:43:26.534254 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br75q\" (UniqueName: \"kubernetes.io/projected/a70beb70-e305-4d40-8f88-f7152445c18b-kube-api-access-br75q\") pod \"cilium-operator-f59cbd8c6-9c785\" (UID: \"a70beb70-e305-4d40-8f88-f7152445c18b\") " pod="kube-system/cilium-operator-f59cbd8c6-9c785" Feb 9 00:43:26.534629 kubelet[2058]: I0209 00:43:26.534317 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a70beb70-e305-4d40-8f88-f7152445c18b-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-9c785\" (UID: \"a70beb70-e305-4d40-8f88-f7152445c18b\") " pod="kube-system/cilium-operator-f59cbd8c6-9c785" Feb 9 00:43:26.584881 kubelet[2058]: E0209 00:43:26.584838 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:26.585654 env[1186]: time="2024-02-09T00:43:26.585594054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vngng,Uid:cea384fd-bed8-4dc5-8d33-845ed2d2a2d4,Namespace:kube-system,Attempt:0,}" Feb 9 00:43:26.602599 env[1186]: time="2024-02-09T00:43:26.602521130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:43:26.602711 env[1186]: time="2024-02-09T00:43:26.602611811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:43:26.602711 env[1186]: time="2024-02-09T00:43:26.602627401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:43:26.602848 env[1186]: time="2024-02-09T00:43:26.602808641Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad pid=2190 runtime=io.containerd.runc.v2 Feb 9 00:43:26.612495 systemd[1]: Started cri-containerd-10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad.scope. Feb 9 00:43:26.639231 env[1186]: time="2024-02-09T00:43:26.639186619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vngng,Uid:cea384fd-bed8-4dc5-8d33-845ed2d2a2d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\"" Feb 9 00:43:26.639913 kubelet[2058]: E0209 00:43:26.639885 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:26.641015 env[1186]: time="2024-02-09T00:43:26.640983136Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 00:43:26.998385 kubelet[2058]: E0209 00:43:26.998275 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:26.999055 env[1186]: time="2024-02-09T00:43:26.999014662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k5dvc,Uid:9e1384d0-4eca-40ea-8312-cf9ee048922f,Namespace:kube-system,Attempt:0,}" Feb 9 00:43:27.012010 env[1186]: time="2024-02-09T00:43:27.011929510Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:43:27.012010 env[1186]: time="2024-02-09T00:43:27.011972321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:43:27.012010 env[1186]: time="2024-02-09T00:43:27.011982600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:43:27.012263 env[1186]: time="2024-02-09T00:43:27.012109368Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fabe05e755e59b517248a65a4cac24cd8637937019ac352058942d8af308e398 pid=2231 runtime=io.containerd.runc.v2 Feb 9 00:43:27.023243 systemd[1]: Started cri-containerd-fabe05e755e59b517248a65a4cac24cd8637937019ac352058942d8af308e398.scope. Feb 9 00:43:27.043323 env[1186]: time="2024-02-09T00:43:27.043262282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k5dvc,Uid:9e1384d0-4eca-40ea-8312-cf9ee048922f,Namespace:kube-system,Attempt:0,} returns sandbox id \"fabe05e755e59b517248a65a4cac24cd8637937019ac352058942d8af308e398\"" Feb 9 00:43:27.044146 kubelet[2058]: E0209 00:43:27.043959 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:27.047953 env[1186]: time="2024-02-09T00:43:27.047918594Z" level=info msg="CreateContainer within sandbox \"fabe05e755e59b517248a65a4cac24cd8637937019ac352058942d8af308e398\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 00:43:27.060960 env[1186]: time="2024-02-09T00:43:27.060907111Z" level=info msg="CreateContainer within sandbox \"fabe05e755e59b517248a65a4cac24cd8637937019ac352058942d8af308e398\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"4f6ddeeb5480d90531df944cf1f5b51a730faa54e513cc921e91f339a63a8784\"" Feb 9 00:43:27.061422 env[1186]: time="2024-02-09T00:43:27.061378036Z" level=info msg="StartContainer for \"4f6ddeeb5480d90531df944cf1f5b51a730faa54e513cc921e91f339a63a8784\"" Feb 9 00:43:27.076021 systemd[1]: Started cri-containerd-4f6ddeeb5480d90531df944cf1f5b51a730faa54e513cc921e91f339a63a8784.scope. Feb 9 00:43:27.122040 env[1186]: time="2024-02-09T00:43:27.121961595Z" level=info msg="StartContainer for \"4f6ddeeb5480d90531df944cf1f5b51a730faa54e513cc921e91f339a63a8784\" returns successfully" Feb 9 00:43:27.372607 kubelet[2058]: E0209 00:43:27.372579 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:27.373125 env[1186]: time="2024-02-09T00:43:27.373086835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-9c785,Uid:a70beb70-e305-4d40-8f88-f7152445c18b,Namespace:kube-system,Attempt:0,}" Feb 9 00:43:27.404106 kubelet[2058]: E0209 00:43:27.404078 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:27.464586 env[1186]: time="2024-02-09T00:43:27.464512813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:43:27.464780 env[1186]: time="2024-02-09T00:43:27.464562557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:43:27.464780 env[1186]: time="2024-02-09T00:43:27.464575662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:43:27.465033 env[1186]: time="2024-02-09T00:43:27.464930488Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671 pid=2414 runtime=io.containerd.runc.v2 Feb 9 00:43:27.479404 systemd[1]: Started cri-containerd-be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671.scope. Feb 9 00:43:27.517867 env[1186]: time="2024-02-09T00:43:27.517809574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-9c785,Uid:a70beb70-e305-4d40-8f88-f7152445c18b,Namespace:kube-system,Attempt:0,} returns sandbox id \"be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671\"" Feb 9 00:43:27.518854 kubelet[2058]: E0209 00:43:27.518830 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:27.528834 kubelet[2058]: I0209 00:43:27.528684 2058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-k5dvc" podStartSLOduration=1.5286323 pod.CreationTimestamp="2024-02-09 00:43:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:43:27.528322958 +0000 UTC m=+15.271389367" watchObservedRunningTime="2024-02-09 00:43:27.5286323 +0000 UTC m=+15.271698729" Feb 9 00:43:28.408060 kubelet[2058]: E0209 00:43:28.408027 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:33.982856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152013747.mount: Deactivated successfully. 
Feb 9 00:43:38.866951 env[1186]: time="2024-02-09T00:43:38.866897561Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:38.868382 env[1186]: time="2024-02-09T00:43:38.868325020Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:38.869699 env[1186]: time="2024-02-09T00:43:38.869655749Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:38.870198 env[1186]: time="2024-02-09T00:43:38.870156489Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 00:43:38.871008 env[1186]: time="2024-02-09T00:43:38.870978462Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 00:43:38.875879 env[1186]: time="2024-02-09T00:43:38.875840590Z" level=info msg="CreateContainer within sandbox \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 00:43:38.887444 env[1186]: time="2024-02-09T00:43:38.887398118Z" level=info msg="CreateContainer within sandbox \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91\"" Feb 9 00:43:38.888054 
env[1186]: time="2024-02-09T00:43:38.887983116Z" level=info msg="StartContainer for \"ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91\""
Feb 9 00:43:38.904341 systemd[1]: Started cri-containerd-ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91.scope.
Feb 9 00:43:38.925901 env[1186]: time="2024-02-09T00:43:38.925837534Z" level=info msg="StartContainer for \"ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91\" returns successfully"
Feb 9 00:43:38.932573 systemd[1]: cri-containerd-ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91.scope: Deactivated successfully.
Feb 9 00:43:39.406084 env[1186]: time="2024-02-09T00:43:39.406020003Z" level=info msg="shim disconnected" id=ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91
Feb 9 00:43:39.406084 env[1186]: time="2024-02-09T00:43:39.406067131Z" level=warning msg="cleaning up after shim disconnected" id=ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91 namespace=k8s.io
Feb 9 00:43:39.406084 env[1186]: time="2024-02-09T00:43:39.406076218Z" level=info msg="cleaning up dead shim"
Feb 9 00:43:39.413398 env[1186]: time="2024-02-09T00:43:39.413325276Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:43:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2499 runtime=io.containerd.runc.v2\n"
Feb 9 00:43:39.431514 kubelet[2058]: E0209 00:43:39.431480 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:39.433129 env[1186]: time="2024-02-09T00:43:39.433089359Z" level=info msg="CreateContainer within sandbox \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 00:43:39.456968 env[1186]: time="2024-02-09T00:43:39.456904295Z" level=info msg="CreateContainer within sandbox \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763\""
Feb 9 00:43:39.457577 env[1186]: time="2024-02-09T00:43:39.457418350Z" level=info msg="StartContainer for \"d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763\""
Feb 9 00:43:39.472441 systemd[1]: Started cri-containerd-d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763.scope.
Feb 9 00:43:39.520708 env[1186]: time="2024-02-09T00:43:39.520646254Z" level=info msg="StartContainer for \"d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763\" returns successfully"
Feb 9 00:43:39.526350 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 00:43:39.526577 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 00:43:39.526761 systemd[1]: Stopping systemd-sysctl.service...
Feb 9 00:43:39.528259 systemd[1]: Starting systemd-sysctl.service...
Feb 9 00:43:39.529944 systemd[1]: cri-containerd-d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763.scope: Deactivated successfully.
Feb 9 00:43:39.539599 systemd[1]: Finished systemd-sysctl.service.
Feb 9 00:43:39.557571 env[1186]: time="2024-02-09T00:43:39.557521718Z" level=info msg="shim disconnected" id=d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763
Feb 9 00:43:39.557571 env[1186]: time="2024-02-09T00:43:39.557562565Z" level=warning msg="cleaning up after shim disconnected" id=d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763 namespace=k8s.io
Feb 9 00:43:39.557571 env[1186]: time="2024-02-09T00:43:39.557571211Z" level=info msg="cleaning up dead shim"
Feb 9 00:43:39.563690 env[1186]: time="2024-02-09T00:43:39.563647197Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:43:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2561 runtime=io.containerd.runc.v2\n"
Feb 9 00:43:39.884942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91-rootfs.mount: Deactivated successfully.
Feb 9 00:43:40.433745 kubelet[2058]: E0209 00:43:40.433702 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:40.435795 env[1186]: time="2024-02-09T00:43:40.435755242Z" level=info msg="CreateContainer within sandbox \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 00:43:41.302977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount349097802.mount: Deactivated successfully.
Feb 9 00:43:41.328523 env[1186]: time="2024-02-09T00:43:41.328453163Z" level=info msg="CreateContainer within sandbox \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9\""
Feb 9 00:43:41.329085 env[1186]: time="2024-02-09T00:43:41.329057657Z" level=info msg="StartContainer for \"15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9\""
Feb 9 00:43:41.349344 systemd[1]: Started cri-containerd-15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9.scope.
Feb 9 00:43:41.383677 systemd[1]: cri-containerd-15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9.scope: Deactivated successfully.
Feb 9 00:43:41.384572 env[1186]: time="2024-02-09T00:43:41.384527944Z" level=info msg="StartContainer for \"15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9\" returns successfully"
Feb 9 00:43:41.419576 env[1186]: time="2024-02-09T00:43:41.419497701Z" level=info msg="shim disconnected" id=15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9
Feb 9 00:43:41.419576 env[1186]: time="2024-02-09T00:43:41.419550570Z" level=warning msg="cleaning up after shim disconnected" id=15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9 namespace=k8s.io
Feb 9 00:43:41.419576 env[1186]: time="2024-02-09T00:43:41.419559617Z" level=info msg="cleaning up dead shim"
Feb 9 00:43:41.429197 env[1186]: time="2024-02-09T00:43:41.429129460Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:43:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2618 runtime=io.containerd.runc.v2\n"
Feb 9 00:43:41.437632 kubelet[2058]: E0209 00:43:41.437595 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:41.439756 env[1186]: time="2024-02-09T00:43:41.439705940Z" level=info msg="CreateContainer within sandbox \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 00:43:41.468739 env[1186]: time="2024-02-09T00:43:41.468671609Z" level=info msg="CreateContainer within sandbox \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15\""
Feb 9 00:43:41.469751 env[1186]: time="2024-02-09T00:43:41.469279450Z" level=info msg="StartContainer for \"5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15\""
Feb 9 00:43:41.484258 systemd[1]: Started cri-containerd-5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15.scope.
Feb 9 00:43:41.507115 systemd[1]: cri-containerd-5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15.scope: Deactivated successfully.
Feb 9 00:43:41.514047 env[1186]: time="2024-02-09T00:43:41.513989849Z" level=info msg="StartContainer for \"5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15\" returns successfully"
Feb 9 00:43:41.574187 env[1186]: time="2024-02-09T00:43:41.574036475Z" level=info msg="shim disconnected" id=5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15
Feb 9 00:43:41.574187 env[1186]: time="2024-02-09T00:43:41.574097520Z" level=warning msg="cleaning up after shim disconnected" id=5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15 namespace=k8s.io
Feb 9 00:43:41.574187 env[1186]: time="2024-02-09T00:43:41.574120994Z" level=info msg="cleaning up dead shim"
Feb 9 00:43:41.581798 env[1186]: time="2024-02-09T00:43:41.581692836Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:43:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2672 runtime=io.containerd.runc.v2\n"
Feb 9 00:43:42.150570 env[1186]: time="2024-02-09T00:43:42.150493037Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:43:42.152161 env[1186]: time="2024-02-09T00:43:42.152127355Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:43:42.153620 env[1186]: time="2024-02-09T00:43:42.153575994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:43:42.154001 env[1186]: time="2024-02-09T00:43:42.153973941Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 9 00:43:42.155641 env[1186]: time="2024-02-09T00:43:42.155606826Z" level=info msg="CreateContainer within sandbox \"be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 9 00:43:42.300206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9-rootfs.mount: Deactivated successfully.
Feb 9 00:43:42.325198 env[1186]: time="2024-02-09T00:43:42.325146355Z" level=info msg="CreateContainer within sandbox \"be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b\""
Feb 9 00:43:42.325661 env[1186]: time="2024-02-09T00:43:42.325623320Z" level=info msg="StartContainer for \"0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b\""
Feb 9 00:43:42.342561 systemd[1]: Started cri-containerd-0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b.scope.
Feb 9 00:43:42.367781 env[1186]: time="2024-02-09T00:43:42.367711834Z" level=info msg="StartContainer for \"0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b\" returns successfully"
Feb 9 00:43:42.445206 kubelet[2058]: E0209 00:43:42.444335 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:42.447376 env[1186]: time="2024-02-09T00:43:42.447311209Z" level=info msg="CreateContainer within sandbox \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 00:43:42.456444 kubelet[2058]: E0209 00:43:42.456404 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:42.518898 env[1186]: time="2024-02-09T00:43:42.518811003Z" level=info msg="CreateContainer within sandbox \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219\""
Feb 9 00:43:42.519396 env[1186]: time="2024-02-09T00:43:42.519357619Z" level=info msg="StartContainer for \"6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219\""
Feb 9 00:43:42.536495 systemd[1]: Started cri-containerd-6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219.scope.
Feb 9 00:43:42.674479 env[1186]: time="2024-02-09T00:43:42.674422302Z" level=info msg="StartContainer for \"6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219\" returns successfully"
Feb 9 00:43:42.824787 kubelet[2058]: I0209 00:43:42.823880 2058 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 00:43:42.847229 kubelet[2058]: I0209 00:43:42.847171 2058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-9c785" podStartSLOduration=-9.22337202000766e+09 pod.CreationTimestamp="2024-02-09 00:43:26 +0000 UTC" firstStartedPulling="2024-02-09 00:43:27.519866711 +0000 UTC m=+15.262933120" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:43:42.514997246 +0000 UTC m=+30.258063655" watchObservedRunningTime="2024-02-09 00:43:42.847116653 +0000 UTC m=+30.590183062"
Feb 9 00:43:42.847563 kubelet[2058]: I0209 00:43:42.847539 2058 topology_manager.go:210] "Topology Admit Handler"
Feb 9 00:43:42.848397 kubelet[2058]: I0209 00:43:42.848370 2058 topology_manager.go:210] "Topology Admit Handler"
Feb 9 00:43:42.854165 systemd[1]: Created slice kubepods-burstable-poda47f0156_e104_4a9e_94d6_87d86dc9ea19.slice.
Feb 9 00:43:42.859252 systemd[1]: Created slice kubepods-burstable-podcd386fbe_86f1_4d59_beb0_e3f4fab44cb5.slice.
Feb 9 00:43:42.951467 kubelet[2058]: I0209 00:43:42.951431 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngrrm\" (UniqueName: \"kubernetes.io/projected/a47f0156-e104-4a9e-94d6-87d86dc9ea19-kube-api-access-ngrrm\") pod \"coredns-787d4945fb-496nn\" (UID: \"a47f0156-e104-4a9e-94d6-87d86dc9ea19\") " pod="kube-system/coredns-787d4945fb-496nn"
Feb 9 00:43:42.951761 kubelet[2058]: I0209 00:43:42.951710 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flhkm\" (UniqueName: \"kubernetes.io/projected/cd386fbe-86f1-4d59-beb0-e3f4fab44cb5-kube-api-access-flhkm\") pod \"coredns-787d4945fb-57zsf\" (UID: \"cd386fbe-86f1-4d59-beb0-e3f4fab44cb5\") " pod="kube-system/coredns-787d4945fb-57zsf"
Feb 9 00:43:42.951874 kubelet[2058]: I0209 00:43:42.951853 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd386fbe-86f1-4d59-beb0-e3f4fab44cb5-config-volume\") pod \"coredns-787d4945fb-57zsf\" (UID: \"cd386fbe-86f1-4d59-beb0-e3f4fab44cb5\") " pod="kube-system/coredns-787d4945fb-57zsf"
Feb 9 00:43:42.951938 kubelet[2058]: I0209 00:43:42.951887 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a47f0156-e104-4a9e-94d6-87d86dc9ea19-config-volume\") pod \"coredns-787d4945fb-496nn\" (UID: \"a47f0156-e104-4a9e-94d6-87d86dc9ea19\") " pod="kube-system/coredns-787d4945fb-496nn"
Feb 9 00:43:43.158343 kubelet[2058]: E0209 00:43:43.158186 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:43.158914 env[1186]: time="2024-02-09T00:43:43.158856522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-496nn,Uid:a47f0156-e104-4a9e-94d6-87d86dc9ea19,Namespace:kube-system,Attempt:0,}"
Feb 9 00:43:43.161649 kubelet[2058]: E0209 00:43:43.161612 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:43.162179 env[1186]: time="2024-02-09T00:43:43.162124806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-57zsf,Uid:cd386fbe-86f1-4d59-beb0-e3f4fab44cb5,Namespace:kube-system,Attempt:0,}"
Feb 9 00:43:43.462289 kubelet[2058]: E0209 00:43:43.461238 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:43.462289 kubelet[2058]: E0209 00:43:43.461792 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:43.474037 kubelet[2058]: I0209 00:43:43.473992 2058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vngng" podStartSLOduration=-9.223372018380823e+09 pod.CreationTimestamp="2024-02-09 00:43:25 +0000 UTC" firstStartedPulling="2024-02-09 00:43:26.640514646 +0000 UTC m=+14.383581045" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:43:43.473435291 +0000 UTC m=+31.216501710" watchObservedRunningTime="2024-02-09 00:43:43.473952232 +0000 UTC m=+31.217018631"
Feb 9 00:43:44.464016 kubelet[2058]: E0209 00:43:44.463904 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:44.833400 systemd-networkd[1090]: cilium_host: Link UP
Feb 9 00:43:44.833561 systemd-networkd[1090]: cilium_net: Link UP
Feb 9 00:43:44.833565 systemd-networkd[1090]: cilium_net: Gained carrier
Feb 9 00:43:44.833933 systemd-networkd[1090]: cilium_host: Gained carrier
Feb 9 00:43:44.840101 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 00:43:44.839106 systemd-networkd[1090]: cilium_host: Gained IPv6LL
Feb 9 00:43:44.843897 systemd-networkd[1090]: cilium_net: Gained IPv6LL
Feb 9 00:43:44.927202 systemd-networkd[1090]: cilium_vxlan: Link UP
Feb 9 00:43:44.927214 systemd-networkd[1090]: cilium_vxlan: Gained carrier
Feb 9 00:43:45.133780 kernel: NET: Registered PF_ALG protocol family
Feb 9 00:43:45.465924 kubelet[2058]: E0209 00:43:45.465879 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:45.701133 systemd-networkd[1090]: lxc_health: Link UP
Feb 9 00:43:45.719639 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 00:43:45.718465 systemd-networkd[1090]: lxc_health: Gained carrier
Feb 9 00:43:45.798662 systemd-networkd[1090]: lxcf7bb6ca490d0: Link UP
Feb 9 00:43:45.808808 kernel: eth0: renamed from tmp41d39
Feb 9 00:43:45.812919 systemd-networkd[1090]: lxc32e434d440a7: Link UP
Feb 9 00:43:45.814440 systemd-networkd[1090]: lxcf7bb6ca490d0: Gained carrier
Feb 9 00:43:45.814746 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf7bb6ca490d0: link becomes ready
Feb 9 00:43:45.849578 kernel: eth0: renamed from tmpe395b
Feb 9 00:43:45.855472 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 00:43:45.855627 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc32e434d440a7: link becomes ready
Feb 9 00:43:45.855775 systemd-networkd[1090]: lxc32e434d440a7: Gained carrier
Feb 9 00:43:46.091942 systemd-networkd[1090]: cilium_vxlan: Gained IPv6LL
Feb 9 00:43:46.587049 kubelet[2058]: E0209 00:43:46.586999 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:47.305972 systemd-networkd[1090]: lxcf7bb6ca490d0: Gained IPv6LL
Feb 9 00:43:47.561874 systemd-networkd[1090]: lxc_health: Gained IPv6LL
Feb 9 00:43:47.625907 systemd-networkd[1090]: lxc32e434d440a7: Gained IPv6LL
Feb 9 00:43:49.770670 systemd[1]: Started sshd@5-10.0.0.24:22-10.0.0.1:59932.service.
Feb 9 00:43:49.836741 sshd[3257]: Accepted publickey for core from 10.0.0.1 port 59932 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:43:49.886049 sshd[3257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:43:49.895223 systemd-logind[1177]: New session 6 of user core.
Feb 9 00:43:49.896530 systemd[1]: Started session-6.scope.
Feb 9 00:43:49.936512 env[1186]: time="2024-02-09T00:43:49.936404278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 00:43:49.937328 env[1186]: time="2024-02-09T00:43:49.936509886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 00:43:49.937328 env[1186]: time="2024-02-09T00:43:49.936561443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 00:43:49.941495 env[1186]: time="2024-02-09T00:43:49.936789522Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/41d392d3a4a87e4ba224bd2d1476463377cdaf985a21bba759efb42ad4361816 pid=3267 runtime=io.containerd.runc.v2
Feb 9 00:43:49.951398 env[1186]: time="2024-02-09T00:43:49.951255833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 00:43:49.951398 env[1186]: time="2024-02-09T00:43:49.951328098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 00:43:49.951657 env[1186]: time="2024-02-09T00:43:49.951346773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 00:43:49.951956 env[1186]: time="2024-02-09T00:43:49.951827264Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e395bc185976ece4850ba51372fd79c3d0b78d2f7eea2fedaba603423e59e16f pid=3289 runtime=io.containerd.runc.v2
Feb 9 00:43:49.956074 systemd[1]: Started cri-containerd-41d392d3a4a87e4ba224bd2d1476463377cdaf985a21bba759efb42ad4361816.scope.
Feb 9 00:43:49.970036 systemd-resolved[1131]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 9 00:43:49.984466 systemd[1]: Started cri-containerd-e395bc185976ece4850ba51372fd79c3d0b78d2f7eea2fedaba603423e59e16f.scope.
Feb 9 00:43:50.004693 systemd-resolved[1131]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 9 00:43:50.010798 env[1186]: time="2024-02-09T00:43:50.010074195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-496nn,Uid:a47f0156-e104-4a9e-94d6-87d86dc9ea19,Namespace:kube-system,Attempt:0,} returns sandbox id \"41d392d3a4a87e4ba224bd2d1476463377cdaf985a21bba759efb42ad4361816\""
Feb 9 00:43:50.011772 kubelet[2058]: E0209 00:43:50.010969 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:50.018752 env[1186]: time="2024-02-09T00:43:50.018659903Z" level=info msg="CreateContainer within sandbox \"41d392d3a4a87e4ba224bd2d1476463377cdaf985a21bba759efb42ad4361816\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 00:43:50.040390 env[1186]: time="2024-02-09T00:43:50.039422005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-57zsf,Uid:cd386fbe-86f1-4d59-beb0-e3f4fab44cb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e395bc185976ece4850ba51372fd79c3d0b78d2f7eea2fedaba603423e59e16f\""
Feb 9 00:43:50.040560 kubelet[2058]: E0209 00:43:50.040142 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:50.043579 env[1186]: time="2024-02-09T00:43:50.042701928Z" level=info msg="CreateContainer within sandbox \"e395bc185976ece4850ba51372fd79c3d0b78d2f7eea2fedaba603423e59e16f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 00:43:50.175999 env[1186]: time="2024-02-09T00:43:50.175930729Z" level=info msg="CreateContainer within sandbox \"41d392d3a4a87e4ba224bd2d1476463377cdaf985a21bba759efb42ad4361816\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"53a12cdd5c34a8ae8f5900d020124e7c40980d97c0b77ae024a06fdff6f62678\""
Feb 9 00:43:50.176986 env[1186]: time="2024-02-09T00:43:50.176931075Z" level=info msg="StartContainer for \"53a12cdd5c34a8ae8f5900d020124e7c40980d97c0b77ae024a06fdff6f62678\""
Feb 9 00:43:50.181825 env[1186]: time="2024-02-09T00:43:50.181709731Z" level=info msg="CreateContainer within sandbox \"e395bc185976ece4850ba51372fd79c3d0b78d2f7eea2fedaba603423e59e16f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"447de3e3de5a40cbcb2cf812a38f5097cd95ccd2465993f9a039c51a7d0e96ee\""
Feb 9 00:43:50.182987 env[1186]: time="2024-02-09T00:43:50.182958243Z" level=info msg="StartContainer for \"447de3e3de5a40cbcb2cf812a38f5097cd95ccd2465993f9a039c51a7d0e96ee\""
Feb 9 00:43:50.197446 systemd[1]: Started cri-containerd-53a12cdd5c34a8ae8f5900d020124e7c40980d97c0b77ae024a06fdff6f62678.scope.
Feb 9 00:43:50.198155 sshd[3257]: pam_unix(sshd:session): session closed for user core
Feb 9 00:43:50.201439 systemd[1]: sshd@5-10.0.0.24:22-10.0.0.1:59932.service: Deactivated successfully.
Feb 9 00:43:50.202351 systemd[1]: session-6.scope: Deactivated successfully.
Feb 9 00:43:50.204049 systemd-logind[1177]: Session 6 logged out. Waiting for processes to exit.
Feb 9 00:43:50.205061 systemd-logind[1177]: Removed session 6.
Feb 9 00:43:50.209329 systemd[1]: Started cri-containerd-447de3e3de5a40cbcb2cf812a38f5097cd95ccd2465993f9a039c51a7d0e96ee.scope.
Feb 9 00:43:50.290806 env[1186]: time="2024-02-09T00:43:50.290648541Z" level=info msg="StartContainer for \"53a12cdd5c34a8ae8f5900d020124e7c40980d97c0b77ae024a06fdff6f62678\" returns successfully"
Feb 9 00:43:50.299376 env[1186]: time="2024-02-09T00:43:50.299318738Z" level=info msg="StartContainer for \"447de3e3de5a40cbcb2cf812a38f5097cd95ccd2465993f9a039c51a7d0e96ee\" returns successfully"
Feb 9 00:43:50.480143 kubelet[2058]: E0209 00:43:50.480115 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:50.481180 kubelet[2058]: E0209 00:43:50.481166 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:50.489652 kubelet[2058]: I0209 00:43:50.489612 2058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-57zsf" podStartSLOduration=24.489568917 pod.CreationTimestamp="2024-02-09 00:43:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:43:50.488330212 +0000 UTC m=+38.231396621" watchObservedRunningTime="2024-02-09 00:43:50.489568917 +0000 UTC m=+38.232635326"
Feb 9 00:43:50.498396 kubelet[2058]: I0209 00:43:50.498361 2058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-496nn" podStartSLOduration=24.498317259 pod.CreationTimestamp="2024-02-09 00:43:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:43:50.497690634 +0000 UTC m=+38.240757063" watchObservedRunningTime="2024-02-09 00:43:50.498317259 +0000 UTC m=+38.241383678"
Feb 9 00:43:51.486034 kubelet[2058]: E0209 00:43:51.486003 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:51.486521 kubelet[2058]: E0209 00:43:51.486003 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:52.487698 kubelet[2058]: E0209 00:43:52.487663 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:52.488074 kubelet[2058]: E0209 00:43:52.487851 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:54.817024 kubelet[2058]: I0209 00:43:54.816909 2058 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 9 00:43:54.817840 kubelet[2058]: E0209 00:43:54.817809 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:55.203260 systemd[1]: Started sshd@6-10.0.0.24:22-10.0.0.1:59944.service.
Feb 9 00:43:55.235758 sshd[3539]: Accepted publickey for core from 10.0.0.1 port 59944 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:43:55.236810 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:43:55.240348 systemd-logind[1177]: New session 7 of user core.
Feb 9 00:43:55.241043 systemd[1]: Started session-7.scope.
Feb 9 00:43:55.493452 kubelet[2058]: E0209 00:43:55.493365 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:43:55.530837 sshd[3539]: pam_unix(sshd:session): session closed for user core
Feb 9 00:43:55.533019 systemd[1]: sshd@6-10.0.0.24:22-10.0.0.1:59944.service: Deactivated successfully.
Feb 9 00:43:55.533901 systemd[1]: session-7.scope: Deactivated successfully.
Feb 9 00:43:55.534654 systemd-logind[1177]: Session 7 logged out. Waiting for processes to exit.
Feb 9 00:43:55.535661 systemd-logind[1177]: Removed session 7.
Feb 9 00:44:00.534906 systemd[1]: Started sshd@7-10.0.0.24:22-10.0.0.1:38998.service.
Feb 9 00:44:00.564013 sshd[3557]: Accepted publickey for core from 10.0.0.1 port 38998 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:44:00.565044 sshd[3557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:44:00.568376 systemd-logind[1177]: New session 8 of user core.
Feb 9 00:44:00.569168 systemd[1]: Started session-8.scope.
Feb 9 00:44:00.709613 sshd[3557]: pam_unix(sshd:session): session closed for user core
Feb 9 00:44:00.712187 systemd[1]: sshd@7-10.0.0.24:22-10.0.0.1:38998.service: Deactivated successfully.
Feb 9 00:44:00.712968 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 00:44:00.713570 systemd-logind[1177]: Session 8 logged out. Waiting for processes to exit.
Feb 9 00:44:00.714232 systemd-logind[1177]: Removed session 8.
Feb 9 00:44:05.713255 systemd[1]: Started sshd@8-10.0.0.24:22-10.0.0.1:39008.service.
Feb 9 00:44:05.741641 sshd[3571]: Accepted publickey for core from 10.0.0.1 port 39008 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:44:05.742810 sshd[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:44:05.746185 systemd-logind[1177]: New session 9 of user core.
Feb 9 00:44:05.747108 systemd[1]: Started session-9.scope.
Feb 9 00:44:05.845886 sshd[3571]: pam_unix(sshd:session): session closed for user core
Feb 9 00:44:05.847790 systemd[1]: sshd@8-10.0.0.24:22-10.0.0.1:39008.service: Deactivated successfully.
Feb 9 00:44:05.848488 systemd[1]: session-9.scope: Deactivated successfully.
Feb 9 00:44:05.848940 systemd-logind[1177]: Session 9 logged out. Waiting for processes to exit.
Feb 9 00:44:05.849582 systemd-logind[1177]: Removed session 9.
Feb 9 00:44:10.850561 systemd[1]: Started sshd@9-10.0.0.24:22-10.0.0.1:54670.service.
Feb 9 00:44:10.880277 sshd[3585]: Accepted publickey for core from 10.0.0.1 port 54670 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:44:10.881394 sshd[3585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:44:10.884418 systemd-logind[1177]: New session 10 of user core.
Feb 9 00:44:10.885403 systemd[1]: Started session-10.scope.
Feb 9 00:44:10.997511 sshd[3585]: pam_unix(sshd:session): session closed for user core
Feb 9 00:44:11.000742 systemd[1]: sshd@9-10.0.0.24:22-10.0.0.1:54670.service: Deactivated successfully.
Feb 9 00:44:11.001513 systemd[1]: session-10.scope: Deactivated successfully.
Feb 9 00:44:11.002509 systemd-logind[1177]: Session 10 logged out. Waiting for processes to exit.
Feb 9 00:44:11.003408 systemd-logind[1177]: Removed session 10.
Feb 9 00:44:16.002316 systemd[1]: Started sshd@10-10.0.0.24:22-10.0.0.1:54682.service.
Feb 9 00:44:16.031031 sshd[3601]: Accepted publickey for core from 10.0.0.1 port 54682 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:44:16.032078 sshd[3601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:44:16.035617 systemd-logind[1177]: New session 11 of user core.
Feb 9 00:44:16.036560 systemd[1]: Started session-11.scope.
Feb 9 00:44:16.151581 sshd[3601]: pam_unix(sshd:session): session closed for user core
Feb 9 00:44:16.155236 systemd[1]: sshd@10-10.0.0.24:22-10.0.0.1:54682.service: Deactivated successfully.
Feb 9 00:44:16.156043 systemd[1]: session-11.scope: Deactivated successfully.
Feb 9 00:44:16.156707 systemd-logind[1177]: Session 11 logged out. Waiting for processes to exit.
Feb 9 00:44:16.158252 systemd[1]: Started sshd@11-10.0.0.24:22-10.0.0.1:50594.service.
Feb 9 00:44:16.159164 systemd-logind[1177]: Removed session 11.
Feb 9 00:44:16.192673 sshd[3615]: Accepted publickey for core from 10.0.0.1 port 50594 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:44:16.194058 sshd[3615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:44:16.197698 systemd-logind[1177]: New session 12 of user core.
Feb 9 00:44:16.198700 systemd[1]: Started session-12.scope.
Feb 9 00:44:17.030264 sshd[3615]: pam_unix(sshd:session): session closed for user core
Feb 9 00:44:17.033806 systemd[1]: Started sshd@12-10.0.0.24:22-10.0.0.1:50600.service.
Feb 9 00:44:17.034217 systemd[1]: sshd@11-10.0.0.24:22-10.0.0.1:50594.service: Deactivated successfully.
Feb 9 00:44:17.034769 systemd[1]: session-12.scope: Deactivated successfully.
Feb 9 00:44:17.035338 systemd-logind[1177]: Session 12 logged out. Waiting for processes to exit.
Feb 9 00:44:17.036226 systemd-logind[1177]: Removed session 12.
Feb 9 00:44:17.065213 sshd[3625]: Accepted publickey for core from 10.0.0.1 port 50600 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:44:17.066445 sshd[3625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:44:17.069800 systemd-logind[1177]: New session 13 of user core.
Feb 9 00:44:17.070504 systemd[1]: Started session-13.scope.
Feb 9 00:44:17.216209 sshd[3625]: pam_unix(sshd:session): session closed for user core
Feb 9 00:44:17.218242 systemd[1]: sshd@12-10.0.0.24:22-10.0.0.1:50600.service: Deactivated successfully.
Feb 9 00:44:17.219211 systemd[1]: session-13.scope: Deactivated successfully.
Feb 9 00:44:17.220091 systemd-logind[1177]: Session 13 logged out. Waiting for processes to exit.
Feb 9 00:44:17.220794 systemd-logind[1177]: Removed session 13.
Feb 9 00:44:22.221341 systemd[1]: Started sshd@13-10.0.0.24:22-10.0.0.1:50610.service.
Feb 9 00:44:22.251029 sshd[3642]: Accepted publickey for core from 10.0.0.1 port 50610 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:44:22.252185 sshd[3642]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:44:22.255464 systemd-logind[1177]: New session 14 of user core.
Feb 9 00:44:22.256297 systemd[1]: Started session-14.scope.
Feb 9 00:44:22.360916 sshd[3642]: pam_unix(sshd:session): session closed for user core
Feb 9 00:44:22.362861 systemd[1]: sshd@13-10.0.0.24:22-10.0.0.1:50610.service: Deactivated successfully.
Feb 9 00:44:22.363516 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 00:44:22.364075 systemd-logind[1177]: Session 14 logged out. Waiting for processes to exit.
Feb 9 00:44:22.364806 systemd-logind[1177]: Removed session 14.
Feb 9 00:44:27.365553 systemd[1]: Started sshd@14-10.0.0.24:22-10.0.0.1:45518.service.
Feb 9 00:44:27.394202 sshd[3657]: Accepted publickey for core from 10.0.0.1 port 45518 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:44:27.395235 sshd[3657]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:44:27.398191 systemd-logind[1177]: New session 15 of user core.
Feb 9 00:44:27.399186 systemd[1]: Started session-15.scope.
Feb 9 00:44:27.530300 sshd[3657]: pam_unix(sshd:session): session closed for user core
Feb 9 00:44:27.532143 systemd[1]: sshd@14-10.0.0.24:22-10.0.0.1:45518.service: Deactivated successfully.
Feb 9 00:44:27.532858 systemd[1]: session-15.scope: Deactivated successfully.
Feb 9 00:44:27.533368 systemd-logind[1177]: Session 15 logged out. Waiting for processes to exit.
Feb 9 00:44:27.533979 systemd-logind[1177]: Removed session 15.
Feb 9 00:44:32.535941 systemd[1]: Started sshd@15-10.0.0.24:22-10.0.0.1:45532.service.
Feb 9 00:44:32.566659 sshd[3671]: Accepted publickey for core from 10.0.0.1 port 45532 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:44:32.567883 sshd[3671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:44:32.571684 systemd-logind[1177]: New session 16 of user core.
Feb 9 00:44:32.572458 systemd[1]: Started session-16.scope.
Feb 9 00:44:32.672468 sshd[3671]: pam_unix(sshd:session): session closed for user core
Feb 9 00:44:32.675166 systemd[1]: sshd@15-10.0.0.24:22-10.0.0.1:45532.service: Deactivated successfully.
Feb 9 00:44:32.675643 systemd[1]: session-16.scope: Deactivated successfully.
Feb 9 00:44:32.676176 systemd-logind[1177]: Session 16 logged out. Waiting for processes to exit.
Feb 9 00:44:32.677074 systemd[1]: Started sshd@16-10.0.0.24:22-10.0.0.1:45542.service.
Feb 9 00:44:32.677789 systemd-logind[1177]: Removed session 16.
Feb 9 00:44:32.706665 sshd[3684]: Accepted publickey for core from 10.0.0.1 port 45542 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:44:32.707689 sshd[3684]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:44:32.710747 systemd-logind[1177]: New session 17 of user core.
Feb 9 00:44:32.711463 systemd[1]: Started session-17.scope.
Feb 9 00:44:33.184517 sshd[3684]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:33.188086 systemd[1]: sshd@16-10.0.0.24:22-10.0.0.1:45542.service: Deactivated successfully. Feb 9 00:44:33.189086 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 00:44:33.195928 systemd-logind[1177]: Session 17 logged out. Waiting for processes to exit. Feb 9 00:44:33.197839 systemd[1]: Started sshd@17-10.0.0.24:22-10.0.0.1:45544.service. Feb 9 00:44:33.198987 systemd-logind[1177]: Removed session 17. Feb 9 00:44:33.239816 sshd[3695]: Accepted publickey for core from 10.0.0.1 port 45544 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:33.241866 sshd[3695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:33.246417 systemd-logind[1177]: New session 18 of user core. Feb 9 00:44:33.247536 systemd[1]: Started session-18.scope. Feb 9 00:44:34.613658 sshd[3695]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:34.626988 systemd[1]: Started sshd@18-10.0.0.24:22-10.0.0.1:45546.service. Feb 9 00:44:34.628279 systemd[1]: sshd@17-10.0.0.24:22-10.0.0.1:45544.service: Deactivated successfully. Feb 9 00:44:34.629547 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 00:44:34.630882 systemd-logind[1177]: Session 18 logged out. Waiting for processes to exit. Feb 9 00:44:34.632631 systemd-logind[1177]: Removed session 18. Feb 9 00:44:34.680779 sshd[3725]: Accepted publickey for core from 10.0.0.1 port 45546 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:34.682633 sshd[3725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:34.690798 systemd-logind[1177]: New session 19 of user core. Feb 9 00:44:34.691678 systemd[1]: Started session-19.scope. 
Feb 9 00:44:34.946502 sshd[3725]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:34.951012 systemd[1]: sshd@18-10.0.0.24:22-10.0.0.1:45546.service: Deactivated successfully. Feb 9 00:44:34.951866 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 00:44:34.954005 systemd[1]: Started sshd@19-10.0.0.24:22-10.0.0.1:45550.service. Feb 9 00:44:34.955481 systemd-logind[1177]: Session 19 logged out. Waiting for processes to exit. Feb 9 00:44:34.959900 systemd-logind[1177]: Removed session 19. Feb 9 00:44:34.990571 sshd[3778]: Accepted publickey for core from 10.0.0.1 port 45550 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:34.992224 sshd[3778]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:34.997416 systemd-logind[1177]: New session 20 of user core. Feb 9 00:44:34.998603 systemd[1]: Started session-20.scope. Feb 9 00:44:35.132050 sshd[3778]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:35.135121 systemd[1]: sshd@19-10.0.0.24:22-10.0.0.1:45550.service: Deactivated successfully. Feb 9 00:44:35.136134 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 00:44:35.136872 systemd-logind[1177]: Session 20 logged out. Waiting for processes to exit. Feb 9 00:44:35.137694 systemd-logind[1177]: Removed session 20. Feb 9 00:44:35.375673 kubelet[2058]: E0209 00:44:35.375625 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:40.137762 systemd[1]: Started sshd@20-10.0.0.24:22-10.0.0.1:47172.service. Feb 9 00:44:40.170017 sshd[3791]: Accepted publickey for core from 10.0.0.1 port 47172 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:40.171401 sshd[3791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:40.175598 systemd-logind[1177]: New session 21 of user core. 
Feb 9 00:44:40.176840 systemd[1]: Started session-21.scope. Feb 9 00:44:40.292337 sshd[3791]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:40.294484 systemd[1]: sshd@20-10.0.0.24:22-10.0.0.1:47172.service: Deactivated successfully. Feb 9 00:44:40.295309 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 00:44:40.295964 systemd-logind[1177]: Session 21 logged out. Waiting for processes to exit. Feb 9 00:44:40.296763 systemd-logind[1177]: Removed session 21. Feb 9 00:44:41.376060 kubelet[2058]: E0209 00:44:41.375994 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:44.375977 kubelet[2058]: E0209 00:44:44.375931 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:44.376327 kubelet[2058]: E0209 00:44:44.376104 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:45.296439 systemd[1]: Started sshd@21-10.0.0.24:22-10.0.0.1:47182.service. Feb 9 00:44:45.325121 sshd[3831]: Accepted publickey for core from 10.0.0.1 port 47182 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:45.326390 sshd[3831]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:45.330050 systemd-logind[1177]: New session 22 of user core. Feb 9 00:44:45.331133 systemd[1]: Started session-22.scope. Feb 9 00:44:45.431277 sshd[3831]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:45.433329 systemd[1]: sshd@21-10.0.0.24:22-10.0.0.1:47182.service: Deactivated successfully. Feb 9 00:44:45.434054 systemd[1]: session-22.scope: Deactivated successfully. 
Feb 9 00:44:45.434607 systemd-logind[1177]: Session 22 logged out. Waiting for processes to exit. Feb 9 00:44:45.435357 systemd-logind[1177]: Removed session 22. Feb 9 00:44:50.442527 systemd[1]: Started sshd@22-10.0.0.24:22-10.0.0.1:34612.service. Feb 9 00:44:50.514202 sshd[3845]: Accepted publickey for core from 10.0.0.1 port 34612 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:50.514882 sshd[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:50.529329 systemd-logind[1177]: New session 23 of user core. Feb 9 00:44:50.529335 systemd[1]: Started session-23.scope. Feb 9 00:44:50.718844 sshd[3845]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:50.723342 systemd[1]: sshd@22-10.0.0.24:22-10.0.0.1:34612.service: Deactivated successfully. Feb 9 00:44:50.725074 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 00:44:50.726413 systemd-logind[1177]: Session 23 logged out. Waiting for processes to exit. Feb 9 00:44:50.729466 systemd-logind[1177]: Removed session 23. Feb 9 00:44:54.377862 kubelet[2058]: E0209 00:44:54.377537 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:55.755346 systemd[1]: Started sshd@23-10.0.0.24:22-10.0.0.1:34624.service. Feb 9 00:44:55.810709 sshd[3859]: Accepted publickey for core from 10.0.0.1 port 34624 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:55.811539 sshd[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:55.824106 systemd-logind[1177]: New session 24 of user core. Feb 9 00:44:55.824335 systemd[1]: Started session-24.scope. Feb 9 00:44:56.063959 sshd[3859]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:56.067543 systemd[1]: sshd@23-10.0.0.24:22-10.0.0.1:34624.service: Deactivated successfully. 
Feb 9 00:44:56.068581 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 00:44:56.069507 systemd-logind[1177]: Session 24 logged out. Waiting for processes to exit. Feb 9 00:44:56.070668 systemd-logind[1177]: Removed session 24. Feb 9 00:45:01.093191 systemd[1]: Started sshd@24-10.0.0.24:22-10.0.0.1:44300.service. Feb 9 00:45:01.166855 sshd[3873]: Accepted publickey for core from 10.0.0.1 port 44300 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:45:01.168791 sshd[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:45:01.188643 systemd[1]: Started session-25.scope. Feb 9 00:45:01.189038 systemd-logind[1177]: New session 25 of user core. Feb 9 00:45:01.419996 sshd[3873]: pam_unix(sshd:session): session closed for user core Feb 9 00:45:01.426363 systemd[1]: Started sshd@25-10.0.0.24:22-10.0.0.1:44310.service. Feb 9 00:45:01.427058 systemd[1]: sshd@24-10.0.0.24:22-10.0.0.1:44300.service: Deactivated successfully. Feb 9 00:45:01.427950 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 00:45:01.433542 systemd-logind[1177]: Session 25 logged out. Waiting for processes to exit. Feb 9 00:45:01.434677 systemd-logind[1177]: Removed session 25. Feb 9 00:45:01.518572 sshd[3885]: Accepted publickey for core from 10.0.0.1 port 44310 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:45:01.519271 sshd[3885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:45:01.527894 systemd[1]: Started session-26.scope. Feb 9 00:45:01.530921 systemd-logind[1177]: New session 26 of user core. 
Feb 9 00:45:03.211600 env[1186]: time="2024-02-09T00:45:03.211545706Z" level=info msg="StopContainer for \"0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b\" with timeout 30 (s)" Feb 9 00:45:03.212113 env[1186]: time="2024-02-09T00:45:03.211961241Z" level=info msg="Stop container \"0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b\" with signal terminated" Feb 9 00:45:03.218285 env[1186]: time="2024-02-09T00:45:03.218145919Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 00:45:03.222188 systemd[1]: cri-containerd-0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b.scope: Deactivated successfully. Feb 9 00:45:03.230945 env[1186]: time="2024-02-09T00:45:03.230906988Z" level=info msg="StopContainer for \"6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219\" with timeout 1 (s)" Feb 9 00:45:03.231244 env[1186]: time="2024-02-09T00:45:03.231221492Z" level=info msg="Stop container \"6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219\" with signal terminated" Feb 9 00:45:03.238345 systemd-networkd[1090]: lxc_health: Link DOWN Feb 9 00:45:03.238355 systemd-networkd[1090]: lxc_health: Lost carrier Feb 9 00:45:03.243488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b-rootfs.mount: Deactivated successfully. 
Feb 9 00:45:03.261561 env[1186]: time="2024-02-09T00:45:03.261508844Z" level=info msg="shim disconnected" id=0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b Feb 9 00:45:03.261561 env[1186]: time="2024-02-09T00:45:03.261555382Z" level=warning msg="cleaning up after shim disconnected" id=0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b namespace=k8s.io Feb 9 00:45:03.261561 env[1186]: time="2024-02-09T00:45:03.261564659Z" level=info msg="cleaning up dead shim" Feb 9 00:45:03.264146 systemd[1]: cri-containerd-6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219.scope: Deactivated successfully. Feb 9 00:45:03.264458 systemd[1]: cri-containerd-6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219.scope: Consumed 7.154s CPU time. Feb 9 00:45:03.272012 env[1186]: time="2024-02-09T00:45:03.271951039Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3946 runtime=io.containerd.runc.v2\n" Feb 9 00:45:03.274935 env[1186]: time="2024-02-09T00:45:03.274904962Z" level=info msg="StopContainer for \"0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b\" returns successfully" Feb 9 00:45:03.275906 env[1186]: time="2024-02-09T00:45:03.275668595Z" level=info msg="StopPodSandbox for \"be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671\"" Feb 9 00:45:03.275988 env[1186]: time="2024-02-09T00:45:03.275954124Z" level=info msg="Container to stop \"0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 00:45:03.277996 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671-shm.mount: Deactivated successfully. 
Feb 9 00:45:03.282463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219-rootfs.mount: Deactivated successfully. Feb 9 00:45:03.287047 systemd[1]: cri-containerd-be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671.scope: Deactivated successfully. Feb 9 00:45:03.288346 env[1186]: time="2024-02-09T00:45:03.288296912Z" level=info msg="shim disconnected" id=6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219 Feb 9 00:45:03.288426 env[1186]: time="2024-02-09T00:45:03.288348449Z" level=warning msg="cleaning up after shim disconnected" id=6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219 namespace=k8s.io Feb 9 00:45:03.288426 env[1186]: time="2024-02-09T00:45:03.288356865Z" level=info msg="cleaning up dead shim" Feb 9 00:45:03.294690 env[1186]: time="2024-02-09T00:45:03.294639499Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3978 runtime=io.containerd.runc.v2\n" Feb 9 00:45:03.297485 env[1186]: time="2024-02-09T00:45:03.297445722Z" level=info msg="StopContainer for \"6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219\" returns successfully" Feb 9 00:45:03.298201 env[1186]: time="2024-02-09T00:45:03.298153290Z" level=info msg="StopPodSandbox for \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\"" Feb 9 00:45:03.298270 env[1186]: time="2024-02-09T00:45:03.298241777Z" level=info msg="Container to stop \"5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 00:45:03.298304 env[1186]: time="2024-02-09T00:45:03.298267586Z" level=info msg="Container to stop \"ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 00:45:03.298304 env[1186]: 
time="2024-02-09T00:45:03.298283817Z" level=info msg="Container to stop \"d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 00:45:03.298354 env[1186]: time="2024-02-09T00:45:03.298299616Z" level=info msg="Container to stop \"15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 00:45:03.298354 env[1186]: time="2024-02-09T00:45:03.298316228Z" level=info msg="Container to stop \"6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 00:45:03.299891 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad-shm.mount: Deactivated successfully. Feb 9 00:45:03.306164 systemd[1]: cri-containerd-10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad.scope: Deactivated successfully. 
Feb 9 00:45:03.311284 env[1186]: time="2024-02-09T00:45:03.311237348Z" level=info msg="shim disconnected" id=be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671 Feb 9 00:45:03.311571 env[1186]: time="2024-02-09T00:45:03.311287624Z" level=warning msg="cleaning up after shim disconnected" id=be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671 namespace=k8s.io Feb 9 00:45:03.311571 env[1186]: time="2024-02-09T00:45:03.311297212Z" level=info msg="cleaning up dead shim" Feb 9 00:45:03.321104 env[1186]: time="2024-02-09T00:45:03.321050364Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4014 runtime=io.containerd.runc.v2\n" Feb 9 00:45:03.321463 env[1186]: time="2024-02-09T00:45:03.321433227Z" level=info msg="TearDown network for sandbox \"be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671\" successfully" Feb 9 00:45:03.321523 env[1186]: time="2024-02-09T00:45:03.321463034Z" level=info msg="StopPodSandbox for \"be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671\" returns successfully" Feb 9 00:45:03.327342 env[1186]: time="2024-02-09T00:45:03.327280509Z" level=info msg="shim disconnected" id=10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad Feb 9 00:45:03.327496 env[1186]: time="2024-02-09T00:45:03.327346574Z" level=warning msg="cleaning up after shim disconnected" id=10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad namespace=k8s.io Feb 9 00:45:03.327496 env[1186]: time="2024-02-09T00:45:03.327364167Z" level=info msg="cleaning up dead shim" Feb 9 00:45:03.337565 env[1186]: time="2024-02-09T00:45:03.337503950Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4033 runtime=io.containerd.runc.v2\n" Feb 9 00:45:03.337956 env[1186]: time="2024-02-09T00:45:03.337921018Z" level=info msg="TearDown network for sandbox 
\"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\" successfully" Feb 9 00:45:03.337956 env[1186]: time="2024-02-09T00:45:03.337950805Z" level=info msg="StopPodSandbox for \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\" returns successfully" Feb 9 00:45:03.467418 kubelet[2058]: I0209 00:45:03.466353 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-cni-path\") pod \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " Feb 9 00:45:03.467418 kubelet[2058]: I0209 00:45:03.466405 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-etc-cni-netd\") pod \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " Feb 9 00:45:03.467418 kubelet[2058]: I0209 00:45:03.466441 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-cilium-config-path\") pod \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " Feb 9 00:45:03.467418 kubelet[2058]: I0209 00:45:03.466461 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-bpf-maps\") pod \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " Feb 9 00:45:03.467418 kubelet[2058]: I0209 00:45:03.466482 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br75q\" (UniqueName: \"kubernetes.io/projected/a70beb70-e305-4d40-8f88-f7152445c18b-kube-api-access-br75q\") pod \"a70beb70-e305-4d40-8f88-f7152445c18b\" (UID: 
\"a70beb70-e305-4d40-8f88-f7152445c18b\") " Feb 9 00:45:03.467418 kubelet[2058]: I0209 00:45:03.466477 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-cni-path" (OuterVolumeSpecName: "cni-path") pod "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" (UID: "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:03.467952 kubelet[2058]: I0209 00:45:03.466504 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a70beb70-e305-4d40-8f88-f7152445c18b-cilium-config-path\") pod \"a70beb70-e305-4d40-8f88-f7152445c18b\" (UID: \"a70beb70-e305-4d40-8f88-f7152445c18b\") " Feb 9 00:45:03.467952 kubelet[2058]: I0209 00:45:03.466523 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-cilium-run\") pod \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " Feb 9 00:45:03.467952 kubelet[2058]: I0209 00:45:03.466545 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-hubble-tls\") pod \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " Feb 9 00:45:03.467952 kubelet[2058]: I0209 00:45:03.466543 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" (UID: "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:03.467952 kubelet[2058]: I0209 00:45:03.466569 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-hostproc\") pod \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " Feb 9 00:45:03.467952 kubelet[2058]: I0209 00:45:03.466591 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-host-proc-sys-kernel\") pod \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " Feb 9 00:45:03.468145 kubelet[2058]: I0209 00:45:03.466608 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-xtables-lock\") pod \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " Feb 9 00:45:03.468145 kubelet[2058]: I0209 00:45:03.466627 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-lib-modules\") pod \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " Feb 9 00:45:03.468145 kubelet[2058]: I0209 00:45:03.466648 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-clustermesh-secrets\") pod \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " Feb 9 00:45:03.468145 kubelet[2058]: I0209 00:45:03.466667 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-host-proc-sys-net\") pod \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " Feb 9 00:45:03.468145 kubelet[2058]: I0209 00:45:03.466687 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-cilium-cgroup\") pod \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " Feb 9 00:45:03.468145 kubelet[2058]: I0209 00:45:03.466706 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgd9g\" (UniqueName: \"kubernetes.io/projected/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-kube-api-access-cgd9g\") pod \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\" (UID: \"cea384fd-bed8-4dc5-8d33-845ed2d2a2d4\") " Feb 9 00:45:03.468365 kubelet[2058]: I0209 00:45:03.466762 2058 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:03.468365 kubelet[2058]: I0209 00:45:03.466773 2058 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:03.468365 kubelet[2058]: W0209 00:45:03.466818 2058 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 00:45:03.468365 kubelet[2058]: W0209 00:45:03.467173 2058 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/a70beb70-e305-4d40-8f88-f7152445c18b/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 00:45:03.469219 kubelet[2058]: I0209 
00:45:03.468566 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" (UID: "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:03.469219 kubelet[2058]: I0209 00:45:03.468670 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" (UID: "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:03.469219 kubelet[2058]: I0209 00:45:03.468694 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" (UID: "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:03.469219 kubelet[2058]: I0209 00:45:03.468938 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-hostproc" (OuterVolumeSpecName: "hostproc") pod "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" (UID: "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:03.469219 kubelet[2058]: I0209 00:45:03.468964 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" (UID: "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:03.469422 kubelet[2058]: I0209 00:45:03.469159 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" (UID: "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:03.469422 kubelet[2058]: I0209 00:45:03.469186 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" (UID: "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:03.469495 kubelet[2058]: I0209 00:45:03.469463 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a70beb70-e305-4d40-8f88-f7152445c18b-kube-api-access-br75q" (OuterVolumeSpecName: "kube-api-access-br75q") pod "a70beb70-e305-4d40-8f88-f7152445c18b" (UID: "a70beb70-e305-4d40-8f88-f7152445c18b"). InnerVolumeSpecName "kube-api-access-br75q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 00:45:03.469593 kubelet[2058]: I0209 00:45:03.469562 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a70beb70-e305-4d40-8f88-f7152445c18b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a70beb70-e305-4d40-8f88-f7152445c18b" (UID: "a70beb70-e305-4d40-8f88-f7152445c18b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 00:45:03.469593 kubelet[2058]: I0209 00:45:03.469550 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" (UID: "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:03.469813 kubelet[2058]: I0209 00:45:03.469749 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" (UID: "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 00:45:03.471263 kubelet[2058]: I0209 00:45:03.471239 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-kube-api-access-cgd9g" (OuterVolumeSpecName: "kube-api-access-cgd9g") pod "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" (UID: "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4"). InnerVolumeSpecName "kube-api-access-cgd9g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 00:45:03.471340 kubelet[2058]: I0209 00:45:03.471271 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" (UID: "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 00:45:03.472741 kubelet[2058]: I0209 00:45:03.472704 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" (UID: "cea384fd-bed8-4dc5-8d33-845ed2d2a2d4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 00:45:03.567048 kubelet[2058]: I0209 00:45:03.566995 2058 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:03.567048 kubelet[2058]: I0209 00:45:03.567045 2058 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:03.567295 kubelet[2058]: I0209 00:45:03.567067 2058 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-br75q\" (UniqueName: \"kubernetes.io/projected/a70beb70-e305-4d40-8f88-f7152445c18b-kube-api-access-br75q\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:03.567295 kubelet[2058]: I0209 00:45:03.567078 2058 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-cilium-run\") on node \"localhost\" 
DevicePath \"\"" Feb 9 00:45:03.567295 kubelet[2058]: I0209 00:45:03.567090 2058 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:03.567295 kubelet[2058]: I0209 00:45:03.567099 2058 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:03.567295 kubelet[2058]: I0209 00:45:03.567110 2058 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a70beb70-e305-4d40-8f88-f7152445c18b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:03.567295 kubelet[2058]: I0209 00:45:03.567122 2058 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:03.567295 kubelet[2058]: I0209 00:45:03.567132 2058 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:03.567295 kubelet[2058]: I0209 00:45:03.567143 2058 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:03.567475 kubelet[2058]: I0209 00:45:03.567153 2058 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:03.567475 kubelet[2058]: I0209 00:45:03.567164 2058 reconciler_common.go:295] 
"Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:03.567475 kubelet[2058]: I0209 00:45:03.567175 2058 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-cgd9g\" (UniqueName: \"kubernetes.io/projected/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-kube-api-access-cgd9g\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:03.567475 kubelet[2058]: I0209 00:45:03.567185 2058 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:03.657016 kubelet[2058]: I0209 00:45:03.656978 2058 scope.go:115] "RemoveContainer" containerID="0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b" Feb 9 00:45:03.658322 env[1186]: time="2024-02-09T00:45:03.658280723Z" level=info msg="RemoveContainer for \"0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b\"" Feb 9 00:45:03.660761 systemd[1]: Removed slice kubepods-besteffort-poda70beb70_e305_4d40_8f88_f7152445c18b.slice. Feb 9 00:45:03.663473 systemd[1]: Removed slice kubepods-burstable-podcea384fd_bed8_4dc5_8d33_845ed2d2a2d4.slice. Feb 9 00:45:03.663548 systemd[1]: kubepods-burstable-podcea384fd_bed8_4dc5_8d33_845ed2d2a2d4.slice: Consumed 7.239s CPU time. 
Feb 9 00:45:03.756705 env[1186]: time="2024-02-09T00:45:03.756584652Z" level=info msg="RemoveContainer for \"0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b\" returns successfully" Feb 9 00:45:03.757039 kubelet[2058]: I0209 00:45:03.756996 2058 scope.go:115] "RemoveContainer" containerID="0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b" Feb 9 00:45:03.757359 env[1186]: time="2024-02-09T00:45:03.757289824Z" level=error msg="ContainerStatus for \"0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b\": not found" Feb 9 00:45:03.757571 kubelet[2058]: E0209 00:45:03.757544 2058 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b\": not found" containerID="0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b" Feb 9 00:45:03.757571 kubelet[2058]: I0209 00:45:03.757586 2058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b} err="failed to get container status \"0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b\": rpc error: code = NotFound desc = an error occurred when try to find container \"0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b\": not found" Feb 9 00:45:03.757571 kubelet[2058]: I0209 00:45:03.757596 2058 scope.go:115] "RemoveContainer" containerID="6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219" Feb 9 00:45:03.758546 env[1186]: time="2024-02-09T00:45:03.758519479Z" level=info msg="RemoveContainer for \"6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219\"" Feb 9 00:45:03.850919 env[1186]: 
time="2024-02-09T00:45:03.850861230Z" level=info msg="RemoveContainer for \"6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219\" returns successfully" Feb 9 00:45:03.851156 kubelet[2058]: I0209 00:45:03.851101 2058 scope.go:115] "RemoveContainer" containerID="5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15" Feb 9 00:45:03.852401 env[1186]: time="2024-02-09T00:45:03.852358379Z" level=info msg="RemoveContainer for \"5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15\"" Feb 9 00:45:03.941275 env[1186]: time="2024-02-09T00:45:03.941210705Z" level=info msg="RemoveContainer for \"5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15\" returns successfully" Feb 9 00:45:03.941515 kubelet[2058]: I0209 00:45:03.941491 2058 scope.go:115] "RemoveContainer" containerID="15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9" Feb 9 00:45:03.942607 env[1186]: time="2024-02-09T00:45:03.942558683Z" level=info msg="RemoveContainer for \"15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9\"" Feb 9 00:45:04.091902 env[1186]: time="2024-02-09T00:45:04.091837990Z" level=info msg="RemoveContainer for \"15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9\" returns successfully" Feb 9 00:45:04.092198 kubelet[2058]: I0209 00:45:04.092166 2058 scope.go:115] "RemoveContainer" containerID="d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763" Feb 9 00:45:04.093656 env[1186]: time="2024-02-09T00:45:04.093620168Z" level=info msg="RemoveContainer for \"d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763\"" Feb 9 00:45:04.175021 env[1186]: time="2024-02-09T00:45:04.174954294Z" level=info msg="RemoveContainer for \"d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763\" returns successfully" Feb 9 00:45:04.175431 kubelet[2058]: I0209 00:45:04.175288 2058 scope.go:115] "RemoveContainer" containerID="ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91" Feb 9 
00:45:04.178002 env[1186]: time="2024-02-09T00:45:04.177959783Z" level=info msg="RemoveContainer for \"ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91\"" Feb 9 00:45:04.185550 env[1186]: time="2024-02-09T00:45:04.185481107Z" level=info msg="RemoveContainer for \"ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91\" returns successfully" Feb 9 00:45:04.185836 kubelet[2058]: I0209 00:45:04.185807 2058 scope.go:115] "RemoveContainer" containerID="6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219" Feb 9 00:45:04.186165 env[1186]: time="2024-02-09T00:45:04.186092782Z" level=error msg="ContainerStatus for \"6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219\": not found" Feb 9 00:45:04.186276 kubelet[2058]: E0209 00:45:04.186260 2058 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219\": not found" containerID="6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219" Feb 9 00:45:04.186322 kubelet[2058]: I0209 00:45:04.186300 2058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219} err="failed to get container status \"6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219\": rpc error: code = NotFound desc = an error occurred when try to find container \"6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219\": not found" Feb 9 00:45:04.186322 kubelet[2058]: I0209 00:45:04.186314 2058 scope.go:115] "RemoveContainer" containerID="5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15" Feb 9 00:45:04.186467 env[1186]: 
time="2024-02-09T00:45:04.186424770Z" level=error msg="ContainerStatus for \"5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15\": not found" Feb 9 00:45:04.186629 kubelet[2058]: E0209 00:45:04.186594 2058 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15\": not found" containerID="5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15" Feb 9 00:45:04.186847 kubelet[2058]: I0209 00:45:04.186649 2058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15} err="failed to get container status \"5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d445e62b5fef228036dd0784837b1a24cb59036900a43c291e746f54011cb15\": not found" Feb 9 00:45:04.186847 kubelet[2058]: I0209 00:45:04.186668 2058 scope.go:115] "RemoveContainer" containerID="15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9" Feb 9 00:45:04.187042 env[1186]: time="2024-02-09T00:45:04.186967615Z" level=error msg="ContainerStatus for \"15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9\": not found" Feb 9 00:45:04.187216 kubelet[2058]: E0209 00:45:04.187196 2058 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9\": 
not found" containerID="15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9" Feb 9 00:45:04.187263 kubelet[2058]: I0209 00:45:04.187243 2058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9} err="failed to get container status \"15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9\": rpc error: code = NotFound desc = an error occurred when try to find container \"15660d2681f67912f4500dc9925f4c784afff308f2120ec4eeb179b025bc2ff9\": not found" Feb 9 00:45:04.187263 kubelet[2058]: I0209 00:45:04.187262 2058 scope.go:115] "RemoveContainer" containerID="d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763" Feb 9 00:45:04.187604 env[1186]: time="2024-02-09T00:45:04.187534837Z" level=error msg="ContainerStatus for \"d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763\": not found" Feb 9 00:45:04.187713 kubelet[2058]: E0209 00:45:04.187697 2058 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763\": not found" containerID="d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763" Feb 9 00:45:04.187779 kubelet[2058]: I0209 00:45:04.187732 2058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763} err="failed to get container status \"d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9a6284705ab9e408006d9983a8a6d5bb5aa1c8a5f778b62985aea9c594ad763\": not found" Feb 9 00:45:04.187779 
kubelet[2058]: I0209 00:45:04.187741 2058 scope.go:115] "RemoveContainer" containerID="ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91" Feb 9 00:45:04.187968 env[1186]: time="2024-02-09T00:45:04.187913071Z" level=error msg="ContainerStatus for \"ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91\": not found" Feb 9 00:45:04.188066 kubelet[2058]: E0209 00:45:04.188051 2058 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91\": not found" containerID="ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91" Feb 9 00:45:04.188098 kubelet[2058]: I0209 00:45:04.188076 2058 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91} err="failed to get container status \"ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91\": rpc error: code = NotFound desc = an error occurred when try to find container \"ecddefe9826cdd3cedf5f055aca07e65ac4e7973150cc7786e07a2e003895e91\": not found" Feb 9 00:45:04.203747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671-rootfs.mount: Deactivated successfully. Feb 9 00:45:04.203866 systemd[1]: var-lib-kubelet-pods-a70beb70\x2de305\x2d4d40\x2d8f88\x2df7152445c18b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbr75q.mount: Deactivated successfully. Feb 9 00:45:04.203923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad-rootfs.mount: Deactivated successfully. 
Feb 9 00:45:04.203969 systemd[1]: var-lib-kubelet-pods-cea384fd\x2dbed8\x2d4dc5\x2d8d33\x2d845ed2d2a2d4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcgd9g.mount: Deactivated successfully. Feb 9 00:45:04.204026 systemd[1]: var-lib-kubelet-pods-cea384fd\x2dbed8\x2d4dc5\x2d8d33\x2d845ed2d2a2d4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 00:45:04.204076 systemd[1]: var-lib-kubelet-pods-cea384fd\x2dbed8\x2d4dc5\x2d8d33\x2d845ed2d2a2d4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 00:45:04.378298 env[1186]: time="2024-02-09T00:45:04.377007722Z" level=info msg="StopContainer for \"0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b\" with timeout 1 (s)" Feb 9 00:45:04.378298 env[1186]: time="2024-02-09T00:45:04.377078535Z" level=error msg="StopContainer for \"0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b\": not found" Feb 9 00:45:04.378298 env[1186]: time="2024-02-09T00:45:04.377014726Z" level=info msg="StopContainer for \"6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219\" with timeout 1 (s)" Feb 9 00:45:04.378298 env[1186]: time="2024-02-09T00:45:04.377195887Z" level=error msg="StopContainer for \"6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219\": not found" Feb 9 00:45:04.378298 env[1186]: time="2024-02-09T00:45:04.377688518Z" level=info msg="StopPodSandbox for \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\"" Feb 9 00:45:04.378298 env[1186]: time="2024-02-09T00:45:04.377747349Z" level=info msg="StopPodSandbox for 
\"be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671\"" Feb 9 00:45:04.378298 env[1186]: time="2024-02-09T00:45:04.377845516Z" level=info msg="TearDown network for sandbox \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\" successfully" Feb 9 00:45:04.378298 env[1186]: time="2024-02-09T00:45:04.377890450Z" level=info msg="StopPodSandbox for \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\" returns successfully" Feb 9 00:45:04.378298 env[1186]: time="2024-02-09T00:45:04.377927120Z" level=info msg="TearDown network for sandbox \"be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671\" successfully" Feb 9 00:45:04.378298 env[1186]: time="2024-02-09T00:45:04.377960061Z" level=info msg="StopPodSandbox for \"be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671\" returns successfully" Feb 9 00:45:04.379005 kubelet[2058]: E0209 00:45:04.377335 2058 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b\": not found" containerID="0421dce762ff4f79a5ff6e8d9f35ac531bdf0b08ef0dc5bdbeba21c6f0f7a96b" Feb 9 00:45:04.379005 kubelet[2058]: E0209 00:45:04.377528 2058 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219\": not found" containerID="6741a05073906c8cb51129c885adcd4bfabb542d84916f3076d6351b98180219" Feb 9 00:45:04.379005 kubelet[2058]: I0209 00:45:04.378437 2058 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a70beb70-e305-4d40-8f88-f7152445c18b path="/var/lib/kubelet/pods/a70beb70-e305-4d40-8f88-f7152445c18b/volumes" Feb 9 00:45:04.379005 kubelet[2058]: I0209 00:45:04.378844 2058 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" 
podUID=cea384fd-bed8-4dc5-8d33-845ed2d2a2d4 path="/var/lib/kubelet/pods/cea384fd-bed8-4dc5-8d33-845ed2d2a2d4/volumes" Feb 9 00:45:05.133981 sshd[3885]: pam_unix(sshd:session): session closed for user core Feb 9 00:45:05.136818 systemd[1]: sshd@25-10.0.0.24:22-10.0.0.1:44310.service: Deactivated successfully. Feb 9 00:45:05.137508 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 00:45:05.138112 systemd-logind[1177]: Session 26 logged out. Waiting for processes to exit. Feb 9 00:45:05.139245 systemd[1]: Started sshd@26-10.0.0.24:22-10.0.0.1:44326.service. Feb 9 00:45:05.140179 systemd-logind[1177]: Removed session 26. Feb 9 00:45:05.168798 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 44326 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:45:05.169670 sshd[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:45:05.173204 systemd-logind[1177]: New session 27 of user core. Feb 9 00:45:05.173931 systemd[1]: Started session-27.scope. Feb 9 00:45:05.743984 sshd[4051]: pam_unix(sshd:session): session closed for user core Feb 9 00:45:05.747068 systemd[1]: sshd@26-10.0.0.24:22-10.0.0.1:44326.service: Deactivated successfully. Feb 9 00:45:05.747657 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 00:45:05.750205 systemd[1]: Started sshd@27-10.0.0.24:22-10.0.0.1:44338.service. Feb 9 00:45:05.750908 systemd-logind[1177]: Session 27 logged out. Waiting for processes to exit. Feb 9 00:45:05.751741 systemd-logind[1177]: Removed session 27. Feb 9 00:45:05.778971 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 44338 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:45:05.780161 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:45:05.783543 systemd-logind[1177]: New session 28 of user core. Feb 9 00:45:05.784603 systemd[1]: Started session-28.scope. 
Feb 9 00:45:06.009576 kubelet[2058]: I0209 00:45:06.009444 2058 topology_manager.go:210] "Topology Admit Handler" Feb 9 00:45:06.009576 kubelet[2058]: E0209 00:45:06.009538 2058 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a70beb70-e305-4d40-8f88-f7152445c18b" containerName="cilium-operator" Feb 9 00:45:06.009576 kubelet[2058]: E0209 00:45:06.009550 2058 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" containerName="mount-cgroup" Feb 9 00:45:06.009576 kubelet[2058]: E0209 00:45:06.009557 2058 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" containerName="mount-bpf-fs" Feb 9 00:45:06.009576 kubelet[2058]: E0209 00:45:06.009564 2058 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" containerName="clean-cilium-state" Feb 9 00:45:06.009576 kubelet[2058]: E0209 00:45:06.009573 2058 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" containerName="apply-sysctl-overwrites" Feb 9 00:45:06.009576 kubelet[2058]: E0209 00:45:06.009581 2058 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" containerName="cilium-agent" Feb 9 00:45:06.010335 kubelet[2058]: I0209 00:45:06.009604 2058 memory_manager.go:346] "RemoveStaleState removing state" podUID="a70beb70-e305-4d40-8f88-f7152445c18b" containerName="cilium-operator" Feb 9 00:45:06.010335 kubelet[2058]: I0209 00:45:06.009612 2058 memory_manager.go:346] "RemoveStaleState removing state" podUID="cea384fd-bed8-4dc5-8d33-845ed2d2a2d4" containerName="cilium-agent" Feb 9 00:45:06.015438 systemd[1]: Created slice kubepods-burstable-pod4b0e3910_9dae_4369_a7da_ddb449c692da.slice. 
Feb 9 00:45:06.030688 sshd[4063]: pam_unix(sshd:session): session closed for user core Feb 9 00:45:06.033163 systemd[1]: sshd@27-10.0.0.24:22-10.0.0.1:44338.service: Deactivated successfully. Feb 9 00:45:06.033655 systemd[1]: session-28.scope: Deactivated successfully. Feb 9 00:45:06.034086 systemd-logind[1177]: Session 28 logged out. Waiting for processes to exit. Feb 9 00:45:06.035031 systemd[1]: Started sshd@28-10.0.0.24:22-10.0.0.1:40726.service. Feb 9 00:45:06.039465 systemd-logind[1177]: Removed session 28. Feb 9 00:45:06.067236 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 40726 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:45:06.068379 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:45:06.075769 systemd-logind[1177]: New session 29 of user core. Feb 9 00:45:06.076311 systemd[1]: Started session-29.scope. Feb 9 00:45:06.079805 kubelet[2058]: I0209 00:45:06.079778 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-cilium-run\") pod \"cilium-rpj5s\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " pod="kube-system/cilium-rpj5s" Feb 9 00:45:06.180413 kubelet[2058]: I0209 00:45:06.180361 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-host-proc-sys-net\") pod \"cilium-rpj5s\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " pod="kube-system/cilium-rpj5s" Feb 9 00:45:06.180413 kubelet[2058]: I0209 00:45:06.180413 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-host-proc-sys-kernel\") pod \"cilium-rpj5s\" (UID: 
\"4b0e3910-9dae-4369-a7da-ddb449c692da\") " pod="kube-system/cilium-rpj5s" Feb 9 00:45:06.180632 kubelet[2058]: I0209 00:45:06.180448 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-cilium-cgroup\") pod \"cilium-rpj5s\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " pod="kube-system/cilium-rpj5s" Feb 9 00:45:06.180632 kubelet[2058]: I0209 00:45:06.180473 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4b0e3910-9dae-4369-a7da-ddb449c692da-cilium-ipsec-secrets\") pod \"cilium-rpj5s\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " pod="kube-system/cilium-rpj5s" Feb 9 00:45:06.180632 kubelet[2058]: I0209 00:45:06.180512 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-hostproc\") pod \"cilium-rpj5s\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " pod="kube-system/cilium-rpj5s" Feb 9 00:45:06.180632 kubelet[2058]: I0209 00:45:06.180539 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b0e3910-9dae-4369-a7da-ddb449c692da-clustermesh-secrets\") pod \"cilium-rpj5s\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " pod="kube-system/cilium-rpj5s" Feb 9 00:45:06.180632 kubelet[2058]: I0209 00:45:06.180562 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b0e3910-9dae-4369-a7da-ddb449c692da-hubble-tls\") pod \"cilium-rpj5s\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " pod="kube-system/cilium-rpj5s" Feb 9 00:45:06.180632 kubelet[2058]: I0209 00:45:06.180585 
2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-xtables-lock\") pod \"cilium-rpj5s\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " pod="kube-system/cilium-rpj5s" Feb 9 00:45:06.180885 kubelet[2058]: I0209 00:45:06.180612 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5bql\" (UniqueName: \"kubernetes.io/projected/4b0e3910-9dae-4369-a7da-ddb449c692da-kube-api-access-h5bql\") pod \"cilium-rpj5s\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " pod="kube-system/cilium-rpj5s" Feb 9 00:45:06.180885 kubelet[2058]: I0209 00:45:06.180638 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-cni-path\") pod \"cilium-rpj5s\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " pod="kube-system/cilium-rpj5s" Feb 9 00:45:06.180885 kubelet[2058]: I0209 00:45:06.180664 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-bpf-maps\") pod \"cilium-rpj5s\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " pod="kube-system/cilium-rpj5s" Feb 9 00:45:06.180885 kubelet[2058]: I0209 00:45:06.180689 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-lib-modules\") pod \"cilium-rpj5s\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " pod="kube-system/cilium-rpj5s" Feb 9 00:45:06.180885 kubelet[2058]: I0209 00:45:06.180714 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/4b0e3910-9dae-4369-a7da-ddb449c692da-cilium-config-path\") pod \"cilium-rpj5s\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " pod="kube-system/cilium-rpj5s" Feb 9 00:45:06.180885 kubelet[2058]: I0209 00:45:06.180766 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-etc-cni-netd\") pod \"cilium-rpj5s\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " pod="kube-system/cilium-rpj5s" Feb 9 00:45:06.319271 kubelet[2058]: E0209 00:45:06.319216 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:45:06.319904 env[1186]: time="2024-02-09T00:45:06.319840860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rpj5s,Uid:4b0e3910-9dae-4369-a7da-ddb449c692da,Namespace:kube-system,Attempt:0,}" Feb 9 00:45:06.332575 env[1186]: time="2024-02-09T00:45:06.332488054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:45:06.332575 env[1186]: time="2024-02-09T00:45:06.332539120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:45:06.332575 env[1186]: time="2024-02-09T00:45:06.332553728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:45:06.332792 env[1186]: time="2024-02-09T00:45:06.332687741Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4 pid=4102 runtime=io.containerd.runc.v2 Feb 9 00:45:06.344227 systemd[1]: Started cri-containerd-696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4.scope. Feb 9 00:45:06.371325 env[1186]: time="2024-02-09T00:45:06.371282684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rpj5s,Uid:4b0e3910-9dae-4369-a7da-ddb449c692da,Namespace:kube-system,Attempt:0,} returns sandbox id \"696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4\"" Feb 9 00:45:06.372353 kubelet[2058]: E0209 00:45:06.372159 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:45:06.374377 env[1186]: time="2024-02-09T00:45:06.374340641Z" level=info msg="CreateContainer within sandbox \"696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 00:45:06.390589 env[1186]: time="2024-02-09T00:45:06.390515088Z" level=info msg="CreateContainer within sandbox \"696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"31e90c9f8ae568f8b3c11dfdc5969305a947dbe1786f668b50444771d7c82155\"" Feb 9 00:45:06.392485 env[1186]: time="2024-02-09T00:45:06.391266718Z" level=info msg="StartContainer for \"31e90c9f8ae568f8b3c11dfdc5969305a947dbe1786f668b50444771d7c82155\"" Feb 9 00:45:06.407020 systemd[1]: Started cri-containerd-31e90c9f8ae568f8b3c11dfdc5969305a947dbe1786f668b50444771d7c82155.scope. 
Feb 9 00:45:06.416336 systemd[1]: cri-containerd-31e90c9f8ae568f8b3c11dfdc5969305a947dbe1786f668b50444771d7c82155.scope: Deactivated successfully. Feb 9 00:45:06.416620 systemd[1]: Stopped cri-containerd-31e90c9f8ae568f8b3c11dfdc5969305a947dbe1786f668b50444771d7c82155.scope. Feb 9 00:45:06.435076 env[1186]: time="2024-02-09T00:45:06.435004757Z" level=info msg="shim disconnected" id=31e90c9f8ae568f8b3c11dfdc5969305a947dbe1786f668b50444771d7c82155 Feb 9 00:45:06.435076 env[1186]: time="2024-02-09T00:45:06.435060462Z" level=warning msg="cleaning up after shim disconnected" id=31e90c9f8ae568f8b3c11dfdc5969305a947dbe1786f668b50444771d7c82155 namespace=k8s.io Feb 9 00:45:06.435076 env[1186]: time="2024-02-09T00:45:06.435069960Z" level=info msg="cleaning up dead shim" Feb 9 00:45:06.442138 env[1186]: time="2024-02-09T00:45:06.442103578Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4160 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T00:45:06Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/31e90c9f8ae568f8b3c11dfdc5969305a947dbe1786f668b50444771d7c82155/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 00:45:06.442396 env[1186]: time="2024-02-09T00:45:06.442302404Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Feb 9 00:45:06.442663 env[1186]: time="2024-02-09T00:45:06.442597401Z" level=error msg="Failed to pipe stderr of container \"31e90c9f8ae568f8b3c11dfdc5969305a947dbe1786f668b50444771d7c82155\"" error="reading from a closed fifo" Feb 9 00:45:06.442745 env[1186]: time="2024-02-09T00:45:06.442696889Z" level=error msg="Failed to pipe stdout of container \"31e90c9f8ae568f8b3c11dfdc5969305a947dbe1786f668b50444771d7c82155\"" error="reading from a closed fifo" Feb 9 00:45:06.444467 env[1186]: time="2024-02-09T00:45:06.444421226Z" level=error 
msg="StartContainer for \"31e90c9f8ae568f8b3c11dfdc5969305a947dbe1786f668b50444771d7c82155\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 00:45:06.444694 kubelet[2058]: E0209 00:45:06.444666 2058 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="31e90c9f8ae568f8b3c11dfdc5969305a947dbe1786f668b50444771d7c82155" Feb 9 00:45:06.444849 kubelet[2058]: E0209 00:45:06.444831 2058 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 00:45:06.444849 kubelet[2058]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 00:45:06.444849 kubelet[2058]: rm /hostbin/cilium-mount Feb 9 00:45:06.444849 kubelet[2058]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-h5bql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-rpj5s_kube-system(4b0e3910-9dae-4369-a7da-ddb449c692da): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 00:45:06.445139 kubelet[2058]: E0209 00:45:06.444873 2058 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error 
during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rpj5s" podUID=4b0e3910-9dae-4369-a7da-ddb449c692da Feb 9 00:45:06.669515 env[1186]: time="2024-02-09T00:45:06.669359448Z" level=info msg="StopPodSandbox for \"696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4\"" Feb 9 00:45:06.669515 env[1186]: time="2024-02-09T00:45:06.669416495Z" level=info msg="Container to stop \"31e90c9f8ae568f8b3c11dfdc5969305a947dbe1786f668b50444771d7c82155\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 00:45:06.676160 systemd[1]: cri-containerd-696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4.scope: Deactivated successfully. Feb 9 00:45:06.703972 env[1186]: time="2024-02-09T00:45:06.703919128Z" level=info msg="shim disconnected" id=696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4 Feb 9 00:45:06.703972 env[1186]: time="2024-02-09T00:45:06.703967028Z" level=warning msg="cleaning up after shim disconnected" id=696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4 namespace=k8s.io Feb 9 00:45:06.703972 env[1186]: time="2024-02-09T00:45:06.703975183Z" level=info msg="cleaning up dead shim" Feb 9 00:45:06.711078 env[1186]: time="2024-02-09T00:45:06.711021916Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4191 runtime=io.containerd.runc.v2\n" Feb 9 00:45:06.711357 env[1186]: time="2024-02-09T00:45:06.711325309Z" level=info msg="TearDown network for sandbox \"696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4\" successfully" Feb 9 00:45:06.711398 env[1186]: time="2024-02-09T00:45:06.711354043Z" level=info msg="StopPodSandbox for \"696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4\" returns successfully" Feb 9 00:45:06.784886 kubelet[2058]: I0209 00:45:06.784839 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-host-proc-sys-net\") pod \"4b0e3910-9dae-4369-a7da-ddb449c692da\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " Feb 9 00:45:06.784886 kubelet[2058]: I0209 00:45:06.784893 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-cilium-cgroup\") pod \"4b0e3910-9dae-4369-a7da-ddb449c692da\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " Feb 9 00:45:06.785123 kubelet[2058]: I0209 00:45:06.784928 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b0e3910-9dae-4369-a7da-ddb449c692da-clustermesh-secrets\") pod \"4b0e3910-9dae-4369-a7da-ddb449c692da\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " Feb 9 00:45:06.785123 kubelet[2058]: I0209 00:45:06.784954 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-cni-path\") pod \"4b0e3910-9dae-4369-a7da-ddb449c692da\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " Feb 9 00:45:06.785123 kubelet[2058]: I0209 00:45:06.784976 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-cilium-run\") pod \"4b0e3910-9dae-4369-a7da-ddb449c692da\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " Feb 9 00:45:06.785123 kubelet[2058]: I0209 00:45:06.784969 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4b0e3910-9dae-4369-a7da-ddb449c692da" (UID: "4b0e3910-9dae-4369-a7da-ddb449c692da"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:06.785123 kubelet[2058]: I0209 00:45:06.784950 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4b0e3910-9dae-4369-a7da-ddb449c692da" (UID: "4b0e3910-9dae-4369-a7da-ddb449c692da"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:06.785353 kubelet[2058]: I0209 00:45:06.785003 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4b0e3910-9dae-4369-a7da-ddb449c692da-cilium-ipsec-secrets\") pod \"4b0e3910-9dae-4369-a7da-ddb449c692da\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " Feb 9 00:45:06.785353 kubelet[2058]: I0209 00:45:06.785025 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-etc-cni-netd\") pod \"4b0e3910-9dae-4369-a7da-ddb449c692da\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " Feb 9 00:45:06.785353 kubelet[2058]: I0209 00:45:06.785023 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-cni-path" (OuterVolumeSpecName: "cni-path") pod "4b0e3910-9dae-4369-a7da-ddb449c692da" (UID: "4b0e3910-9dae-4369-a7da-ddb449c692da"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:06.785353 kubelet[2058]: I0209 00:45:06.785045 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-hostproc\") pod \"4b0e3910-9dae-4369-a7da-ddb449c692da\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " Feb 9 00:45:06.785353 kubelet[2058]: I0209 00:45:06.785072 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5bql\" (UniqueName: \"kubernetes.io/projected/4b0e3910-9dae-4369-a7da-ddb449c692da-kube-api-access-h5bql\") pod \"4b0e3910-9dae-4369-a7da-ddb449c692da\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " Feb 9 00:45:06.785353 kubelet[2058]: I0209 00:45:06.785094 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-host-proc-sys-kernel\") pod \"4b0e3910-9dae-4369-a7da-ddb449c692da\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " Feb 9 00:45:06.785542 kubelet[2058]: I0209 00:45:06.785116 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b0e3910-9dae-4369-a7da-ddb449c692da-hubble-tls\") pod \"4b0e3910-9dae-4369-a7da-ddb449c692da\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " Feb 9 00:45:06.785542 kubelet[2058]: I0209 00:45:06.785147 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-lib-modules\") pod \"4b0e3910-9dae-4369-a7da-ddb449c692da\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " Feb 9 00:45:06.785542 kubelet[2058]: I0209 00:45:06.785170 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-xtables-lock\") pod \"4b0e3910-9dae-4369-a7da-ddb449c692da\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " Feb 9 00:45:06.785542 kubelet[2058]: I0209 00:45:06.785190 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-bpf-maps\") pod \"4b0e3910-9dae-4369-a7da-ddb449c692da\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " Feb 9 00:45:06.785542 kubelet[2058]: I0209 00:45:06.785219 2058 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b0e3910-9dae-4369-a7da-ddb449c692da-cilium-config-path\") pod \"4b0e3910-9dae-4369-a7da-ddb449c692da\" (UID: \"4b0e3910-9dae-4369-a7da-ddb449c692da\") " Feb 9 00:45:06.785542 kubelet[2058]: I0209 00:45:06.785256 2058 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:06.785542 kubelet[2058]: I0209 00:45:06.785272 2058 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:06.785799 kubelet[2058]: I0209 00:45:06.785285 2058 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:06.785799 kubelet[2058]: W0209 00:45:06.785433 2058 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/4b0e3910-9dae-4369-a7da-ddb449c692da/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 00:45:06.785799 kubelet[2058]: I0209 00:45:06.785506 
2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4b0e3910-9dae-4369-a7da-ddb449c692da" (UID: "4b0e3910-9dae-4369-a7da-ddb449c692da"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:06.785799 kubelet[2058]: I0209 00:45:06.785532 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-hostproc" (OuterVolumeSpecName: "hostproc") pod "4b0e3910-9dae-4369-a7da-ddb449c692da" (UID: "4b0e3910-9dae-4369-a7da-ddb449c692da"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:06.785799 kubelet[2058]: I0209 00:45:06.785573 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4b0e3910-9dae-4369-a7da-ddb449c692da" (UID: "4b0e3910-9dae-4369-a7da-ddb449c692da"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:06.785799 kubelet[2058]: I0209 00:45:06.785596 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4b0e3910-9dae-4369-a7da-ddb449c692da" (UID: "4b0e3910-9dae-4369-a7da-ddb449c692da"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:06.785994 kubelet[2058]: I0209 00:45:06.785659 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4b0e3910-9dae-4369-a7da-ddb449c692da" (UID: "4b0e3910-9dae-4369-a7da-ddb449c692da"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:06.785994 kubelet[2058]: I0209 00:45:06.785692 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4b0e3910-9dae-4369-a7da-ddb449c692da" (UID: "4b0e3910-9dae-4369-a7da-ddb449c692da"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:06.785994 kubelet[2058]: I0209 00:45:06.785713 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4b0e3910-9dae-4369-a7da-ddb449c692da" (UID: "4b0e3910-9dae-4369-a7da-ddb449c692da"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:06.787920 kubelet[2058]: I0209 00:45:06.787882 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b0e3910-9dae-4369-a7da-ddb449c692da-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4b0e3910-9dae-4369-a7da-ddb449c692da" (UID: "4b0e3910-9dae-4369-a7da-ddb449c692da"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 00:45:06.788409 kubelet[2058]: I0209 00:45:06.788376 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b0e3910-9dae-4369-a7da-ddb449c692da-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4b0e3910-9dae-4369-a7da-ddb449c692da" (UID: "4b0e3910-9dae-4369-a7da-ddb449c692da"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 00:45:06.788499 kubelet[2058]: I0209 00:45:06.788462 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b0e3910-9dae-4369-a7da-ddb449c692da-kube-api-access-h5bql" (OuterVolumeSpecName: "kube-api-access-h5bql") pod "4b0e3910-9dae-4369-a7da-ddb449c692da" (UID: "4b0e3910-9dae-4369-a7da-ddb449c692da"). InnerVolumeSpecName "kube-api-access-h5bql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 00:45:06.788636 kubelet[2058]: I0209 00:45:06.788607 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b0e3910-9dae-4369-a7da-ddb449c692da-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "4b0e3910-9dae-4369-a7da-ddb449c692da" (UID: "4b0e3910-9dae-4369-a7da-ddb449c692da"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 00:45:06.790014 kubelet[2058]: I0209 00:45:06.789968 2058 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b0e3910-9dae-4369-a7da-ddb449c692da-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4b0e3910-9dae-4369-a7da-ddb449c692da" (UID: "4b0e3910-9dae-4369-a7da-ddb449c692da"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 00:45:06.885942 kubelet[2058]: I0209 00:45:06.885903 2058 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:06.885942 kubelet[2058]: I0209 00:45:06.885938 2058 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b0e3910-9dae-4369-a7da-ddb449c692da-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:06.885942 kubelet[2058]: I0209 00:45:06.885956 2058 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:06.886185 kubelet[2058]: I0209 00:45:06.885967 2058 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b0e3910-9dae-4369-a7da-ddb449c692da-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:06.886185 kubelet[2058]: I0209 00:45:06.885978 2058 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:06.886185 kubelet[2058]: I0209 00:45:06.885989 2058 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4b0e3910-9dae-4369-a7da-ddb449c692da-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:06.886185 kubelet[2058]: I0209 00:45:06.885999 2058 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:06.886185 kubelet[2058]: I0209 
00:45:06.886010 2058 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:06.886185 kubelet[2058]: I0209 00:45:06.886022 2058 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-h5bql\" (UniqueName: \"kubernetes.io/projected/4b0e3910-9dae-4369-a7da-ddb449c692da-kube-api-access-h5bql\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:06.886185 kubelet[2058]: I0209 00:45:06.886033 2058 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:06.886185 kubelet[2058]: I0209 00:45:06.886044 2058 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b0e3910-9dae-4369-a7da-ddb449c692da-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:06.886386 kubelet[2058]: I0209 00:45:06.886054 2058 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b0e3910-9dae-4369-a7da-ddb449c692da-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:07.286170 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4-shm.mount: Deactivated successfully. Feb 9 00:45:07.286301 systemd[1]: var-lib-kubelet-pods-4b0e3910\x2d9dae\x2d4369\x2da7da\x2dddb449c692da-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh5bql.mount: Deactivated successfully. Feb 9 00:45:07.286386 systemd[1]: var-lib-kubelet-pods-4b0e3910\x2d9dae\x2d4369\x2da7da\x2dddb449c692da-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Feb 9 00:45:07.286457 systemd[1]: var-lib-kubelet-pods-4b0e3910\x2d9dae\x2d4369\x2da7da\x2dddb449c692da-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 00:45:07.286524 systemd[1]: var-lib-kubelet-pods-4b0e3910\x2d9dae\x2d4369\x2da7da\x2dddb449c692da-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 00:45:07.418745 kubelet[2058]: E0209 00:45:07.418707 2058 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 00:45:07.672031 kubelet[2058]: I0209 00:45:07.671922 2058 scope.go:115] "RemoveContainer" containerID="31e90c9f8ae568f8b3c11dfdc5969305a947dbe1786f668b50444771d7c82155" Feb 9 00:45:07.672697 env[1186]: time="2024-02-09T00:45:07.672655476Z" level=info msg="RemoveContainer for \"31e90c9f8ae568f8b3c11dfdc5969305a947dbe1786f668b50444771d7c82155\"" Feb 9 00:45:07.676118 systemd[1]: Removed slice kubepods-burstable-pod4b0e3910_9dae_4369_a7da_ddb449c692da.slice. Feb 9 00:45:07.677750 env[1186]: time="2024-02-09T00:45:07.677702971Z" level=info msg="RemoveContainer for \"31e90c9f8ae568f8b3c11dfdc5969305a947dbe1786f668b50444771d7c82155\" returns successfully" Feb 9 00:45:07.700464 kubelet[2058]: I0209 00:45:07.700428 2058 topology_manager.go:210] "Topology Admit Handler" Feb 9 00:45:07.700659 kubelet[2058]: E0209 00:45:07.700493 2058 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b0e3910-9dae-4369-a7da-ddb449c692da" containerName="mount-cgroup" Feb 9 00:45:07.700659 kubelet[2058]: I0209 00:45:07.700529 2058 memory_manager.go:346] "RemoveStaleState removing state" podUID="4b0e3910-9dae-4369-a7da-ddb449c692da" containerName="mount-cgroup" Feb 9 00:45:07.705120 systemd[1]: Created slice kubepods-burstable-pod0bc8400b_b686_4954_855b_0dbc3c3d1b44.slice. 
Feb 9 00:45:07.790988 kubelet[2058]: I0209 00:45:07.790945 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0bc8400b-b686-4954-855b-0dbc3c3d1b44-clustermesh-secrets\") pod \"cilium-c5cj9\" (UID: \"0bc8400b-b686-4954-855b-0dbc3c3d1b44\") " pod="kube-system/cilium-c5cj9"
Feb 9 00:45:07.790988 kubelet[2058]: I0209 00:45:07.790991 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0bc8400b-b686-4954-855b-0dbc3c3d1b44-etc-cni-netd\") pod \"cilium-c5cj9\" (UID: \"0bc8400b-b686-4954-855b-0dbc3c3d1b44\") " pod="kube-system/cilium-c5cj9"
Feb 9 00:45:07.791188 kubelet[2058]: I0209 00:45:07.791016 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bc8400b-b686-4954-855b-0dbc3c3d1b44-xtables-lock\") pod \"cilium-c5cj9\" (UID: \"0bc8400b-b686-4954-855b-0dbc3c3d1b44\") " pod="kube-system/cilium-c5cj9"
Feb 9 00:45:07.791188 kubelet[2058]: I0209 00:45:07.791064 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bc8400b-b686-4954-855b-0dbc3c3d1b44-cilium-config-path\") pod \"cilium-c5cj9\" (UID: \"0bc8400b-b686-4954-855b-0dbc3c3d1b44\") " pod="kube-system/cilium-c5cj9"
Feb 9 00:45:07.791188 kubelet[2058]: I0209 00:45:07.791111 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0bc8400b-b686-4954-855b-0dbc3c3d1b44-hubble-tls\") pod \"cilium-c5cj9\" (UID: \"0bc8400b-b686-4954-855b-0dbc3c3d1b44\") " pod="kube-system/cilium-c5cj9"
Feb 9 00:45:07.791267 kubelet[2058]: I0209 00:45:07.791187 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0bc8400b-b686-4954-855b-0dbc3c3d1b44-bpf-maps\") pod \"cilium-c5cj9\" (UID: \"0bc8400b-b686-4954-855b-0dbc3c3d1b44\") " pod="kube-system/cilium-c5cj9"
Feb 9 00:45:07.791267 kubelet[2058]: I0209 00:45:07.791232 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bc8400b-b686-4954-855b-0dbc3c3d1b44-lib-modules\") pod \"cilium-c5cj9\" (UID: \"0bc8400b-b686-4954-855b-0dbc3c3d1b44\") " pod="kube-system/cilium-c5cj9"
Feb 9 00:45:07.791315 kubelet[2058]: I0209 00:45:07.791283 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0bc8400b-b686-4954-855b-0dbc3c3d1b44-host-proc-sys-net\") pod \"cilium-c5cj9\" (UID: \"0bc8400b-b686-4954-855b-0dbc3c3d1b44\") " pod="kube-system/cilium-c5cj9"
Feb 9 00:45:07.791374 kubelet[2058]: I0209 00:45:07.791360 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0bc8400b-b686-4954-855b-0dbc3c3d1b44-cilium-run\") pod \"cilium-c5cj9\" (UID: \"0bc8400b-b686-4954-855b-0dbc3c3d1b44\") " pod="kube-system/cilium-c5cj9"
Feb 9 00:45:07.791444 kubelet[2058]: I0209 00:45:07.791401 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0bc8400b-b686-4954-855b-0dbc3c3d1b44-cilium-cgroup\") pod \"cilium-c5cj9\" (UID: \"0bc8400b-b686-4954-855b-0dbc3c3d1b44\") " pod="kube-system/cilium-c5cj9"
Feb 9 00:45:07.791476 kubelet[2058]: I0209 00:45:07.791451 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0bc8400b-b686-4954-855b-0dbc3c3d1b44-host-proc-sys-kernel\") pod \"cilium-c5cj9\" (UID: \"0bc8400b-b686-4954-855b-0dbc3c3d1b44\") " pod="kube-system/cilium-c5cj9"
Feb 9 00:45:07.791503 kubelet[2058]: I0209 00:45:07.791485 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0bc8400b-b686-4954-855b-0dbc3c3d1b44-cni-path\") pod \"cilium-c5cj9\" (UID: \"0bc8400b-b686-4954-855b-0dbc3c3d1b44\") " pod="kube-system/cilium-c5cj9"
Feb 9 00:45:07.791526 kubelet[2058]: I0209 00:45:07.791512 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0bc8400b-b686-4954-855b-0dbc3c3d1b44-cilium-ipsec-secrets\") pod \"cilium-c5cj9\" (UID: \"0bc8400b-b686-4954-855b-0dbc3c3d1b44\") " pod="kube-system/cilium-c5cj9"
Feb 9 00:45:07.791566 kubelet[2058]: I0209 00:45:07.791551 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v6m7\" (UniqueName: \"kubernetes.io/projected/0bc8400b-b686-4954-855b-0dbc3c3d1b44-kube-api-access-4v6m7\") pod \"cilium-c5cj9\" (UID: \"0bc8400b-b686-4954-855b-0dbc3c3d1b44\") " pod="kube-system/cilium-c5cj9"
Feb 9 00:45:07.791631 kubelet[2058]: I0209 00:45:07.791586 2058 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0bc8400b-b686-4954-855b-0dbc3c3d1b44-hostproc\") pod \"cilium-c5cj9\" (UID: \"0bc8400b-b686-4954-855b-0dbc3c3d1b44\") " pod="kube-system/cilium-c5cj9"
Feb 9 00:45:08.007711 kubelet[2058]: E0209 00:45:08.007585 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:08.008507 env[1186]: time="2024-02-09T00:45:08.008140369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c5cj9,Uid:0bc8400b-b686-4954-855b-0dbc3c3d1b44,Namespace:kube-system,Attempt:0,}"
Feb 9 00:45:08.077764 env[1186]: time="2024-02-09T00:45:08.076469234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 00:45:08.077764 env[1186]: time="2024-02-09T00:45:08.076503488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 00:45:08.077764 env[1186]: time="2024-02-09T00:45:08.076515771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 00:45:08.077764 env[1186]: time="2024-02-09T00:45:08.076732460Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bcab12f05b1922e5dd6448ccd298a7d1de11b1f8227b1a324b2974d159ba3aa3 pid=4218 runtime=io.containerd.runc.v2
Feb 9 00:45:08.092912 systemd[1]: Started cri-containerd-bcab12f05b1922e5dd6448ccd298a7d1de11b1f8227b1a324b2974d159ba3aa3.scope.
Feb 9 00:45:08.119399 env[1186]: time="2024-02-09T00:45:08.119351626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c5cj9,Uid:0bc8400b-b686-4954-855b-0dbc3c3d1b44,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcab12f05b1922e5dd6448ccd298a7d1de11b1f8227b1a324b2974d159ba3aa3\""
Feb 9 00:45:08.119991 kubelet[2058]: E0209 00:45:08.119976 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:08.121472 env[1186]: time="2024-02-09T00:45:08.121440081Z" level=info msg="CreateContainer within sandbox \"bcab12f05b1922e5dd6448ccd298a7d1de11b1f8227b1a324b2974d159ba3aa3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 00:45:08.132374 env[1186]: time="2024-02-09T00:45:08.132330290Z" level=info msg="CreateContainer within sandbox \"bcab12f05b1922e5dd6448ccd298a7d1de11b1f8227b1a324b2974d159ba3aa3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"79d666ac28eb84376b717b39b56586b202c9f44bbd4cfb34a0d87c3236c04865\""
Feb 9 00:45:08.132901 env[1186]: time="2024-02-09T00:45:08.132830614Z" level=info msg="StartContainer for \"79d666ac28eb84376b717b39b56586b202c9f44bbd4cfb34a0d87c3236c04865\""
Feb 9 00:45:08.145057 systemd[1]: Started cri-containerd-79d666ac28eb84376b717b39b56586b202c9f44bbd4cfb34a0d87c3236c04865.scope.
Feb 9 00:45:08.171545 env[1186]: time="2024-02-09T00:45:08.168739877Z" level=info msg="StartContainer for \"79d666ac28eb84376b717b39b56586b202c9f44bbd4cfb34a0d87c3236c04865\" returns successfully"
Feb 9 00:45:08.175117 systemd[1]: cri-containerd-79d666ac28eb84376b717b39b56586b202c9f44bbd4cfb34a0d87c3236c04865.scope: Deactivated successfully.
Feb 9 00:45:08.199490 env[1186]: time="2024-02-09T00:45:08.199437266Z" level=info msg="shim disconnected" id=79d666ac28eb84376b717b39b56586b202c9f44bbd4cfb34a0d87c3236c04865
Feb 9 00:45:08.199490 env[1186]: time="2024-02-09T00:45:08.199491368Z" level=warning msg="cleaning up after shim disconnected" id=79d666ac28eb84376b717b39b56586b202c9f44bbd4cfb34a0d87c3236c04865 namespace=k8s.io
Feb 9 00:45:08.199711 env[1186]: time="2024-02-09T00:45:08.199506066Z" level=info msg="cleaning up dead shim"
Feb 9 00:45:08.205866 env[1186]: time="2024-02-09T00:45:08.205828567Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4302 runtime=io.containerd.runc.v2\n"
Feb 9 00:45:08.376598 env[1186]: time="2024-02-09T00:45:08.376556967Z" level=info msg="StopPodSandbox for \"696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4\""
Feb 9 00:45:08.376777 env[1186]: time="2024-02-09T00:45:08.376653058Z" level=info msg="TearDown network for sandbox \"696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4\" successfully"
Feb 9 00:45:08.376777 env[1186]: time="2024-02-09T00:45:08.376695217Z" level=info msg="StopPodSandbox for \"696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4\" returns successfully"
Feb 9 00:45:08.377715 kubelet[2058]: I0209 00:45:08.377694 2058 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4b0e3910-9dae-4369-a7da-ddb449c692da path="/var/lib/kubelet/pods/4b0e3910-9dae-4369-a7da-ddb449c692da/volumes"
Feb 9 00:45:08.674769 kubelet[2058]: E0209 00:45:08.674461 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:08.676440 env[1186]: time="2024-02-09T00:45:08.676403603Z" level=info msg="CreateContainer within sandbox \"bcab12f05b1922e5dd6448ccd298a7d1de11b1f8227b1a324b2974d159ba3aa3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 00:45:08.979808 env[1186]: time="2024-02-09T00:45:08.979408265Z" level=info msg="CreateContainer within sandbox \"bcab12f05b1922e5dd6448ccd298a7d1de11b1f8227b1a324b2974d159ba3aa3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5a01ffa7f8bf226a148d95fa7490f2693bb9ae6e0f776e31e0d2808554f2d160\""
Feb 9 00:45:08.980161 env[1186]: time="2024-02-09T00:45:08.980103178Z" level=info msg="StartContainer for \"5a01ffa7f8bf226a148d95fa7490f2693bb9ae6e0f776e31e0d2808554f2d160\""
Feb 9 00:45:08.996963 systemd[1]: Started cri-containerd-5a01ffa7f8bf226a148d95fa7490f2693bb9ae6e0f776e31e0d2808554f2d160.scope.
Feb 9 00:45:09.021980 systemd[1]: cri-containerd-5a01ffa7f8bf226a148d95fa7490f2693bb9ae6e0f776e31e0d2808554f2d160.scope: Deactivated successfully.
Feb 9 00:45:09.093135 env[1186]: time="2024-02-09T00:45:09.093089446Z" level=info msg="StartContainer for \"5a01ffa7f8bf226a148d95fa7490f2693bb9ae6e0f776e31e0d2808554f2d160\" returns successfully"
Feb 9 00:45:09.120990 env[1186]: time="2024-02-09T00:45:09.120928481Z" level=info msg="shim disconnected" id=5a01ffa7f8bf226a148d95fa7490f2693bb9ae6e0f776e31e0d2808554f2d160
Feb 9 00:45:09.120990 env[1186]: time="2024-02-09T00:45:09.120973345Z" level=warning msg="cleaning up after shim disconnected" id=5a01ffa7f8bf226a148d95fa7490f2693bb9ae6e0f776e31e0d2808554f2d160 namespace=k8s.io
Feb 9 00:45:09.120990 env[1186]: time="2024-02-09T00:45:09.120982202Z" level=info msg="cleaning up dead shim"
Feb 9 00:45:09.127057 env[1186]: time="2024-02-09T00:45:09.127012410Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4362 runtime=io.containerd.runc.v2\n"
Feb 9 00:45:09.286432 systemd[1]: run-containerd-runc-k8s.io-5a01ffa7f8bf226a148d95fa7490f2693bb9ae6e0f776e31e0d2808554f2d160-runc.Dn5lCI.mount: Deactivated successfully.
Feb 9 00:45:09.286540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a01ffa7f8bf226a148d95fa7490f2693bb9ae6e0f776e31e0d2808554f2d160-rootfs.mount: Deactivated successfully.
Feb 9 00:45:09.543337 kubelet[2058]: W0209 00:45:09.543209 2058 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b0e3910_9dae_4369_a7da_ddb449c692da.slice/cri-containerd-31e90c9f8ae568f8b3c11dfdc5969305a947dbe1786f668b50444771d7c82155.scope WatchSource:0}: container "31e90c9f8ae568f8b3c11dfdc5969305a947dbe1786f668b50444771d7c82155" in namespace "k8s.io": not found
Feb 9 00:45:09.678739 kubelet[2058]: E0209 00:45:09.678681 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:09.684396 env[1186]: time="2024-02-09T00:45:09.684343329Z" level=info msg="CreateContainer within sandbox \"bcab12f05b1922e5dd6448ccd298a7d1de11b1f8227b1a324b2974d159ba3aa3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 00:45:09.697585 env[1186]: time="2024-02-09T00:45:09.697527518Z" level=info msg="CreateContainer within sandbox \"bcab12f05b1922e5dd6448ccd298a7d1de11b1f8227b1a324b2974d159ba3aa3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c8d739bae37e545e999d2507b4f33753133ee1226afed923cc75ccc15085c3d7\""
Feb 9 00:45:09.698160 env[1186]: time="2024-02-09T00:45:09.698113094Z" level=info msg="StartContainer for \"c8d739bae37e545e999d2507b4f33753133ee1226afed923cc75ccc15085c3d7\""
Feb 9 00:45:09.714041 systemd[1]: Started cri-containerd-c8d739bae37e545e999d2507b4f33753133ee1226afed923cc75ccc15085c3d7.scope.
Feb 9 00:45:09.735760 systemd[1]: cri-containerd-c8d739bae37e545e999d2507b4f33753133ee1226afed923cc75ccc15085c3d7.scope: Deactivated successfully.
Feb 9 00:45:09.737193 env[1186]: time="2024-02-09T00:45:09.737145727Z" level=info msg="StartContainer for \"c8d739bae37e545e999d2507b4f33753133ee1226afed923cc75ccc15085c3d7\" returns successfully"
Feb 9 00:45:09.757558 env[1186]: time="2024-02-09T00:45:09.757501439Z" level=info msg="shim disconnected" id=c8d739bae37e545e999d2507b4f33753133ee1226afed923cc75ccc15085c3d7
Feb 9 00:45:09.757558 env[1186]: time="2024-02-09T00:45:09.757554058Z" level=warning msg="cleaning up after shim disconnected" id=c8d739bae37e545e999d2507b4f33753133ee1226afed923cc75ccc15085c3d7 namespace=k8s.io
Feb 9 00:45:09.757760 env[1186]: time="2024-02-09T00:45:09.757563186Z" level=info msg="cleaning up dead shim"
Feb 9 00:45:09.764634 env[1186]: time="2024-02-09T00:45:09.764584395Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4418 runtime=io.containerd.runc.v2\n"
Feb 9 00:45:10.286503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8d739bae37e545e999d2507b4f33753133ee1226afed923cc75ccc15085c3d7-rootfs.mount: Deactivated successfully.
Feb 9 00:45:10.682017 kubelet[2058]: E0209 00:45:10.681902 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:10.684138 env[1186]: time="2024-02-09T00:45:10.684098278Z" level=info msg="CreateContainer within sandbox \"bcab12f05b1922e5dd6448ccd298a7d1de11b1f8227b1a324b2974d159ba3aa3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 00:45:10.697991 env[1186]: time="2024-02-09T00:45:10.697936129Z" level=info msg="CreateContainer within sandbox \"bcab12f05b1922e5dd6448ccd298a7d1de11b1f8227b1a324b2974d159ba3aa3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5d8e8de926516c8c6feb6cd01bb7a0ed2efce9245f9a083d685ca96b684c093c\""
Feb 9 00:45:10.699037 env[1186]: time="2024-02-09T00:45:10.698888597Z" level=info msg="StartContainer for \"5d8e8de926516c8c6feb6cd01bb7a0ed2efce9245f9a083d685ca96b684c093c\""
Feb 9 00:45:10.720293 systemd[1]: Started cri-containerd-5d8e8de926516c8c6feb6cd01bb7a0ed2efce9245f9a083d685ca96b684c093c.scope.
Feb 9 00:45:10.739905 systemd[1]: cri-containerd-5d8e8de926516c8c6feb6cd01bb7a0ed2efce9245f9a083d685ca96b684c093c.scope: Deactivated successfully.
Feb 9 00:45:10.742915 env[1186]: time="2024-02-09T00:45:10.742855970Z" level=info msg="StartContainer for \"5d8e8de926516c8c6feb6cd01bb7a0ed2efce9245f9a083d685ca96b684c093c\" returns successfully"
Feb 9 00:45:10.762454 env[1186]: time="2024-02-09T00:45:10.762387270Z" level=info msg="shim disconnected" id=5d8e8de926516c8c6feb6cd01bb7a0ed2efce9245f9a083d685ca96b684c093c
Feb 9 00:45:10.762454 env[1186]: time="2024-02-09T00:45:10.762454157Z" level=warning msg="cleaning up after shim disconnected" id=5d8e8de926516c8c6feb6cd01bb7a0ed2efce9245f9a083d685ca96b684c093c namespace=k8s.io
Feb 9 00:45:10.762681 env[1186]: time="2024-02-09T00:45:10.762467682Z" level=info msg="cleaning up dead shim"
Feb 9 00:45:10.769632 env[1186]: time="2024-02-09T00:45:10.769577047Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4473 runtime=io.containerd.runc.v2\n"
Feb 9 00:45:11.286512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d8e8de926516c8c6feb6cd01bb7a0ed2efce9245f9a083d685ca96b684c093c-rootfs.mount: Deactivated successfully.
Feb 9 00:45:11.686283 kubelet[2058]: E0209 00:45:11.686036 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:11.689422 env[1186]: time="2024-02-09T00:45:11.688457730Z" level=info msg="CreateContainer within sandbox \"bcab12f05b1922e5dd6448ccd298a7d1de11b1f8227b1a324b2974d159ba3aa3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 00:45:11.702757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2578323410.mount: Deactivated successfully.
Feb 9 00:45:11.704300 env[1186]: time="2024-02-09T00:45:11.704253354Z" level=info msg="CreateContainer within sandbox \"bcab12f05b1922e5dd6448ccd298a7d1de11b1f8227b1a324b2974d159ba3aa3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"25327c4a88bfd8f6fa71838424f3d56cceb790748e2a66a233834be207b5cab2\""
Feb 9 00:45:11.704745 env[1186]: time="2024-02-09T00:45:11.704676724Z" level=info msg="StartContainer for \"25327c4a88bfd8f6fa71838424f3d56cceb790748e2a66a233834be207b5cab2\""
Feb 9 00:45:11.722371 systemd[1]: Started cri-containerd-25327c4a88bfd8f6fa71838424f3d56cceb790748e2a66a233834be207b5cab2.scope.
Feb 9 00:45:11.749106 env[1186]: time="2024-02-09T00:45:11.749052949Z" level=info msg="StartContainer for \"25327c4a88bfd8f6fa71838424f3d56cceb790748e2a66a233834be207b5cab2\" returns successfully"
Feb 9 00:45:11.998748 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 9 00:45:12.286792 systemd[1]: run-containerd-runc-k8s.io-25327c4a88bfd8f6fa71838424f3d56cceb790748e2a66a233834be207b5cab2-runc.hjaHrG.mount: Deactivated successfully.
Feb 9 00:45:12.361569 env[1186]: time="2024-02-09T00:45:12.361532112Z" level=info msg="StopPodSandbox for \"696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4\""
Feb 9 00:45:12.361759 env[1186]: time="2024-02-09T00:45:12.361610911Z" level=info msg="TearDown network for sandbox \"696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4\" successfully"
Feb 9 00:45:12.361759 env[1186]: time="2024-02-09T00:45:12.361641950Z" level=info msg="StopPodSandbox for \"696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4\" returns successfully"
Feb 9 00:45:12.361980 env[1186]: time="2024-02-09T00:45:12.361962726Z" level=info msg="RemovePodSandbox for \"696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4\""
Feb 9 00:45:12.362027 env[1186]: time="2024-02-09T00:45:12.361982042Z" level=info msg="Forcibly stopping sandbox \"696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4\""
Feb 9 00:45:12.362056 env[1186]: time="2024-02-09T00:45:12.362037306Z" level=info msg="TearDown network for sandbox \"696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4\" successfully"
Feb 9 00:45:12.364682 env[1186]: time="2024-02-09T00:45:12.364655099Z" level=info msg="RemovePodSandbox \"696b96b5c4ac082f4bf28f079205eec53ece86f0b5867ef887e86bd432c495d4\" returns successfully"
Feb 9 00:45:12.364963 env[1186]: time="2024-02-09T00:45:12.364940807Z" level=info msg="StopPodSandbox for \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\""
Feb 9 00:45:12.365047 env[1186]: time="2024-02-09T00:45:12.365011581Z" level=info msg="TearDown network for sandbox \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\" successfully"
Feb 9 00:45:12.365073 env[1186]: time="2024-02-09T00:45:12.365047389Z" level=info msg="StopPodSandbox for \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\" returns successfully"
Feb 9 00:45:12.365250 env[1186]: time="2024-02-09T00:45:12.365233450Z" level=info msg="RemovePodSandbox for \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\""
Feb 9 00:45:12.365348 env[1186]: time="2024-02-09T00:45:12.365314784Z" level=info msg="Forcibly stopping sandbox \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\""
Feb 9 00:45:12.365402 env[1186]: time="2024-02-09T00:45:12.365377622Z" level=info msg="TearDown network for sandbox \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\" successfully"
Feb 9 00:45:12.367846 env[1186]: time="2024-02-09T00:45:12.367821876Z" level=info msg="RemovePodSandbox \"10e4b20761f5db9782c20c112a3523c9b5843048b5d7b2b0a6f782ec0a5a59ad\" returns successfully"
Feb 9 00:45:12.368057 env[1186]: time="2024-02-09T00:45:12.368035821Z" level=info msg="StopPodSandbox for \"be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671\""
Feb 9 00:45:12.368107 env[1186]: time="2024-02-09T00:45:12.368087579Z" level=info msg="TearDown network for sandbox \"be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671\" successfully"
Feb 9 00:45:12.368141 env[1186]: time="2024-02-09T00:45:12.368109470Z" level=info msg="StopPodSandbox for \"be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671\" returns successfully"
Feb 9 00:45:12.368333 env[1186]: time="2024-02-09T00:45:12.368305360Z" level=info msg="RemovePodSandbox for \"be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671\""
Feb 9 00:45:12.368392 env[1186]: time="2024-02-09T00:45:12.368335627Z" level=info msg="Forcibly stopping sandbox \"be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671\""
Feb 9 00:45:12.368436 env[1186]: time="2024-02-09T00:45:12.368405318Z" level=info msg="TearDown network for sandbox \"be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671\" successfully"
Feb 9 00:45:12.370999 env[1186]: time="2024-02-09T00:45:12.370972324Z" level=info msg="RemovePodSandbox \"be28aa47604de5e5c4429c9afad02b3bff86bf50e7a607d4866620ed4384e671\" returns successfully"
Feb 9 00:45:12.653620 kubelet[2058]: W0209 00:45:12.652913 2058 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc8400b_b686_4954_855b_0dbc3c3d1b44.slice/cri-containerd-79d666ac28eb84376b717b39b56586b202c9f44bbd4cfb34a0d87c3236c04865.scope WatchSource:0}: task 79d666ac28eb84376b717b39b56586b202c9f44bbd4cfb34a0d87c3236c04865 not found: not found
Feb 9 00:45:12.691855 kubelet[2058]: E0209 00:45:12.691816 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:12.709470 kubelet[2058]: I0209 00:45:12.709423 2058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-c5cj9" podStartSLOduration=5.709380936 pod.CreationTimestamp="2024-02-09 00:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:45:12.707127773 +0000 UTC m=+120.450194182" watchObservedRunningTime="2024-02-09 00:45:12.709380936 +0000 UTC m=+120.452447345"
Feb 9 00:45:13.694320 kubelet[2058]: E0209 00:45:13.694268 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:14.361886 systemd[1]: run-containerd-runc-k8s.io-25327c4a88bfd8f6fa71838424f3d56cceb790748e2a66a233834be207b5cab2-runc.qiiwNa.mount: Deactivated successfully.
Feb 9 00:45:14.611470 systemd-networkd[1090]: lxc_health: Link UP
Feb 9 00:45:14.619504 systemd-networkd[1090]: lxc_health: Gained carrier
Feb 9 00:45:14.619845 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 00:45:14.696170 kubelet[2058]: E0209 00:45:14.695775 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:15.375733 kubelet[2058]: E0209 00:45:15.375701 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:15.762901 kubelet[2058]: W0209 00:45:15.762638 2058 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc8400b_b686_4954_855b_0dbc3c3d1b44.slice/cri-containerd-5a01ffa7f8bf226a148d95fa7490f2693bb9ae6e0f776e31e0d2808554f2d160.scope WatchSource:0}: task 5a01ffa7f8bf226a148d95fa7490f2693bb9ae6e0f776e31e0d2808554f2d160 not found: not found
Feb 9 00:45:16.010008 kubelet[2058]: E0209 00:45:16.009980 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:16.073875 systemd-networkd[1090]: lxc_health: Gained IPv6LL
Feb 9 00:45:16.698501 kubelet[2058]: E0209 00:45:16.698468 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:17.700682 kubelet[2058]: E0209 00:45:17.700637 2058 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:18.871396 kubelet[2058]: W0209 00:45:18.871352 2058 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc8400b_b686_4954_855b_0dbc3c3d1b44.slice/cri-containerd-c8d739bae37e545e999d2507b4f33753133ee1226afed923cc75ccc15085c3d7.scope WatchSource:0}: task c8d739bae37e545e999d2507b4f33753133ee1226afed923cc75ccc15085c3d7 not found: not found
Feb 9 00:45:20.725464 sshd[4076]: pam_unix(sshd:session): session closed for user core
Feb 9 00:45:20.727364 systemd[1]: sshd@28-10.0.0.24:22-10.0.0.1:40726.service: Deactivated successfully.
Feb 9 00:45:20.728138 systemd[1]: session-29.scope: Deactivated successfully.
Feb 9 00:45:20.728670 systemd-logind[1177]: Session 29 logged out. Waiting for processes to exit.
Feb 9 00:45:20.729306 systemd-logind[1177]: Removed session 29.
Feb 9 00:45:21.979955 kubelet[2058]: W0209 00:45:21.979912 2058 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc8400b_b686_4954_855b_0dbc3c3d1b44.slice/cri-containerd-5d8e8de926516c8c6feb6cd01bb7a0ed2efce9245f9a083d685ca96b684c093c.scope WatchSource:0}: task 5d8e8de926516c8c6feb6cd01bb7a0ed2efce9245f9a083d685ca96b684c093c not found: not found