Feb 12 19:43:06.786769 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024 Feb 12 19:43:06.786786 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 19:43:06.786796 kernel: BIOS-provided physical RAM map: Feb 12 19:43:06.786802 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 12 19:43:06.786807 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Feb 12 19:43:06.786812 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 12 19:43:06.786819 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Feb 12 19:43:06.786825 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 12 19:43:06.786830 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Feb 12 19:43:06.786837 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Feb 12 19:43:06.786842 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Feb 12 19:43:06.786848 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Feb 12 19:43:06.786853 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Feb 12 19:43:06.786859 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 12 19:43:06.786866 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Feb 12 19:43:06.786873 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Feb 12 19:43:06.786879 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 12 
19:43:06.786884 kernel: NX (Execute Disable) protection: active Feb 12 19:43:06.786890 kernel: e820: update [mem 0x9b3fa018-0x9b403c57] usable ==> usable Feb 12 19:43:06.786896 kernel: e820: update [mem 0x9b3fa018-0x9b403c57] usable ==> usable Feb 12 19:43:06.786902 kernel: e820: update [mem 0x9b3bd018-0x9b3f9e57] usable ==> usable Feb 12 19:43:06.786908 kernel: e820: update [mem 0x9b3bd018-0x9b3f9e57] usable ==> usable Feb 12 19:43:06.786913 kernel: extended physical RAM map: Feb 12 19:43:06.786926 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 12 19:43:06.786932 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Feb 12 19:43:06.786939 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 12 19:43:06.786945 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Feb 12 19:43:06.786951 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 12 19:43:06.786957 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Feb 12 19:43:06.786963 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Feb 12 19:43:06.786969 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b3bd017] usable Feb 12 19:43:06.786974 kernel: reserve setup_data: [mem 0x000000009b3bd018-0x000000009b3f9e57] usable Feb 12 19:43:06.786980 kernel: reserve setup_data: [mem 0x000000009b3f9e58-0x000000009b3fa017] usable Feb 12 19:43:06.786986 kernel: reserve setup_data: [mem 0x000000009b3fa018-0x000000009b403c57] usable Feb 12 19:43:06.786992 kernel: reserve setup_data: [mem 0x000000009b403c58-0x000000009c8eefff] usable Feb 12 19:43:06.786997 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Feb 12 19:43:06.787005 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Feb 12 19:43:06.787011 kernel: reserve setup_data: [mem 
0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 12 19:43:06.787016 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Feb 12 19:43:06.787022 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Feb 12 19:43:06.787031 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 12 19:43:06.787037 kernel: efi: EFI v2.70 by EDK II Feb 12 19:43:06.787043 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018 Feb 12 19:43:06.787051 kernel: random: crng init done Feb 12 19:43:06.787057 kernel: SMBIOS 2.8 present. Feb 12 19:43:06.787064 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Feb 12 19:43:06.787070 kernel: Hypervisor detected: KVM Feb 12 19:43:06.787076 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 12 19:43:06.787082 kernel: kvm-clock: cpu 0, msr 32faa001, primary cpu clock Feb 12 19:43:06.787089 kernel: kvm-clock: using sched offset of 4306341956 cycles Feb 12 19:43:06.787096 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 12 19:43:06.787102 kernel: tsc: Detected 2794.750 MHz processor Feb 12 19:43:06.787110 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 12 19:43:06.787117 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 12 19:43:06.787123 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Feb 12 19:43:06.787134 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 12 19:43:06.787160 kernel: Using GB pages for direct mapping Feb 12 19:43:06.787173 kernel: Secure boot disabled Feb 12 19:43:06.787180 kernel: ACPI: Early table checksum verification disabled Feb 12 19:43:06.787186 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Feb 12 19:43:06.787193 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Feb 12 19:43:06.787202 kernel: ACPI: FACP 
0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:43:06.787212 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:43:06.787219 kernel: ACPI: FACS 0x000000009CBDD000 000040 Feb 12 19:43:06.787225 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:43:06.787232 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:43:06.787238 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:43:06.787245 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Feb 12 19:43:06.787251 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Feb 12 19:43:06.787257 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Feb 12 19:43:06.787266 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Feb 12 19:43:06.787273 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Feb 12 19:43:06.787279 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Feb 12 19:43:06.787286 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027] Feb 12 19:43:06.787294 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Feb 12 19:43:06.787301 kernel: No NUMA configuration found Feb 12 19:43:06.787309 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Feb 12 19:43:06.787317 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Feb 12 19:43:06.787323 kernel: Zone ranges: Feb 12 19:43:06.787331 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 12 19:43:06.787337 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Feb 12 19:43:06.787344 kernel: Normal empty Feb 12 19:43:06.787350 kernel: Movable zone start for each node Feb 12 19:43:06.787356 kernel: Early memory node ranges Feb 12 19:43:06.787363 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Feb 12 19:43:06.787369 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Feb 12 19:43:06.787376 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Feb 12 19:43:06.787382 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Feb 12 19:43:06.787402 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Feb 12 19:43:06.787409 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Feb 12 19:43:06.787415 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Feb 12 19:43:06.787421 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 12 19:43:06.787428 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 12 19:43:06.787434 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Feb 12 19:43:06.787441 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 12 19:43:06.787447 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Feb 12 19:43:06.787453 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Feb 12 19:43:06.787460 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Feb 12 19:43:06.787468 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 12 19:43:06.787474 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 12 19:43:06.787480 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 12 19:43:06.787487 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 12 19:43:06.787493 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 12 19:43:06.787499 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 12 19:43:06.787506 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 12 19:43:06.787512 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 12 19:43:06.787518 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 12 19:43:06.787526 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 12 
19:43:06.787532 kernel: TSC deadline timer available Feb 12 19:43:06.787539 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 12 19:43:06.787545 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 12 19:43:06.787551 kernel: kvm-guest: setup PV sched yield Feb 12 19:43:06.787558 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Feb 12 19:43:06.787564 kernel: Booting paravirtualized kernel on KVM Feb 12 19:43:06.787571 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 12 19:43:06.787577 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Feb 12 19:43:06.787585 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Feb 12 19:43:06.787591 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Feb 12 19:43:06.787602 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 12 19:43:06.787610 kernel: kvm-guest: setup async PF for cpu 0 Feb 12 19:43:06.787617 kernel: kvm-guest: stealtime: cpu 0, msr 9b01c0c0 Feb 12 19:43:06.787624 kernel: kvm-guest: PV spinlocks enabled Feb 12 19:43:06.787631 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 12 19:43:06.787637 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Feb 12 19:43:06.787644 kernel: Policy zone: DMA32 Feb 12 19:43:06.787652 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 19:43:06.787659 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Feb 12 19:43:06.787667 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 12 19:43:06.787674 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 12 19:43:06.787681 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 12 19:43:06.787688 kernel: Memory: 2400512K/2567000K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 166228K reserved, 0K cma-reserved) Feb 12 19:43:06.787695 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 12 19:43:06.787703 kernel: ftrace: allocating 34475 entries in 135 pages Feb 12 19:43:06.787710 kernel: ftrace: allocated 135 pages with 4 groups Feb 12 19:43:06.787717 kernel: rcu: Hierarchical RCU implementation. Feb 12 19:43:06.787724 kernel: rcu: RCU event tracing is enabled. Feb 12 19:43:06.787731 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 12 19:43:06.787738 kernel: Rude variant of Tasks RCU enabled. Feb 12 19:43:06.787745 kernel: Tracing variant of Tasks RCU enabled. Feb 12 19:43:06.787752 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 12 19:43:06.787758 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 12 19:43:06.787767 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 12 19:43:06.787773 kernel: Console: colour dummy device 80x25 Feb 12 19:43:06.787780 kernel: printk: console [ttyS0] enabled Feb 12 19:43:06.787787 kernel: ACPI: Core revision 20210730 Feb 12 19:43:06.787794 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 12 19:43:06.787800 kernel: APIC: Switch to symmetric I/O mode setup Feb 12 19:43:06.787807 kernel: x2apic enabled Feb 12 19:43:06.787814 kernel: Switched APIC routing to physical x2apic. 
Feb 12 19:43:06.787820 kernel: kvm-guest: setup PV IPIs Feb 12 19:43:06.787828 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 12 19:43:06.787835 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 12 19:43:06.787842 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Feb 12 19:43:06.787849 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 12 19:43:06.787856 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 12 19:43:06.787862 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 12 19:43:06.787869 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 12 19:43:06.787876 kernel: Spectre V2 : Mitigation: Retpolines Feb 12 19:43:06.787883 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 12 19:43:06.787891 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 12 19:43:06.787898 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 12 19:43:06.787904 kernel: RETBleed: Mitigation: untrained return thunk Feb 12 19:43:06.787911 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 12 19:43:06.787926 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 12 19:43:06.787932 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 12 19:43:06.787952 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 12 19:43:06.787963 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 12 19:43:06.787970 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 12 19:43:06.787979 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. 
Feb 12 19:43:06.787990 kernel: Freeing SMP alternatives memory: 32K Feb 12 19:43:06.787996 kernel: pid_max: default: 32768 minimum: 301 Feb 12 19:43:06.788003 kernel: LSM: Security Framework initializing Feb 12 19:43:06.788010 kernel: SELinux: Initializing. Feb 12 19:43:06.788017 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 12 19:43:06.788024 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 12 19:43:06.788031 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 12 19:43:06.788037 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 12 19:43:06.788047 kernel: ... version: 0 Feb 12 19:43:06.788053 kernel: ... bit width: 48 Feb 12 19:43:06.788060 kernel: ... generic registers: 6 Feb 12 19:43:06.788067 kernel: ... value mask: 0000ffffffffffff Feb 12 19:43:06.788073 kernel: ... max period: 00007fffffffffff Feb 12 19:43:06.788080 kernel: ... fixed-purpose events: 0 Feb 12 19:43:06.788087 kernel: ... event mask: 000000000000003f Feb 12 19:43:06.788093 kernel: signal: max sigframe size: 1776 Feb 12 19:43:06.788100 kernel: rcu: Hierarchical SRCU implementation. Feb 12 19:43:06.788108 kernel: smp: Bringing up secondary CPUs ... Feb 12 19:43:06.788115 kernel: x86: Booting SMP configuration: Feb 12 19:43:06.788122 kernel: .... 
node #0, CPUs: #1 Feb 12 19:43:06.788128 kernel: kvm-clock: cpu 1, msr 32faa041, secondary cpu clock Feb 12 19:43:06.788135 kernel: kvm-guest: setup async PF for cpu 1 Feb 12 19:43:06.788142 kernel: kvm-guest: stealtime: cpu 1, msr 9b09c0c0 Feb 12 19:43:06.788148 kernel: #2 Feb 12 19:43:06.788155 kernel: kvm-clock: cpu 2, msr 32faa081, secondary cpu clock Feb 12 19:43:06.788162 kernel: kvm-guest: setup async PF for cpu 2 Feb 12 19:43:06.788170 kernel: kvm-guest: stealtime: cpu 2, msr 9b11c0c0 Feb 12 19:43:06.788177 kernel: #3 Feb 12 19:43:06.788183 kernel: kvm-clock: cpu 3, msr 32faa0c1, secondary cpu clock Feb 12 19:43:06.788190 kernel: kvm-guest: setup async PF for cpu 3 Feb 12 19:43:06.788197 kernel: kvm-guest: stealtime: cpu 3, msr 9b19c0c0 Feb 12 19:43:06.788203 kernel: smp: Brought up 1 node, 4 CPUs Feb 12 19:43:06.788210 kernel: smpboot: Max logical packages: 1 Feb 12 19:43:06.788217 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Feb 12 19:43:06.788224 kernel: devtmpfs: initialized Feb 12 19:43:06.788230 kernel: x86/mm: Memory block size: 128MB Feb 12 19:43:06.788239 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Feb 12 19:43:06.788246 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Feb 12 19:43:06.788252 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Feb 12 19:43:06.788259 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Feb 12 19:43:06.788266 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Feb 12 19:43:06.788273 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 12 19:43:06.788280 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 12 19:43:06.788286 kernel: pinctrl core: initialized pinctrl subsystem Feb 12 19:43:06.788294 kernel: NET: Registered 
PF_NETLINK/PF_ROUTE protocol family Feb 12 19:43:06.788301 kernel: audit: initializing netlink subsys (disabled) Feb 12 19:43:06.788308 kernel: audit: type=2000 audit(1707766986.292:1): state=initialized audit_enabled=0 res=1 Feb 12 19:43:06.788315 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 12 19:43:06.788321 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 12 19:43:06.788328 kernel: cpuidle: using governor menu Feb 12 19:43:06.788335 kernel: ACPI: bus type PCI registered Feb 12 19:43:06.788341 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 12 19:43:06.788348 kernel: dca service started, version 1.12.1 Feb 12 19:43:06.788355 kernel: PCI: Using configuration type 1 for base access Feb 12 19:43:06.788363 kernel: PCI: Using configuration type 1 for extended access Feb 12 19:43:06.788370 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 12 19:43:06.788377 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 12 19:43:06.788393 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 12 19:43:06.788400 kernel: ACPI: Added _OSI(Module Device) Feb 12 19:43:06.788407 kernel: ACPI: Added _OSI(Processor Device) Feb 12 19:43:06.788414 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 12 19:43:06.788420 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 12 19:43:06.788427 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 12 19:43:06.788436 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 12 19:43:06.788443 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 12 19:43:06.788450 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 12 19:43:06.788456 kernel: ACPI: Interpreter enabled Feb 12 19:43:06.788463 kernel: ACPI: PM: (supports S0 S3 S5) Feb 12 19:43:06.788470 kernel: ACPI: Using IOAPIC for interrupt routing Feb 12 19:43:06.788477 kernel: PCI: Using host bridge windows from ACPI; if necessary, use 
"pci=nocrs" and report a bug Feb 12 19:43:06.788483 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 12 19:43:06.788490 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 12 19:43:06.788605 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 12 19:43:06.788617 kernel: acpiphp: Slot [3] registered Feb 12 19:43:06.788624 kernel: acpiphp: Slot [4] registered Feb 12 19:43:06.788630 kernel: acpiphp: Slot [5] registered Feb 12 19:43:06.788637 kernel: acpiphp: Slot [6] registered Feb 12 19:43:06.788644 kernel: acpiphp: Slot [7] registered Feb 12 19:43:06.788650 kernel: acpiphp: Slot [8] registered Feb 12 19:43:06.788657 kernel: acpiphp: Slot [9] registered Feb 12 19:43:06.788668 kernel: acpiphp: Slot [10] registered Feb 12 19:43:06.788676 kernel: acpiphp: Slot [11] registered Feb 12 19:43:06.788685 kernel: acpiphp: Slot [12] registered Feb 12 19:43:06.788694 kernel: acpiphp: Slot [13] registered Feb 12 19:43:06.788703 kernel: acpiphp: Slot [14] registered Feb 12 19:43:06.788710 kernel: acpiphp: Slot [15] registered Feb 12 19:43:06.788716 kernel: acpiphp: Slot [16] registered Feb 12 19:43:06.788723 kernel: acpiphp: Slot [17] registered Feb 12 19:43:06.788729 kernel: acpiphp: Slot [18] registered Feb 12 19:43:06.788738 kernel: acpiphp: Slot [19] registered Feb 12 19:43:06.788745 kernel: acpiphp: Slot [20] registered Feb 12 19:43:06.788751 kernel: acpiphp: Slot [21] registered Feb 12 19:43:06.788758 kernel: acpiphp: Slot [22] registered Feb 12 19:43:06.788765 kernel: acpiphp: Slot [23] registered Feb 12 19:43:06.788771 kernel: acpiphp: Slot [24] registered Feb 12 19:43:06.788778 kernel: acpiphp: Slot [25] registered Feb 12 19:43:06.788785 kernel: acpiphp: Slot [26] registered Feb 12 19:43:06.788791 kernel: acpiphp: Slot [27] registered Feb 12 19:43:06.788798 kernel: acpiphp: Slot [28] registered Feb 12 19:43:06.788806 kernel: acpiphp: Slot [29] registered Feb 12 19:43:06.788812 kernel: acpiphp: Slot [30] 
registered Feb 12 19:43:06.788819 kernel: acpiphp: Slot [31] registered Feb 12 19:43:06.788825 kernel: PCI host bridge to bus 0000:00 Feb 12 19:43:06.788907 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 12 19:43:06.788983 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 12 19:43:06.789046 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 12 19:43:06.789108 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Feb 12 19:43:06.789172 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window] Feb 12 19:43:06.789233 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 12 19:43:06.789315 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 12 19:43:06.789404 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 12 19:43:06.789485 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 12 19:43:06.789554 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Feb 12 19:43:06.789626 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 12 19:43:06.789695 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 12 19:43:06.789766 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 12 19:43:06.789833 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 12 19:43:06.789911 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 12 19:43:06.789990 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 12 19:43:06.790059 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Feb 12 19:43:06.790138 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Feb 12 19:43:06.790207 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Feb 12 19:43:06.790276 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Feb 12 19:43:06.790377 kernel: pci 0000:00:02.0: reg 0x30: 
[mem 0xffff0000-0xffffffff pref] Feb 12 19:43:06.790460 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Feb 12 19:43:06.790529 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 12 19:43:06.790611 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Feb 12 19:43:06.790681 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Feb 12 19:43:06.790756 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Feb 12 19:43:06.790827 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Feb 12 19:43:06.790944 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 12 19:43:06.791017 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 12 19:43:06.791085 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Feb 12 19:43:06.791157 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Feb 12 19:43:06.791233 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Feb 12 19:43:06.791302 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Feb 12 19:43:06.791373 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Feb 12 19:43:06.791473 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Feb 12 19:43:06.791542 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Feb 12 19:43:06.791552 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 12 19:43:06.791559 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 12 19:43:06.791577 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 12 19:43:06.791589 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 12 19:43:06.791596 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 12 19:43:06.791603 kernel: iommu: Default domain type: Translated Feb 12 19:43:06.791609 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 12 19:43:06.792281 kernel: pci 0000:00:02.0: vgaarb: setting as 
boot VGA device Feb 12 19:43:06.792362 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 12 19:43:06.792441 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 12 19:43:06.792454 kernel: vgaarb: loaded Feb 12 19:43:06.792461 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 12 19:43:06.792468 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 12 19:43:06.792474 kernel: PTP clock support registered Feb 12 19:43:06.792481 kernel: Registered efivars operations Feb 12 19:43:06.792488 kernel: PCI: Using ACPI for IRQ routing Feb 12 19:43:06.792495 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 12 19:43:06.792501 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Feb 12 19:43:06.792508 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Feb 12 19:43:06.792516 kernel: e820: reserve RAM buffer [mem 0x9b3bd018-0x9bffffff] Feb 12 19:43:06.792523 kernel: e820: reserve RAM buffer [mem 0x9b3fa018-0x9bffffff] Feb 12 19:43:06.792530 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Feb 12 19:43:06.792536 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Feb 12 19:43:06.792543 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 12 19:43:06.792550 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 12 19:43:06.792557 kernel: clocksource: Switched to clocksource kvm-clock Feb 12 19:43:06.792563 kernel: VFS: Disk quotas dquot_6.6.0 Feb 12 19:43:06.792570 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 12 19:43:06.792579 kernel: pnp: PnP ACPI init Feb 12 19:43:06.792653 kernel: pnp 00:02: [dma 2] Feb 12 19:43:06.792663 kernel: pnp: PnP ACPI: found 6 devices Feb 12 19:43:06.792671 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 12 19:43:06.792678 kernel: NET: Registered PF_INET protocol family Feb 12 19:43:06.792684 kernel: IP idents hash table entries: 65536 (order: 
7, 524288 bytes, linear) Feb 12 19:43:06.792691 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 12 19:43:06.792698 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 12 19:43:06.792708 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 12 19:43:06.792715 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 12 19:43:06.792722 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 12 19:43:06.792728 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 12 19:43:06.792735 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 12 19:43:06.792742 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 12 19:43:06.792749 kernel: NET: Registered PF_XDP protocol family Feb 12 19:43:06.792819 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Feb 12 19:43:06.792902 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Feb 12 19:43:06.792974 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 12 19:43:06.793037 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 12 19:43:06.793097 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 12 19:43:06.793155 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Feb 12 19:43:06.793215 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Feb 12 19:43:06.793283 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 12 19:43:06.793355 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 12 19:43:06.793438 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 12 19:43:06.793448 kernel: PCI: CLS 0 bytes, default 64 Feb 12 19:43:06.793456 kernel: Initialise system trusted keyrings Feb 12 19:43:06.793463 kernel: workingset: timestamp_bits=39 
max_order=20 bucket_order=0 Feb 12 19:43:06.793470 kernel: Key type asymmetric registered Feb 12 19:43:06.793477 kernel: Asymmetric key parser 'x509' registered Feb 12 19:43:06.793484 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 12 19:43:06.793492 kernel: io scheduler mq-deadline registered Feb 12 19:43:06.793499 kernel: io scheduler kyber registered Feb 12 19:43:06.793508 kernel: io scheduler bfq registered Feb 12 19:43:06.793516 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 12 19:43:06.793523 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 12 19:43:06.793530 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 12 19:43:06.793537 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 12 19:43:06.793544 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 12 19:43:06.793551 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 12 19:43:06.793559 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 12 19:43:06.793566 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 12 19:43:06.793580 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 12 19:43:06.793656 kernel: rtc_cmos 00:05: RTC can wake from S4 Feb 12 19:43:06.793669 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 12 19:43:06.793730 kernel: rtc_cmos 00:05: registered as rtc0 Feb 12 19:43:06.793791 kernel: rtc_cmos 00:05: setting system clock to 2024-02-12T19:43:06 UTC (1707766986) Feb 12 19:43:06.793855 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Feb 12 19:43:06.793864 kernel: efifb: probing for efifb Feb 12 19:43:06.793871 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Feb 12 19:43:06.793879 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Feb 12 19:43:06.793886 kernel: efifb: scrolling: redraw Feb 12 19:43:06.793893 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 12 
19:43:06.793900 kernel: Console: switching to colour frame buffer device 160x50 Feb 12 19:43:06.793907 kernel: fb0: EFI VGA frame buffer device Feb 12 19:43:06.793921 kernel: pstore: Registered efi as persistent store backend Feb 12 19:43:06.793930 kernel: NET: Registered PF_INET6 protocol family Feb 12 19:43:06.793937 kernel: Segment Routing with IPv6 Feb 12 19:43:06.793944 kernel: In-situ OAM (IOAM) with IPv6 Feb 12 19:43:06.793951 kernel: NET: Registered PF_PACKET protocol family Feb 12 19:43:06.793959 kernel: Key type dns_resolver registered Feb 12 19:43:06.793966 kernel: IPI shorthand broadcast: enabled Feb 12 19:43:06.793973 kernel: sched_clock: Marking stable (352295348, 89960907)->(464945753, -22689498) Feb 12 19:43:06.793980 kernel: registered taskstats version 1 Feb 12 19:43:06.793987 kernel: Loading compiled-in X.509 certificates Feb 12 19:43:06.793995 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 12 19:43:06.794002 kernel: Key type .fscrypt registered Feb 12 19:43:06.794009 kernel: Key type fscrypt-provisioning registered Feb 12 19:43:06.794016 kernel: pstore: Using crash dump compression: deflate Feb 12 19:43:06.794024 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 12 19:43:06.794031 kernel: ima: Allocated hash algorithm: sha1 Feb 12 19:43:06.794038 kernel: ima: No architecture policies found Feb 12 19:43:06.794045 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 12 19:43:06.794053 kernel: Write protecting the kernel read-only data: 28672k Feb 12 19:43:06.794060 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 12 19:43:06.794068 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 12 19:43:06.794075 kernel: Run /init as init process Feb 12 19:43:06.794082 kernel: with arguments: Feb 12 19:43:06.794089 kernel: /init Feb 12 19:43:06.794095 kernel: with environment: Feb 12 19:43:06.794102 kernel: HOME=/ Feb 12 19:43:06.794109 kernel: TERM=linux Feb 12 19:43:06.794116 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 12 19:43:06.794126 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:43:06.794136 systemd[1]: Detected virtualization kvm. Feb 12 19:43:06.794144 systemd[1]: Detected architecture x86-64. Feb 12 19:43:06.794151 systemd[1]: Running in initrd. Feb 12 19:43:06.794163 systemd[1]: No hostname configured, using default hostname. Feb 12 19:43:06.794170 systemd[1]: Hostname set to <localhost>. Feb 12 19:43:06.794178 systemd[1]: Initializing machine ID from VM UUID. Feb 12 19:43:06.794189 systemd[1]: Queued start job for default target initrd.target. Feb 12 19:43:06.794196 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:43:06.794204 systemd[1]: Reached target cryptsetup.target. Feb 12 19:43:06.794211 systemd[1]: Reached target paths.target. Feb 12 19:43:06.794229 systemd[1]: Reached target slices.target. Feb 12 19:43:06.794238 systemd[1]: Reached target swap.target. 
Feb 12 19:43:06.794245 systemd[1]: Reached target timers.target. Feb 12 19:43:06.794253 systemd[1]: Listening on iscsid.socket. Feb 12 19:43:06.794263 systemd[1]: Listening on iscsiuio.socket. Feb 12 19:43:06.794272 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 19:43:06.794292 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 19:43:06.794305 systemd[1]: Listening on systemd-journald.socket. Feb 12 19:43:06.794312 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:43:06.794320 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:43:06.794327 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:43:06.794335 systemd[1]: Reached target sockets.target. Feb 12 19:43:06.794346 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:43:06.794363 systemd[1]: Finished network-cleanup.service. Feb 12 19:43:06.794372 systemd[1]: Starting systemd-fsck-usr.service... Feb 12 19:43:06.794379 systemd[1]: Starting systemd-journald.service... Feb 12 19:43:06.794396 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:43:06.794404 systemd[1]: Starting systemd-resolved.service... Feb 12 19:43:06.794412 systemd[1]: Starting systemd-vconsole-setup.service... Feb 12 19:43:06.794419 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:43:06.794427 systemd[1]: Finished systemd-fsck-usr.service. Feb 12 19:43:06.794436 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:43:06.794444 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 19:43:06.794452 kernel: audit: type=1130 audit(1707766986.791:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:06.794459 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Feb 12 19:43:06.794470 systemd-journald[197]: Journal started Feb 12 19:43:06.794509 systemd-journald[197]: Runtime Journal (/run/log/journal/2a89acac2fbf4d6c86a3583f72ff5b7b) is 6.0M, max 48.4M, 42.4M free. Feb 12 19:43:06.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:06.786040 systemd-modules-load[198]: Inserted module 'overlay' Feb 12 19:43:06.801599 kernel: audit: type=1130 audit(1707766986.794:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:06.801615 systemd[1]: Started systemd-journald.service. Feb 12 19:43:06.801625 kernel: audit: type=1130 audit(1707766986.797:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:06.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:06.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:06.800567 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 19:43:06.808401 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Feb 12 19:43:06.810795 systemd-modules-load[198]: Inserted module 'br_netfilter' Feb 12 19:43:06.811490 kernel: Bridge firewalling registered Feb 12 19:43:06.812790 systemd-resolved[199]: Positive Trust Anchors: Feb 12 19:43:06.812804 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:43:06.812831 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:43:06.815002 systemd-resolved[199]: Defaulting to hostname 'linux'. Feb 12 19:43:06.821589 kernel: audit: type=1130 audit(1707766986.818:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:06.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:06.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:06.815749 systemd[1]: Started systemd-resolved.service. Feb 12 19:43:06.824771 kernel: audit: type=1130 audit(1707766986.821:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:06.819569 systemd[1]: Finished dracut-cmdline-ask.service. 
Feb 12 19:43:06.821652 systemd[1]: Reached target nss-lookup.target. Feb 12 19:43:06.824940 systemd[1]: Starting dracut-cmdline.service... Feb 12 19:43:06.827505 kernel: SCSI subsystem initialized Feb 12 19:43:06.833298 dracut-cmdline[215]: dracut-dracut-053 Feb 12 19:43:06.834913 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 19:43:06.841114 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 19:43:06.841152 kernel: device-mapper: uevent: version 1.0.3 Feb 12 19:43:06.841166 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 19:43:06.843738 systemd-modules-load[198]: Inserted module 'dm_multipath' Feb 12 19:43:06.844439 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:43:06.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:06.846410 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:43:06.848640 kernel: audit: type=1130 audit(1707766986.845:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:06.853545 systemd[1]: Finished systemd-sysctl.service. 
Feb 12 19:43:06.856500 kernel: audit: type=1130 audit(1707766986.853:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:06.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:06.890418 kernel: Loading iSCSI transport class v2.0-870. Feb 12 19:43:06.903419 kernel: iscsi: registered transport (tcp) Feb 12 19:43:06.923418 kernel: iscsi: registered transport (qla4xxx) Feb 12 19:43:06.923469 kernel: QLogic iSCSI HBA Driver Feb 12 19:43:06.948041 systemd[1]: Finished dracut-cmdline.service. Feb 12 19:43:06.951232 kernel: audit: type=1130 audit(1707766986.947:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:06.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:06.951259 systemd[1]: Starting dracut-pre-udev.service... 
Feb 12 19:43:06.997422 kernel: raid6: avx2x4 gen() 30999 MB/s Feb 12 19:43:07.029411 kernel: raid6: avx2x4 xor() 8539 MB/s Feb 12 19:43:07.046409 kernel: raid6: avx2x2 gen() 32523 MB/s Feb 12 19:43:07.063412 kernel: raid6: avx2x2 xor() 19267 MB/s Feb 12 19:43:07.080421 kernel: raid6: avx2x1 gen() 25602 MB/s Feb 12 19:43:07.097424 kernel: raid6: avx2x1 xor() 13300 MB/s Feb 12 19:43:07.114429 kernel: raid6: sse2x4 gen() 11854 MB/s Feb 12 19:43:07.131420 kernel: raid6: sse2x4 xor() 6144 MB/s Feb 12 19:43:07.148417 kernel: raid6: sse2x2 gen() 16552 MB/s Feb 12 19:43:07.165414 kernel: raid6: sse2x2 xor() 9681 MB/s Feb 12 19:43:07.182416 kernel: raid6: sse2x1 gen() 12297 MB/s Feb 12 19:43:07.199853 kernel: raid6: sse2x1 xor() 7802 MB/s Feb 12 19:43:07.199889 kernel: raid6: using algorithm avx2x2 gen() 32523 MB/s Feb 12 19:43:07.199901 kernel: raid6: .... xor() 19267 MB/s, rmw enabled Feb 12 19:43:07.199924 kernel: raid6: using avx2x2 recovery algorithm Feb 12 19:43:07.212414 kernel: xor: automatically using best checksumming function avx Feb 12 19:43:07.301421 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 19:43:07.309874 systemd[1]: Finished dracut-pre-udev.service. Feb 12 19:43:07.313473 kernel: audit: type=1130 audit(1707766987.309:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:07.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:07.313000 audit: BPF prog-id=7 op=LOAD Feb 12 19:43:07.313000 audit: BPF prog-id=8 op=LOAD Feb 12 19:43:07.313864 systemd[1]: Starting systemd-udevd.service... Feb 12 19:43:07.324724 systemd-udevd[397]: Using default interface naming scheme 'v252'. Feb 12 19:43:07.328587 systemd[1]: Started systemd-udevd.service. 
Feb 12 19:43:07.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:07.333737 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 19:43:07.342736 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Feb 12 19:43:07.366439 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 19:43:07.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:07.367241 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:43:07.399119 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:43:07.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:07.428410 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 12 19:43:07.430538 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 19:43:07.430559 kernel: GPT:9289727 != 19775487 Feb 12 19:43:07.430573 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 19:43:07.430582 kernel: GPT:9289727 != 19775487 Feb 12 19:43:07.431412 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 19:43:07.431442 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:43:07.436415 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 19:43:07.453415 kernel: AVX2 version of gcm_enc/dec engaged. Feb 12 19:43:07.454423 kernel: AES CTR mode by8 optimization enabled Feb 12 19:43:07.454449 kernel: libata version 3.00 loaded. 
Feb 12 19:43:07.458654 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 12 19:43:07.458843 kernel: scsi host0: ata_piix Feb 12 19:43:07.458951 kernel: scsi host1: ata_piix Feb 12 19:43:07.459967 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 12 19:43:07.459987 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 12 19:43:07.469401 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (460) Feb 12 19:43:07.470847 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 19:43:07.475916 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 19:43:07.477343 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 19:43:07.481967 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:43:07.488398 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 19:43:07.492769 systemd[1]: Starting disk-uuid.service... Feb 12 19:43:07.618412 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 12 19:43:07.618447 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 12 19:43:07.647410 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 12 19:43:07.647556 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 12 19:43:07.664411 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 12 19:43:07.732295 disk-uuid[519]: Primary Header is updated. Feb 12 19:43:07.732295 disk-uuid[519]: Secondary Entries is updated. Feb 12 19:43:07.732295 disk-uuid[519]: Secondary Header is updated. Feb 12 19:43:07.735406 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:43:07.738397 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:43:08.739316 disk-uuid[533]: The operation has completed successfully. Feb 12 19:43:08.740272 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:43:08.760713 systemd[1]: disk-uuid.service: Deactivated successfully. 
Feb 12 19:43:08.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:08.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:08.760791 systemd[1]: Finished disk-uuid.service. Feb 12 19:43:08.766882 systemd[1]: Starting verity-setup.service... Feb 12 19:43:08.778415 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 12 19:43:08.798882 systemd[1]: Found device dev-mapper-usr.device. Feb 12 19:43:08.800714 systemd[1]: Mounting sysusr-usr.mount... Feb 12 19:43:08.803775 systemd[1]: Finished verity-setup.service. Feb 12 19:43:08.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:08.859345 systemd[1]: Mounted sysusr-usr.mount. Feb 12 19:43:08.860380 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 19:43:08.859956 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 19:43:08.860494 systemd[1]: Starting ignition-setup.service... Feb 12 19:43:08.861871 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 19:43:08.871516 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:43:08.871549 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:43:08.871562 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:43:08.879456 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 19:43:08.885966 systemd[1]: Finished ignition-setup.service. 
Feb 12 19:43:08.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:08.887366 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 19:43:08.923114 ignition[632]: Ignition 2.14.0 Feb 12 19:43:08.923124 ignition[632]: Stage: fetch-offline Feb 12 19:43:08.923193 ignition[632]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:43:08.923202 ignition[632]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:43:08.923302 ignition[632]: parsed url from cmdline: "" Feb 12 19:43:08.923305 ignition[632]: no config URL provided Feb 12 19:43:08.923309 ignition[632]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:43:08.923314 ignition[632]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:43:08.926973 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 19:43:08.923335 ignition[632]: op(1): [started] loading QEMU firmware config module Feb 12 19:43:08.923752 ignition[632]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 12 19:43:08.929438 ignition[632]: op(1): [finished] loading QEMU firmware config module Feb 12 19:43:08.929458 ignition[632]: QEMU firmware config was not found. Ignoring... Feb 12 19:43:08.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:08.931000 audit: BPF prog-id=9 op=LOAD Feb 12 19:43:08.932795 systemd[1]: Starting systemd-networkd.service... 
Feb 12 19:43:08.985991 ignition[632]: parsing config with SHA512: 422a7788620ee1182453b964c4e04f5b0ecedf6632040ea3a5f84d3ea9dbf25d845ba5e5ffa9f3b28500fab14233194ce839809a6cec88177ce0701d22596994 Feb 12 19:43:09.003194 systemd-networkd[713]: lo: Link UP Feb 12 19:43:09.003204 systemd-networkd[713]: lo: Gained carrier Feb 12 19:43:09.004502 systemd-networkd[713]: Enumeration completed Feb 12 19:43:09.004574 systemd[1]: Started systemd-networkd.service. Feb 12 19:43:09.005731 systemd[1]: Reached target network.target. Feb 12 19:43:09.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:09.007971 systemd[1]: Starting iscsiuio.service... Feb 12 19:43:09.009511 systemd-networkd[713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:43:09.011527 systemd-networkd[713]: eth0: Link UP Feb 12 19:43:09.012199 systemd-networkd[713]: eth0: Gained carrier Feb 12 19:43:09.012963 systemd[1]: Started iscsiuio.service. Feb 12 19:43:09.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:09.014816 systemd[1]: Starting iscsid.service... Feb 12 19:43:09.018133 iscsid[718]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:43:09.018133 iscsid[718]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. 
Feb 12 19:43:09.018133 iscsid[718]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 19:43:09.018133 iscsid[718]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 19:43:09.018133 iscsid[718]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:43:09.018133 iscsid[718]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 19:43:09.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:09.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:09.019938 unknown[632]: fetched base config from "system" Feb 12 19:43:09.020498 ignition[632]: fetch-offline: fetch-offline passed Feb 12 19:43:09.019946 unknown[632]: fetched user config from "qemu" Feb 12 19:43:09.020547 ignition[632]: Ignition finished successfully Feb 12 19:43:09.022819 systemd[1]: Started iscsid.service. Feb 12 19:43:09.024354 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 19:43:09.026411 systemd[1]: Starting dracut-initqueue.service... Feb 12 19:43:09.027057 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 12 19:43:09.027707 systemd[1]: Starting ignition-kargs.service... Feb 12 19:43:09.029467 systemd-networkd[713]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 19:43:09.035687 ignition[720]: Ignition 2.14.0 Feb 12 19:43:09.035692 ignition[720]: Stage: kargs Feb 12 19:43:09.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:43:09.036631 systemd[1]: Finished dracut-initqueue.service. Feb 12 19:43:09.035765 ignition[720]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:43:09.037569 systemd[1]: Reached target remote-fs-pre.target. Feb 12 19:43:09.035772 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:43:09.038520 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:43:09.036890 ignition[720]: kargs: kargs passed Feb 12 19:43:09.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:09.039138 systemd[1]: Reached target remote-fs.target. Feb 12 19:43:09.036921 ignition[720]: Ignition finished successfully Feb 12 19:43:09.040910 systemd[1]: Starting dracut-pre-mount.service... Feb 12 19:43:09.041626 systemd[1]: Finished ignition-kargs.service. Feb 12 19:43:09.043223 systemd[1]: Starting ignition-disks.service... Feb 12 19:43:09.049086 systemd[1]: Finished dracut-pre-mount.service. Feb 12 19:43:09.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:09.050699 ignition[733]: Ignition 2.14.0 Feb 12 19:43:09.050708 ignition[733]: Stage: disks Feb 12 19:43:09.050779 ignition[733]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:43:09.050788 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:43:09.051877 ignition[733]: disks: disks passed Feb 12 19:43:09.051906 ignition[733]: Ignition finished successfully Feb 12 19:43:09.054779 systemd[1]: Finished ignition-disks.service. Feb 12 19:43:09.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:43:09.055932 systemd[1]: Reached target initrd-root-device.target. Feb 12 19:43:09.055996 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:43:09.057155 systemd[1]: Reached target local-fs.target. Feb 12 19:43:09.058210 systemd[1]: Reached target sysinit.target. Feb 12 19:43:09.059664 systemd[1]: Reached target basic.target. Feb 12 19:43:09.061343 systemd[1]: Starting systemd-fsck-root.service... Feb 12 19:43:09.072108 systemd-fsck[745]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 12 19:43:09.076412 systemd[1]: Finished systemd-fsck-root.service. Feb 12 19:43:09.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:09.077925 systemd[1]: Mounting sysroot.mount... Feb 12 19:43:09.084248 systemd[1]: Mounted sysroot.mount. Feb 12 19:43:09.085811 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 19:43:09.084793 systemd[1]: Reached target initrd-root-fs.target. Feb 12 19:43:09.086503 systemd[1]: Mounting sysroot-usr.mount... Feb 12 19:43:09.087250 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 19:43:09.087277 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 19:43:09.087295 systemd[1]: Reached target ignition-diskful.target. Feb 12 19:43:09.088795 systemd[1]: Mounted sysroot-usr.mount. Feb 12 19:43:09.090095 systemd[1]: Starting initrd-setup-root.service... 
Feb 12 19:43:09.094090 initrd-setup-root[755]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 19:43:09.096830 initrd-setup-root[763]: cut: /sysroot/etc/group: No such file or directory
Feb 12 19:43:09.099434 initrd-setup-root[771]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 19:43:09.101835 initrd-setup-root[779]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 19:43:09.124215 systemd[1]: Finished initrd-setup-root.service.
Feb 12 19:43:09.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.126105 systemd[1]: Starting ignition-mount.service...
Feb 12 19:43:09.127656 systemd[1]: Starting sysroot-boot.service...
Feb 12 19:43:09.130752 bash[796]: umount: /sysroot/usr/share/oem: not mounted.
Feb 12 19:43:09.139073 ignition[798]: INFO : Ignition 2.14.0
Feb 12 19:43:09.139073 ignition[798]: INFO : Stage: mount
Feb 12 19:43:09.140353 ignition[798]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:43:09.140353 ignition[798]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:43:09.140353 ignition[798]: INFO : mount: mount passed
Feb 12 19:43:09.140353 ignition[798]: INFO : Ignition finished successfully
Feb 12 19:43:09.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:09.141133 systemd[1]: Finished ignition-mount.service.
Feb 12 19:43:09.143905 systemd[1]: Finished sysroot-boot.service.
Feb 12 19:43:09.809243 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 19:43:09.814402 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806)
Feb 12 19:43:09.814430 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 19:43:09.815540 kernel: BTRFS info (device vda6): using free space tree
Feb 12 19:43:09.815564 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 19:43:09.818898 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 19:43:09.820768 systemd[1]: Starting ignition-files.service...
Feb 12 19:43:09.834192 ignition[826]: INFO : Ignition 2.14.0
Feb 12 19:43:09.834192 ignition[826]: INFO : Stage: files
Feb 12 19:43:09.835446 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:43:09.835446 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:43:09.835446 ignition[826]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 19:43:09.837980 ignition[826]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 19:43:09.837980 ignition[826]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 19:43:09.837980 ignition[826]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 19:43:09.837980 ignition[826]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 19:43:09.837980 ignition[826]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 19:43:09.837621 unknown[826]: wrote ssh authorized keys file for user: core
Feb 12 19:43:09.843798 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 19:43:09.843798 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 12 19:43:09.868574 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 12 19:43:09.924479 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 19:43:09.925822 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 12 19:43:09.925822 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 12 19:43:10.256589 systemd-networkd[713]: eth0: Gained IPv6LL
Feb 12 19:43:10.390737 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 12 19:43:10.476321 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 12 19:43:10.478426 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 12 19:43:10.478426 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 12 19:43:10.478426 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 12 19:43:10.898799 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 12 19:43:11.076166 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 12 19:43:11.078136 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 12 19:43:11.078136 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:43:11.080517 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:43:11.081713 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:43:11.082785 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl: attempt #1
Feb 12 19:43:11.154229 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 19:43:11.344455 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3
Feb 12 19:43:11.346488 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:43:11.346488 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:43:11.346488 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1
Feb 12 19:43:11.399336 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 12 19:43:11.841461 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75
Feb 12 19:43:11.843711 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:43:11.843711 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:43:11.843711 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Feb 12 19:43:11.892629 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 12 19:43:12.097973 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Feb 12 19:43:12.112984 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:43:12.112984 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 19:43:12.112984 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 12 19:43:12.420050 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 12 19:43:12.490610 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 19:43:12.490610 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 19:43:12.490610 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 19:43:12.490610 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:43:12.490610 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:43:12.490610 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:43:12.490610 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:43:12.490610 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:43:12.490610 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:43:12.490610 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:43:12.490610 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:43:12.490610 ignition[826]: INFO : files: op(10): [started] processing unit "prepare-helm.service"
Feb 12 19:43:12.490610 ignition[826]: INFO : files: op(10): op(11): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(10): op(11): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(10): [finished] processing unit "prepare-helm.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(12): [started] processing unit "coreos-metadata.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(12): op(13): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(12): op(13): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(12): [finished] processing unit "coreos-metadata.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(14): [started] processing unit "prepare-cni-plugins.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(14): op(15): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(14): op(15): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(14): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(16): [started] processing unit "prepare-critools.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(16): [finished] processing unit "prepare-critools.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(18): [started] setting preset to enabled for "prepare-critools.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service"
Feb 12 19:43:12.505545 ignition[826]: INFO : files: op(1a): [started] setting preset to disabled for "coreos-metadata.service"
Feb 12 19:43:12.526421 ignition[826]: INFO : files: op(1a): op(1b): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 19:43:12.542594 ignition[826]: INFO : files: op(1a): op(1b): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 19:43:12.543815 ignition[826]: INFO : files: op(1a): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 12 19:43:12.543815 ignition[826]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:43:12.543815 ignition[826]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:43:12.543815 ignition[826]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:43:12.543815 ignition[826]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:43:12.543815 ignition[826]: INFO : files: files passed
Feb 12 19:43:12.543815 ignition[826]: INFO : Ignition finished successfully
Feb 12 19:43:12.550856 systemd[1]: Finished ignition-files.service.
Feb 12 19:43:12.554688 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 12 19:43:12.554713 kernel: audit: type=1130 audit(1707766992.551:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.554699 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 19:43:12.555322 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 19:43:12.555916 systemd[1]: Starting ignition-quench.service...
Feb 12 19:43:12.558516 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 19:43:12.563728 kernel: audit: type=1130 audit(1707766992.558:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.563748 kernel: audit: type=1131 audit(1707766992.558:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.558585 systemd[1]: Finished ignition-quench.service.
Feb 12 19:43:12.566510 initrd-setup-root-after-ignition[851]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 12 19:43:12.568803 initrd-setup-root-after-ignition[853]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 19:43:12.570311 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 19:43:12.573744 kernel: audit: type=1130 audit(1707766992.569:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.570467 systemd[1]: Reached target ignition-complete.target.
Feb 12 19:43:12.575195 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 19:43:12.586918 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 19:43:12.586986 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 19:43:12.592511 kernel: audit: type=1130 audit(1707766992.587:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.592533 kernel: audit: type=1131 audit(1707766992.587:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.588208 systemd[1]: Reached target initrd-fs.target.
Feb 12 19:43:12.593024 systemd[1]: Reached target initrd.target.
Feb 12 19:43:12.594032 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 19:43:12.595329 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 19:43:12.605316 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 19:43:12.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.606909 systemd[1]: Starting initrd-cleanup.service...
Feb 12 19:43:12.609362 kernel: audit: type=1130 audit(1707766992.605:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.614658 systemd[1]: Stopped target nss-lookup.target.
Feb 12 19:43:12.615295 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 19:43:12.616341 systemd[1]: Stopped target timers.target.
Feb 12 19:43:12.617331 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 19:43:12.620792 kernel: audit: type=1131 audit(1707766992.617:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.617424 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 19:43:12.618353 systemd[1]: Stopped target initrd.target.
Feb 12 19:43:12.621329 systemd[1]: Stopped target basic.target.
Feb 12 19:43:12.622274 systemd[1]: Stopped target ignition-complete.target.
Feb 12 19:43:12.623359 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 19:43:12.624352 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 19:43:12.625460 systemd[1]: Stopped target remote-fs.target.
Feb 12 19:43:12.626462 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 19:43:12.627562 systemd[1]: Stopped target sysinit.target.
Feb 12 19:43:12.628512 systemd[1]: Stopped target local-fs.target.
Feb 12 19:43:12.629539 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 19:43:12.630528 systemd[1]: Stopped target swap.target.
Feb 12 19:43:12.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.631463 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 19:43:12.635904 kernel: audit: type=1131 audit(1707766992.632:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.631540 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 19:43:12.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.632529 systemd[1]: Stopped target cryptsetup.target.
Feb 12 19:43:12.639878 kernel: audit: type=1131 audit(1707766992.636:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.635356 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 19:43:12.635468 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 19:43:12.636512 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 19:43:12.636587 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 19:43:12.639506 systemd[1]: Stopped target paths.target.
Feb 12 19:43:12.640416 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 19:43:12.646453 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 19:43:12.647745 systemd[1]: Stopped target slices.target.
Feb 12 19:43:12.648794 systemd[1]: Stopped target sockets.target.
Feb 12 19:43:12.649858 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 19:43:12.650694 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 19:43:12.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.652022 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 19:43:12.652679 systemd[1]: Stopped ignition-files.service.
Feb 12 19:43:12.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.654517 systemd[1]: Stopping ignition-mount.service...
Feb 12 19:43:12.655621 systemd[1]: Stopping iscsid.service...
Feb 12 19:43:12.656481 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 19:43:12.657183 iscsid[718]: iscsid shutting down.
Feb 12 19:43:12.657217 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 19:43:12.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.659522 systemd[1]: Stopping sysroot-boot.service...
Feb 12 19:43:12.660489 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 19:43:12.661294 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 19:43:12.662433 ignition[866]: INFO : Ignition 2.14.0
Feb 12 19:43:12.662433 ignition[866]: INFO : Stage: umount
Feb 12 19:43:12.662433 ignition[866]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:43:12.662433 ignition[866]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:43:12.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.665593 ignition[866]: INFO : umount: umount passed
Feb 12 19:43:12.665593 ignition[866]: INFO : Ignition finished successfully
Feb 12 19:43:12.662523 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 19:43:12.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.663070 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 19:43:12.669047 systemd[1]: iscsid.service: Deactivated successfully.
Feb 12 19:43:12.669646 systemd[1]: Stopped iscsid.service.
Feb 12 19:43:12.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.670890 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 19:43:12.671548 systemd[1]: Stopped ignition-mount.service.
Feb 12 19:43:12.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.673657 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 12 19:43:12.674635 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 19:43:12.675257 systemd[1]: Stopped sysroot-boot.service.
Feb 12 19:43:12.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.676563 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 19:43:12.677139 systemd[1]: Closed iscsid.socket.
Feb 12 19:43:12.678062 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 19:43:12.678097 systemd[1]: Stopped ignition-disks.service.
Feb 12 19:43:12.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.679630 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 19:43:12.679660 systemd[1]: Stopped ignition-kargs.service.
Feb 12 19:43:12.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.681170 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 19:43:12.681198 systemd[1]: Stopped ignition-setup.service.
Feb 12 19:43:12.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.682782 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 19:43:12.682813 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 19:43:12.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.684517 systemd[1]: Stopping iscsiuio.service...
Feb 12 19:43:12.685594 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 19:43:12.686240 systemd[1]: Finished initrd-cleanup.service.
Feb 12 19:43:12.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.687445 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 12 19:43:12.688052 systemd[1]: Stopped iscsiuio.service.
Feb 12 19:43:12.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.689804 systemd[1]: Stopped target network.target.
Feb 12 19:43:12.690841 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 19:43:12.690869 systemd[1]: Closed iscsiuio.socket.
Feb 12 19:43:12.692289 systemd[1]: Stopping systemd-networkd.service...
Feb 12 19:43:12.693501 systemd[1]: Stopping systemd-resolved.service...
Feb 12 19:43:12.697413 systemd-networkd[713]: eth0: DHCPv6 lease lost
Feb 12 19:43:12.698239 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 19:43:12.698974 systemd[1]: Stopped systemd-networkd.service.
Feb 12 19:43:12.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.700448 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 19:43:12.700476 systemd[1]: Closed systemd-networkd.socket.
Feb 12 19:43:12.702000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 19:43:12.702543 systemd[1]: Stopping network-cleanup.service...
Feb 12 19:43:12.703618 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 19:43:12.703656 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 19:43:12.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.705335 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 19:43:12.705365 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 19:43:12.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.707110 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 19:43:12.707754 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 19:43:12.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.708927 systemd[1]: Stopping systemd-udevd.service...
Feb 12 19:43:12.711425 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 12 19:43:12.712598 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 19:43:12.713242 systemd[1]: Stopped systemd-resolved.service.
Feb 12 19:43:12.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.716056 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 19:43:12.716756 systemd[1]: Stopped network-cleanup.service.
Feb 12 19:43:12.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.717944 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 19:43:12.718000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 19:43:12.718646 systemd[1]: Stopped systemd-udevd.service.
Feb 12 19:43:12.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.720009 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 19:43:12.720043 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 19:43:12.721683 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 19:43:12.721710 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 19:43:12.723288 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 19:43:12.723321 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 19:43:12.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.724953 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 19:43:12.724982 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 19:43:12.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.726577 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 19:43:12.727192 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 19:43:12.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.728755 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 19:43:12.729883 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 19:43:12.729920 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 19:43:12.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.733136 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 19:43:12.733869 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 19:43:12.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:43:12.735092 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 19:43:12.736668 systemd[1]: Starting initrd-switch-root.service...
Feb 12 19:43:12.751636 systemd[1]: Switching root.
Feb 12 19:43:12.769714 systemd-journald[197]: Journal stopped
Feb 12 19:43:15.721860 systemd-journald[197]: Received SIGTERM from PID 1 (systemd).
Feb 12 19:43:15.721915 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 12 19:43:15.721929 kernel: SELinux: Class anon_inode not defined in policy.
Feb 12 19:43:15.721948 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 19:43:15.721963 kernel: SELinux: policy capability network_peer_controls=1
Feb 12 19:43:15.721973 kernel: SELinux: policy capability open_perms=1
Feb 12 19:43:15.721985 kernel: SELinux: policy capability extended_socket_class=1
Feb 12 19:43:15.721995 kernel: SELinux: policy capability always_check_network=0
Feb 12 19:43:15.722004 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 12 19:43:15.722013 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 12 19:43:15.722022 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 12 19:43:15.722031 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 12 19:43:15.722041 systemd[1]: Successfully loaded SELinux policy in 35.183ms.
Feb 12 19:43:15.722065 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.187ms.
Feb 12 19:43:15.722094 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:43:15.722106 systemd[1]: Detected virtualization kvm.
Feb 12 19:43:15.722116 systemd[1]: Detected architecture x86-64.
Feb 12 19:43:15.722129 systemd[1]: Detected first boot.
Feb 12 19:43:15.722140 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 19:43:15.722149 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 19:43:15.722159 systemd[1]: Populated /etc with preset unit settings.
Feb 12 19:43:15.722170 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:43:15.722189 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:43:15.722200 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:43:15.722210 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 12 19:43:15.722223 systemd[1]: Stopped initrd-switch-root.service.
Feb 12 19:43:15.722233 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 12 19:43:15.722243 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 19:43:15.722253 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 19:43:15.722266 systemd[1]: Created slice system-getty.slice.
Feb 12 19:43:15.722280 systemd[1]: Created slice system-modprobe.slice.
Feb 12 19:43:15.722290 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 19:43:15.722300 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 19:43:15.722311 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 19:43:15.722321 systemd[1]: Created slice user.slice.
Feb 12 19:43:15.722331 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:43:15.722343 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 19:43:15.722353 systemd[1]: Set up automount boot.automount.
Feb 12 19:43:15.722363 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 19:43:15.722374 systemd[1]: Stopped target initrd-switch-root.target.
Feb 12 19:43:15.722399 systemd[1]: Stopped target initrd-fs.target.
Feb 12 19:43:15.722414 systemd[1]: Stopped target initrd-root-fs.target.
Feb 12 19:43:15.722428 systemd[1]: Reached target integritysetup.target.
Feb 12 19:43:15.722440 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:43:15.722454 systemd[1]: Reached target remote-fs.target. Feb 12 19:43:15.722464 systemd[1]: Reached target slices.target. Feb 12 19:43:15.722474 systemd[1]: Reached target swap.target. Feb 12 19:43:15.722486 systemd[1]: Reached target torcx.target. Feb 12 19:43:15.722496 systemd[1]: Reached target veritysetup.target. Feb 12 19:43:15.722506 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:43:15.722516 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:43:15.722526 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:43:15.722536 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:43:15.722546 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:43:15.722556 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:43:15.722566 systemd[1]: Mounting dev-hugepages.mount... Feb 12 19:43:15.722577 systemd[1]: Mounting dev-mqueue.mount... Feb 12 19:43:15.722588 systemd[1]: Mounting media.mount... Feb 12 19:43:15.722598 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:43:15.722608 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 19:43:15.722617 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:43:15.722628 systemd[1]: Mounting tmp.mount... Feb 12 19:43:15.722638 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:43:15.722648 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:43:15.722657 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:43:15.722669 systemd[1]: Starting modprobe@configfs.service... Feb 12 19:43:15.722679 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:43:15.722695 systemd[1]: Starting modprobe@drm.service... Feb 12 19:43:15.722706 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:43:15.722716 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:43:15.722726 systemd[1]: Starting modprobe@loop.service... 
Feb 12 19:43:15.722737 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:43:15.722752 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 12 19:43:15.722764 systemd[1]: Stopped systemd-fsck-root.service. Feb 12 19:43:15.722774 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 12 19:43:15.722784 systemd[1]: Stopped systemd-fsck-usr.service. Feb 12 19:43:15.722793 kernel: loop: module loaded Feb 12 19:43:15.722803 systemd[1]: Stopped systemd-journald.service. Feb 12 19:43:15.722814 kernel: fuse: init (API version 7.34) Feb 12 19:43:15.722825 systemd[1]: Starting systemd-journald.service... Feb 12 19:43:15.722836 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:43:15.722846 systemd[1]: Starting systemd-network-generator.service... Feb 12 19:43:15.722856 systemd[1]: Starting systemd-remount-fs.service... Feb 12 19:43:15.722867 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:43:15.722877 systemd[1]: verity-setup.service: Deactivated successfully. Feb 12 19:43:15.722887 systemd[1]: Stopped verity-setup.service. Feb 12 19:43:15.722897 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:43:15.722910 systemd-journald[977]: Journal started Feb 12 19:43:15.722953 systemd-journald[977]: Runtime Journal (/run/log/journal/2a89acac2fbf4d6c86a3583f72ff5b7b) is 6.0M, max 48.4M, 42.4M free. 
Feb 12 19:43:12.823000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 19:43:13.591000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:43:13.591000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:43:13.591000 audit: BPF prog-id=10 op=LOAD Feb 12 19:43:13.591000 audit: BPF prog-id=10 op=UNLOAD Feb 12 19:43:13.591000 audit: BPF prog-id=11 op=LOAD Feb 12 19:43:13.591000 audit: BPF prog-id=11 op=UNLOAD Feb 12 19:43:13.623000 audit[900]: AVC avc: denied { associate } for pid=900 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 19:43:13.623000 audit[900]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078dc a1=c00002ae40 a2=c000029b00 a3=32 items=0 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:13.623000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:43:13.624000 audit[900]: AVC avc: denied { associate } for pid=900 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 19:43:13.624000 audit[900]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079b5 a2=1ed a3=0 items=2 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:13.624000 audit: CWD cwd="/" Feb 12 19:43:13.624000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:13.624000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:13.624000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:43:15.624000 audit: BPF prog-id=12 op=LOAD Feb 12 19:43:15.624000 audit: BPF prog-id=3 op=UNLOAD Feb 12 19:43:15.624000 audit: BPF prog-id=13 op=LOAD Feb 12 19:43:15.624000 audit: BPF prog-id=14 op=LOAD Feb 12 19:43:15.624000 audit: BPF prog-id=4 op=UNLOAD Feb 12 19:43:15.624000 audit: BPF prog-id=5 op=UNLOAD Feb 12 19:43:15.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:43:15.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.634000 audit: BPF prog-id=12 op=UNLOAD Feb 12 19:43:15.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.705000 audit: BPF prog-id=15 op=LOAD Feb 12 19:43:15.705000 audit: BPF prog-id=16 op=LOAD Feb 12 19:43:15.705000 audit: BPF prog-id=17 op=LOAD Feb 12 19:43:15.705000 audit: BPF prog-id=13 op=UNLOAD Feb 12 19:43:15.705000 audit: BPF prog-id=14 op=UNLOAD Feb 12 19:43:15.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:43:15.720000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:43:15.720000 audit[977]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffda7d2e7a0 a2=4000 a3=7ffda7d2e83c items=0 ppid=1 pid=977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:15.720000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:43:15.622524 systemd[1]: Queued start job for default target multi-user.target. Feb 12 19:43:13.621998 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:43:15.622534 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 12 19:43:13.622178 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:13Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:43:15.625035 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 12 19:43:13.622196 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:13Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:43:13.622223 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:13Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 12 19:43:13.622232 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:13Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 12 19:43:13.622257 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:13Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 12 19:43:13.622268 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:13Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 12 19:43:13.622462 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:13Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 12 19:43:13.622491 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:13Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:43:13.622501 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:13Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:43:13.622776 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:13Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 12 19:43:13.622815 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:13Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 12 19:43:13.622838 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:13Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 12 19:43:13.622857 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:13Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 12 19:43:13.622872 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:13Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 12 19:43:15.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.725400 systemd[1]: Started systemd-journald.service. 
Feb 12 19:43:13.622884 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:13Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 12 19:43:15.379163 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:15Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:43:15.379423 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:15Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:43:15.379506 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:15Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:43:15.379648 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:15Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:43:15.379701 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:15Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 12 19:43:15.379753 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-12T19:43:15Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" 
TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 12 19:43:15.725892 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:43:15.726673 systemd[1]: Mounted dev-mqueue.mount. Feb 12 19:43:15.727446 systemd[1]: Mounted media.mount. Feb 12 19:43:15.728152 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:43:15.728953 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:43:15.729797 systemd[1]: Mounted tmp.mount. Feb 12 19:43:15.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.730773 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:43:15.731807 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:43:15.732034 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:43:15.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.733055 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:43:15.733266 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:43:15.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:43:15.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.734313 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 19:43:15.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.735130 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:43:15.735305 systemd[1]: Finished modprobe@drm.service. Feb 12 19:43:15.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.736179 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:43:15.736338 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:43:15.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.737199 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:43:15.737403 systemd[1]: Finished modprobe@fuse.service. 
Feb 12 19:43:15.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.738179 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:43:15.738378 systemd[1]: Finished modprobe@loop.service. Feb 12 19:43:15.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.739439 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:43:15.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.740350 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:43:15.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.741406 systemd[1]: Finished systemd-remount-fs.service. 
Feb 12 19:43:15.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.742416 systemd[1]: Reached target network-pre.target. Feb 12 19:43:15.743977 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:43:15.745644 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:43:15.746495 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:43:15.747954 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:43:15.749922 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:43:15.750962 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:43:15.752140 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:43:15.754071 systemd-journald[977]: Time spent on flushing to /var/log/journal/2a89acac2fbf4d6c86a3583f72ff5b7b is 18.834ms for 1185 entries. Feb 12 19:43:15.754071 systemd-journald[977]: System Journal (/var/log/journal/2a89acac2fbf4d6c86a3583f72ff5b7b) is 8.0M, max 195.6M, 187.6M free. Feb 12 19:43:16.160040 systemd-journald[977]: Received client request to flush runtime journal. Feb 12 19:43:15.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:43:15.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.753143 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:43:15.755289 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:43:15.757472 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:43:15.760346 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:43:16.160857 udevadm[1003]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 19:43:15.761078 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:43:16.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:15.761868 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:43:15.763363 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:43:15.825848 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:43:15.826946 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:43:15.950068 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:43:15.950992 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:43:16.161024 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:43:16.514467 systemd[1]: Finished systemd-hwdb-update.service. 
Feb 12 19:43:16.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:16.515000 audit: BPF prog-id=18 op=LOAD Feb 12 19:43:16.515000 audit: BPF prog-id=19 op=LOAD Feb 12 19:43:16.515000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:43:16.515000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:43:16.516250 systemd[1]: Starting systemd-udevd.service... Feb 12 19:43:16.531139 systemd-udevd[1006]: Using default interface naming scheme 'v252'. Feb 12 19:43:16.541812 systemd[1]: Started systemd-udevd.service. Feb 12 19:43:16.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:16.543000 audit: BPF prog-id=20 op=LOAD Feb 12 19:43:16.545001 systemd[1]: Starting systemd-networkd.service... Feb 12 19:43:16.550000 audit: BPF prog-id=21 op=LOAD Feb 12 19:43:16.551000 audit: BPF prog-id=22 op=LOAD Feb 12 19:43:16.551000 audit: BPF prog-id=23 op=LOAD Feb 12 19:43:16.551990 systemd[1]: Starting systemd-userdbd.service... Feb 12 19:43:16.574154 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 12 19:43:16.577058 systemd[1]: Started systemd-userdbd.service. Feb 12 19:43:16.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:16.601266 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 12 19:43:16.611417 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 12 19:43:16.613731 systemd-networkd[1018]: lo: Link UP Feb 12 19:43:16.613743 systemd-networkd[1018]: lo: Gained carrier Feb 12 19:43:16.614127 systemd-networkd[1018]: Enumeration completed Feb 12 19:43:16.614206 systemd[1]: Started systemd-networkd.service. Feb 12 19:43:16.614221 systemd-networkd[1018]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:43:16.615513 kernel: ACPI: button: Power Button [PWRF] Feb 12 19:43:16.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:16.615071 systemd-networkd[1018]: eth0: Link UP Feb 12 19:43:16.615079 systemd-networkd[1018]: eth0: Gained carrier Feb 12 19:43:16.628479 systemd-networkd[1018]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 19:43:16.623000 audit[1031]: AVC avc: denied { confidentiality } for pid=1031 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:43:16.623000 audit[1031]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c619e86ff0 a1=32194 a2=7efc496edbc5 a3=5 items=108 ppid=1006 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:16.623000 audit: CWD cwd="/" Feb 12 19:43:16.623000 audit: PATH item=0 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=1 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=2 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=3 name=(null) inode=13845 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=4 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=5 name=(null) inode=13846 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=6 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=7 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=8 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=9 name=(null) inode=13848 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=10 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=11 name=(null) inode=13849 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=12 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=13 name=(null) inode=13850 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=14 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=15 name=(null) inode=13851 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=16 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=17 name=(null) inode=13852 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=18 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=19 name=(null) inode=13853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=20 name=(null) inode=13853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=21 name=(null) inode=13854 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=22 name=(null) inode=13853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=23 name=(null) inode=13855 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=24 name=(null) inode=13853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=25 name=(null) inode=13856 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=26 name=(null) inode=13853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=27 name=(null) inode=13857 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=28 name=(null) inode=13853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
19:43:16.623000 audit: PATH item=29 name=(null) inode=13858 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=30 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=31 name=(null) inode=13859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=32 name=(null) inode=13859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=33 name=(null) inode=13860 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=34 name=(null) inode=13859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=35 name=(null) inode=13861 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=36 name=(null) inode=13859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=37 name=(null) inode=13862 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=38 
name=(null) inode=13859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=39 name=(null) inode=13863 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=40 name=(null) inode=13859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=41 name=(null) inode=13864 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=42 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=43 name=(null) inode=13865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=44 name=(null) inode=13865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=45 name=(null) inode=13866 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=46 name=(null) inode=13865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=47 name=(null) inode=13867 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=48 name=(null) inode=13865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=49 name=(null) inode=13868 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=50 name=(null) inode=13865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=51 name=(null) inode=13869 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=52 name=(null) inode=13865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=53 name=(null) inode=13870 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=54 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=55 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=56 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=57 name=(null) inode=13872 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=58 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=59 name=(null) inode=13873 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=60 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=61 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=62 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=63 name=(null) inode=13875 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=64 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=65 name=(null) inode=13876 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=66 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=67 name=(null) inode=13877 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=68 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=69 name=(null) inode=13878 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=70 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=71 name=(null) inode=13879 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=72 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=73 name=(null) inode=13880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=74 name=(null) inode=13880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=75 name=(null) inode=13881 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=76 name=(null) inode=13880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=77 name=(null) inode=13882 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=78 name=(null) inode=13880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=79 name=(null) inode=13883 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=80 name=(null) inode=13880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=81 name=(null) inode=13884 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=82 name=(null) inode=13880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=83 name=(null) inode=13885 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
19:43:16.623000 audit: PATH item=84 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=85 name=(null) inode=13886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=86 name=(null) inode=13886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=87 name=(null) inode=13887 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=88 name=(null) inode=13886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=89 name=(null) inode=13888 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=90 name=(null) inode=13886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=91 name=(null) inode=13889 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=92 name=(null) inode=13886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=93 
name=(null) inode=13890 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=94 name=(null) inode=13886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=95 name=(null) inode=13891 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=96 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=97 name=(null) inode=13892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=98 name=(null) inode=13892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=99 name=(null) inode=13893 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=100 name=(null) inode=13892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=101 name=(null) inode=13894 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=102 name=(null) inode=13892 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=103 name=(null) inode=13895 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=104 name=(null) inode=13892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=105 name=(null) inode=13896 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=106 name=(null) inode=13892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PATH item=107 name=(null) inode=13897 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:43:16.623000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 19:43:16.641411 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Feb 12 19:43:16.653421 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 19:43:16.662403 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 19:43:16.700534 kernel: kvm: Nested Virtualization enabled Feb 12 19:43:16.700574 kernel: SVM: kvm: Nested Paging enabled Feb 12 19:43:16.700606 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 12 19:43:16.701428 kernel: SVM: Virtual GIF supported Feb 12 19:43:16.713437 kernel: EDAC MC: Ver: 3.0.0 Feb 12 19:43:16.734684 systemd[1]: Finished systemd-udev-settle.service. 
Feb 12 19:43:16.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:16.736373 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:43:16.743431 lvm[1042]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:43:16.769152 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:43:16.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:16.769891 systemd[1]: Reached target cryptsetup.target. Feb 12 19:43:16.771434 systemd[1]: Starting lvm2-activation.service... Feb 12 19:43:16.774243 lvm[1043]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:43:16.800963 systemd[1]: Finished lvm2-activation.service. Feb 12 19:43:16.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:16.801691 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:43:16.802268 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:43:16.802291 systemd[1]: Reached target local-fs.target. Feb 12 19:43:16.802847 systemd[1]: Reached target machines.target. Feb 12 19:43:16.804336 systemd[1]: Starting ldconfig.service... Feb 12 19:43:16.805092 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Feb 12 19:43:16.805134 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:43:16.806150 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:43:16.807667 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:43:16.809532 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:43:16.810394 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:43:16.810431 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:43:16.811453 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:43:16.814562 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1045 (bootctl) Feb 12 19:43:16.815565 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:43:16.824516 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:43:16.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:16.826838 systemd-tmpfiles[1048]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:43:16.828252 systemd-tmpfiles[1048]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:43:16.831227 systemd-tmpfiles[1048]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:43:16.861361 systemd-fsck[1053]: fsck.fat 4.2 (2021-01-31) Feb 12 19:43:16.861361 systemd-fsck[1053]: /dev/vda1: 790 files, 115362/258078 clusters Feb 12 19:43:16.861896 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Feb 12 19:43:16.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:16.863831 systemd[1]: Mounting boot.mount... Feb 12 19:43:17.449975 systemd[1]: Mounted boot.mount. Feb 12 19:43:17.469195 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:43:17.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:17.478213 ldconfig[1044]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:43:17.485184 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:43:17.485702 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:43:17.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:17.487445 systemd[1]: Finished ldconfig.service. Feb 12 19:43:17.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:17.515483 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:43:17.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:17.517505 systemd[1]: Starting audit-rules.service... 
Feb 12 19:43:17.518959 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:43:17.520478 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:43:17.521000 audit: BPF prog-id=24 op=LOAD Feb 12 19:43:17.523698 systemd[1]: Starting systemd-resolved.service... Feb 12 19:43:17.524000 audit: BPF prog-id=25 op=LOAD Feb 12 19:43:17.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:17.531000 audit[1069]: SYSTEM_BOOT pid=1069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:43:17.526602 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:43:17.528165 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:43:17.529217 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:43:17.530219 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 19:43:17.535612 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:43:17.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:17.537031 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:43:17.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:17.539585 systemd[1]: Starting systemd-update-done.service... 
Feb 12 19:43:17.544313 systemd[1]: Finished systemd-update-done.service. Feb 12 19:43:17.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:43:17.546000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:43:17.546000 audit[1079]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff1d7bc6f0 a2=420 a3=0 items=0 ppid=1058 pid=1079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:43:17.546000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:43:17.547101 systemd[1]: Finished audit-rules.service. Feb 12 19:43:17.548677 augenrules[1079]: No rules Feb 12 19:43:17.575318 systemd-resolved[1067]: Positive Trust Anchors: Feb 12 19:43:17.575328 systemd-resolved[1067]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:43:17.575353 systemd-resolved[1067]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:43:17.581249 systemd-resolved[1067]: Defaulting to hostname 'linux'. Feb 12 19:43:17.582560 systemd[1]: Started systemd-resolved.service. Feb 12 19:43:17.583633 systemd[1]: Started systemd-timesyncd.service. 
Feb 12 19:43:17.583930 systemd-timesyncd[1068]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 12 19:43:17.583998 systemd-timesyncd[1068]: Initial clock synchronization to Mon 2024-02-12 19:43:17.750489 UTC. Feb 12 19:43:17.584628 systemd[1]: Reached target network.target. Feb 12 19:43:17.585337 systemd[1]: Reached target nss-lookup.target. Feb 12 19:43:17.586101 systemd[1]: Reached target sysinit.target. Feb 12 19:43:17.586846 systemd[1]: Started motdgen.path. Feb 12 19:43:17.587402 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:43:17.588192 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:43:17.588887 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:43:17.588912 systemd[1]: Reached target paths.target. Feb 12 19:43:17.589491 systemd[1]: Reached target time-set.target. Feb 12 19:43:17.590280 systemd[1]: Started logrotate.timer. Feb 12 19:43:17.591002 systemd[1]: Started mdadm.timer. Feb 12 19:43:17.591562 systemd[1]: Reached target timers.target. Feb 12 19:43:17.592442 systemd[1]: Listening on dbus.socket. Feb 12 19:43:17.593963 systemd[1]: Starting docker.socket... Feb 12 19:43:17.596603 systemd[1]: Listening on sshd.socket. Feb 12 19:43:17.597242 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:43:17.597590 systemd[1]: Listening on docker.socket. Feb 12 19:43:17.598200 systemd[1]: Reached target sockets.target. Feb 12 19:43:17.598793 systemd[1]: Reached target basic.target. Feb 12 19:43:17.599366 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:43:17.599397 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. 
Feb 12 19:43:17.600276 systemd[1]: Starting containerd.service... Feb 12 19:43:17.601629 systemd[1]: Starting dbus.service... Feb 12 19:43:17.602924 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:43:17.604504 systemd[1]: Starting extend-filesystems.service... Feb 12 19:43:17.605247 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:43:17.606083 systemd[1]: Starting motdgen.service... Feb 12 19:43:17.606501 jq[1090]: false Feb 12 19:43:17.608115 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:43:17.609615 systemd[1]: Starting prepare-critools.service... Feb 12 19:43:17.611051 systemd[1]: Starting prepare-helm.service... Feb 12 19:43:17.612429 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:43:17.614443 systemd[1]: Starting sshd-keygen.service... Feb 12 19:43:17.618524 systemd[1]: Starting systemd-logind.service... Feb 12 19:43:17.619215 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:43:17.619264 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 19:43:17.619586 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 19:43:17.620071 systemd[1]: Starting update-engine.service... Feb 12 19:43:17.621356 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:43:17.623271 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:43:17.623415 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 19:43:17.623646 systemd[1]: motdgen.service: Deactivated successfully. 
Feb 12 19:43:17.623761 systemd[1]: Finished motdgen.service. Feb 12 19:43:17.624996 jq[1110]: true Feb 12 19:43:17.625944 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:43:17.626067 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:43:17.630650 tar[1113]: ./ Feb 12 19:43:17.630650 tar[1113]: ./loopback Feb 12 19:43:17.631329 tar[1115]: linux-amd64/helm Feb 12 19:43:17.631494 tar[1114]: crictl Feb 12 19:43:17.639374 jq[1118]: true Feb 12 19:43:17.640210 dbus-daemon[1089]: [system] SELinux support is enabled Feb 12 19:43:17.640345 systemd[1]: Started dbus.service. Feb 12 19:43:17.642710 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:43:17.642731 systemd[1]: Reached target system-config.target. Feb 12 19:43:17.643345 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:43:17.643361 systemd[1]: Reached target user-config.target. 
Feb 12 19:43:17.658666 extend-filesystems[1091]: Found sr0 Feb 12 19:43:17.658666 extend-filesystems[1091]: Found vda Feb 12 19:43:17.660476 extend-filesystems[1091]: Found vda1 Feb 12 19:43:17.660476 extend-filesystems[1091]: Found vda2 Feb 12 19:43:17.660476 extend-filesystems[1091]: Found vda3 Feb 12 19:43:17.660476 extend-filesystems[1091]: Found usr Feb 12 19:43:17.660476 extend-filesystems[1091]: Found vda4 Feb 12 19:43:17.660476 extend-filesystems[1091]: Found vda6 Feb 12 19:43:17.660476 extend-filesystems[1091]: Found vda7 Feb 12 19:43:17.660476 extend-filesystems[1091]: Found vda9 Feb 12 19:43:17.660476 extend-filesystems[1091]: Checking size of /dev/vda9 Feb 12 19:43:17.665844 env[1119]: time="2024-02-12T19:43:17.664677196Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:43:17.666698 update_engine[1109]: I0212 19:43:17.666409 1109 main.cc:92] Flatcar Update Engine starting Feb 12 19:43:17.667931 systemd[1]: Started update-engine.service. Feb 12 19:43:17.668017 update_engine[1109]: I0212 19:43:17.667970 1109 update_check_scheduler.cc:74] Next update check in 8m49s Feb 12 19:43:17.670011 systemd[1]: Started locksmithd.service. Feb 12 19:43:17.680592 systemd-networkd[1018]: eth0: Gained IPv6LL Feb 12 19:43:17.690867 extend-filesystems[1091]: Resized partition /dev/vda9 Feb 12 19:43:17.692025 extend-filesystems[1150]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 19:43:17.697412 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 12 19:43:17.712631 tar[1113]: ./bandwidth Feb 12 19:43:17.716077 env[1119]: time="2024-02-12T19:43:17.716036796Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 12 19:43:17.717292 systemd-logind[1107]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 19:43:17.717314 systemd-logind[1107]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 19:43:17.718767 systemd-logind[1107]: New seat seat0. Feb 12 19:43:17.718912 env[1119]: time="2024-02-12T19:43:17.718886429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:43:17.720231 env[1119]: time="2024-02-12T19:43:17.720207095Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:43:17.720305 env[1119]: time="2024-02-12T19:43:17.720286053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:43:17.720583 env[1119]: time="2024-02-12T19:43:17.720565547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:43:17.720666 env[1119]: time="2024-02-12T19:43:17.720647681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 19:43:17.720742 env[1119]: time="2024-02-12T19:43:17.720722732Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:43:17.720841 env[1119]: time="2024-02-12T19:43:17.720823050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Feb 12 19:43:17.720967 env[1119]: time="2024-02-12T19:43:17.720950459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:43:17.721300 env[1119]: time="2024-02-12T19:43:17.721271300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:43:17.721524 env[1119]: time="2024-02-12T19:43:17.721506852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:43:17.721597 env[1119]: time="2024-02-12T19:43:17.721578156Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 19:43:17.721719 env[1119]: time="2024-02-12T19:43:17.721701357Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:43:17.721974 env[1119]: time="2024-02-12T19:43:17.721941447Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:43:17.722375 bash[1149]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:43:17.722881 systemd[1]: Started systemd-logind.service. Feb 12 19:43:17.724422 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 12 19:43:17.726840 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 19:43:17.751190 extend-filesystems[1150]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 19:43:17.751190 extend-filesystems[1150]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 19:43:17.751190 extend-filesystems[1150]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Feb 12 19:43:17.756732 extend-filesystems[1091]: Resized filesystem in /dev/vda9 Feb 12 19:43:17.757381 env[1119]: time="2024-02-12T19:43:17.752843749Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:43:17.757381 env[1119]: time="2024-02-12T19:43:17.752892370Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 19:43:17.757381 env[1119]: time="2024-02-12T19:43:17.752905204Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:43:17.757381 env[1119]: time="2024-02-12T19:43:17.752937735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:43:17.757381 env[1119]: time="2024-02-12T19:43:17.752951130Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:43:17.757381 env[1119]: time="2024-02-12T19:43:17.752976197Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:43:17.757381 env[1119]: time="2024-02-12T19:43:17.752987338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:43:17.757381 env[1119]: time="2024-02-12T19:43:17.752999310Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:43:17.757381 env[1119]: time="2024-02-12T19:43:17.753010511Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:43:17.757381 env[1119]: time="2024-02-12T19:43:17.753022784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Feb 12 19:43:17.757381 env[1119]: time="2024-02-12T19:43:17.753045056Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:43:17.757381 env[1119]: time="2024-02-12T19:43:17.753056367Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:43:17.757381 env[1119]: time="2024-02-12T19:43:17.753199045Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:43:17.757381 env[1119]: time="2024-02-12T19:43:17.753278364Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:43:17.752573 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:43:17.757910 env[1119]: time="2024-02-12T19:43:17.753517352Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:43:17.757910 env[1119]: time="2024-02-12T19:43:17.753549682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 19:43:17.757910 env[1119]: time="2024-02-12T19:43:17.753564320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 19:43:17.757910 env[1119]: time="2024-02-12T19:43:17.753601770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:43:17.757910 env[1119]: time="2024-02-12T19:43:17.753612310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:43:17.757910 env[1119]: time="2024-02-12T19:43:17.753641675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 19:43:17.757910 env[1119]: time="2024-02-12T19:43:17.753654920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 12 19:43:17.757910 env[1119]: time="2024-02-12T19:43:17.753665429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:43:17.757910 env[1119]: time="2024-02-12T19:43:17.753675689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:43:17.757910 env[1119]: time="2024-02-12T19:43:17.753685627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 19:43:17.757910 env[1119]: time="2024-02-12T19:43:17.753707999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:43:17.757910 env[1119]: time="2024-02-12T19:43:17.753720232Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:43:17.757910 env[1119]: time="2024-02-12T19:43:17.753830178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 19:43:17.757910 env[1119]: time="2024-02-12T19:43:17.753856528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 19:43:17.757910 env[1119]: time="2024-02-12T19:43:17.753867478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:43:17.752716 systemd[1]: Finished extend-filesystems.service. Feb 12 19:43:17.758220 env[1119]: time="2024-02-12T19:43:17.753877126Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:43:17.758220 env[1119]: time="2024-02-12T19:43:17.753889449Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:43:17.758220 env[1119]: time="2024-02-12T19:43:17.753899909Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:43:17.758220 env[1119]: time="2024-02-12T19:43:17.753931699Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:43:17.758220 env[1119]: time="2024-02-12T19:43:17.753964891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 19:43:17.755673 systemd[1]: Started containerd.service. Feb 12 19:43:17.758349 env[1119]: time="2024-02-12T19:43:17.754163854Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s 
EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:43:17.758349 env[1119]: time="2024-02-12T19:43:17.754207185Z" level=info msg="Connect containerd service" Feb 12 19:43:17.758349 env[1119]: time="2024-02-12T19:43:17.754246138Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:43:17.758349 env[1119]: time="2024-02-12T19:43:17.754792513Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:43:17.758349 env[1119]: time="2024-02-12T19:43:17.755094319Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 19:43:17.758349 env[1119]: time="2024-02-12T19:43:17.755123714Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 12 19:43:17.758349 env[1119]: time="2024-02-12T19:43:17.755173317Z" level=info msg="containerd successfully booted in 0.091118s" Feb 12 19:43:17.761596 env[1119]: time="2024-02-12T19:43:17.761568067Z" level=info msg="Start subscribing containerd event" Feb 12 19:43:17.762317 env[1119]: time="2024-02-12T19:43:17.762302394Z" level=info msg="Start recovering state" Feb 12 19:43:17.762468 env[1119]: time="2024-02-12T19:43:17.762453617Z" level=info msg="Start event monitor" Feb 12 19:43:17.762571 env[1119]: time="2024-02-12T19:43:17.762555007Z" level=info msg="Start snapshots syncer" Feb 12 19:43:17.762682 env[1119]: time="2024-02-12T19:43:17.762665144Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:43:17.762773 env[1119]: time="2024-02-12T19:43:17.762756575Z" level=info msg="Start streaming server" Feb 12 19:43:17.765379 locksmithd[1137]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:43:17.784838 tar[1113]: ./ptp Feb 12 19:43:17.819815 tar[1113]: ./vlan Feb 12 19:43:17.856019 tar[1113]: ./host-device Feb 12 19:43:17.892020 tar[1113]: ./tuning Feb 12 19:43:17.898379 sshd_keygen[1111]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:43:17.916603 systemd[1]: Finished sshd-keygen.service. Feb 12 19:43:17.918482 systemd[1]: Starting issuegen.service... Feb 12 19:43:17.923373 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:43:17.923491 systemd[1]: Finished issuegen.service. Feb 12 19:43:17.924986 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:43:17.928408 tar[1113]: ./vrf Feb 12 19:43:17.930092 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:43:17.931846 systemd[1]: Started getty@tty1.service. Feb 12 19:43:17.933330 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 19:43:17.934158 systemd[1]: Reached target getty.target. 
Feb 12 19:43:17.961374 tar[1113]: ./sbr Feb 12 19:43:17.991442 tar[1113]: ./tap Feb 12 19:43:18.025109 tar[1113]: ./dhcp Feb 12 19:43:18.082831 systemd[1]: Finished prepare-critools.service. Feb 12 19:43:18.090850 tar[1115]: linux-amd64/LICENSE Feb 12 19:43:18.090996 tar[1115]: linux-amd64/README.md Feb 12 19:43:18.095007 systemd[1]: Finished prepare-helm.service. Feb 12 19:43:18.109640 tar[1113]: ./static Feb 12 19:43:18.131682 tar[1113]: ./firewall Feb 12 19:43:18.169487 tar[1113]: ./macvlan Feb 12 19:43:18.203170 tar[1113]: ./dummy Feb 12 19:43:18.235848 tar[1113]: ./bridge Feb 12 19:43:18.272782 tar[1113]: ./ipvlan Feb 12 19:43:18.304153 tar[1113]: ./portmap Feb 12 19:43:18.332310 tar[1113]: ./host-local Feb 12 19:43:18.366607 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 19:43:18.367582 systemd[1]: Reached target multi-user.target. Feb 12 19:43:18.369432 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:43:18.375771 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:43:18.375887 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:43:18.376824 systemd[1]: Startup finished in 495ms (kernel) + 6.129s (initrd) + 5.589s (userspace) = 12.215s. Feb 12 19:43:21.781532 systemd[1]: Created slice system-sshd.slice. Feb 12 19:43:21.782447 systemd[1]: Started sshd@0-10.0.0.136:22-10.0.0.1:36860.service. Feb 12 19:43:21.823859 sshd[1178]: Accepted publickey for core from 10.0.0.1 port 36860 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:43:21.825046 sshd[1178]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:43:21.832609 systemd-logind[1107]: New session 1 of user core. Feb 12 19:43:21.833443 systemd[1]: Created slice user-500.slice. Feb 12 19:43:21.834381 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:43:21.840932 systemd[1]: Finished user-runtime-dir@500.service. 
Feb 12 19:43:21.842268 systemd[1]: Starting user@500.service... Feb 12 19:43:21.844320 (systemd)[1181]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:43:21.911985 systemd[1181]: Queued start job for default target default.target. Feb 12 19:43:21.912549 systemd[1181]: Reached target paths.target. Feb 12 19:43:21.912571 systemd[1181]: Reached target sockets.target. Feb 12 19:43:21.912584 systemd[1181]: Reached target timers.target. Feb 12 19:43:21.912596 systemd[1181]: Reached target basic.target. Feb 12 19:43:21.912632 systemd[1181]: Reached target default.target. Feb 12 19:43:21.912659 systemd[1181]: Startup finished in 64ms. Feb 12 19:43:21.912722 systemd[1]: Started user@500.service. Feb 12 19:43:21.913635 systemd[1]: Started session-1.scope. Feb 12 19:43:21.963640 systemd[1]: Started sshd@1-10.0.0.136:22-10.0.0.1:36870.service. Feb 12 19:43:22.006843 sshd[1190]: Accepted publickey for core from 10.0.0.1 port 36870 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:43:22.008006 sshd[1190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:43:22.011246 systemd-logind[1107]: New session 2 of user core. Feb 12 19:43:22.011985 systemd[1]: Started session-2.scope. Feb 12 19:43:22.065366 sshd[1190]: pam_unix(sshd:session): session closed for user core Feb 12 19:43:22.067710 systemd[1]: sshd@1-10.0.0.136:22-10.0.0.1:36870.service: Deactivated successfully. Feb 12 19:43:22.068253 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 19:43:22.068701 systemd-logind[1107]: Session 2 logged out. Waiting for processes to exit. Feb 12 19:43:22.069752 systemd[1]: Started sshd@2-10.0.0.136:22-10.0.0.1:36882.service. Feb 12 19:43:22.070372 systemd-logind[1107]: Removed session 2. 
Feb 12 19:43:22.107963 sshd[1196]: Accepted publickey for core from 10.0.0.1 port 36882 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:43:22.109169 sshd[1196]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:43:22.112462 systemd-logind[1107]: New session 3 of user core. Feb 12 19:43:22.113246 systemd[1]: Started session-3.scope. Feb 12 19:43:22.164144 sshd[1196]: pam_unix(sshd:session): session closed for user core Feb 12 19:43:22.166590 systemd[1]: sshd@2-10.0.0.136:22-10.0.0.1:36882.service: Deactivated successfully. Feb 12 19:43:22.167050 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 19:43:22.167512 systemd-logind[1107]: Session 3 logged out. Waiting for processes to exit. Feb 12 19:43:22.168327 systemd[1]: Started sshd@3-10.0.0.136:22-10.0.0.1:36894.service. Feb 12 19:43:22.168991 systemd-logind[1107]: Removed session 3. Feb 12 19:43:22.206834 sshd[1202]: Accepted publickey for core from 10.0.0.1 port 36894 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:43:22.207775 sshd[1202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:43:22.210689 systemd-logind[1107]: New session 4 of user core. Feb 12 19:43:22.211376 systemd[1]: Started session-4.scope. Feb 12 19:43:22.262661 sshd[1202]: pam_unix(sshd:session): session closed for user core Feb 12 19:43:22.264881 systemd[1]: sshd@3-10.0.0.136:22-10.0.0.1:36894.service: Deactivated successfully. Feb 12 19:43:22.265395 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:43:22.265875 systemd-logind[1107]: Session 4 logged out. Waiting for processes to exit. Feb 12 19:43:22.266853 systemd[1]: Started sshd@4-10.0.0.136:22-10.0.0.1:36900.service. Feb 12 19:43:22.267467 systemd-logind[1107]: Removed session 4. 
Feb 12 19:43:22.307982 sshd[1208]: Accepted publickey for core from 10.0.0.1 port 36900 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:43:22.308970 sshd[1208]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:43:22.312202 systemd-logind[1107]: New session 5 of user core. Feb 12 19:43:22.313012 systemd[1]: Started session-5.scope. Feb 12 19:43:22.367083 sudo[1211]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:43:22.367239 sudo[1211]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:43:22.887743 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:43:22.892547 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:43:22.892869 systemd[1]: Reached target network-online.target. Feb 12 19:43:22.894221 systemd[1]: Starting docker.service... Feb 12 19:43:22.928313 env[1229]: time="2024-02-12T19:43:22.928259195Z" level=info msg="Starting up" Feb 12 19:43:22.929618 env[1229]: time="2024-02-12T19:43:22.929582610Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:43:22.929618 env[1229]: time="2024-02-12T19:43:22.929610558Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:43:22.929688 env[1229]: time="2024-02-12T19:43:22.929634796Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:43:22.929688 env[1229]: time="2024-02-12T19:43:22.929646479Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:43:22.931126 env[1229]: time="2024-02-12T19:43:22.931092814Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:43:22.931126 env[1229]: time="2024-02-12T19:43:22.931117659Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:43:22.931198 env[1229]: 
time="2024-02-12T19:43:22.931137039Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:43:22.931198 env[1229]: time="2024-02-12T19:43:22.931145344Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:43:24.269751 env[1229]: time="2024-02-12T19:43:24.269699919Z" level=info msg="Loading containers: start." Feb 12 19:43:24.362420 kernel: Initializing XFRM netlink socket Feb 12 19:43:24.387429 env[1229]: time="2024-02-12T19:43:24.387387418Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 12 19:43:24.427991 systemd-networkd[1018]: docker0: Link UP Feb 12 19:43:24.435622 env[1229]: time="2024-02-12T19:43:24.435595641Z" level=info msg="Loading containers: done." Feb 12 19:43:24.443039 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1755547726-merged.mount: Deactivated successfully. Feb 12 19:43:24.445504 env[1229]: time="2024-02-12T19:43:24.445463521Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 19:43:24.445654 env[1229]: time="2024-02-12T19:43:24.445622307Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 19:43:24.445726 env[1229]: time="2024-02-12T19:43:24.445712358Z" level=info msg="Daemon has completed initialization" Feb 12 19:43:24.459530 systemd[1]: Started docker.service. Feb 12 19:43:24.466051 env[1229]: time="2024-02-12T19:43:24.466008024Z" level=info msg="API listen on /run/docker.sock" Feb 12 19:43:24.479692 systemd[1]: Reloading. 
Feb 12 19:43:24.539750 /usr/lib/systemd/system-generators/torcx-generator[1371]: time="2024-02-12T19:43:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:43:24.539778 /usr/lib/systemd/system-generators/torcx-generator[1371]: time="2024-02-12T19:43:24Z" level=info msg="torcx already run" Feb 12 19:43:24.594657 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:43:24.594673 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:43:24.610775 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:43:24.679798 systemd[1]: Started kubelet.service. Feb 12 19:43:24.723439 kubelet[1411]: E0212 19:43:24.723366 1411 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 12 19:43:24.726965 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:43:24.727109 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 12 19:43:25.048995 env[1119]: time="2024-02-12T19:43:25.048948746Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\"" Feb 12 19:43:25.967737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3031942346.mount: Deactivated successfully. Feb 12 19:43:27.854136 env[1119]: time="2024-02-12T19:43:27.854054885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:27.856114 env[1119]: time="2024-02-12T19:43:27.856055261Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:27.857924 env[1119]: time="2024-02-12T19:43:27.857878475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:27.859408 env[1119]: time="2024-02-12T19:43:27.859366569Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:27.860134 env[1119]: time="2024-02-12T19:43:27.860099909Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47\"" Feb 12 19:43:27.869530 env[1119]: time="2024-02-12T19:43:27.869488794Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\"" Feb 12 19:43:30.028546 env[1119]: time="2024-02-12T19:43:30.028483656Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Feb 12 19:43:30.030908 env[1119]: time="2024-02-12T19:43:30.030874982Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:30.033294 env[1119]: time="2024-02-12T19:43:30.033263370Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:30.036404 env[1119]: time="2024-02-12T19:43:30.036341587Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:30.037188 env[1119]: time="2024-02-12T19:43:30.037156282Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c\"" Feb 12 19:43:30.045578 env[1119]: time="2024-02-12T19:43:30.045545575Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\"" Feb 12 19:43:31.672190 env[1119]: time="2024-02-12T19:43:31.672126495Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:31.673962 env[1119]: time="2024-02-12T19:43:31.673908361Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:31.675776 env[1119]: time="2024-02-12T19:43:31.675741500Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 19:43:31.678326 env[1119]: time="2024-02-12T19:43:31.678287911Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:31.678881 env[1119]: time="2024-02-12T19:43:31.678846749Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\"" Feb 12 19:43:31.687498 env[1119]: time="2024-02-12T19:43:31.687461500Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 12 19:43:32.661417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1325706523.mount: Deactivated successfully. Feb 12 19:43:33.892718 env[1119]: time="2024-02-12T19:43:33.892651115Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:33.894418 env[1119]: time="2024-02-12T19:43:33.894380335Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:33.896069 env[1119]: time="2024-02-12T19:43:33.896045665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:33.897365 env[1119]: time="2024-02-12T19:43:33.897331754Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:33.897846 env[1119]: time="2024-02-12T19:43:33.897811305Z" 
level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\"" Feb 12 19:43:33.905619 env[1119]: time="2024-02-12T19:43:33.905593797Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 19:43:34.820297 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 19:43:34.820472 systemd[1]: Stopped kubelet.service. Feb 12 19:43:34.821675 systemd[1]: Started kubelet.service. Feb 12 19:43:34.828299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2017586925.mount: Deactivated successfully. Feb 12 19:43:34.837211 env[1119]: time="2024-02-12T19:43:34.837165359Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:34.838736 env[1119]: time="2024-02-12T19:43:34.838710068Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:34.840850 env[1119]: time="2024-02-12T19:43:34.840807124Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:34.843630 env[1119]: time="2024-02-12T19:43:34.843589263Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:34.844248 env[1119]: time="2024-02-12T19:43:34.844206693Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 12 19:43:34.855688 env[1119]: time="2024-02-12T19:43:34.855654589Z" level=info 
msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\"" Feb 12 19:43:34.866460 kubelet[1459]: E0212 19:43:34.866411 1459 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 12 19:43:34.869151 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:43:34.869270 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:43:35.351631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3338896605.mount: Deactivated successfully. Feb 12 19:43:40.240112 env[1119]: time="2024-02-12T19:43:40.240044489Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:40.241779 env[1119]: time="2024-02-12T19:43:40.241721839Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:40.243674 env[1119]: time="2024-02-12T19:43:40.243641100Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:40.245279 env[1119]: time="2024-02-12T19:43:40.245222003Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:40.245938 env[1119]: time="2024-02-12T19:43:40.245904624Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference 
\"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\"" Feb 12 19:43:40.254490 env[1119]: time="2024-02-12T19:43:40.254460371Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 12 19:43:41.251884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount618409010.mount: Deactivated successfully. Feb 12 19:43:42.062412 env[1119]: time="2024-02-12T19:43:42.062344271Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:42.064157 env[1119]: time="2024-02-12T19:43:42.064105226Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:42.065355 env[1119]: time="2024-02-12T19:43:42.065303002Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:42.066632 env[1119]: time="2024-02-12T19:43:42.066603947Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:42.067029 env[1119]: time="2024-02-12T19:43:42.066997676Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 12 19:43:44.210865 systemd[1]: Stopped kubelet.service. Feb 12 19:43:44.223733 systemd[1]: Reloading. 
Feb 12 19:43:44.288198 /usr/lib/systemd/system-generators/torcx-generator[1575]: time="2024-02-12T19:43:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:43:44.288227 /usr/lib/systemd/system-generators/torcx-generator[1575]: time="2024-02-12T19:43:44Z" level=info msg="torcx already run" Feb 12 19:43:44.342428 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:43:44.342444 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:43:44.358666 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:43:44.432450 systemd[1]: Started kubelet.service. Feb 12 19:43:44.471641 kubelet[1616]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:43:44.471641 kubelet[1616]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 12 19:43:44.471641 kubelet[1616]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 12 19:43:44.471641 kubelet[1616]: I0212 19:43:44.471598 1616 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:43:44.708975 kubelet[1616]: I0212 19:43:44.708931 1616 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 12 19:43:44.708975 kubelet[1616]: I0212 19:43:44.708960 1616 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:43:44.709192 kubelet[1616]: I0212 19:43:44.709171 1616 server.go:895] "Client rotation is on, will bootstrap in background" Feb 12 19:43:44.713659 kubelet[1616]: E0212 19:43:44.713640 1616 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.136:6443: connect: connection refused Feb 12 19:43:44.713842 kubelet[1616]: I0212 19:43:44.713827 1616 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:43:44.719704 kubelet[1616]: I0212 19:43:44.719680 1616 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:43:44.719866 kubelet[1616]: I0212 19:43:44.719845 1616 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:43:44.719993 kubelet[1616]: I0212 19:43:44.719974 1616 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 12 19:43:44.719993 kubelet[1616]: I0212 19:43:44.719992 1616 topology_manager.go:138] "Creating topology manager with none policy" Feb 12 19:43:44.720101 kubelet[1616]: I0212 19:43:44.719999 1616 container_manager_linux.go:301] "Creating device plugin manager" Feb 12 19:43:44.720101 kubelet[1616]: I0212 
19:43:44.720076 1616 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:43:44.720147 kubelet[1616]: I0212 19:43:44.720141 1616 kubelet.go:393] "Attempting to sync node with API server" Feb 12 19:43:44.720177 kubelet[1616]: I0212 19:43:44.720154 1616 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:43:44.720201 kubelet[1616]: I0212 19:43:44.720182 1616 kubelet.go:309] "Adding apiserver pod source" Feb 12 19:43:44.720201 kubelet[1616]: I0212 19:43:44.720196 1616 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:43:44.720720 kubelet[1616]: I0212 19:43:44.720707 1616 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:43:44.720829 kubelet[1616]: W0212 19:43:44.720724 1616 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Feb 12 19:43:44.720829 kubelet[1616]: E0212 19:43:44.720833 1616 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Feb 12 19:43:44.721009 kubelet[1616]: W0212 19:43:44.720701 1616 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Feb 12 19:43:44.721009 kubelet[1616]: E0212 19:43:44.720865 1616 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Feb 12 19:43:44.721088 kubelet[1616]: W0212 19:43:44.721064 1616 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 19:43:44.721546 kubelet[1616]: I0212 19:43:44.721518 1616 server.go:1232] "Started kubelet" Feb 12 19:43:44.723276 kubelet[1616]: E0212 19:43:44.722271 1616 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b3350f93a66239", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 44, 721486393, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 44, 721486393, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.136:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.136:6443: connect: connection refused'(may retry after sleeping) Feb 12 19:43:44.723276 kubelet[1616]: E0212 19:43:44.722465 1616 cri_stats_provider.go:448] "Failed to get the info of 
the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:43:44.723276 kubelet[1616]: E0212 19:43:44.722479 1616 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:43:44.723276 kubelet[1616]: I0212 19:43:44.722604 1616 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 19:43:44.723276 kubelet[1616]: I0212 19:43:44.722771 1616 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:43:44.724123 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 12 19:43:44.724299 kubelet[1616]: I0212 19:43:44.724280 1616 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:43:44.724587 kubelet[1616]: I0212 19:43:44.724574 1616 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 12 19:43:44.724689 kubelet[1616]: I0212 19:43:44.724588 1616 server.go:462] "Adding debug handlers to kubelet server" Feb 12 19:43:44.725814 kubelet[1616]: E0212 19:43:44.725786 1616 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:43:44.726021 kubelet[1616]: I0212 19:43:44.725988 1616 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 12 19:43:44.726108 kubelet[1616]: I0212 19:43:44.726094 1616 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:43:44.726140 kubelet[1616]: I0212 19:43:44.726134 1616 reconciler_new.go:29] "Reconciler: start to sync state" Feb 12 19:43:44.726548 kubelet[1616]: W0212 19:43:44.726512 1616 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get 
"https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Feb 12 19:43:44.726647 kubelet[1616]: E0212 19:43:44.726632 1616 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Feb 12 19:43:44.726771 kubelet[1616]: E0212 19:43:44.726746 1616 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="200ms" Feb 12 19:43:44.737331 kubelet[1616]: I0212 19:43:44.737268 1616 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 12 19:43:44.738442 kubelet[1616]: I0212 19:43:44.738418 1616 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 12 19:43:44.738494 kubelet[1616]: I0212 19:43:44.738457 1616 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 12 19:43:44.738494 kubelet[1616]: I0212 19:43:44.738485 1616 kubelet.go:2303] "Starting kubelet main sync loop" Feb 12 19:43:44.738579 kubelet[1616]: E0212 19:43:44.738556 1616 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 19:43:44.738998 kubelet[1616]: W0212 19:43:44.738951 1616 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Feb 12 19:43:44.738998 kubelet[1616]: E0212 19:43:44.738995 1616 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Feb 12 19:43:44.745026 kubelet[1616]: I0212 19:43:44.745002 1616 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:43:44.745026 kubelet[1616]: I0212 19:43:44.745033 1616 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:43:44.745186 kubelet[1616]: I0212 19:43:44.745045 1616 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:43:44.747511 kubelet[1616]: I0212 19:43:44.747492 1616 policy_none.go:49] "None policy: Start" Feb 12 19:43:44.748065 kubelet[1616]: I0212 19:43:44.748051 1616 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:43:44.748162 kubelet[1616]: I0212 19:43:44.748133 1616 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:43:44.753169 systemd[1]: Created slice kubepods.slice. 
Feb 12 19:43:44.756457 systemd[1]: Created slice kubepods-burstable.slice. Feb 12 19:43:44.758548 systemd[1]: Created slice kubepods-besteffort.slice. Feb 12 19:43:44.767018 kubelet[1616]: I0212 19:43:44.766991 1616 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:43:44.767289 kubelet[1616]: I0212 19:43:44.767188 1616 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:43:44.767574 kubelet[1616]: E0212 19:43:44.767547 1616 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 12 19:43:44.827039 kubelet[1616]: I0212 19:43:44.827013 1616 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:43:44.827374 kubelet[1616]: E0212 19:43:44.827359 1616 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Feb 12 19:43:44.839511 kubelet[1616]: I0212 19:43:44.839488 1616 topology_manager.go:215] "Topology Admit Handler" podUID="9850bb49d96c6402f1f45cd4d4bdb217" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 12 19:43:44.840358 kubelet[1616]: I0212 19:43:44.840339 1616 topology_manager.go:215] "Topology Admit Handler" podUID="212dcc5e2f08bec92c239ac5786b7e2b" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 12 19:43:44.841020 kubelet[1616]: I0212 19:43:44.841004 1616 topology_manager.go:215] "Topology Admit Handler" podUID="d0325d16aab19669b5fea4b6623890e6" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 12 19:43:44.844982 systemd[1]: Created slice kubepods-burstable-pod9850bb49d96c6402f1f45cd4d4bdb217.slice. Feb 12 19:43:44.855018 systemd[1]: Created slice kubepods-burstable-pod212dcc5e2f08bec92c239ac5786b7e2b.slice. 
Feb 12 19:43:44.867876 systemd[1]: Created slice kubepods-burstable-podd0325d16aab19669b5fea4b6623890e6.slice. Feb 12 19:43:44.927531 kubelet[1616]: E0212 19:43:44.927503 1616 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="400ms" Feb 12 19:43:45.028520 kubelet[1616]: I0212 19:43:45.027860 1616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9850bb49d96c6402f1f45cd4d4bdb217-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9850bb49d96c6402f1f45cd4d4bdb217\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:43:45.028520 kubelet[1616]: I0212 19:43:45.027896 1616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9850bb49d96c6402f1f45cd4d4bdb217-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9850bb49d96c6402f1f45cd4d4bdb217\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:43:45.028520 kubelet[1616]: I0212 19:43:45.027919 1616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:43:45.028520 kubelet[1616]: I0212 19:43:45.027954 1616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" 
Feb 12 19:43:45.028520 kubelet[1616]: I0212 19:43:45.027972 1616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:43:45.028752 kubelet[1616]: I0212 19:43:45.027990 1616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:43:45.028752 kubelet[1616]: I0212 19:43:45.028036 1616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0325d16aab19669b5fea4b6623890e6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d0325d16aab19669b5fea4b6623890e6\") " pod="kube-system/kube-scheduler-localhost" Feb 12 19:43:45.028752 kubelet[1616]: I0212 19:43:45.028091 1616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9850bb49d96c6402f1f45cd4d4bdb217-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9850bb49d96c6402f1f45cd4d4bdb217\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:43:45.028752 kubelet[1616]: I0212 19:43:45.028123 1616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 12 19:43:45.028752 kubelet[1616]: I0212 19:43:45.028580 1616 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:43:45.028988 kubelet[1616]: E0212 19:43:45.028947 1616 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Feb 12 19:43:45.153569 kubelet[1616]: E0212 19:43:45.153541 1616 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:45.154130 env[1119]: time="2024-02-12T19:43:45.154088604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9850bb49d96c6402f1f45cd4d4bdb217,Namespace:kube-system,Attempt:0,}" Feb 12 19:43:45.167261 kubelet[1616]: E0212 19:43:45.167234 1616 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:45.167836 env[1119]: time="2024-02-12T19:43:45.167792438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:212dcc5e2f08bec92c239ac5786b7e2b,Namespace:kube-system,Attempt:0,}" Feb 12 19:43:45.170018 kubelet[1616]: E0212 19:43:45.170001 1616 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:45.170275 env[1119]: time="2024-02-12T19:43:45.170238425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d0325d16aab19669b5fea4b6623890e6,Namespace:kube-system,Attempt:0,}" Feb 12 19:43:45.328649 kubelet[1616]: E0212 19:43:45.328541 1616 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="800ms" Feb 12 19:43:45.430815 kubelet[1616]: I0212 19:43:45.430783 1616 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:43:45.431126 kubelet[1616]: E0212 19:43:45.431107 1616 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Feb 12 19:43:45.666437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3844828564.mount: Deactivated successfully. Feb 12 19:43:45.672763 env[1119]: time="2024-02-12T19:43:45.672722508Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:45.674177 env[1119]: time="2024-02-12T19:43:45.674138172Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:45.675051 env[1119]: time="2024-02-12T19:43:45.675009648Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:45.675782 env[1119]: time="2024-02-12T19:43:45.675758574Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:45.678103 env[1119]: time="2024-02-12T19:43:45.678079086Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:45.678954 env[1119]: time="2024-02-12T19:43:45.678929420Z" 
level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:45.680092 env[1119]: time="2024-02-12T19:43:45.680057181Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:45.681490 env[1119]: time="2024-02-12T19:43:45.681439765Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:45.683331 env[1119]: time="2024-02-12T19:43:45.683302791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:45.684484 env[1119]: time="2024-02-12T19:43:45.684462830Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:45.685778 env[1119]: time="2024-02-12T19:43:45.685738322Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:45.687115 env[1119]: time="2024-02-12T19:43:45.687087033Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:45.874078 kubelet[1616]: W0212 19:43:45.874005 1616 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to 
list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Feb 12 19:43:45.874078 kubelet[1616]: E0212 19:43:45.874069 1616 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Feb 12 19:43:45.947962 kubelet[1616]: W0212 19:43:45.947837 1616 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Feb 12 19:43:45.947962 kubelet[1616]: E0212 19:43:45.947902 1616 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Feb 12 19:43:45.992056 env[1119]: time="2024-02-12T19:43:45.991975109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:43:45.992056 env[1119]: time="2024-02-12T19:43:45.992012009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:43:45.992056 env[1119]: time="2024-02-12T19:43:45.992022273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:43:45.992200 env[1119]: time="2024-02-12T19:43:45.992148982Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72c3894e5d77a3e87cb3beed4fa2e0b56e12eb67367450ff4983bf1277b15c7c pid=1656 runtime=io.containerd.runc.v2 Feb 12 19:43:46.002693 systemd[1]: Started cri-containerd-72c3894e5d77a3e87cb3beed4fa2e0b56e12eb67367450ff4983bf1277b15c7c.scope. Feb 12 19:43:46.014506 env[1119]: time="2024-02-12T19:43:46.014455704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:43:46.014626 env[1119]: time="2024-02-12T19:43:46.014492100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:43:46.014626 env[1119]: time="2024-02-12T19:43:46.014503105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:43:46.014626 env[1119]: time="2024-02-12T19:43:46.014605809Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/71084b2fff0d648ffa859f007e1757aa516f8a5e9749a4b351ea3055ebaa1d7f pid=1689 runtime=io.containerd.runc.v2 Feb 12 19:43:46.017626 env[1119]: time="2024-02-12T19:43:46.017571655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:43:46.017708 env[1119]: time="2024-02-12T19:43:46.017603280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:43:46.017708 env[1119]: time="2024-02-12T19:43:46.017613254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:43:46.017813 env[1119]: time="2024-02-12T19:43:46.017724057Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/69b31691136d5fea05c703be8f60e572edefcc068851b4cfd0838093c985b088 pid=1707 runtime=io.containerd.runc.v2 Feb 12 19:43:46.023058 systemd[1]: Started cri-containerd-71084b2fff0d648ffa859f007e1757aa516f8a5e9749a4b351ea3055ebaa1d7f.scope. Feb 12 19:43:46.028432 systemd[1]: Started cri-containerd-69b31691136d5fea05c703be8f60e572edefcc068851b4cfd0838093c985b088.scope. Feb 12 19:43:46.041203 env[1119]: time="2024-02-12T19:43:46.041165322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9850bb49d96c6402f1f45cd4d4bdb217,Namespace:kube-system,Attempt:0,} returns sandbox id \"72c3894e5d77a3e87cb3beed4fa2e0b56e12eb67367450ff4983bf1277b15c7c\"" Feb 12 19:43:46.042199 kubelet[1616]: E0212 19:43:46.042078 1616 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:46.044170 env[1119]: time="2024-02-12T19:43:46.044139639Z" level=info msg="CreateContainer within sandbox \"72c3894e5d77a3e87cb3beed4fa2e0b56e12eb67367450ff4983bf1277b15c7c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 12 19:43:46.056155 env[1119]: time="2024-02-12T19:43:46.056110239Z" level=info msg="CreateContainer within sandbox \"72c3894e5d77a3e87cb3beed4fa2e0b56e12eb67367450ff4983bf1277b15c7c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f049a8ffaa7fd600c889eea4127a11be5cf3da5a93506f6f0bab28495a26e169\"" Feb 12 19:43:46.057162 env[1119]: time="2024-02-12T19:43:46.057140700Z" level=info msg="StartContainer for \"f049a8ffaa7fd600c889eea4127a11be5cf3da5a93506f6f0bab28495a26e169\"" Feb 12 19:43:46.062130 env[1119]: time="2024-02-12T19:43:46.062107247Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:212dcc5e2f08bec92c239ac5786b7e2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"71084b2fff0d648ffa859f007e1757aa516f8a5e9749a4b351ea3055ebaa1d7f\"" Feb 12 19:43:46.066327 kubelet[1616]: E0212 19:43:46.066179 1616 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:46.068112 env[1119]: time="2024-02-12T19:43:46.068085390Z" level=info msg="CreateContainer within sandbox \"71084b2fff0d648ffa859f007e1757aa516f8a5e9749a4b351ea3055ebaa1d7f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 12 19:43:46.072411 env[1119]: time="2024-02-12T19:43:46.072341985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d0325d16aab19669b5fea4b6623890e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"69b31691136d5fea05c703be8f60e572edefcc068851b4cfd0838093c985b088\"" Feb 12 19:43:46.073040 kubelet[1616]: E0212 19:43:46.073013 1616 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:46.074666 env[1119]: time="2024-02-12T19:43:46.074574113Z" level=info msg="CreateContainer within sandbox \"69b31691136d5fea05c703be8f60e572edefcc068851b4cfd0838093c985b088\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 12 19:43:46.077794 systemd[1]: Started cri-containerd-f049a8ffaa7fd600c889eea4127a11be5cf3da5a93506f6f0bab28495a26e169.scope. 
Feb 12 19:43:46.085218 env[1119]: time="2024-02-12T19:43:46.085176882Z" level=info msg="CreateContainer within sandbox \"71084b2fff0d648ffa859f007e1757aa516f8a5e9749a4b351ea3055ebaa1d7f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d23136649f3f173928d93da20d5a6c9632889a3d68fa22979e0a8ec54162f7b2\"" Feb 12 19:43:46.087243 env[1119]: time="2024-02-12T19:43:46.086685036Z" level=info msg="StartContainer for \"d23136649f3f173928d93da20d5a6c9632889a3d68fa22979e0a8ec54162f7b2\"" Feb 12 19:43:46.093441 env[1119]: time="2024-02-12T19:43:46.091498880Z" level=info msg="CreateContainer within sandbox \"69b31691136d5fea05c703be8f60e572edefcc068851b4cfd0838093c985b088\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c1bcbecbd102dd12f4d20e889ecf8dc494ea8eb1c8056110fb0f515348e9fe66\"" Feb 12 19:43:46.093441 env[1119]: time="2024-02-12T19:43:46.091870581Z" level=info msg="StartContainer for \"c1bcbecbd102dd12f4d20e889ecf8dc494ea8eb1c8056110fb0f515348e9fe66\"" Feb 12 19:43:46.101659 systemd[1]: Started cri-containerd-d23136649f3f173928d93da20d5a6c9632889a3d68fa22979e0a8ec54162f7b2.scope. Feb 12 19:43:46.112461 systemd[1]: Started cri-containerd-c1bcbecbd102dd12f4d20e889ecf8dc494ea8eb1c8056110fb0f515348e9fe66.scope. 
Feb 12 19:43:46.115767 env[1119]: time="2024-02-12T19:43:46.115721297Z" level=info msg="StartContainer for \"f049a8ffaa7fd600c889eea4127a11be5cf3da5a93506f6f0bab28495a26e169\" returns successfully" Feb 12 19:43:46.129829 kubelet[1616]: E0212 19:43:46.129783 1616 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="1.6s" Feb 12 19:43:46.146176 env[1119]: time="2024-02-12T19:43:46.146140043Z" level=info msg="StartContainer for \"c1bcbecbd102dd12f4d20e889ecf8dc494ea8eb1c8056110fb0f515348e9fe66\" returns successfully" Feb 12 19:43:46.146653 env[1119]: time="2024-02-12T19:43:46.146634445Z" level=info msg="StartContainer for \"d23136649f3f173928d93da20d5a6c9632889a3d68fa22979e0a8ec54162f7b2\" returns successfully" Feb 12 19:43:46.233080 kubelet[1616]: I0212 19:43:46.232959 1616 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:43:46.744731 kubelet[1616]: E0212 19:43:46.744666 1616 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:46.749963 kubelet[1616]: E0212 19:43:46.749946 1616 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:46.761936 kubelet[1616]: E0212 19:43:46.761924 1616 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:47.158480 kubelet[1616]: I0212 19:43:47.158363 1616 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 12 19:43:47.169326 kubelet[1616]: E0212 19:43:47.169298 1616 kubelet_node_status.go:458] 
"Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:43:47.269570 kubelet[1616]: E0212 19:43:47.269524 1616 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:43:47.370037 kubelet[1616]: E0212 19:43:47.369996 1616 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:43:47.470685 kubelet[1616]: E0212 19:43:47.470569 1616 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:43:47.571053 kubelet[1616]: E0212 19:43:47.571030 1616 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:43:47.671550 kubelet[1616]: E0212 19:43:47.671524 1616 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:43:47.765019 kubelet[1616]: E0212 19:43:47.764884 1616 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:47.772393 kubelet[1616]: E0212 19:43:47.772337 1616 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:43:47.872490 kubelet[1616]: E0212 19:43:47.872447 1616 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:43:47.972959 kubelet[1616]: E0212 19:43:47.972922 1616 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:43:48.073679 kubelet[1616]: E0212 19:43:48.073595 1616 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:43:48.174238 kubelet[1616]: E0212 19:43:48.174198 1616 kubelet_node_status.go:458] "Error getting the current node 
from lister" err="node \"localhost\" not found" Feb 12 19:43:48.274914 kubelet[1616]: E0212 19:43:48.274875 1616 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:43:48.375494 kubelet[1616]: E0212 19:43:48.375371 1616 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:43:48.475903 kubelet[1616]: E0212 19:43:48.475858 1616 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:43:48.722080 kubelet[1616]: I0212 19:43:48.721969 1616 apiserver.go:52] "Watching apiserver" Feb 12 19:43:48.726479 kubelet[1616]: I0212 19:43:48.726448 1616 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:43:49.494127 systemd[1]: Reloading. Feb 12 19:43:49.559516 /usr/lib/systemd/system-generators/torcx-generator[1913]: time="2024-02-12T19:43:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:43:49.559544 /usr/lib/systemd/system-generators/torcx-generator[1913]: time="2024-02-12T19:43:49Z" level=info msg="torcx already run" Feb 12 19:43:50.025671 kubelet[1616]: E0212 19:43:50.025629 1616 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:50.051212 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:43:50.051227 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 12 19:43:50.069157 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:43:50.156800 systemd[1]: Stopping kubelet.service... Feb 12 19:43:50.160964 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 19:43:50.161108 systemd[1]: Stopped kubelet.service. Feb 12 19:43:50.162425 systemd[1]: Started kubelet.service. Feb 12 19:43:50.213199 kubelet[1955]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:43:50.213478 kubelet[1955]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 12 19:43:50.213564 kubelet[1955]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 12 19:43:50.213694 kubelet[1955]: I0212 19:43:50.213665 1955 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:43:50.217030 sudo[1966]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 12 19:43:50.217190 sudo[1966]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 12 19:43:50.217754 kubelet[1955]: I0212 19:43:50.217725 1955 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 12 19:43:50.217754 kubelet[1955]: I0212 19:43:50.217756 1955 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:43:50.218035 kubelet[1955]: I0212 19:43:50.218014 1955 server.go:895] "Client rotation is on, will bootstrap in background" Feb 12 19:43:50.220048 kubelet[1955]: I0212 19:43:50.219698 1955 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 12 19:43:50.220530 kubelet[1955]: I0212 19:43:50.220509 1955 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:43:50.230301 kubelet[1955]: I0212 19:43:50.230263 1955 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:43:50.230504 kubelet[1955]: I0212 19:43:50.230472 1955 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:43:50.230699 kubelet[1955]: I0212 19:43:50.230676 1955 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 12 19:43:50.230819 kubelet[1955]: I0212 19:43:50.230707 1955 topology_manager.go:138] "Creating topology manager with none policy" Feb 12 19:43:50.230819 kubelet[1955]: I0212 19:43:50.230722 1955 container_manager_linux.go:301] "Creating device plugin manager" Feb 12 19:43:50.230819 kubelet[1955]: I0212 
19:43:50.230759 1955 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:43:50.230926 kubelet[1955]: I0212 19:43:50.230831 1955 kubelet.go:393] "Attempting to sync node with API server" Feb 12 19:43:50.230926 kubelet[1955]: I0212 19:43:50.230846 1955 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:43:50.230926 kubelet[1955]: I0212 19:43:50.230871 1955 kubelet.go:309] "Adding apiserver pod source" Feb 12 19:43:50.230926 kubelet[1955]: I0212 19:43:50.230889 1955 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:43:50.231277 kubelet[1955]: I0212 19:43:50.231264 1955 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:43:50.231914 kubelet[1955]: I0212 19:43:50.231894 1955 server.go:1232] "Started kubelet" Feb 12 19:43:50.239202 kubelet[1955]: I0212 19:43:50.233600 1955 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 19:43:50.239202 kubelet[1955]: I0212 19:43:50.233813 1955 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 12 19:43:50.239202 kubelet[1955]: I0212 19:43:50.233856 1955 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:43:50.239202 kubelet[1955]: I0212 19:43:50.234475 1955 server.go:462] "Adding debug handlers to kubelet server" Feb 12 19:43:50.239202 kubelet[1955]: I0212 19:43:50.237531 1955 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:43:50.239202 kubelet[1955]: I0212 19:43:50.238746 1955 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 12 19:43:50.239202 kubelet[1955]: I0212 19:43:50.238813 1955 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:43:50.239202 kubelet[1955]: I0212 19:43:50.238904 1955 reconciler_new.go:29] "Reconciler: start to sync state" Feb 12 19:43:50.252203 
kubelet[1955]: E0212 19:43:50.252189 1955 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:43:50.252292 kubelet[1955]: E0212 19:43:50.252278 1955 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:43:50.272455 kubelet[1955]: I0212 19:43:50.272335 1955 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 12 19:43:50.275224 kubelet[1955]: I0212 19:43:50.275195 1955 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 12 19:43:50.275479 kubelet[1955]: I0212 19:43:50.275448 1955 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 12 19:43:50.275553 kubelet[1955]: I0212 19:43:50.275495 1955 kubelet.go:2303] "Starting kubelet main sync loop" Feb 12 19:43:50.275592 kubelet[1955]: E0212 19:43:50.275562 1955 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 19:43:50.305154 kubelet[1955]: I0212 19:43:50.305047 1955 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:43:50.305154 kubelet[1955]: I0212 19:43:50.305070 1955 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:43:50.305154 kubelet[1955]: I0212 19:43:50.305083 1955 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:43:50.305322 kubelet[1955]: I0212 19:43:50.305207 1955 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 19:43:50.305322 kubelet[1955]: I0212 19:43:50.305229 1955 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 12 19:43:50.305322 kubelet[1955]: I0212 19:43:50.305235 1955 policy_none.go:49] "None policy: Start" Feb 12 19:43:50.305922 kubelet[1955]: I0212 19:43:50.305906 1955 
memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:43:50.305980 kubelet[1955]: I0212 19:43:50.305929 1955 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:43:50.306114 kubelet[1955]: I0212 19:43:50.306085 1955 state_mem.go:75] "Updated machine memory state" Feb 12 19:43:50.309436 kubelet[1955]: I0212 19:43:50.309414 1955 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:43:50.309807 kubelet[1955]: I0212 19:43:50.309724 1955 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:43:50.342609 kubelet[1955]: I0212 19:43:50.342582 1955 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:43:50.349287 kubelet[1955]: I0212 19:43:50.349249 1955 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 12 19:43:50.349414 kubelet[1955]: I0212 19:43:50.349331 1955 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 12 19:43:50.376680 kubelet[1955]: I0212 19:43:50.376637 1955 topology_manager.go:215] "Topology Admit Handler" podUID="9850bb49d96c6402f1f45cd4d4bdb217" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 12 19:43:50.376815 kubelet[1955]: I0212 19:43:50.376788 1955 topology_manager.go:215] "Topology Admit Handler" podUID="212dcc5e2f08bec92c239ac5786b7e2b" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 12 19:43:50.377038 kubelet[1955]: I0212 19:43:50.377018 1955 topology_manager.go:215] "Topology Admit Handler" podUID="d0325d16aab19669b5fea4b6623890e6" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 12 19:43:50.383844 kubelet[1955]: E0212 19:43:50.383818 1955 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 12 19:43:50.440408 kubelet[1955]: I0212 19:43:50.440368 1955 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:43:50.440497 kubelet[1955]: I0212 19:43:50.440425 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9850bb49d96c6402f1f45cd4d4bdb217-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9850bb49d96c6402f1f45cd4d4bdb217\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:43:50.440497 kubelet[1955]: I0212 19:43:50.440458 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9850bb49d96c6402f1f45cd4d4bdb217-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9850bb49d96c6402f1f45cd4d4bdb217\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:43:50.440497 kubelet[1955]: I0212 19:43:50.440484 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:43:50.440586 kubelet[1955]: I0212 19:43:50.440543 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:43:50.440614 kubelet[1955]: 
I0212 19:43:50.440594 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0325d16aab19669b5fea4b6623890e6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d0325d16aab19669b5fea4b6623890e6\") " pod="kube-system/kube-scheduler-localhost" Feb 12 19:43:50.440640 kubelet[1955]: I0212 19:43:50.440617 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9850bb49d96c6402f1f45cd4d4bdb217-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9850bb49d96c6402f1f45cd4d4bdb217\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:43:50.440666 kubelet[1955]: I0212 19:43:50.440652 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:43:50.440693 kubelet[1955]: I0212 19:43:50.440685 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:43:50.677001 sudo[1966]: pam_unix(sudo:session): session closed for user root Feb 12 19:43:50.683058 kubelet[1955]: E0212 19:43:50.683030 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:50.684862 kubelet[1955]: E0212 19:43:50.684835 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:50.684942 kubelet[1955]: E0212 19:43:50.684879 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:51.233530 kubelet[1955]: I0212 19:43:51.233479 1955 apiserver.go:52] "Watching apiserver" Feb 12 19:43:51.239340 kubelet[1955]: I0212 19:43:51.239313 1955 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:43:51.285497 kubelet[1955]: E0212 19:43:51.285469 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:51.290801 kubelet[1955]: E0212 19:43:51.290773 1955 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 12 19:43:51.290917 kubelet[1955]: E0212 19:43:51.290896 1955 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 12 19:43:51.291213 kubelet[1955]: E0212 19:43:51.291191 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:51.291436 kubelet[1955]: E0212 19:43:51.291414 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:51.302120 kubelet[1955]: I0212 19:43:51.301921 1955 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.301875374 
podCreationTimestamp="2024-02-12 19:43:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:43:51.301713438 +0000 UTC m=+1.136633730" watchObservedRunningTime="2024-02-12 19:43:51.301875374 +0000 UTC m=+1.136795666" Feb 12 19:43:51.308165 kubelet[1955]: I0212 19:43:51.308130 1955 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.308101971 podCreationTimestamp="2024-02-12 19:43:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:43:51.307970883 +0000 UTC m=+1.142891175" watchObservedRunningTime="2024-02-12 19:43:51.308101971 +0000 UTC m=+1.143022263" Feb 12 19:43:51.313803 kubelet[1955]: I0212 19:43:51.313782 1955 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.3137463409999999 podCreationTimestamp="2024-02-12 19:43:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:43:51.313557346 +0000 UTC m=+1.148477649" watchObservedRunningTime="2024-02-12 19:43:51.313746341 +0000 UTC m=+1.148666633" Feb 12 19:43:51.619199 sudo[1211]: pam_unix(sudo:session): session closed for user root Feb 12 19:43:51.620331 sshd[1208]: pam_unix(sshd:session): session closed for user core Feb 12 19:43:51.622086 systemd[1]: sshd@4-10.0.0.136:22-10.0.0.1:36900.service: Deactivated successfully. Feb 12 19:43:51.622723 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:43:51.622853 systemd[1]: session-5.scope: Consumed 3.411s CPU time. Feb 12 19:43:51.623202 systemd-logind[1107]: Session 5 logged out. Waiting for processes to exit. Feb 12 19:43:51.623801 systemd-logind[1107]: Removed session 5. 
Feb 12 19:43:52.286578 kubelet[1955]: E0212 19:43:52.286554 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:52.286928 kubelet[1955]: E0212 19:43:52.286648 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:53.291441 kubelet[1955]: E0212 19:43:53.291372 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:54.943357 kubelet[1955]: E0212 19:43:54.943321 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:56.256635 kubelet[1955]: E0212 19:43:56.256597 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:43:56.295124 kubelet[1955]: E0212 19:43:56.295088 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:44:00.827263 kubelet[1955]: E0212 19:44:00.827221 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:44:01.300176 kubelet[1955]: E0212 19:44:01.300148 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:44:03.168496 update_engine[1109]: I0212 19:44:03.168454 1109 update_attempter.cc:509] 
Updating boot flags... Feb 12 19:44:03.774984 kubelet[1955]: I0212 19:44:03.774941 1955 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 12 19:44:03.775468 kubelet[1955]: I0212 19:44:03.775411 1955 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 12 19:44:03.775505 env[1119]: time="2024-02-12T19:44:03.775251529Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 19:44:04.529802 kubelet[1955]: I0212 19:44:04.529758 1955 topology_manager.go:215] "Topology Admit Handler" podUID="27422ce4-a4ac-4961-84ad-baf1c78dfb50" podNamespace="kube-system" podName="kube-proxy-qkjjv" Feb 12 19:44:04.534059 systemd[1]: Created slice kubepods-besteffort-pod27422ce4_a4ac_4961_84ad_baf1c78dfb50.slice. Feb 12 19:44:04.538172 kubelet[1955]: I0212 19:44:04.538148 1955 topology_manager.go:215] "Topology Admit Handler" podUID="1ad341bf-87f1-4024-a54d-9db2ba5c1f62" podNamespace="kube-system" podName="cilium-k8pll" Feb 12 19:44:04.545970 systemd[1]: Created slice kubepods-burstable-pod1ad341bf_87f1_4024_a54d_9db2ba5c1f62.slice. 
Feb 12 19:44:04.625298 kubelet[1955]: I0212 19:44:04.625266 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27422ce4-a4ac-4961-84ad-baf1c78dfb50-xtables-lock\") pod \"kube-proxy-qkjjv\" (UID: \"27422ce4-a4ac-4961-84ad-baf1c78dfb50\") " pod="kube-system/kube-proxy-qkjjv" Feb 12 19:44:04.625298 kubelet[1955]: I0212 19:44:04.625299 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27422ce4-a4ac-4961-84ad-baf1c78dfb50-lib-modules\") pod \"kube-proxy-qkjjv\" (UID: \"27422ce4-a4ac-4961-84ad-baf1c78dfb50\") " pod="kube-system/kube-proxy-qkjjv" Feb 12 19:44:04.625298 kubelet[1955]: I0212 19:44:04.625320 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nnwt\" (UniqueName: \"kubernetes.io/projected/27422ce4-a4ac-4961-84ad-baf1c78dfb50-kube-api-access-4nnwt\") pod \"kube-proxy-qkjjv\" (UID: \"27422ce4-a4ac-4961-84ad-baf1c78dfb50\") " pod="kube-system/kube-proxy-qkjjv" Feb 12 19:44:04.625614 kubelet[1955]: I0212 19:44:04.625340 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-cilium-cgroup\") pod \"cilium-k8pll\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " pod="kube-system/cilium-k8pll" Feb 12 19:44:04.625614 kubelet[1955]: I0212 19:44:04.625357 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-etc-cni-netd\") pod \"cilium-k8pll\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " pod="kube-system/cilium-k8pll" Feb 12 19:44:04.625614 kubelet[1955]: I0212 19:44:04.625376 1955 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-clustermesh-secrets\") pod \"cilium-k8pll\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " pod="kube-system/cilium-k8pll" Feb 12 19:44:04.625614 kubelet[1955]: I0212 19:44:04.625465 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-host-proc-sys-net\") pod \"cilium-k8pll\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " pod="kube-system/cilium-k8pll" Feb 12 19:44:04.625614 kubelet[1955]: I0212 19:44:04.625501 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-xtables-lock\") pod \"cilium-k8pll\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " pod="kube-system/cilium-k8pll" Feb 12 19:44:04.625614 kubelet[1955]: I0212 19:44:04.625532 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-lib-modules\") pod \"cilium-k8pll\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " pod="kube-system/cilium-k8pll" Feb 12 19:44:04.625807 kubelet[1955]: I0212 19:44:04.625577 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-cilium-run\") pod \"cilium-k8pll\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " pod="kube-system/cilium-k8pll" Feb 12 19:44:04.625807 kubelet[1955]: I0212 19:44:04.625605 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-bpf-maps\") pod \"cilium-k8pll\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " pod="kube-system/cilium-k8pll" Feb 12 19:44:04.625807 kubelet[1955]: I0212 19:44:04.625625 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-host-proc-sys-kernel\") pod \"cilium-k8pll\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " pod="kube-system/cilium-k8pll" Feb 12 19:44:04.625807 kubelet[1955]: I0212 19:44:04.625651 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/27422ce4-a4ac-4961-84ad-baf1c78dfb50-kube-proxy\") pod \"kube-proxy-qkjjv\" (UID: \"27422ce4-a4ac-4961-84ad-baf1c78dfb50\") " pod="kube-system/kube-proxy-qkjjv" Feb 12 19:44:04.625807 kubelet[1955]: I0212 19:44:04.625740 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzms9\" (UniqueName: \"kubernetes.io/projected/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-kube-api-access-xzms9\") pod \"cilium-k8pll\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " pod="kube-system/cilium-k8pll" Feb 12 19:44:04.625807 kubelet[1955]: I0212 19:44:04.625777 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-hostproc\") pod \"cilium-k8pll\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " pod="kube-system/cilium-k8pll" Feb 12 19:44:04.625990 kubelet[1955]: I0212 19:44:04.625794 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-cilium-config-path\") pod \"cilium-k8pll\" (UID: 
\"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " pod="kube-system/cilium-k8pll" Feb 12 19:44:04.625990 kubelet[1955]: I0212 19:44:04.625835 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-hubble-tls\") pod \"cilium-k8pll\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " pod="kube-system/cilium-k8pll" Feb 12 19:44:04.625990 kubelet[1955]: I0212 19:44:04.625856 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-cni-path\") pod \"cilium-k8pll\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " pod="kube-system/cilium-k8pll" Feb 12 19:44:04.766074 kubelet[1955]: I0212 19:44:04.764502 1955 topology_manager.go:215] "Topology Admit Handler" podUID="72149871-0480-46b0-a9e7-403e47facad8" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-r8l2t" Feb 12 19:44:04.769249 systemd[1]: Created slice kubepods-besteffort-pod72149871_0480_46b0_a9e7_403e47facad8.slice. 
Feb 12 19:44:04.827679 kubelet[1955]: I0212 19:44:04.827554 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72149871-0480-46b0-a9e7-403e47facad8-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-r8l2t\" (UID: \"72149871-0480-46b0-a9e7-403e47facad8\") " pod="kube-system/cilium-operator-6bc8ccdb58-r8l2t" Feb 12 19:44:04.827679 kubelet[1955]: I0212 19:44:04.827606 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxsk7\" (UniqueName: \"kubernetes.io/projected/72149871-0480-46b0-a9e7-403e47facad8-kube-api-access-pxsk7\") pod \"cilium-operator-6bc8ccdb58-r8l2t\" (UID: \"72149871-0480-46b0-a9e7-403e47facad8\") " pod="kube-system/cilium-operator-6bc8ccdb58-r8l2t" Feb 12 19:44:04.841875 kubelet[1955]: E0212 19:44:04.841840 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:44:04.842714 env[1119]: time="2024-02-12T19:44:04.842665428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qkjjv,Uid:27422ce4-a4ac-4961-84ad-baf1c78dfb50,Namespace:kube-system,Attempt:0,}" Feb 12 19:44:04.849289 kubelet[1955]: E0212 19:44:04.849266 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:44:04.849939 env[1119]: time="2024-02-12T19:44:04.849686153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k8pll,Uid:1ad341bf-87f1-4024-a54d-9db2ba5c1f62,Namespace:kube-system,Attempt:0,}" Feb 12 19:44:04.863142 env[1119]: time="2024-02-12T19:44:04.863077120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:44:04.863142 env[1119]: time="2024-02-12T19:44:04.863115458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:44:04.863142 env[1119]: time="2024-02-12T19:44:04.863128825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:44:04.863358 env[1119]: time="2024-02-12T19:44:04.863325616Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c1a0be83f91dccdcf8b3e9fbea8112a003c189aeca174c1de0f8033b5eed04b pid=2063 runtime=io.containerd.runc.v2 Feb 12 19:44:04.872485 systemd[1]: Started cri-containerd-9c1a0be83f91dccdcf8b3e9fbea8112a003c189aeca174c1de0f8033b5eed04b.scope. Feb 12 19:44:04.889256 env[1119]: time="2024-02-12T19:44:04.889201295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qkjjv,Uid:27422ce4-a4ac-4961-84ad-baf1c78dfb50,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c1a0be83f91dccdcf8b3e9fbea8112a003c189aeca174c1de0f8033b5eed04b\"" Feb 12 19:44:04.890383 kubelet[1955]: E0212 19:44:04.890363 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:44:04.893406 env[1119]: time="2024-02-12T19:44:04.893365178Z" level=info msg="CreateContainer within sandbox \"9c1a0be83f91dccdcf8b3e9fbea8112a003c189aeca174c1de0f8033b5eed04b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:44:04.903436 env[1119]: time="2024-02-12T19:44:04.903359032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:44:04.903436 env[1119]: time="2024-02-12T19:44:04.903420267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:44:04.903516 env[1119]: time="2024-02-12T19:44:04.903430688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:44:04.903638 env[1119]: time="2024-02-12T19:44:04.903580492Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af pid=2103 runtime=io.containerd.runc.v2 Feb 12 19:44:04.912254 systemd[1]: Started cri-containerd-8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af.scope. Feb 12 19:44:04.932193 env[1119]: time="2024-02-12T19:44:04.932158922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k8pll,Uid:1ad341bf-87f1-4024-a54d-9db2ba5c1f62,Namespace:kube-system,Attempt:0,} returns sandbox id \"8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af\"" Feb 12 19:44:04.938867 env[1119]: time="2024-02-12T19:44:04.933834662Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 19:44:04.939211 kubelet[1955]: E0212 19:44:04.932947 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:44:04.948160 kubelet[1955]: E0212 19:44:04.948003 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:44:05.065894 env[1119]: time="2024-02-12T19:44:05.065843434Z" level=info msg="CreateContainer within sandbox 
\"9c1a0be83f91dccdcf8b3e9fbea8112a003c189aeca174c1de0f8033b5eed04b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b53a8ec533110bb76c73988d6531dbeb13382b5667ac7b4010b7c61b70e8de51\"" Feb 12 19:44:05.066499 env[1119]: time="2024-02-12T19:44:05.066459352Z" level=info msg="StartContainer for \"b53a8ec533110bb76c73988d6531dbeb13382b5667ac7b4010b7c61b70e8de51\"" Feb 12 19:44:05.071771 kubelet[1955]: E0212 19:44:05.071748 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:44:05.072623 env[1119]: time="2024-02-12T19:44:05.072550189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-r8l2t,Uid:72149871-0480-46b0-a9e7-403e47facad8,Namespace:kube-system,Attempt:0,}" Feb 12 19:44:05.080979 systemd[1]: Started cri-containerd-b53a8ec533110bb76c73988d6531dbeb13382b5667ac7b4010b7c61b70e8de51.scope. Feb 12 19:44:05.278960 env[1119]: time="2024-02-12T19:44:05.278902343Z" level=info msg="StartContainer for \"b53a8ec533110bb76c73988d6531dbeb13382b5667ac7b4010b7c61b70e8de51\" returns successfully" Feb 12 19:44:05.306935 kubelet[1955]: E0212 19:44:05.306913 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:44:05.403293 env[1119]: time="2024-02-12T19:44:05.403224997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:44:05.403293 env[1119]: time="2024-02-12T19:44:05.403264286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:44:05.403293 env[1119]: time="2024-02-12T19:44:05.403273716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:44:05.403510 env[1119]: time="2024-02-12T19:44:05.403422958Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3636acbe1b65adad1d37f23a430ef46f68cd418ffc554aafbcd62a1ecd081a04 pid=2296 runtime=io.containerd.runc.v2 Feb 12 19:44:05.412360 systemd[1]: Started cri-containerd-3636acbe1b65adad1d37f23a430ef46f68cd418ffc554aafbcd62a1ecd081a04.scope. Feb 12 19:44:05.442619 env[1119]: time="2024-02-12T19:44:05.442575544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-r8l2t,Uid:72149871-0480-46b0-a9e7-403e47facad8,Namespace:kube-system,Attempt:0,} returns sandbox id \"3636acbe1b65adad1d37f23a430ef46f68cd418ffc554aafbcd62a1ecd081a04\"" Feb 12 19:44:05.443688 kubelet[1955]: E0212 19:44:05.443249 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:44:10.284505 kubelet[1955]: I0212 19:44:10.284477 1955 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qkjjv" podStartSLOduration=6.284445165 podCreationTimestamp="2024-02-12 19:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:44:05.368061567 +0000 UTC m=+15.202981919" watchObservedRunningTime="2024-02-12 19:44:10.284445165 +0000 UTC m=+20.119365458" Feb 12 19:44:13.339967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1079235671.mount: Deactivated successfully. 
Feb 12 19:44:17.701608 env[1119]: time="2024-02-12T19:44:17.701561408Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:17.703230 env[1119]: time="2024-02-12T19:44:17.703205830Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:17.704721 env[1119]: time="2024-02-12T19:44:17.704669948Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:17.705159 env[1119]: time="2024-02-12T19:44:17.705124922Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 19:44:17.706020 env[1119]: time="2024-02-12T19:44:17.705990483Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 19:44:17.708269 env[1119]: time="2024-02-12T19:44:17.708235536Z" level=info msg="CreateContainer within sandbox \"8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:44:17.719041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount624312997.mount: Deactivated successfully. 
Feb 12 19:44:17.720814 env[1119]: time="2024-02-12T19:44:17.720769350Z" level=info msg="CreateContainer within sandbox \"8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149\"" Feb 12 19:44:17.721193 env[1119]: time="2024-02-12T19:44:17.721173304Z" level=info msg="StartContainer for \"c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149\"" Feb 12 19:44:17.738664 systemd[1]: Started cri-containerd-c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149.scope. Feb 12 19:44:17.759729 env[1119]: time="2024-02-12T19:44:17.759677110Z" level=info msg="StartContainer for \"c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149\" returns successfully" Feb 12 19:44:17.766674 systemd[1]: cri-containerd-c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149.scope: Deactivated successfully. Feb 12 19:44:18.331192 kubelet[1955]: E0212 19:44:18.331169 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:44:18.356434 env[1119]: time="2024-02-12T19:44:18.356369147Z" level=info msg="shim disconnected" id=c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149 Feb 12 19:44:18.356434 env[1119]: time="2024-02-12T19:44:18.356431610Z" level=warning msg="cleaning up after shim disconnected" id=c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149 namespace=k8s.io Feb 12 19:44:18.356434 env[1119]: time="2024-02-12T19:44:18.356440628Z" level=info msg="cleaning up dead shim" Feb 12 19:44:18.379762 env[1119]: time="2024-02-12T19:44:18.379704053Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:44:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2383 runtime=io.containerd.runc.v2\n" Feb 12 19:44:18.521204 systemd[1]: 
Started sshd@5-10.0.0.136:22-10.0.0.1:49076.service. Feb 12 19:44:18.560778 sshd[2396]: Accepted publickey for core from 10.0.0.1 port 49076 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:44:18.561774 sshd[2396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:44:18.564819 systemd-logind[1107]: New session 6 of user core. Feb 12 19:44:18.565624 systemd[1]: Started session-6.scope. Feb 12 19:44:18.675738 sshd[2396]: pam_unix(sshd:session): session closed for user core Feb 12 19:44:18.677977 systemd[1]: sshd@5-10.0.0.136:22-10.0.0.1:49076.service: Deactivated successfully. Feb 12 19:44:18.678670 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 19:44:18.679362 systemd-logind[1107]: Session 6 logged out. Waiting for processes to exit. Feb 12 19:44:18.679998 systemd-logind[1107]: Removed session 6. Feb 12 19:44:18.717597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149-rootfs.mount: Deactivated successfully. Feb 12 19:44:19.262614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount50913630.mount: Deactivated successfully. Feb 12 19:44:19.333267 kubelet[1955]: E0212 19:44:19.333248 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:44:19.334982 env[1119]: time="2024-02-12T19:44:19.334935118Z" level=info msg="CreateContainer within sandbox \"8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:44:19.343546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1836281673.mount: Deactivated successfully. 
Feb 12 19:44:19.346860 env[1119]: time="2024-02-12T19:44:19.346815151Z" level=info msg="CreateContainer within sandbox \"8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4\"" Feb 12 19:44:19.347318 env[1119]: time="2024-02-12T19:44:19.347277347Z" level=info msg="StartContainer for \"c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4\"" Feb 12 19:44:19.360003 systemd[1]: Started cri-containerd-c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4.scope. Feb 12 19:44:19.380815 env[1119]: time="2024-02-12T19:44:19.380759339Z" level=info msg="StartContainer for \"c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4\" returns successfully" Feb 12 19:44:19.390511 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:44:19.390689 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:44:19.390823 systemd[1]: Stopping systemd-sysctl.service... Feb 12 19:44:19.392185 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:44:19.394500 systemd[1]: cri-containerd-c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4.scope: Deactivated successfully. Feb 12 19:44:19.402572 systemd[1]: Finished systemd-sysctl.service. 
Feb 12 19:44:19.466481 env[1119]: time="2024-02-12T19:44:19.466426774Z" level=info msg="shim disconnected" id=c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4
Feb 12 19:44:19.466481 env[1119]: time="2024-02-12T19:44:19.466472713Z" level=warning msg="cleaning up after shim disconnected" id=c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4 namespace=k8s.io
Feb 12 19:44:19.466481 env[1119]: time="2024-02-12T19:44:19.466481310Z" level=info msg="cleaning up dead shim"
Feb 12 19:44:19.473213 env[1119]: time="2024-02-12T19:44:19.473167459Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:44:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2461 runtime=io.containerd.runc.v2\n"
Feb 12 19:44:19.875917 env[1119]: time="2024-02-12T19:44:19.875848582Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:19.877374 env[1119]: time="2024-02-12T19:44:19.877338190Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:19.878775 env[1119]: time="2024-02-12T19:44:19.878726680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:44:19.879058 env[1119]: time="2024-02-12T19:44:19.879031256Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 12 19:44:19.880678 env[1119]: time="2024-02-12T19:44:19.880647183Z" level=info msg="CreateContainer within sandbox \"3636acbe1b65adad1d37f23a430ef46f68cd418ffc554aafbcd62a1ecd081a04\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 12 19:44:19.890711 env[1119]: time="2024-02-12T19:44:19.890671710Z" level=info msg="CreateContainer within sandbox \"3636acbe1b65adad1d37f23a430ef46f68cd418ffc554aafbcd62a1ecd081a04\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5\""
Feb 12 19:44:19.891024 env[1119]: time="2024-02-12T19:44:19.891000004Z" level=info msg="StartContainer for \"28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5\""
Feb 12 19:44:19.905308 systemd[1]: Started cri-containerd-28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5.scope.
Feb 12 19:44:19.927426 env[1119]: time="2024-02-12T19:44:19.927362889Z" level=info msg="StartContainer for \"28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5\" returns successfully"
Feb 12 19:44:20.336039 kubelet[1955]: E0212 19:44:20.335937 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:20.337819 kubelet[1955]: E0212 19:44:20.337800 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:20.339331 env[1119]: time="2024-02-12T19:44:20.339293052Z" level=info msg="CreateContainer within sandbox \"8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 19:44:20.717466 systemd[1]: run-containerd-runc-k8s.io-28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5-runc.HyQLpr.mount: Deactivated successfully.
Feb 12 19:44:20.719862 env[1119]: time="2024-02-12T19:44:20.719813831Z" level=info msg="CreateContainer within sandbox \"8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9\""
Feb 12 19:44:20.720463 env[1119]: time="2024-02-12T19:44:20.720436678Z" level=info msg="StartContainer for \"492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9\""
Feb 12 19:44:20.755498 kubelet[1955]: I0212 19:44:20.755466 1955 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-r8l2t" podStartSLOduration=2.320447428 podCreationTimestamp="2024-02-12 19:44:04 +0000 UTC" firstStartedPulling="2024-02-12 19:44:05.444248926 +0000 UTC m=+15.279169218" lastFinishedPulling="2024-02-12 19:44:19.879219014 +0000 UTC m=+29.714139306" observedRunningTime="2024-02-12 19:44:20.718571921 +0000 UTC m=+30.553492213" watchObservedRunningTime="2024-02-12 19:44:20.755417516 +0000 UTC m=+30.590337808"
Feb 12 19:44:20.756478 systemd[1]: Started cri-containerd-492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9.scope.
Feb 12 19:44:20.761331 systemd[1]: run-containerd-runc-k8s.io-492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9-runc.AEWiPO.mount: Deactivated successfully.
Feb 12 19:44:20.813721 env[1119]: time="2024-02-12T19:44:20.813664314Z" level=info msg="StartContainer for \"492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9\" returns successfully"
Feb 12 19:44:20.817965 systemd[1]: cri-containerd-492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9.scope: Deactivated successfully.
Feb 12 19:44:21.078397 env[1119]: time="2024-02-12T19:44:21.078270770Z" level=info msg="shim disconnected" id=492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9
Feb 12 19:44:21.078397 env[1119]: time="2024-02-12T19:44:21.078317070Z" level=warning msg="cleaning up after shim disconnected" id=492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9 namespace=k8s.io
Feb 12 19:44:21.078397 env[1119]: time="2024-02-12T19:44:21.078325937Z" level=info msg="cleaning up dead shim"
Feb 12 19:44:21.087864 env[1119]: time="2024-02-12T19:44:21.087823951Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:44:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2556 runtime=io.containerd.runc.v2\n"
Feb 12 19:44:21.340769 kubelet[1955]: E0212 19:44:21.340688 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:21.340769 kubelet[1955]: E0212 19:44:21.340711 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:21.342326 env[1119]: time="2024-02-12T19:44:21.342296711Z" level=info msg="CreateContainer within sandbox \"8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 19:44:21.355406 env[1119]: time="2024-02-12T19:44:21.355350200Z" level=info msg="CreateContainer within sandbox \"8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59\""
Feb 12 19:44:21.355819 env[1119]: time="2024-02-12T19:44:21.355779227Z" level=info msg="StartContainer for \"f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59\""
Feb 12 19:44:21.370949 systemd[1]: Started cri-containerd-f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59.scope.
Feb 12 19:44:21.398291 env[1119]: time="2024-02-12T19:44:21.398178511Z" level=info msg="StartContainer for \"f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59\" returns successfully"
Feb 12 19:44:21.398792 systemd[1]: cri-containerd-f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59.scope: Deactivated successfully.
Feb 12 19:44:21.418004 env[1119]: time="2024-02-12T19:44:21.417945744Z" level=info msg="shim disconnected" id=f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59
Feb 12 19:44:21.418004 env[1119]: time="2024-02-12T19:44:21.418000452Z" level=warning msg="cleaning up after shim disconnected" id=f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59 namespace=k8s.io
Feb 12 19:44:21.418004 env[1119]: time="2024-02-12T19:44:21.418010250Z" level=info msg="cleaning up dead shim"
Feb 12 19:44:21.423560 env[1119]: time="2024-02-12T19:44:21.423515395Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:44:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2609 runtime=io.containerd.runc.v2\n"
Feb 12 19:44:21.717565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9-rootfs.mount: Deactivated successfully.
Feb 12 19:44:22.344660 kubelet[1955]: E0212 19:44:22.344630 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:22.346771 env[1119]: time="2024-02-12T19:44:22.346727316Z" level=info msg="CreateContainer within sandbox \"8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 19:44:22.452733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount865448779.mount: Deactivated successfully.
Feb 12 19:44:22.454560 env[1119]: time="2024-02-12T19:44:22.454515547Z" level=info msg="CreateContainer within sandbox \"8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f\""
Feb 12 19:44:22.455006 env[1119]: time="2024-02-12T19:44:22.454979672Z" level=info msg="StartContainer for \"5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f\""
Feb 12 19:44:22.471699 systemd[1]: Started cri-containerd-5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f.scope.
Feb 12 19:44:22.490974 env[1119]: time="2024-02-12T19:44:22.490927642Z" level=info msg="StartContainer for \"5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f\" returns successfully"
Feb 12 19:44:22.629529 kubelet[1955]: I0212 19:44:22.629475 1955 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 12 19:44:22.642881 kubelet[1955]: I0212 19:44:22.642844 1955 topology_manager.go:215] "Topology Admit Handler" podUID="621a5593-a33b-44e5-9b05-fb9aa2d8c88e" podNamespace="kube-system" podName="coredns-5dd5756b68-2fjkr"
Feb 12 19:44:22.645099 kubelet[1955]: I0212 19:44:22.645077 1955 topology_manager.go:215] "Topology Admit Handler" podUID="300d5883-4274-4ea8-87a3-c18276958542" podNamespace="kube-system" podName="coredns-5dd5756b68-cxl9z"
Feb 12 19:44:22.649419 systemd[1]: Created slice kubepods-burstable-pod621a5593_a33b_44e5_9b05_fb9aa2d8c88e.slice.
Feb 12 19:44:22.654332 systemd[1]: Created slice kubepods-burstable-pod300d5883_4274_4ea8_87a3_c18276958542.slice.
Feb 12 19:44:22.749195 kubelet[1955]: I0212 19:44:22.749162 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/621a5593-a33b-44e5-9b05-fb9aa2d8c88e-config-volume\") pod \"coredns-5dd5756b68-2fjkr\" (UID: \"621a5593-a33b-44e5-9b05-fb9aa2d8c88e\") " pod="kube-system/coredns-5dd5756b68-2fjkr"
Feb 12 19:44:22.749195 kubelet[1955]: I0212 19:44:22.749203 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/300d5883-4274-4ea8-87a3-c18276958542-config-volume\") pod \"coredns-5dd5756b68-cxl9z\" (UID: \"300d5883-4274-4ea8-87a3-c18276958542\") " pod="kube-system/coredns-5dd5756b68-cxl9z"
Feb 12 19:44:22.749362 kubelet[1955]: I0212 19:44:22.749222 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2z4k\" (UniqueName: \"kubernetes.io/projected/300d5883-4274-4ea8-87a3-c18276958542-kube-api-access-z2z4k\") pod \"coredns-5dd5756b68-cxl9z\" (UID: \"300d5883-4274-4ea8-87a3-c18276958542\") " pod="kube-system/coredns-5dd5756b68-cxl9z"
Feb 12 19:44:22.749362 kubelet[1955]: I0212 19:44:22.749243 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wd4k\" (UniqueName: \"kubernetes.io/projected/621a5593-a33b-44e5-9b05-fb9aa2d8c88e-kube-api-access-2wd4k\") pod \"coredns-5dd5756b68-2fjkr\" (UID: \"621a5593-a33b-44e5-9b05-fb9aa2d8c88e\") " pod="kube-system/coredns-5dd5756b68-2fjkr"
Feb 12 19:44:22.952291 kubelet[1955]: E0212 19:44:22.952191 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:22.952685 env[1119]: time="2024-02-12T19:44:22.952650050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-2fjkr,Uid:621a5593-a33b-44e5-9b05-fb9aa2d8c88e,Namespace:kube-system,Attempt:0,}"
Feb 12 19:44:22.957809 kubelet[1955]: E0212 19:44:22.957784 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:22.958168 env[1119]: time="2024-02-12T19:44:22.958133017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-cxl9z,Uid:300d5883-4274-4ea8-87a3-c18276958542,Namespace:kube-system,Attempt:0,}"
Feb 12 19:44:23.349540 kubelet[1955]: E0212 19:44:23.349277 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:23.560589 kubelet[1955]: I0212 19:44:23.560549 1955 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-k8pll" podStartSLOduration=6.788438111 podCreationTimestamp="2024-02-12 19:44:04 +0000 UTC" firstStartedPulling="2024-02-12 19:44:04.933418064 +0000 UTC m=+14.768338356" lastFinishedPulling="2024-02-12 19:44:17.705496082 +0000 UTC m=+27.540416374" observedRunningTime="2024-02-12 19:44:23.560161358 +0000 UTC m=+33.395081680" watchObservedRunningTime="2024-02-12 19:44:23.560516129 +0000 UTC m=+33.395436421"
Feb 12 19:44:23.680313 systemd[1]: Started sshd@6-10.0.0.136:22-10.0.0.1:49078.service.
Feb 12 19:44:23.720925 sshd[2775]: Accepted publickey for core from 10.0.0.1 port 49078 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:44:23.722019 sshd[2775]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:44:23.725147 systemd-logind[1107]: New session 7 of user core.
Feb 12 19:44:23.726071 systemd[1]: Started session-7.scope.
Feb 12 19:44:23.832979 sshd[2775]: pam_unix(sshd:session): session closed for user core
Feb 12 19:44:23.835252 systemd[1]: sshd@6-10.0.0.136:22-10.0.0.1:49078.service: Deactivated successfully.
Feb 12 19:44:23.835908 systemd[1]: session-7.scope: Deactivated successfully.
Feb 12 19:44:23.836646 systemd-logind[1107]: Session 7 logged out. Waiting for processes to exit.
Feb 12 19:44:23.837250 systemd-logind[1107]: Removed session 7.
Feb 12 19:44:24.351188 kubelet[1955]: E0212 19:44:24.351156 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:24.454225 systemd-networkd[1018]: cilium_host: Link UP
Feb 12 19:44:24.454323 systemd-networkd[1018]: cilium_net: Link UP
Feb 12 19:44:24.454976 systemd-networkd[1018]: cilium_net: Gained carrier
Feb 12 19:44:24.455640 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 12 19:44:24.455686 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 12 19:44:24.455772 systemd-networkd[1018]: cilium_host: Gained carrier
Feb 12 19:44:24.519952 systemd-networkd[1018]: cilium_vxlan: Link UP
Feb 12 19:44:24.519962 systemd-networkd[1018]: cilium_vxlan: Gained carrier
Feb 12 19:44:24.694416 kernel: NET: Registered PF_ALG protocol family
Feb 12 19:44:25.176185 systemd-networkd[1018]: lxc_health: Link UP
Feb 12 19:44:25.189207 systemd-networkd[1018]: lxc_health: Gained carrier
Feb 12 19:44:25.191423 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 19:44:25.201495 systemd-networkd[1018]: cilium_net: Gained IPv6LL
Feb 12 19:44:25.265589 systemd-networkd[1018]: cilium_host: Gained IPv6LL
Feb 12 19:44:25.356917 kubelet[1955]: E0212 19:44:25.356889 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:25.537419 kernel: eth0: renamed from tmp292c5
Feb 12 19:44:25.543817 systemd-networkd[1018]: lxc26dc715417a3: Link UP
Feb 12 19:44:25.547737 systemd-networkd[1018]: lxc26dc715417a3: Gained carrier
Feb 12 19:44:25.548121 systemd-networkd[1018]: lxccf291a0b18ad: Link UP
Feb 12 19:44:25.548431 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc26dc715417a3: link becomes ready
Feb 12 19:44:25.555462 kernel: eth0: renamed from tmpa4fe7
Feb 12 19:44:25.560867 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 19:44:25.560987 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccf291a0b18ad: link becomes ready
Feb 12 19:44:25.561238 systemd-networkd[1018]: lxccf291a0b18ad: Gained carrier
Feb 12 19:44:26.358518 kubelet[1955]: E0212 19:44:26.358484 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:26.480541 systemd-networkd[1018]: cilium_vxlan: Gained IPv6LL
Feb 12 19:44:26.480801 systemd-networkd[1018]: lxc_health: Gained IPv6LL
Feb 12 19:44:26.736562 systemd-networkd[1018]: lxccf291a0b18ad: Gained IPv6LL
Feb 12 19:44:27.056619 systemd-networkd[1018]: lxc26dc715417a3: Gained IPv6LL
Feb 12 19:44:27.360060 kubelet[1955]: E0212 19:44:27.359977 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:28.361404 kubelet[1955]: E0212 19:44:28.361348 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:28.710783 env[1119]: time="2024-02-12T19:44:28.710712696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:44:28.710783 env[1119]: time="2024-02-12T19:44:28.710759787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:44:28.710783 env[1119]: time="2024-02-12T19:44:28.710773823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:44:28.711260 env[1119]: time="2024-02-12T19:44:28.711194509Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a4fe735106f2cdf9fc08359651ab6116fa3616e5c2d5dcdd0bd925de023a0952 pid=3181 runtime=io.containerd.runc.v2
Feb 12 19:44:28.716147 env[1119]: time="2024-02-12T19:44:28.711423223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:44:28.716147 env[1119]: time="2024-02-12T19:44:28.711454594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:44:28.716147 env[1119]: time="2024-02-12T19:44:28.711466617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:44:28.716147 env[1119]: time="2024-02-12T19:44:28.711635293Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/292c59308cf9faff9a418bf0f2fd4c91b4b76d2223f6986176a2bd743734e4a3 pid=3189 runtime=io.containerd.runc.v2
Feb 12 19:44:28.729031 systemd[1]: run-containerd-runc-k8s.io-292c59308cf9faff9a418bf0f2fd4c91b4b76d2223f6986176a2bd743734e4a3-runc.GFx4x3.mount: Deactivated successfully.
Feb 12 19:44:28.730789 systemd[1]: Started cri-containerd-a4fe735106f2cdf9fc08359651ab6116fa3616e5c2d5dcdd0bd925de023a0952.scope.
Feb 12 19:44:28.733291 systemd[1]: Started cri-containerd-292c59308cf9faff9a418bf0f2fd4c91b4b76d2223f6986176a2bd743734e4a3.scope.
Feb 12 19:44:28.745866 systemd-resolved[1067]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 12 19:44:28.748192 systemd-resolved[1067]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 12 19:44:28.769217 env[1119]: time="2024-02-12T19:44:28.769182144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-cxl9z,Uid:300d5883-4274-4ea8-87a3-c18276958542,Namespace:kube-system,Attempt:0,} returns sandbox id \"292c59308cf9faff9a418bf0f2fd4c91b4b76d2223f6986176a2bd743734e4a3\""
Feb 12 19:44:28.770132 kubelet[1955]: E0212 19:44:28.770100 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:28.774299 env[1119]: time="2024-02-12T19:44:28.774256241Z" level=info msg="CreateContainer within sandbox \"292c59308cf9faff9a418bf0f2fd4c91b4b76d2223f6986176a2bd743734e4a3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 19:44:28.780368 env[1119]: time="2024-02-12T19:44:28.780345005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-2fjkr,Uid:621a5593-a33b-44e5-9b05-fb9aa2d8c88e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4fe735106f2cdf9fc08359651ab6116fa3616e5c2d5dcdd0bd925de023a0952\""
Feb 12 19:44:28.780783 kubelet[1955]: E0212 19:44:28.780759 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:28.782177 env[1119]: time="2024-02-12T19:44:28.782149643Z" level=info msg="CreateContainer within sandbox \"a4fe735106f2cdf9fc08359651ab6116fa3616e5c2d5dcdd0bd925de023a0952\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 19:44:28.792254 env[1119]: time="2024-02-12T19:44:28.792201600Z" level=info msg="CreateContainer within sandbox \"292c59308cf9faff9a418bf0f2fd4c91b4b76d2223f6986176a2bd743734e4a3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1a99ea5c3a18a86e31ed0e576726c74e06b651611ae9f177c692d2d54eb03b64\""
Feb 12 19:44:28.792718 env[1119]: time="2024-02-12T19:44:28.792687001Z" level=info msg="StartContainer for \"1a99ea5c3a18a86e31ed0e576726c74e06b651611ae9f177c692d2d54eb03b64\""
Feb 12 19:44:28.797417 env[1119]: time="2024-02-12T19:44:28.797384347Z" level=info msg="CreateContainer within sandbox \"a4fe735106f2cdf9fc08359651ab6116fa3616e5c2d5dcdd0bd925de023a0952\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bfd645369724b94d1660de7ac4d5503e25c8a558bf0d7cf0f0abd5c5b9044293\""
Feb 12 19:44:28.797819 env[1119]: time="2024-02-12T19:44:28.797782649Z" level=info msg="StartContainer for \"bfd645369724b94d1660de7ac4d5503e25c8a558bf0d7cf0f0abd5c5b9044293\""
Feb 12 19:44:28.808366 systemd[1]: Started cri-containerd-1a99ea5c3a18a86e31ed0e576726c74e06b651611ae9f177c692d2d54eb03b64.scope.
Feb 12 19:44:28.821333 systemd[1]: Started cri-containerd-bfd645369724b94d1660de7ac4d5503e25c8a558bf0d7cf0f0abd5c5b9044293.scope.
Feb 12 19:44:28.832640 env[1119]: time="2024-02-12T19:44:28.832602662Z" level=info msg="StartContainer for \"1a99ea5c3a18a86e31ed0e576726c74e06b651611ae9f177c692d2d54eb03b64\" returns successfully"
Feb 12 19:44:28.837513 systemd[1]: Started sshd@7-10.0.0.136:22-10.0.0.1:46990.service.
Feb 12 19:44:28.845777 env[1119]: time="2024-02-12T19:44:28.845745891Z" level=info msg="StartContainer for \"bfd645369724b94d1660de7ac4d5503e25c8a558bf0d7cf0f0abd5c5b9044293\" returns successfully"
Feb 12 19:44:28.878438 sshd[3310]: Accepted publickey for core from 10.0.0.1 port 46990 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:44:28.879651 sshd[3310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:44:28.883955 systemd-logind[1107]: New session 8 of user core.
Feb 12 19:44:28.884872 systemd[1]: Started session-8.scope.
Feb 12 19:44:28.997580 sshd[3310]: pam_unix(sshd:session): session closed for user core
Feb 12 19:44:29.000093 systemd[1]: sshd@7-10.0.0.136:22-10.0.0.1:46990.service: Deactivated successfully.
Feb 12 19:44:29.000783 systemd[1]: session-8.scope: Deactivated successfully.
Feb 12 19:44:29.001262 systemd-logind[1107]: Session 8 logged out. Waiting for processes to exit.
Feb 12 19:44:29.001898 systemd-logind[1107]: Removed session 8.
Feb 12 19:44:29.364804 kubelet[1955]: E0212 19:44:29.364687 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:29.366501 kubelet[1955]: E0212 19:44:29.366464 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:29.372360 kubelet[1955]: I0212 19:44:29.372321 1955 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-cxl9z" podStartSLOduration=25.372288386 podCreationTimestamp="2024-02-12 19:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:44:29.371923679 +0000 UTC m=+39.206843971" watchObservedRunningTime="2024-02-12 19:44:29.372288386 +0000 UTC m=+39.207208678"
Feb 12 19:44:30.368258 kubelet[1955]: E0212 19:44:30.368232 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:30.368579 kubelet[1955]: E0212 19:44:30.368448 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:31.370381 kubelet[1955]: E0212 19:44:31.370335 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:31.370832 kubelet[1955]: E0212 19:44:31.370439 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:44:33.999632 systemd[1]: Started sshd@8-10.0.0.136:22-10.0.0.1:46998.service.
Feb 12 19:44:34.039459 sshd[3356]: Accepted publickey for core from 10.0.0.1 port 46998 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:44:34.040245 sshd[3356]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:44:34.043259 systemd-logind[1107]: New session 9 of user core.
Feb 12 19:44:34.044286 systemd[1]: Started session-9.scope.
Feb 12 19:44:34.147690 sshd[3356]: pam_unix(sshd:session): session closed for user core
Feb 12 19:44:34.150010 systemd[1]: sshd@8-10.0.0.136:22-10.0.0.1:46998.service: Deactivated successfully.
Feb 12 19:44:34.150765 systemd[1]: session-9.scope: Deactivated successfully.
Feb 12 19:44:34.151296 systemd-logind[1107]: Session 9 logged out. Waiting for processes to exit.
Feb 12 19:44:34.151979 systemd-logind[1107]: Removed session 9.
Feb 12 19:44:39.151826 systemd[1]: Started sshd@9-10.0.0.136:22-10.0.0.1:49492.service.
Feb 12 19:44:39.191736 sshd[3375]: Accepted publickey for core from 10.0.0.1 port 49492 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:44:39.192674 sshd[3375]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:44:39.195819 systemd-logind[1107]: New session 10 of user core.
Feb 12 19:44:39.196600 systemd[1]: Started session-10.scope.
Feb 12 19:44:39.294803 sshd[3375]: pam_unix(sshd:session): session closed for user core
Feb 12 19:44:39.297447 systemd[1]: sshd@9-10.0.0.136:22-10.0.0.1:49492.service: Deactivated successfully.
Feb 12 19:44:39.297944 systemd[1]: session-10.scope: Deactivated successfully.
Feb 12 19:44:39.298404 systemd-logind[1107]: Session 10 logged out. Waiting for processes to exit.
Feb 12 19:44:39.299421 systemd[1]: Started sshd@10-10.0.0.136:22-10.0.0.1:49504.service.
Feb 12 19:44:39.300093 systemd-logind[1107]: Removed session 10.
Feb 12 19:44:39.338851 sshd[3389]: Accepted publickey for core from 10.0.0.1 port 49504 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:44:39.339761 sshd[3389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:44:39.342613 systemd-logind[1107]: New session 11 of user core.
Feb 12 19:44:39.343336 systemd[1]: Started session-11.scope.
Feb 12 19:44:39.953146 sshd[3389]: pam_unix(sshd:session): session closed for user core
Feb 12 19:44:39.956650 systemd[1]: Started sshd@11-10.0.0.136:22-10.0.0.1:49512.service.
Feb 12 19:44:39.964932 systemd-logind[1107]: Session 11 logged out. Waiting for processes to exit.
Feb 12 19:44:39.966235 systemd[1]: sshd@10-10.0.0.136:22-10.0.0.1:49504.service: Deactivated successfully.
Feb 12 19:44:39.966911 systemd[1]: session-11.scope: Deactivated successfully.
Feb 12 19:44:39.968235 systemd-logind[1107]: Removed session 11.
Feb 12 19:44:39.999840 sshd[3399]: Accepted publickey for core from 10.0.0.1 port 49512 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:44:40.000898 sshd[3399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:44:40.004006 systemd-logind[1107]: New session 12 of user core.
Feb 12 19:44:40.004796 systemd[1]: Started session-12.scope.
Feb 12 19:44:40.106493 sshd[3399]: pam_unix(sshd:session): session closed for user core
Feb 12 19:44:40.108845 systemd[1]: sshd@11-10.0.0.136:22-10.0.0.1:49512.service: Deactivated successfully.
Feb 12 19:44:40.109500 systemd[1]: session-12.scope: Deactivated successfully.
Feb 12 19:44:40.109996 systemd-logind[1107]: Session 12 logged out. Waiting for processes to exit.
Feb 12 19:44:40.110612 systemd-logind[1107]: Removed session 12.
Feb 12 19:44:45.111128 systemd[1]: Started sshd@12-10.0.0.136:22-10.0.0.1:49528.service.
Feb 12 19:44:45.149224 sshd[3413]: Accepted publickey for core from 10.0.0.1 port 49528 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:44:45.150144 sshd[3413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:44:45.153108 systemd-logind[1107]: New session 13 of user core.
Feb 12 19:44:45.153862 systemd[1]: Started session-13.scope.
Feb 12 19:44:45.250068 sshd[3413]: pam_unix(sshd:session): session closed for user core
Feb 12 19:44:45.252104 systemd[1]: sshd@12-10.0.0.136:22-10.0.0.1:49528.service: Deactivated successfully.
Feb 12 19:44:45.252724 systemd[1]: session-13.scope: Deactivated successfully.
Feb 12 19:44:45.253195 systemd-logind[1107]: Session 13 logged out. Waiting for processes to exit.
Feb 12 19:44:45.253765 systemd-logind[1107]: Removed session 13.
Feb 12 19:44:50.254679 systemd[1]: Started sshd@13-10.0.0.136:22-10.0.0.1:52588.service.
Feb 12 19:44:50.293276 sshd[3426]: Accepted publickey for core from 10.0.0.1 port 52588 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:44:50.294430 sshd[3426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:44:50.297789 systemd-logind[1107]: New session 14 of user core.
Feb 12 19:44:50.298635 systemd[1]: Started session-14.scope.
Feb 12 19:44:50.399893 sshd[3426]: pam_unix(sshd:session): session closed for user core
Feb 12 19:44:50.402424 systemd[1]: sshd@13-10.0.0.136:22-10.0.0.1:52588.service: Deactivated successfully.
Feb 12 19:44:50.402919 systemd[1]: session-14.scope: Deactivated successfully.
Feb 12 19:44:50.403340 systemd-logind[1107]: Session 14 logged out. Waiting for processes to exit.
Feb 12 19:44:50.404231 systemd[1]: Started sshd@14-10.0.0.136:22-10.0.0.1:52592.service.
Feb 12 19:44:50.404845 systemd-logind[1107]: Removed session 14.
Feb 12 19:44:50.442209 sshd[3441]: Accepted publickey for core from 10.0.0.1 port 52592 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:44:50.443109 sshd[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:44:50.445996 systemd-logind[1107]: New session 15 of user core.
Feb 12 19:44:50.446730 systemd[1]: Started session-15.scope.
Feb 12 19:44:50.593878 sshd[3441]: pam_unix(sshd:session): session closed for user core
Feb 12 19:44:50.596476 systemd[1]: sshd@14-10.0.0.136:22-10.0.0.1:52592.service: Deactivated successfully.
Feb 12 19:44:50.596993 systemd[1]: session-15.scope: Deactivated successfully.
Feb 12 19:44:50.597509 systemd-logind[1107]: Session 15 logged out. Waiting for processes to exit.
Feb 12 19:44:50.598601 systemd[1]: Started sshd@15-10.0.0.136:22-10.0.0.1:52608.service.
Feb 12 19:44:50.599189 systemd-logind[1107]: Removed session 15.
Feb 12 19:44:50.638100 sshd[3453]: Accepted publickey for core from 10.0.0.1 port 52608 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:44:50.639264 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:44:50.642175 systemd-logind[1107]: New session 16 of user core.
Feb 12 19:44:50.642983 systemd[1]: Started session-16.scope.
Feb 12 19:44:51.423201 sshd[3453]: pam_unix(sshd:session): session closed for user core
Feb 12 19:44:51.425405 systemd[1]: Started sshd@16-10.0.0.136:22-10.0.0.1:52618.service.
Feb 12 19:44:51.426531 systemd[1]: sshd@15-10.0.0.136:22-10.0.0.1:52608.service: Deactivated successfully.
Feb 12 19:44:51.427081 systemd[1]: session-16.scope: Deactivated successfully.
Feb 12 19:44:51.429929 systemd-logind[1107]: Session 16 logged out. Waiting for processes to exit.
Feb 12 19:44:51.430981 systemd-logind[1107]: Removed session 16.
Feb 12 19:44:51.467855 sshd[3470]: Accepted publickey for core from 10.0.0.1 port 52618 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:44:51.468950 sshd[3470]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:44:51.472239 systemd-logind[1107]: New session 17 of user core. Feb 12 19:44:51.473065 systemd[1]: Started session-17.scope. Feb 12 19:44:51.715624 sshd[3470]: pam_unix(sshd:session): session closed for user core Feb 12 19:44:51.719175 systemd[1]: Started sshd@17-10.0.0.136:22-10.0.0.1:52626.service. Feb 12 19:44:51.723614 systemd[1]: sshd@16-10.0.0.136:22-10.0.0.1:52618.service: Deactivated successfully. Feb 12 19:44:51.724325 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 19:44:51.724983 systemd-logind[1107]: Session 17 logged out. Waiting for processes to exit. Feb 12 19:44:51.725618 systemd-logind[1107]: Removed session 17. Feb 12 19:44:51.758333 sshd[3483]: Accepted publickey for core from 10.0.0.1 port 52626 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:44:51.759467 sshd[3483]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:44:51.762573 systemd-logind[1107]: New session 18 of user core. Feb 12 19:44:51.763354 systemd[1]: Started session-18.scope. Feb 12 19:44:51.862170 sshd[3483]: pam_unix(sshd:session): session closed for user core Feb 12 19:44:51.864353 systemd[1]: sshd@17-10.0.0.136:22-10.0.0.1:52626.service: Deactivated successfully. Feb 12 19:44:51.865036 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 19:44:51.865775 systemd-logind[1107]: Session 18 logged out. Waiting for processes to exit. Feb 12 19:44:51.866378 systemd-logind[1107]: Removed session 18. Feb 12 19:44:56.866017 systemd[1]: Started sshd@18-10.0.0.136:22-10.0.0.1:40210.service. 
Feb 12 19:44:56.904404 sshd[3497]: Accepted publickey for core from 10.0.0.1 port 40210 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:44:56.905319 sshd[3497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:44:56.908156 systemd-logind[1107]: New session 19 of user core. Feb 12 19:44:56.909135 systemd[1]: Started session-19.scope. Feb 12 19:44:57.008726 sshd[3497]: pam_unix(sshd:session): session closed for user core Feb 12 19:44:57.011076 systemd[1]: sshd@18-10.0.0.136:22-10.0.0.1:40210.service: Deactivated successfully. Feb 12 19:44:57.011718 systemd[1]: session-19.scope: Deactivated successfully. Feb 12 19:44:57.012198 systemd-logind[1107]: Session 19 logged out. Waiting for processes to exit. Feb 12 19:44:57.012809 systemd-logind[1107]: Removed session 19. Feb 12 19:45:02.013236 systemd[1]: Started sshd@19-10.0.0.136:22-10.0.0.1:40220.service. Feb 12 19:45:02.051856 sshd[3514]: Accepted publickey for core from 10.0.0.1 port 40220 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:45:02.052986 sshd[3514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:45:02.055951 systemd-logind[1107]: New session 20 of user core. Feb 12 19:45:02.056673 systemd[1]: Started session-20.scope. Feb 12 19:45:02.158772 sshd[3514]: pam_unix(sshd:session): session closed for user core Feb 12 19:45:02.161622 systemd[1]: sshd@19-10.0.0.136:22-10.0.0.1:40220.service: Deactivated successfully. Feb 12 19:45:02.162273 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 19:45:02.162783 systemd-logind[1107]: Session 20 logged out. Waiting for processes to exit. Feb 12 19:45:02.163550 systemd-logind[1107]: Removed session 20. 
Feb 12 19:45:06.276422 kubelet[1955]: E0212 19:45:06.276373 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:45:07.162212 systemd[1]: Started sshd@20-10.0.0.136:22-10.0.0.1:57256.service. Feb 12 19:45:07.200505 sshd[3529]: Accepted publickey for core from 10.0.0.1 port 57256 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:45:07.201524 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:45:07.204647 systemd-logind[1107]: New session 21 of user core. Feb 12 19:45:07.205702 systemd[1]: Started session-21.scope. Feb 12 19:45:07.304753 sshd[3529]: pam_unix(sshd:session): session closed for user core Feb 12 19:45:07.307401 systemd[1]: sshd@20-10.0.0.136:22-10.0.0.1:57256.service: Deactivated successfully. Feb 12 19:45:07.308025 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 19:45:07.308528 systemd-logind[1107]: Session 21 logged out. Waiting for processes to exit. Feb 12 19:45:07.309316 systemd-logind[1107]: Removed session 21. Feb 12 19:45:12.308382 systemd[1]: Started sshd@21-10.0.0.136:22-10.0.0.1:57262.service. Feb 12 19:45:12.346415 sshd[3542]: Accepted publickey for core from 10.0.0.1 port 57262 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:45:12.347329 sshd[3542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:45:12.350233 systemd-logind[1107]: New session 22 of user core. Feb 12 19:45:12.351071 systemd[1]: Started session-22.scope. Feb 12 19:45:12.447326 sshd[3542]: pam_unix(sshd:session): session closed for user core Feb 12 19:45:12.450334 systemd[1]: Started sshd@22-10.0.0.136:22-10.0.0.1:57264.service. Feb 12 19:45:12.450736 systemd[1]: sshd@21-10.0.0.136:22-10.0.0.1:57262.service: Deactivated successfully. Feb 12 19:45:12.451286 systemd[1]: session-22.scope: Deactivated successfully. 
Feb 12 19:45:12.451791 systemd-logind[1107]: Session 22 logged out. Waiting for processes to exit. Feb 12 19:45:12.452518 systemd-logind[1107]: Removed session 22. Feb 12 19:45:12.487938 sshd[3554]: Accepted publickey for core from 10.0.0.1 port 57264 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:45:12.488979 sshd[3554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:45:12.491920 systemd-logind[1107]: New session 23 of user core. Feb 12 19:45:12.492693 systemd[1]: Started session-23.scope. Feb 12 19:45:13.276729 kubelet[1955]: E0212 19:45:13.276696 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:45:13.793571 kubelet[1955]: I0212 19:45:13.793539 1955 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-2fjkr" podStartSLOduration=69.793499276 podCreationTimestamp="2024-02-12 19:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:44:29.392833642 +0000 UTC m=+39.227753934" watchObservedRunningTime="2024-02-12 19:45:13.793499276 +0000 UTC m=+83.628419568" Feb 12 19:45:13.800680 env[1119]: time="2024-02-12T19:45:13.800634552Z" level=info msg="StopContainer for \"28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5\" with timeout 30 (s)" Feb 12 19:45:13.801082 env[1119]: time="2024-02-12T19:45:13.801050611Z" level=info msg="Stop container \"28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5\" with signal terminated" Feb 12 19:45:13.810435 systemd[1]: cri-containerd-28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5.scope: Deactivated successfully. 
Feb 12 19:45:13.821889 env[1119]: time="2024-02-12T19:45:13.821832964Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:45:13.827737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5-rootfs.mount: Deactivated successfully. Feb 12 19:45:13.829004 env[1119]: time="2024-02-12T19:45:13.828972278Z" level=info msg="StopContainer for \"5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f\" with timeout 2 (s)" Feb 12 19:45:13.829206 env[1119]: time="2024-02-12T19:45:13.829179260Z" level=info msg="Stop container \"5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f\" with signal terminated" Feb 12 19:45:13.834160 systemd-networkd[1018]: lxc_health: Link DOWN Feb 12 19:45:13.834167 systemd-networkd[1018]: lxc_health: Lost carrier Feb 12 19:45:13.838293 env[1119]: time="2024-02-12T19:45:13.838256379Z" level=info msg="shim disconnected" id=28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5 Feb 12 19:45:13.838441 env[1119]: time="2024-02-12T19:45:13.838295464Z" level=warning msg="cleaning up after shim disconnected" id=28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5 namespace=k8s.io Feb 12 19:45:13.838441 env[1119]: time="2024-02-12T19:45:13.838305001Z" level=info msg="cleaning up dead shim" Feb 12 19:45:13.844326 env[1119]: time="2024-02-12T19:45:13.844281730Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:45:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3608 runtime=io.containerd.runc.v2\n" Feb 12 19:45:13.847130 env[1119]: time="2024-02-12T19:45:13.847100066Z" level=info msg="StopContainer for \"28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5\" returns successfully" Feb 12 
19:45:13.847692 env[1119]: time="2024-02-12T19:45:13.847672061Z" level=info msg="StopPodSandbox for \"3636acbe1b65adad1d37f23a430ef46f68cd418ffc554aafbcd62a1ecd081a04\"" Feb 12 19:45:13.847744 env[1119]: time="2024-02-12T19:45:13.847726194Z" level=info msg="Container to stop \"28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:45:13.849098 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3636acbe1b65adad1d37f23a430ef46f68cd418ffc554aafbcd62a1ecd081a04-shm.mount: Deactivated successfully. Feb 12 19:45:13.853895 systemd[1]: cri-containerd-3636acbe1b65adad1d37f23a430ef46f68cd418ffc554aafbcd62a1ecd081a04.scope: Deactivated successfully. Feb 12 19:45:13.861093 systemd[1]: cri-containerd-5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f.scope: Deactivated successfully. Feb 12 19:45:13.861330 systemd[1]: cri-containerd-5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f.scope: Consumed 5.757s CPU time. Feb 12 19:45:13.870988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3636acbe1b65adad1d37f23a430ef46f68cd418ffc554aafbcd62a1ecd081a04-rootfs.mount: Deactivated successfully. Feb 12 19:45:13.876466 env[1119]: time="2024-02-12T19:45:13.876369780Z" level=info msg="shim disconnected" id=3636acbe1b65adad1d37f23a430ef46f68cd418ffc554aafbcd62a1ecd081a04 Feb 12 19:45:13.876466 env[1119]: time="2024-02-12T19:45:13.876462436Z" level=warning msg="cleaning up after shim disconnected" id=3636acbe1b65adad1d37f23a430ef46f68cd418ffc554aafbcd62a1ecd081a04 namespace=k8s.io Feb 12 19:45:13.876466 env[1119]: time="2024-02-12T19:45:13.876471353Z" level=info msg="cleaning up dead shim" Feb 12 19:45:13.881227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f-rootfs.mount: Deactivated successfully. 
Feb 12 19:45:13.884049 env[1119]: time="2024-02-12T19:45:13.884012077Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:45:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3656 runtime=io.containerd.runc.v2\n" Feb 12 19:45:13.884334 env[1119]: time="2024-02-12T19:45:13.884292159Z" level=info msg="shim disconnected" id=5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f Feb 12 19:45:13.884435 env[1119]: time="2024-02-12T19:45:13.884339067Z" level=warning msg="cleaning up after shim disconnected" id=5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f namespace=k8s.io Feb 12 19:45:13.884435 env[1119]: time="2024-02-12T19:45:13.884348376Z" level=info msg="cleaning up dead shim" Feb 12 19:45:13.884893 env[1119]: time="2024-02-12T19:45:13.884867841Z" level=info msg="TearDown network for sandbox \"3636acbe1b65adad1d37f23a430ef46f68cd418ffc554aafbcd62a1ecd081a04\" successfully" Feb 12 19:45:13.884970 env[1119]: time="2024-02-12T19:45:13.884894973Z" level=info msg="StopPodSandbox for \"3636acbe1b65adad1d37f23a430ef46f68cd418ffc554aafbcd62a1ecd081a04\" returns successfully" Feb 12 19:45:13.890321 env[1119]: time="2024-02-12T19:45:13.890279887Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:45:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3669 runtime=io.containerd.runc.v2\n" Feb 12 19:45:13.892634 env[1119]: time="2024-02-12T19:45:13.892601030Z" level=info msg="StopContainer for \"5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f\" returns successfully" Feb 12 19:45:13.892903 env[1119]: time="2024-02-12T19:45:13.892876514Z" level=info msg="StopPodSandbox for \"8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af\"" Feb 12 19:45:13.892965 env[1119]: time="2024-02-12T19:45:13.892923753Z" level=info msg="Container to stop \"f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 
19:45:13.892965 env[1119]: time="2024-02-12T19:45:13.892935976Z" level=info msg="Container to stop \"5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:45:13.892965 env[1119]: time="2024-02-12T19:45:13.892945354Z" level=info msg="Container to stop \"c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:45:13.892965 env[1119]: time="2024-02-12T19:45:13.892955433Z" level=info msg="Container to stop \"c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:45:13.893128 env[1119]: time="2024-02-12T19:45:13.892964510Z" level=info msg="Container to stop \"492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:45:13.897289 systemd[1]: cri-containerd-8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af.scope: Deactivated successfully. 
Feb 12 19:45:13.913659 env[1119]: time="2024-02-12T19:45:13.913603763Z" level=info msg="shim disconnected" id=8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af Feb 12 19:45:13.913659 env[1119]: time="2024-02-12T19:45:13.913651022Z" level=warning msg="cleaning up after shim disconnected" id=8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af namespace=k8s.io Feb 12 19:45:13.913659 env[1119]: time="2024-02-12T19:45:13.913658957Z" level=info msg="cleaning up dead shim" Feb 12 19:45:13.919357 env[1119]: time="2024-02-12T19:45:13.919309657Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:45:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3699 runtime=io.containerd.runc.v2\n" Feb 12 19:45:13.919637 env[1119]: time="2024-02-12T19:45:13.919613253Z" level=info msg="TearDown network for sandbox \"8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af\" successfully" Feb 12 19:45:13.919681 env[1119]: time="2024-02-12T19:45:13.919636126Z" level=info msg="StopPodSandbox for \"8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af\" returns successfully" Feb 12 19:45:13.922947 kubelet[1955]: I0212 19:45:13.922921 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxsk7\" (UniqueName: \"kubernetes.io/projected/72149871-0480-46b0-a9e7-403e47facad8-kube-api-access-pxsk7\") pod \"72149871-0480-46b0-a9e7-403e47facad8\" (UID: \"72149871-0480-46b0-a9e7-403e47facad8\") " Feb 12 19:45:13.923085 kubelet[1955]: I0212 19:45:13.922962 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72149871-0480-46b0-a9e7-403e47facad8-cilium-config-path\") pod \"72149871-0480-46b0-a9e7-403e47facad8\" (UID: \"72149871-0480-46b0-a9e7-403e47facad8\") " Feb 12 19:45:13.924925 kubelet[1955]: I0212 19:45:13.924887 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/72149871-0480-46b0-a9e7-403e47facad8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "72149871-0480-46b0-a9e7-403e47facad8" (UID: "72149871-0480-46b0-a9e7-403e47facad8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:45:13.927497 kubelet[1955]: I0212 19:45:13.927469 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72149871-0480-46b0-a9e7-403e47facad8-kube-api-access-pxsk7" (OuterVolumeSpecName: "kube-api-access-pxsk7") pod "72149871-0480-46b0-a9e7-403e47facad8" (UID: "72149871-0480-46b0-a9e7-403e47facad8"). InnerVolumeSpecName "kube-api-access-pxsk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:45:14.023232 kubelet[1955]: I0212 19:45:14.023184 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-etc-cni-netd\") pod \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " Feb 12 19:45:14.023232 kubelet[1955]: I0212 19:45:14.023206 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1ad341bf-87f1-4024-a54d-9db2ba5c1f62" (UID: "1ad341bf-87f1-4024-a54d-9db2ba5c1f62"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:14.023463 kubelet[1955]: I0212 19:45:14.023254 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-host-proc-sys-kernel\") pod \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " Feb 12 19:45:14.023463 kubelet[1955]: I0212 19:45:14.023282 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzms9\" (UniqueName: \"kubernetes.io/projected/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-kube-api-access-xzms9\") pod \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " Feb 12 19:45:14.023463 kubelet[1955]: I0212 19:45:14.023298 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-xtables-lock\") pod \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " Feb 12 19:45:14.023463 kubelet[1955]: I0212 19:45:14.023317 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-hubble-tls\") pod \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " Feb 12 19:45:14.023463 kubelet[1955]: I0212 19:45:14.023332 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-host-proc-sys-net\") pod \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " Feb 12 19:45:14.023463 kubelet[1955]: I0212 19:45:14.023349 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-bpf-maps\") pod \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " Feb 12 19:45:14.023614 kubelet[1955]: I0212 19:45:14.023363 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-hostproc\") pod \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " Feb 12 19:45:14.023614 kubelet[1955]: I0212 19:45:14.023376 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-cilium-run\") pod \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " Feb 12 19:45:14.023614 kubelet[1955]: I0212 19:45:14.023406 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-lib-modules\") pod \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " Feb 12 19:45:14.023614 kubelet[1955]: I0212 19:45:14.023421 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-cni-path\") pod \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " Feb 12 19:45:14.023614 kubelet[1955]: I0212 19:45:14.023441 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-clustermesh-secrets\") pod \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " Feb 12 19:45:14.023614 kubelet[1955]: I0212 19:45:14.023457 1955 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-cilium-cgroup\") pod \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " Feb 12 19:45:14.023749 kubelet[1955]: I0212 19:45:14.023475 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-cilium-config-path\") pod \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\" (UID: \"1ad341bf-87f1-4024-a54d-9db2ba5c1f62\") " Feb 12 19:45:14.023749 kubelet[1955]: I0212 19:45:14.023502 1955 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 12 19:45:14.023749 kubelet[1955]: I0212 19:45:14.023513 1955 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pxsk7\" (UniqueName: \"kubernetes.io/projected/72149871-0480-46b0-a9e7-403e47facad8-kube-api-access-pxsk7\") on node \"localhost\" DevicePath \"\"" Feb 12 19:45:14.023749 kubelet[1955]: I0212 19:45:14.023522 1955 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72149871-0480-46b0-a9e7-403e47facad8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 12 19:45:14.023749 kubelet[1955]: I0212 19:45:14.023569 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1ad341bf-87f1-4024-a54d-9db2ba5c1f62" (UID: "1ad341bf-87f1-4024-a54d-9db2ba5c1f62"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:14.023749 kubelet[1955]: I0212 19:45:14.023604 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-hostproc" (OuterVolumeSpecName: "hostproc") pod "1ad341bf-87f1-4024-a54d-9db2ba5c1f62" (UID: "1ad341bf-87f1-4024-a54d-9db2ba5c1f62"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:14.023890 kubelet[1955]: I0212 19:45:14.023618 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1ad341bf-87f1-4024-a54d-9db2ba5c1f62" (UID: "1ad341bf-87f1-4024-a54d-9db2ba5c1f62"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:14.025221 kubelet[1955]: I0212 19:45:14.023943 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-cni-path" (OuterVolumeSpecName: "cni-path") pod "1ad341bf-87f1-4024-a54d-9db2ba5c1f62" (UID: "1ad341bf-87f1-4024-a54d-9db2ba5c1f62"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:14.025221 kubelet[1955]: I0212 19:45:14.023959 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1ad341bf-87f1-4024-a54d-9db2ba5c1f62" (UID: "1ad341bf-87f1-4024-a54d-9db2ba5c1f62"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:14.025221 kubelet[1955]: I0212 19:45:14.023980 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1ad341bf-87f1-4024-a54d-9db2ba5c1f62" (UID: "1ad341bf-87f1-4024-a54d-9db2ba5c1f62"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:14.025221 kubelet[1955]: I0212 19:45:14.023990 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1ad341bf-87f1-4024-a54d-9db2ba5c1f62" (UID: "1ad341bf-87f1-4024-a54d-9db2ba5c1f62"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:14.025221 kubelet[1955]: I0212 19:45:14.023996 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1ad341bf-87f1-4024-a54d-9db2ba5c1f62" (UID: "1ad341bf-87f1-4024-a54d-9db2ba5c1f62"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:14.025368 kubelet[1955]: I0212 19:45:14.024020 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1ad341bf-87f1-4024-a54d-9db2ba5c1f62" (UID: "1ad341bf-87f1-4024-a54d-9db2ba5c1f62"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:14.025368 kubelet[1955]: I0212 19:45:14.025252 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1ad341bf-87f1-4024-a54d-9db2ba5c1f62" (UID: "1ad341bf-87f1-4024-a54d-9db2ba5c1f62"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:45:14.025573 kubelet[1955]: I0212 19:45:14.025535 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-kube-api-access-xzms9" (OuterVolumeSpecName: "kube-api-access-xzms9") pod "1ad341bf-87f1-4024-a54d-9db2ba5c1f62" (UID: "1ad341bf-87f1-4024-a54d-9db2ba5c1f62"). InnerVolumeSpecName "kube-api-access-xzms9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:45:14.026282 kubelet[1955]: I0212 19:45:14.026256 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1ad341bf-87f1-4024-a54d-9db2ba5c1f62" (UID: "1ad341bf-87f1-4024-a54d-9db2ba5c1f62"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:45:14.026282 kubelet[1955]: I0212 19:45:14.026272 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1ad341bf-87f1-4024-a54d-9db2ba5c1f62" (UID: "1ad341bf-87f1-4024-a54d-9db2ba5c1f62"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:45:14.124803 kubelet[1955]: I0212 19:45:14.124702 1955 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 12 19:45:14.124803 kubelet[1955]: I0212 19:45:14.124740 1955 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xzms9\" (UniqueName: \"kubernetes.io/projected/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-kube-api-access-xzms9\") on node \"localhost\" DevicePath \"\""
Feb 12 19:45:14.124803 kubelet[1955]: I0212 19:45:14.124755 1955 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 12 19:45:14.124803 kubelet[1955]: I0212 19:45:14.124776 1955 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 12 19:45:14.124803 kubelet[1955]: I0212 19:45:14.124791 1955 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 12 19:45:14.124803 kubelet[1955]: I0212 19:45:14.124805 1955 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 12 19:45:14.125154 kubelet[1955]: I0212 19:45:14.124817 1955 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 12 19:45:14.125154 kubelet[1955]: I0212 19:45:14.124833 1955 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 12 19:45:14.125154 kubelet[1955]: I0212 19:45:14.124843 1955 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 12 19:45:14.125154 kubelet[1955]: I0212 19:45:14.124863 1955 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 12 19:45:14.125154 kubelet[1955]: I0212 19:45:14.124873 1955 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 12 19:45:14.125154 kubelet[1955]: I0212 19:45:14.124882 1955 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 12 19:45:14.125154 kubelet[1955]: I0212 19:45:14.124892 1955 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ad341bf-87f1-4024-a54d-9db2ba5c1f62-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 12 19:45:14.282511 systemd[1]: Removed slice kubepods-burstable-pod1ad341bf_87f1_4024_a54d_9db2ba5c1f62.slice.
Feb 12 19:45:14.282583 systemd[1]: kubepods-burstable-pod1ad341bf_87f1_4024_a54d_9db2ba5c1f62.slice: Consumed 5.840s CPU time.
Feb 12 19:45:14.283462 systemd[1]: Removed slice kubepods-besteffort-pod72149871_0480_46b0_a9e7_403e47facad8.slice.
Feb 12 19:45:14.441225 kubelet[1955]: I0212 19:45:14.441192 1955 scope.go:117] "RemoveContainer" containerID="28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5"
Feb 12 19:45:14.442528 env[1119]: time="2024-02-12T19:45:14.442488547Z" level=info msg="RemoveContainer for \"28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5\""
Feb 12 19:45:14.447833 env[1119]: time="2024-02-12T19:45:14.447799484Z" level=info msg="RemoveContainer for \"28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5\" returns successfully"
Feb 12 19:45:14.448090 kubelet[1955]: I0212 19:45:14.448072 1955 scope.go:117] "RemoveContainer" containerID="28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5"
Feb 12 19:45:14.448455 env[1119]: time="2024-02-12T19:45:14.448343015Z" level=error msg="ContainerStatus for \"28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5\": not found"
Feb 12 19:45:14.449330 kubelet[1955]: E0212 19:45:14.449299 1955 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5\": not found" containerID="28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5"
Feb 12 19:45:14.449417 kubelet[1955]: I0212 19:45:14.449401 1955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5"} err="failed to get container status \"28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"28a2e7cea54b47953baebcbff95dc7457678cbf69be44707bc9ead9cdd1be0d5\": not found"
Feb 12 19:45:14.449448 kubelet[1955]: I0212 19:45:14.449423 1955 scope.go:117] "RemoveContainer" containerID="5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f"
Feb 12 19:45:14.450461 env[1119]: time="2024-02-12T19:45:14.450435004Z" level=info msg="RemoveContainer for \"5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f\""
Feb 12 19:45:14.453431 env[1119]: time="2024-02-12T19:45:14.453374502Z" level=info msg="RemoveContainer for \"5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f\" returns successfully"
Feb 12 19:45:14.453652 kubelet[1955]: I0212 19:45:14.453600 1955 scope.go:117] "RemoveContainer" containerID="f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59"
Feb 12 19:45:14.454710 env[1119]: time="2024-02-12T19:45:14.454676212Z" level=info msg="RemoveContainer for \"f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59\""
Feb 12 19:45:14.458139 env[1119]: time="2024-02-12T19:45:14.458097914Z" level=info msg="RemoveContainer for \"f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59\" returns successfully"
Feb 12 19:45:14.458306 kubelet[1955]: I0212 19:45:14.458287 1955 scope.go:117] "RemoveContainer" containerID="492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9"
Feb 12 19:45:14.459113 env[1119]: time="2024-02-12T19:45:14.459096929Z" level=info msg="RemoveContainer for \"492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9\""
Feb 12 19:45:14.461524 env[1119]: time="2024-02-12T19:45:14.461498686Z" level=info msg="RemoveContainer for \"492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9\" returns successfully"
Feb 12 19:45:14.461689 kubelet[1955]: I0212 19:45:14.461668 1955 scope.go:117] "RemoveContainer" containerID="c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4"
Feb 12 19:45:14.462926 env[1119]: time="2024-02-12T19:45:14.462884195Z" level=info msg="RemoveContainer for \"c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4\""
Feb 12 19:45:14.465890 env[1119]: time="2024-02-12T19:45:14.465858579Z" level=info msg="RemoveContainer for \"c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4\" returns successfully"
Feb 12 19:45:14.466011 kubelet[1955]: I0212 19:45:14.465995 1955 scope.go:117] "RemoveContainer" containerID="c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149"
Feb 12 19:45:14.467018 env[1119]: time="2024-02-12T19:45:14.466992079Z" level=info msg="RemoveContainer for \"c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149\""
Feb 12 19:45:14.469250 env[1119]: time="2024-02-12T19:45:14.469231539Z" level=info msg="RemoveContainer for \"c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149\" returns successfully"
Feb 12 19:45:14.469376 kubelet[1955]: I0212 19:45:14.469358 1955 scope.go:117] "RemoveContainer" containerID="5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f"
Feb 12 19:45:14.469622 env[1119]: time="2024-02-12T19:45:14.469546136Z" level=error msg="ContainerStatus for \"5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f\": not found"
Feb 12 19:45:14.469782 kubelet[1955]: E0212 19:45:14.469747 1955 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f\": not found" containerID="5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f"
Feb 12 19:45:14.469850 kubelet[1955]: I0212 19:45:14.469799 1955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f"} err="failed to get container status \"5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f89c5d9567941a67f0c04b292ce66fd9a0ddccb0be7ebf0009e304e5ba4571f\": not found"
Feb 12 19:45:14.469850 kubelet[1955]: I0212 19:45:14.469810 1955 scope.go:117] "RemoveContainer" containerID="f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59"
Feb 12 19:45:14.470012 env[1119]: time="2024-02-12T19:45:14.469950153Z" level=error msg="ContainerStatus for \"f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59\": not found"
Feb 12 19:45:14.470170 kubelet[1955]: E0212 19:45:14.470066 1955 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59\": not found" containerID="f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59"
Feb 12 19:45:14.470170 kubelet[1955]: I0212 19:45:14.470088 1955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59"} err="failed to get container status \"f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59\": rpc error: code = NotFound desc = an error occurred when try to find container \"f68184cc61d3782f530ab3702ff3f775d22d74f17320b9bec7aa5ee5ebccfe59\": not found"
Feb 12 19:45:14.470170 kubelet[1955]: I0212 19:45:14.470095 1955 scope.go:117] "RemoveContainer" containerID="492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9"
Feb 12 19:45:14.470284 env[1119]: time="2024-02-12T19:45:14.470197531Z" level=error msg="ContainerStatus for \"492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9\": not found"
Feb 12 19:45:14.470325 kubelet[1955]: E0212 19:45:14.470283 1955 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9\": not found" containerID="492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9"
Feb 12 19:45:14.470325 kubelet[1955]: I0212 19:45:14.470299 1955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9"} err="failed to get container status \"492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9\": rpc error: code = NotFound desc = an error occurred when try to find container \"492e182036ecbd9915b1b0e2682d64aa419e7a95da5138120f6455342c952eb9\": not found"
Feb 12 19:45:14.470325 kubelet[1955]: I0212 19:45:14.470307 1955 scope.go:117] "RemoveContainer" containerID="c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4"
Feb 12 19:45:14.470485 env[1119]: time="2024-02-12T19:45:14.470430725Z" level=error msg="ContainerStatus for \"c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4\": not found"
Feb 12 19:45:14.470561 kubelet[1955]: E0212 19:45:14.470547 1955 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4\": not found" containerID="c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4"
Feb 12 19:45:14.470561 kubelet[1955]: I0212 19:45:14.470564 1955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4"} err="failed to get container status \"c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7c1e24ebf5f24ae320ed92f44cd042d9c32265905d554db429e7778f55750b4\": not found"
Feb 12 19:45:14.470663 kubelet[1955]: I0212 19:45:14.470571 1955 scope.go:117] "RemoveContainer" containerID="c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149"
Feb 12 19:45:14.470715 env[1119]: time="2024-02-12T19:45:14.470679286Z" level=error msg="ContainerStatus for \"c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149\": not found"
Feb 12 19:45:14.470871 kubelet[1955]: E0212 19:45:14.470837 1955 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149\": not found" containerID="c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149"
Feb 12 19:45:14.470917 kubelet[1955]: I0212 19:45:14.470888 1955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149"} err="failed to get container status \"c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149\": rpc error: code = NotFound desc = an error occurred when try to find container \"c84cbe5c5136bf0a13c7a3b8918d2332d34460a55eb7c41854299964e1830149\": not found"
Feb 12 19:45:14.805774 systemd[1]: var-lib-kubelet-pods-72149871\x2d0480\x2d46b0\x2da9e7\x2d403e47facad8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpxsk7.mount: Deactivated successfully.
Feb 12 19:45:14.805877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af-rootfs.mount: Deactivated successfully.
Feb 12 19:45:14.805932 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8db82a875b3a1ec700599e69c6f9aae4a0f126514aab281e8a57c48cfee741af-shm.mount: Deactivated successfully.
Feb 12 19:45:14.805984 systemd[1]: var-lib-kubelet-pods-1ad341bf\x2d87f1\x2d4024\x2da54d\x2d9db2ba5c1f62-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxzms9.mount: Deactivated successfully.
Feb 12 19:45:14.806040 systemd[1]: var-lib-kubelet-pods-1ad341bf\x2d87f1\x2d4024\x2da54d\x2d9db2ba5c1f62-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 19:45:14.806090 systemd[1]: var-lib-kubelet-pods-1ad341bf\x2d87f1\x2d4024\x2da54d\x2d9db2ba5c1f62-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 19:45:15.276759 kubelet[1955]: E0212 19:45:15.276714 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:45:15.327954 kubelet[1955]: E0212 19:45:15.327935 1955 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 19:45:15.776728 sshd[3554]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:15.779118 systemd[1]: sshd@22-10.0.0.136:22-10.0.0.1:57264.service: Deactivated successfully.
Feb 12 19:45:15.779702 systemd[1]: session-23.scope: Deactivated successfully.
Feb 12 19:45:15.780243 systemd-logind[1107]: Session 23 logged out. Waiting for processes to exit.
Feb 12 19:45:15.781316 systemd[1]: Started sshd@23-10.0.0.136:22-10.0.0.1:57268.service.
Feb 12 19:45:15.782177 systemd-logind[1107]: Removed session 23.
Feb 12 19:45:15.821313 sshd[3717]: Accepted publickey for core from 10.0.0.1 port 57268 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:45:15.822242 sshd[3717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:45:15.825189 systemd-logind[1107]: New session 24 of user core.
Feb 12 19:45:15.825963 systemd[1]: Started session-24.scope.
Feb 12 19:45:16.278430 kubelet[1955]: I0212 19:45:16.278398 1955 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1ad341bf-87f1-4024-a54d-9db2ba5c1f62" path="/var/lib/kubelet/pods/1ad341bf-87f1-4024-a54d-9db2ba5c1f62/volumes"
Feb 12 19:45:16.278888 kubelet[1955]: I0212 19:45:16.278871 1955 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="72149871-0480-46b0-a9e7-403e47facad8" path="/var/lib/kubelet/pods/72149871-0480-46b0-a9e7-403e47facad8/volumes"
Feb 12 19:45:16.472373 sshd[3717]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:16.474725 systemd[1]: sshd@23-10.0.0.136:22-10.0.0.1:57268.service: Deactivated successfully.
Feb 12 19:45:16.475192 systemd[1]: session-24.scope: Deactivated successfully.
Feb 12 19:45:16.476645 systemd[1]: Started sshd@24-10.0.0.136:22-10.0.0.1:47900.service.
Feb 12 19:45:16.477284 systemd-logind[1107]: Session 24 logged out. Waiting for processes to exit.
Feb 12 19:45:16.477966 systemd-logind[1107]: Removed session 24.
Feb 12 19:45:16.488109 kubelet[1955]: I0212 19:45:16.488081 1955 topology_manager.go:215] "Topology Admit Handler" podUID="ca8f8019-41cf-4493-8b40-251b48fd43f8" podNamespace="kube-system" podName="cilium-kdktd"
Feb 12 19:45:16.488329 kubelet[1955]: E0212 19:45:16.488316 1955 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1ad341bf-87f1-4024-a54d-9db2ba5c1f62" containerName="apply-sysctl-overwrites"
Feb 12 19:45:16.488422 kubelet[1955]: E0212 19:45:16.488399 1955 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="72149871-0480-46b0-a9e7-403e47facad8" containerName="cilium-operator"
Feb 12 19:45:16.488422 kubelet[1955]: E0212 19:45:16.488413 1955 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1ad341bf-87f1-4024-a54d-9db2ba5c1f62" containerName="clean-cilium-state"
Feb 12 19:45:16.488422 kubelet[1955]: E0212 19:45:16.488420 1955 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1ad341bf-87f1-4024-a54d-9db2ba5c1f62" containerName="mount-cgroup"
Feb 12 19:45:16.488422 kubelet[1955]: E0212 19:45:16.488426 1955 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1ad341bf-87f1-4024-a54d-9db2ba5c1f62" containerName="mount-bpf-fs"
Feb 12 19:45:16.488422 kubelet[1955]: E0212 19:45:16.488431 1955 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1ad341bf-87f1-4024-a54d-9db2ba5c1f62" containerName="cilium-agent"
Feb 12 19:45:16.488706 kubelet[1955]: I0212 19:45:16.488454 1955 memory_manager.go:346] "RemoveStaleState removing state" podUID="72149871-0480-46b0-a9e7-403e47facad8" containerName="cilium-operator"
Feb 12 19:45:16.488706 kubelet[1955]: I0212 19:45:16.488460 1955 memory_manager.go:346] "RemoveStaleState removing state" podUID="1ad341bf-87f1-4024-a54d-9db2ba5c1f62" containerName="cilium-agent"
Feb 12 19:45:16.493269 systemd[1]: Created slice kubepods-burstable-podca8f8019_41cf_4493_8b40_251b48fd43f8.slice.
Feb 12 19:45:16.525063 sshd[3729]: Accepted publickey for core from 10.0.0.1 port 47900 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:45:16.526439 sshd[3729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:45:16.530987 systemd[1]: Started session-25.scope.
Feb 12 19:45:16.532070 systemd-logind[1107]: New session 25 of user core.
Feb 12 19:45:16.538081 kubelet[1955]: I0212 19:45:16.538039 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-hostproc\") pod \"cilium-kdktd\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " pod="kube-system/cilium-kdktd"
Feb 12 19:45:16.538156 kubelet[1955]: I0212 19:45:16.538092 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-lib-modules\") pod \"cilium-kdktd\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " pod="kube-system/cilium-kdktd"
Feb 12 19:45:16.538156 kubelet[1955]: I0212 19:45:16.538118 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca8f8019-41cf-4493-8b40-251b48fd43f8-cilium-config-path\") pod \"cilium-kdktd\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " pod="kube-system/cilium-kdktd"
Feb 12 19:45:16.538156 kubelet[1955]: I0212 19:45:16.538143 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-cni-path\") pod \"cilium-kdktd\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " pod="kube-system/cilium-kdktd"
Feb 12 19:45:16.538232 kubelet[1955]: I0212 19:45:16.538169 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-etc-cni-netd\") pod \"cilium-kdktd\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " pod="kube-system/cilium-kdktd"
Feb 12 19:45:16.538232 kubelet[1955]: I0212 19:45:16.538195 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-bpf-maps\") pod \"cilium-kdktd\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " pod="kube-system/cilium-kdktd"
Feb 12 19:45:16.538232 kubelet[1955]: I0212 19:45:16.538218 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-cilium-run\") pod \"cilium-kdktd\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " pod="kube-system/cilium-kdktd"
Feb 12 19:45:16.538300 kubelet[1955]: I0212 19:45:16.538244 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ca8f8019-41cf-4493-8b40-251b48fd43f8-clustermesh-secrets\") pod \"cilium-kdktd\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " pod="kube-system/cilium-kdktd"
Feb 12 19:45:16.538300 kubelet[1955]: I0212 19:45:16.538266 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-host-proc-sys-net\") pod \"cilium-kdktd\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " pod="kube-system/cilium-kdktd"
Feb 12 19:45:16.538300 kubelet[1955]: I0212 19:45:16.538288 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ca8f8019-41cf-4493-8b40-251b48fd43f8-hubble-tls\") pod \"cilium-kdktd\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " pod="kube-system/cilium-kdktd"
Feb 12 19:45:16.538367 kubelet[1955]: I0212 19:45:16.538310 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-xtables-lock\") pod \"cilium-kdktd\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " pod="kube-system/cilium-kdktd"
Feb 12 19:45:16.538367 kubelet[1955]: I0212 19:45:16.538333 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ca8f8019-41cf-4493-8b40-251b48fd43f8-cilium-ipsec-secrets\") pod \"cilium-kdktd\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " pod="kube-system/cilium-kdktd"
Feb 12 19:45:16.538367 kubelet[1955]: I0212 19:45:16.538353 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4m9x\" (UniqueName: \"kubernetes.io/projected/ca8f8019-41cf-4493-8b40-251b48fd43f8-kube-api-access-k4m9x\") pod \"cilium-kdktd\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " pod="kube-system/cilium-kdktd"
Feb 12 19:45:16.538450 kubelet[1955]: I0212 19:45:16.538371 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-cilium-cgroup\") pod \"cilium-kdktd\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " pod="kube-system/cilium-kdktd"
Feb 12 19:45:16.538450 kubelet[1955]: I0212 19:45:16.538404 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-host-proc-sys-kernel\") pod \"cilium-kdktd\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " pod="kube-system/cilium-kdktd"
Feb 12 19:45:16.641114 sshd[3729]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:16.644945 systemd[1]: Started sshd@25-10.0.0.136:22-10.0.0.1:47916.service.
Feb 12 19:45:16.652904 kubelet[1955]: E0212 19:45:16.652873 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:45:16.654521 env[1119]: time="2024-02-12T19:45:16.654463803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kdktd,Uid:ca8f8019-41cf-4493-8b40-251b48fd43f8,Namespace:kube-system,Attempt:0,}"
Feb 12 19:45:16.655298 systemd[1]: sshd@24-10.0.0.136:22-10.0.0.1:47900.service: Deactivated successfully.
Feb 12 19:45:16.658137 systemd[1]: session-25.scope: Deactivated successfully.
Feb 12 19:45:16.658928 systemd-logind[1107]: Session 25 logged out. Waiting for processes to exit.
Feb 12 19:45:16.661634 systemd-logind[1107]: Removed session 25.
Feb 12 19:45:16.669573 env[1119]: time="2024-02-12T19:45:16.669509281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:45:16.669646 env[1119]: time="2024-02-12T19:45:16.669583532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:45:16.669646 env[1119]: time="2024-02-12T19:45:16.669606515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:45:16.669864 env[1119]: time="2024-02-12T19:45:16.669818748Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a32c95e6eb8d7af9f87a4d6dc3ccc85f389d96b0d564f3cd71dd163f5d092226 pid=3756 runtime=io.containerd.runc.v2
Feb 12 19:45:16.682869 systemd[1]: Started cri-containerd-a32c95e6eb8d7af9f87a4d6dc3ccc85f389d96b0d564f3cd71dd163f5d092226.scope.
Feb 12 19:45:16.692674 sshd[3742]: Accepted publickey for core from 10.0.0.1 port 47916 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:45:16.693968 sshd[3742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:45:16.698420 systemd[1]: Started session-26.scope.
Feb 12 19:45:16.698462 systemd-logind[1107]: New session 26 of user core.
Feb 12 19:45:16.706069 env[1119]: time="2024-02-12T19:45:16.706025384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kdktd,Uid:ca8f8019-41cf-4493-8b40-251b48fd43f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a32c95e6eb8d7af9f87a4d6dc3ccc85f389d96b0d564f3cd71dd163f5d092226\""
Feb 12 19:45:16.706836 kubelet[1955]: E0212 19:45:16.706818 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:45:16.714516 env[1119]: time="2024-02-12T19:45:16.714476987Z" level=info msg="CreateContainer within sandbox \"a32c95e6eb8d7af9f87a4d6dc3ccc85f389d96b0d564f3cd71dd163f5d092226\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 19:45:16.728360 env[1119]: time="2024-02-12T19:45:16.728324872Z" level=info msg="CreateContainer within sandbox \"a32c95e6eb8d7af9f87a4d6dc3ccc85f389d96b0d564f3cd71dd163f5d092226\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8b03e7b85c0e45c73165be6aecabb87c0723520ccf19ac135bc2961e4e747861\""
Feb 12 19:45:16.728952 env[1119]: time="2024-02-12T19:45:16.728915823Z" level=info msg="StartContainer for \"8b03e7b85c0e45c73165be6aecabb87c0723520ccf19ac135bc2961e4e747861\""
Feb 12 19:45:16.741247 systemd[1]: Started cri-containerd-8b03e7b85c0e45c73165be6aecabb87c0723520ccf19ac135bc2961e4e747861.scope.
Feb 12 19:45:16.750885 systemd[1]: cri-containerd-8b03e7b85c0e45c73165be6aecabb87c0723520ccf19ac135bc2961e4e747861.scope: Deactivated successfully.
Feb 12 19:45:16.751127 systemd[1]: Stopped cri-containerd-8b03e7b85c0e45c73165be6aecabb87c0723520ccf19ac135bc2961e4e747861.scope.
Feb 12 19:45:16.766075 env[1119]: time="2024-02-12T19:45:16.766031314Z" level=info msg="shim disconnected" id=8b03e7b85c0e45c73165be6aecabb87c0723520ccf19ac135bc2961e4e747861
Feb 12 19:45:16.766242 env[1119]: time="2024-02-12T19:45:16.766077642Z" level=warning msg="cleaning up after shim disconnected" id=8b03e7b85c0e45c73165be6aecabb87c0723520ccf19ac135bc2961e4e747861 namespace=k8s.io
Feb 12 19:45:16.766242 env[1119]: time="2024-02-12T19:45:16.766086989Z" level=info msg="cleaning up dead shim"
Feb 12 19:45:16.781439 env[1119]: time="2024-02-12T19:45:16.781371953Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:45:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3819 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T19:45:16Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8b03e7b85c0e45c73165be6aecabb87c0723520ccf19ac135bc2961e4e747861/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 12 19:45:16.781861 env[1119]: time="2024-02-12T19:45:16.781691068Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed"
Feb 12 19:45:16.781977 env[1119]: time="2024-02-12T19:45:16.781835493Z" level=error msg="Failed to pipe stdout of container \"8b03e7b85c0e45c73165be6aecabb87c0723520ccf19ac135bc2961e4e747861\"" error="reading from a closed fifo"
Feb 12 19:45:16.782957 env[1119]: time="2024-02-12T19:45:16.782898560Z" level=error msg="Failed to pipe stderr of container \"8b03e7b85c0e45c73165be6aecabb87c0723520ccf19ac135bc2961e4e747861\"" error="reading from a closed fifo"
Feb 12 19:45:16.785691 env[1119]: time="2024-02-12T19:45:16.785628973Z" level=error msg="StartContainer for \"8b03e7b85c0e45c73165be6aecabb87c0723520ccf19ac135bc2961e4e747861\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 12 19:45:16.786423 kubelet[1955]: E0212 19:45:16.785995 1955 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8b03e7b85c0e45c73165be6aecabb87c0723520ccf19ac135bc2961e4e747861"
Feb 12 19:45:16.789975 kubelet[1955]: E0212 19:45:16.789941 1955 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 12 19:45:16.789975 kubelet[1955]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 12 19:45:16.789975 kubelet[1955]: rm /hostbin/cilium-mount
Feb 12 19:45:16.790074 kubelet[1955]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-k4m9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-kdktd_kube-system(ca8f8019-41cf-4493-8b40-251b48fd43f8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 12 19:45:16.790074 kubelet[1955]: E0212 19:45:16.790015 1955 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kdktd" podUID="ca8f8019-41cf-4493-8b40-251b48fd43f8"
Feb 12 19:45:17.453702 env[1119]: time="2024-02-12T19:45:17.453663926Z" level=info msg="StopPodSandbox for \"a32c95e6eb8d7af9f87a4d6dc3ccc85f389d96b0d564f3cd71dd163f5d092226\""
Feb 12 19:45:17.453893 env[1119]: time="2024-02-12T19:45:17.453716296Z" level=info msg="Container to stop \"8b03e7b85c0e45c73165be6aecabb87c0723520ccf19ac135bc2961e4e747861\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:45:17.459188 systemd[1]: cri-containerd-a32c95e6eb8d7af9f87a4d6dc3ccc85f389d96b0d564f3cd71dd163f5d092226.scope: Deactivated successfully.
Feb 12 19:45:17.478853 env[1119]: time="2024-02-12T19:45:17.478791011Z" level=info msg="shim disconnected" id=a32c95e6eb8d7af9f87a4d6dc3ccc85f389d96b0d564f3cd71dd163f5d092226
Feb 12 19:45:17.478853 env[1119]: time="2024-02-12T19:45:17.478841576Z" level=warning msg="cleaning up after shim disconnected" id=a32c95e6eb8d7af9f87a4d6dc3ccc85f389d96b0d564f3cd71dd163f5d092226 namespace=k8s.io
Feb 12 19:45:17.478853 env[1119]: time="2024-02-12T19:45:17.478850503Z" level=info msg="cleaning up dead shim"
Feb 12 19:45:17.485079 env[1119]: time="2024-02-12T19:45:17.485055593Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:45:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3850 runtime=io.containerd.runc.v2\n"
Feb 12 19:45:17.485323 env[1119]: time="2024-02-12T19:45:17.485289779Z" level=info msg="TearDown network for sandbox \"a32c95e6eb8d7af9f87a4d6dc3ccc85f389d96b0d564f3cd71dd163f5d092226\" successfully"
Feb 12 19:45:17.485323 env[1119]: time="2024-02-12T19:45:17.485319605Z" level=info msg="StopPodSandbox for \"a32c95e6eb8d7af9f87a4d6dc3ccc85f389d96b0d564f3cd71dd163f5d092226\" returns successfully"
Feb 12 19:45:17.545251 kubelet[1955]: I0212 19:45:17.545207 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-cilium-cgroup\") pod \"ca8f8019-41cf-4493-8b40-251b48fd43f8\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") "
Feb 12 19:45:17.545522 kubelet[1955]: I0212 19:45:17.545274 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca8f8019-41cf-4493-8b40-251b48fd43f8-cilium-config-path\") pod \"ca8f8019-41cf-4493-8b40-251b48fd43f8\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") "
Feb 12 19:45:17.545522 kubelet[1955]: I0212 19:45:17.545297 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ca8f8019-41cf-4493-8b40-251b48fd43f8" (UID: "ca8f8019-41cf-4493-8b40-251b48fd43f8"). InnerVolumeSpecName "cilium-cgroup".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:17.545522 kubelet[1955]: I0212 19:45:17.545306 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ca8f8019-41cf-4493-8b40-251b48fd43f8-hubble-tls\") pod \"ca8f8019-41cf-4493-8b40-251b48fd43f8\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " Feb 12 19:45:17.545522 kubelet[1955]: I0212 19:45:17.545375 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-cilium-run\") pod \"ca8f8019-41cf-4493-8b40-251b48fd43f8\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " Feb 12 19:45:17.545522 kubelet[1955]: I0212 19:45:17.545424 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ca8f8019-41cf-4493-8b40-251b48fd43f8-cilium-ipsec-secrets\") pod \"ca8f8019-41cf-4493-8b40-251b48fd43f8\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " Feb 12 19:45:17.545522 kubelet[1955]: I0212 19:45:17.545446 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-host-proc-sys-kernel\") pod \"ca8f8019-41cf-4493-8b40-251b48fd43f8\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " Feb 12 19:45:17.545522 kubelet[1955]: I0212 19:45:17.545431 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ca8f8019-41cf-4493-8b40-251b48fd43f8" (UID: "ca8f8019-41cf-4493-8b40-251b48fd43f8"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:17.545522 kubelet[1955]: I0212 19:45:17.545462 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-lib-modules\") pod \"ca8f8019-41cf-4493-8b40-251b48fd43f8\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " Feb 12 19:45:17.545522 kubelet[1955]: I0212 19:45:17.545477 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-hostproc\") pod \"ca8f8019-41cf-4493-8b40-251b48fd43f8\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " Feb 12 19:45:17.545522 kubelet[1955]: I0212 19:45:17.545490 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ca8f8019-41cf-4493-8b40-251b48fd43f8" (UID: "ca8f8019-41cf-4493-8b40-251b48fd43f8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:17.545522 kubelet[1955]: I0212 19:45:17.545497 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-cni-path\") pod \"ca8f8019-41cf-4493-8b40-251b48fd43f8\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " Feb 12 19:45:17.545522 kubelet[1955]: I0212 19:45:17.545509 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-cni-path" (OuterVolumeSpecName: "cni-path") pod "ca8f8019-41cf-4493-8b40-251b48fd43f8" (UID: "ca8f8019-41cf-4493-8b40-251b48fd43f8"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:17.545522 kubelet[1955]: I0212 19:45:17.545528 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ca8f8019-41cf-4493-8b40-251b48fd43f8" (UID: "ca8f8019-41cf-4493-8b40-251b48fd43f8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:17.545982 kubelet[1955]: I0212 19:45:17.545536 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-etc-cni-netd\") pod \"ca8f8019-41cf-4493-8b40-251b48fd43f8\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " Feb 12 19:45:17.545982 kubelet[1955]: I0212 19:45:17.545543 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-hostproc" (OuterVolumeSpecName: "hostproc") pod "ca8f8019-41cf-4493-8b40-251b48fd43f8" (UID: "ca8f8019-41cf-4493-8b40-251b48fd43f8"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:17.545982 kubelet[1955]: I0212 19:45:17.545559 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-xtables-lock\") pod \"ca8f8019-41cf-4493-8b40-251b48fd43f8\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " Feb 12 19:45:17.545982 kubelet[1955]: I0212 19:45:17.545577 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-bpf-maps\") pod \"ca8f8019-41cf-4493-8b40-251b48fd43f8\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " Feb 12 19:45:17.545982 kubelet[1955]: I0212 19:45:17.545602 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ca8f8019-41cf-4493-8b40-251b48fd43f8-clustermesh-secrets\") pod \"ca8f8019-41cf-4493-8b40-251b48fd43f8\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " Feb 12 19:45:17.545982 kubelet[1955]: I0212 19:45:17.545618 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-host-proc-sys-net\") pod \"ca8f8019-41cf-4493-8b40-251b48fd43f8\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " Feb 12 19:45:17.545982 kubelet[1955]: I0212 19:45:17.545638 1955 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4m9x\" (UniqueName: \"kubernetes.io/projected/ca8f8019-41cf-4493-8b40-251b48fd43f8-kube-api-access-k4m9x\") pod \"ca8f8019-41cf-4493-8b40-251b48fd43f8\" (UID: \"ca8f8019-41cf-4493-8b40-251b48fd43f8\") " Feb 12 19:45:17.545982 kubelet[1955]: I0212 19:45:17.545680 1955 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 12 19:45:17.545982 kubelet[1955]: I0212 19:45:17.545691 1955 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 12 19:45:17.545982 kubelet[1955]: I0212 19:45:17.545699 1955 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 12 19:45:17.545982 kubelet[1955]: I0212 19:45:17.545707 1955 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 12 19:45:17.545982 kubelet[1955]: I0212 19:45:17.545716 1955 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 12 19:45:17.545982 kubelet[1955]: I0212 19:45:17.545724 1955 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 12 19:45:17.545982 kubelet[1955]: I0212 19:45:17.545781 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ca8f8019-41cf-4493-8b40-251b48fd43f8" (UID: "ca8f8019-41cf-4493-8b40-251b48fd43f8"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:17.545982 kubelet[1955]: I0212 19:45:17.545803 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ca8f8019-41cf-4493-8b40-251b48fd43f8" (UID: "ca8f8019-41cf-4493-8b40-251b48fd43f8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:17.545982 kubelet[1955]: I0212 19:45:17.545817 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ca8f8019-41cf-4493-8b40-251b48fd43f8" (UID: "ca8f8019-41cf-4493-8b40-251b48fd43f8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:17.546352 kubelet[1955]: I0212 19:45:17.545990 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ca8f8019-41cf-4493-8b40-251b48fd43f8" (UID: "ca8f8019-41cf-4493-8b40-251b48fd43f8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:45:17.548111 kubelet[1955]: I0212 19:45:17.548081 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca8f8019-41cf-4493-8b40-251b48fd43f8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ca8f8019-41cf-4493-8b40-251b48fd43f8" (UID: "ca8f8019-41cf-4493-8b40-251b48fd43f8"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:45:17.548275 kubelet[1955]: I0212 19:45:17.548247 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca8f8019-41cf-4493-8b40-251b48fd43f8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ca8f8019-41cf-4493-8b40-251b48fd43f8" (UID: "ca8f8019-41cf-4493-8b40-251b48fd43f8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:45:17.548514 kubelet[1955]: I0212 19:45:17.548478 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca8f8019-41cf-4493-8b40-251b48fd43f8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ca8f8019-41cf-4493-8b40-251b48fd43f8" (UID: "ca8f8019-41cf-4493-8b40-251b48fd43f8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:45:17.548951 kubelet[1955]: I0212 19:45:17.548919 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca8f8019-41cf-4493-8b40-251b48fd43f8-kube-api-access-k4m9x" (OuterVolumeSpecName: "kube-api-access-k4m9x") pod "ca8f8019-41cf-4493-8b40-251b48fd43f8" (UID: "ca8f8019-41cf-4493-8b40-251b48fd43f8"). InnerVolumeSpecName "kube-api-access-k4m9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:45:17.550028 kubelet[1955]: I0212 19:45:17.549996 1955 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca8f8019-41cf-4493-8b40-251b48fd43f8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ca8f8019-41cf-4493-8b40-251b48fd43f8" (UID: "ca8f8019-41cf-4493-8b40-251b48fd43f8"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:45:17.646331 kubelet[1955]: I0212 19:45:17.646295 1955 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ca8f8019-41cf-4493-8b40-251b48fd43f8-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Feb 12 19:45:17.646331 kubelet[1955]: I0212 19:45:17.646324 1955 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 12 19:45:17.646331 kubelet[1955]: I0212 19:45:17.646333 1955 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 12 19:45:17.646490 kubelet[1955]: I0212 19:45:17.646342 1955 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 12 19:45:17.646490 kubelet[1955]: I0212 19:45:17.646351 1955 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ca8f8019-41cf-4493-8b40-251b48fd43f8-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 12 19:45:17.646490 kubelet[1955]: I0212 19:45:17.646359 1955 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ca8f8019-41cf-4493-8b40-251b48fd43f8-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 12 19:45:17.646490 kubelet[1955]: I0212 19:45:17.646368 1955 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-k4m9x\" (UniqueName: \"kubernetes.io/projected/ca8f8019-41cf-4493-8b40-251b48fd43f8-kube-api-access-k4m9x\") on node \"localhost\" DevicePath \"\"" Feb 12 19:45:17.646490 
kubelet[1955]: I0212 19:45:17.646376 1955 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca8f8019-41cf-4493-8b40-251b48fd43f8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 12 19:45:17.646490 kubelet[1955]: I0212 19:45:17.646405 1955 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ca8f8019-41cf-4493-8b40-251b48fd43f8-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 12 19:45:17.646557 systemd[1]: run-containerd-runc-k8s.io-a32c95e6eb8d7af9f87a4d6dc3ccc85f389d96b0d564f3cd71dd163f5d092226-runc.8y93PT.mount: Deactivated successfully. Feb 12 19:45:17.646653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a32c95e6eb8d7af9f87a4d6dc3ccc85f389d96b0d564f3cd71dd163f5d092226-rootfs.mount: Deactivated successfully. Feb 12 19:45:17.646701 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a32c95e6eb8d7af9f87a4d6dc3ccc85f389d96b0d564f3cd71dd163f5d092226-shm.mount: Deactivated successfully. Feb 12 19:45:17.646754 systemd[1]: var-lib-kubelet-pods-ca8f8019\x2d41cf\x2d4493\x2d8b40\x2d251b48fd43f8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk4m9x.mount: Deactivated successfully. Feb 12 19:45:17.646811 systemd[1]: var-lib-kubelet-pods-ca8f8019\x2d41cf\x2d4493\x2d8b40\x2d251b48fd43f8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 19:45:17.646860 systemd[1]: var-lib-kubelet-pods-ca8f8019\x2d41cf\x2d4493\x2d8b40\x2d251b48fd43f8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:45:17.646909 systemd[1]: var-lib-kubelet-pods-ca8f8019\x2d41cf\x2d4493\x2d8b40\x2d251b48fd43f8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:45:18.282350 systemd[1]: Removed slice kubepods-burstable-podca8f8019_41cf_4493_8b40_251b48fd43f8.slice. 
Feb 12 19:45:18.456576 kubelet[1955]: I0212 19:45:18.456530 1955 scope.go:117] "RemoveContainer" containerID="8b03e7b85c0e45c73165be6aecabb87c0723520ccf19ac135bc2961e4e747861" Feb 12 19:45:18.457895 env[1119]: time="2024-02-12T19:45:18.457833804Z" level=info msg="RemoveContainer for \"8b03e7b85c0e45c73165be6aecabb87c0723520ccf19ac135bc2961e4e747861\"" Feb 12 19:45:18.461276 env[1119]: time="2024-02-12T19:45:18.461244441Z" level=info msg="RemoveContainer for \"8b03e7b85c0e45c73165be6aecabb87c0723520ccf19ac135bc2961e4e747861\" returns successfully" Feb 12 19:45:18.539024 kubelet[1955]: I0212 19:45:18.538905 1955 topology_manager.go:215] "Topology Admit Handler" podUID="24a4e627-17f6-47fc-986e-feaf8034186a" podNamespace="kube-system" podName="cilium-w9wqq" Feb 12 19:45:18.539024 kubelet[1955]: E0212 19:45:18.538959 1955 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ca8f8019-41cf-4493-8b40-251b48fd43f8" containerName="mount-cgroup" Feb 12 19:45:18.539024 kubelet[1955]: I0212 19:45:18.538979 1955 memory_manager.go:346] "RemoveStaleState removing state" podUID="ca8f8019-41cf-4493-8b40-251b48fd43f8" containerName="mount-cgroup" Feb 12 19:45:18.544246 systemd[1]: Created slice kubepods-burstable-pod24a4e627_17f6_47fc_986e_feaf8034186a.slice. 
Feb 12 19:45:18.654014 kubelet[1955]: I0212 19:45:18.653965 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24a4e627-17f6-47fc-986e-feaf8034186a-cilium-config-path\") pod \"cilium-w9wqq\" (UID: \"24a4e627-17f6-47fc-986e-feaf8034186a\") " pod="kube-system/cilium-w9wqq" Feb 12 19:45:18.654379 kubelet[1955]: I0212 19:45:18.654037 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/24a4e627-17f6-47fc-986e-feaf8034186a-bpf-maps\") pod \"cilium-w9wqq\" (UID: \"24a4e627-17f6-47fc-986e-feaf8034186a\") " pod="kube-system/cilium-w9wqq" Feb 12 19:45:18.654379 kubelet[1955]: I0212 19:45:18.654057 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/24a4e627-17f6-47fc-986e-feaf8034186a-hostproc\") pod \"cilium-w9wqq\" (UID: \"24a4e627-17f6-47fc-986e-feaf8034186a\") " pod="kube-system/cilium-w9wqq" Feb 12 19:45:18.654379 kubelet[1955]: I0212 19:45:18.654074 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24a4e627-17f6-47fc-986e-feaf8034186a-etc-cni-netd\") pod \"cilium-w9wqq\" (UID: \"24a4e627-17f6-47fc-986e-feaf8034186a\") " pod="kube-system/cilium-w9wqq" Feb 12 19:45:18.654379 kubelet[1955]: I0212 19:45:18.654093 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/24a4e627-17f6-47fc-986e-feaf8034186a-hubble-tls\") pod \"cilium-w9wqq\" (UID: \"24a4e627-17f6-47fc-986e-feaf8034186a\") " pod="kube-system/cilium-w9wqq" Feb 12 19:45:18.654379 kubelet[1955]: I0212 19:45:18.654114 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/24a4e627-17f6-47fc-986e-feaf8034186a-cilium-run\") pod \"cilium-w9wqq\" (UID: \"24a4e627-17f6-47fc-986e-feaf8034186a\") " pod="kube-system/cilium-w9wqq" Feb 12 19:45:18.654379 kubelet[1955]: I0212 19:45:18.654131 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/24a4e627-17f6-47fc-986e-feaf8034186a-cni-path\") pod \"cilium-w9wqq\" (UID: \"24a4e627-17f6-47fc-986e-feaf8034186a\") " pod="kube-system/cilium-w9wqq" Feb 12 19:45:18.654379 kubelet[1955]: I0212 19:45:18.654153 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/24a4e627-17f6-47fc-986e-feaf8034186a-clustermesh-secrets\") pod \"cilium-w9wqq\" (UID: \"24a4e627-17f6-47fc-986e-feaf8034186a\") " pod="kube-system/cilium-w9wqq" Feb 12 19:45:18.654379 kubelet[1955]: I0212 19:45:18.654234 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/24a4e627-17f6-47fc-986e-feaf8034186a-host-proc-sys-kernel\") pod \"cilium-w9wqq\" (UID: \"24a4e627-17f6-47fc-986e-feaf8034186a\") " pod="kube-system/cilium-w9wqq" Feb 12 19:45:18.654379 kubelet[1955]: I0212 19:45:18.654286 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fc8t\" (UniqueName: \"kubernetes.io/projected/24a4e627-17f6-47fc-986e-feaf8034186a-kube-api-access-5fc8t\") pod \"cilium-w9wqq\" (UID: \"24a4e627-17f6-47fc-986e-feaf8034186a\") " pod="kube-system/cilium-w9wqq" Feb 12 19:45:18.654379 kubelet[1955]: I0212 19:45:18.654340 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/24a4e627-17f6-47fc-986e-feaf8034186a-xtables-lock\") pod \"cilium-w9wqq\" (UID: \"24a4e627-17f6-47fc-986e-feaf8034186a\") " pod="kube-system/cilium-w9wqq" Feb 12 19:45:18.654379 kubelet[1955]: I0212 19:45:18.654372 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/24a4e627-17f6-47fc-986e-feaf8034186a-host-proc-sys-net\") pod \"cilium-w9wqq\" (UID: \"24a4e627-17f6-47fc-986e-feaf8034186a\") " pod="kube-system/cilium-w9wqq" Feb 12 19:45:18.654662 kubelet[1955]: I0212 19:45:18.654441 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/24a4e627-17f6-47fc-986e-feaf8034186a-cilium-cgroup\") pod \"cilium-w9wqq\" (UID: \"24a4e627-17f6-47fc-986e-feaf8034186a\") " pod="kube-system/cilium-w9wqq" Feb 12 19:45:18.654662 kubelet[1955]: I0212 19:45:18.654469 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24a4e627-17f6-47fc-986e-feaf8034186a-lib-modules\") pod \"cilium-w9wqq\" (UID: \"24a4e627-17f6-47fc-986e-feaf8034186a\") " pod="kube-system/cilium-w9wqq" Feb 12 19:45:18.654662 kubelet[1955]: I0212 19:45:18.654493 1955 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/24a4e627-17f6-47fc-986e-feaf8034186a-cilium-ipsec-secrets\") pod \"cilium-w9wqq\" (UID: \"24a4e627-17f6-47fc-986e-feaf8034186a\") " pod="kube-system/cilium-w9wqq" Feb 12 19:45:18.847657 kubelet[1955]: E0212 19:45:18.847559 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:45:18.848506 env[1119]: time="2024-02-12T19:45:18.848472463Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w9wqq,Uid:24a4e627-17f6-47fc-986e-feaf8034186a,Namespace:kube-system,Attempt:0,}" Feb 12 19:45:18.860154 env[1119]: time="2024-02-12T19:45:18.860081376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:45:18.860154 env[1119]: time="2024-02-12T19:45:18.860136612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:45:18.860154 env[1119]: time="2024-02-12T19:45:18.860147713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:45:18.860467 env[1119]: time="2024-02-12T19:45:18.860421472Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e9dad6a1563cd182dc9a9c231c3689addc21fd3d9b570fbf92a7bd2b1d824c0 pid=3880 runtime=io.containerd.runc.v2 Feb 12 19:45:18.872742 systemd[1]: Started cri-containerd-9e9dad6a1563cd182dc9a9c231c3689addc21fd3d9b570fbf92a7bd2b1d824c0.scope. 
Feb 12 19:45:18.889461 env[1119]: time="2024-02-12T19:45:18.889424676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w9wqq,Uid:24a4e627-17f6-47fc-986e-feaf8034186a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e9dad6a1563cd182dc9a9c231c3689addc21fd3d9b570fbf92a7bd2b1d824c0\"" Feb 12 19:45:18.890327 kubelet[1955]: E0212 19:45:18.890308 1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:45:18.892288 env[1119]: time="2024-02-12T19:45:18.892255751Z" level=info msg="CreateContainer within sandbox \"9e9dad6a1563cd182dc9a9c231c3689addc21fd3d9b570fbf92a7bd2b1d824c0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:45:18.902480 env[1119]: time="2024-02-12T19:45:18.902437084Z" level=info msg="CreateContainer within sandbox \"9e9dad6a1563cd182dc9a9c231c3689addc21fd3d9b570fbf92a7bd2b1d824c0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1cb71239c87bda13b2db03515cc866f3a4a95ea25ff7609e83de2e6f15a619b2\"" Feb 12 19:45:18.902813 env[1119]: time="2024-02-12T19:45:18.902781037Z" level=info msg="StartContainer for \"1cb71239c87bda13b2db03515cc866f3a4a95ea25ff7609e83de2e6f15a619b2\"" Feb 12 19:45:18.916282 systemd[1]: Started cri-containerd-1cb71239c87bda13b2db03515cc866f3a4a95ea25ff7609e83de2e6f15a619b2.scope. Feb 12 19:45:18.939002 env[1119]: time="2024-02-12T19:45:18.938952492Z" level=info msg="StartContainer for \"1cb71239c87bda13b2db03515cc866f3a4a95ea25ff7609e83de2e6f15a619b2\" returns successfully" Feb 12 19:45:18.946022 systemd[1]: cri-containerd-1cb71239c87bda13b2db03515cc866f3a4a95ea25ff7609e83de2e6f15a619b2.scope: Deactivated successfully. 
Feb 12 19:45:18.972258 env[1119]: time="2024-02-12T19:45:18.972201467Z" level=info msg="shim disconnected" id=1cb71239c87bda13b2db03515cc866f3a4a95ea25ff7609e83de2e6f15a619b2
Feb 12 19:45:18.972258 env[1119]: time="2024-02-12T19:45:18.972258335Z" level=warning msg="cleaning up after shim disconnected" id=1cb71239c87bda13b2db03515cc866f3a4a95ea25ff7609e83de2e6f15a619b2 namespace=k8s.io
Feb 12 19:45:18.972258 env[1119]: time="2024-02-12T19:45:18.972267282Z" level=info msg="cleaning up dead shim"
Feb 12 19:45:18.981028 env[1119]: time="2024-02-12T19:45:18.980980948Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:45:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3964 runtime=io.containerd.runc.v2\n"
Feb 12 19:45:19.459745 kubelet[1955]: E0212 19:45:19.459554    1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:45:19.461644 env[1119]: time="2024-02-12T19:45:19.461575596Z" level=info msg="CreateContainer within sandbox \"9e9dad6a1563cd182dc9a9c231c3689addc21fd3d9b570fbf92a7bd2b1d824c0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 19:45:19.471557 env[1119]: time="2024-02-12T19:45:19.471508971Z" level=info msg="CreateContainer within sandbox \"9e9dad6a1563cd182dc9a9c231c3689addc21fd3d9b570fbf92a7bd2b1d824c0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fd300e684ab74a699aa23a1b3e37003e6fe6fd1a497d950fd17f7608b5990326\""
Feb 12 19:45:19.471932 env[1119]: time="2024-02-12T19:45:19.471895876Z" level=info msg="StartContainer for \"fd300e684ab74a699aa23a1b3e37003e6fe6fd1a497d950fd17f7608b5990326\""
Feb 12 19:45:19.483728 systemd[1]: Started cri-containerd-fd300e684ab74a699aa23a1b3e37003e6fe6fd1a497d950fd17f7608b5990326.scope.
Feb 12 19:45:19.505099 env[1119]: time="2024-02-12T19:45:19.503732791Z" level=info msg="StartContainer for \"fd300e684ab74a699aa23a1b3e37003e6fe6fd1a497d950fd17f7608b5990326\" returns successfully"
Feb 12 19:45:19.507996 systemd[1]: cri-containerd-fd300e684ab74a699aa23a1b3e37003e6fe6fd1a497d950fd17f7608b5990326.scope: Deactivated successfully.
Feb 12 19:45:19.525484 env[1119]: time="2024-02-12T19:45:19.525430830Z" level=info msg="shim disconnected" id=fd300e684ab74a699aa23a1b3e37003e6fe6fd1a497d950fd17f7608b5990326
Feb 12 19:45:19.525484 env[1119]: time="2024-02-12T19:45:19.525477448Z" level=warning msg="cleaning up after shim disconnected" id=fd300e684ab74a699aa23a1b3e37003e6fe6fd1a497d950fd17f7608b5990326 namespace=k8s.io
Feb 12 19:45:19.525624 env[1119]: time="2024-02-12T19:45:19.525488980Z" level=info msg="cleaning up dead shim"
Feb 12 19:45:19.531283 env[1119]: time="2024-02-12T19:45:19.531249992Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:45:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4024 runtime=io.containerd.runc.v2\n"
Feb 12 19:45:19.871909 kubelet[1955]: W0212 19:45:19.871802    1955 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca8f8019_41cf_4493_8b40_251b48fd43f8.slice/cri-containerd-8b03e7b85c0e45c73165be6aecabb87c0723520ccf19ac135bc2961e4e747861.scope WatchSource:0}: container "8b03e7b85c0e45c73165be6aecabb87c0723520ccf19ac135bc2961e4e747861" in namespace "k8s.io": not found
Feb 12 19:45:20.277981 kubelet[1955]: I0212 19:45:20.277949    1955 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ca8f8019-41cf-4493-8b40-251b48fd43f8" path="/var/lib/kubelet/pods/ca8f8019-41cf-4493-8b40-251b48fd43f8/volumes"
Feb 12 19:45:20.329236 kubelet[1955]: E0212 19:45:20.329214    1955 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 19:45:20.463793 kubelet[1955]: E0212 19:45:20.463769    1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:45:20.465267 env[1119]: time="2024-02-12T19:45:20.465232706Z" level=info msg="CreateContainer within sandbox \"9e9dad6a1563cd182dc9a9c231c3689addc21fd3d9b570fbf92a7bd2b1d824c0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 19:45:20.481520 env[1119]: time="2024-02-12T19:45:20.481469550Z" level=info msg="CreateContainer within sandbox \"9e9dad6a1563cd182dc9a9c231c3689addc21fd3d9b570fbf92a7bd2b1d824c0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f7907897f5dc732672979c1c5c9d86ffa07b99fca2878db5bdc44993aacfea7a\""
Feb 12 19:45:20.481969 env[1119]: time="2024-02-12T19:45:20.481944291Z" level=info msg="StartContainer for \"f7907897f5dc732672979c1c5c9d86ffa07b99fca2878db5bdc44993aacfea7a\""
Feb 12 19:45:20.496527 systemd[1]: Started cri-containerd-f7907897f5dc732672979c1c5c9d86ffa07b99fca2878db5bdc44993aacfea7a.scope.
Feb 12 19:45:20.517481 env[1119]: time="2024-02-12T19:45:20.517426881Z" level=info msg="StartContainer for \"f7907897f5dc732672979c1c5c9d86ffa07b99fca2878db5bdc44993aacfea7a\" returns successfully"
Feb 12 19:45:20.519329 systemd[1]: cri-containerd-f7907897f5dc732672979c1c5c9d86ffa07b99fca2878db5bdc44993aacfea7a.scope: Deactivated successfully.
Feb 12 19:45:20.537941 env[1119]: time="2024-02-12T19:45:20.537842842Z" level=info msg="shim disconnected" id=f7907897f5dc732672979c1c5c9d86ffa07b99fca2878db5bdc44993aacfea7a
Feb 12 19:45:20.537941 env[1119]: time="2024-02-12T19:45:20.537892646Z" level=warning msg="cleaning up after shim disconnected" id=f7907897f5dc732672979c1c5c9d86ffa07b99fca2878db5bdc44993aacfea7a namespace=k8s.io
Feb 12 19:45:20.537941 env[1119]: time="2024-02-12T19:45:20.537903076Z" level=info msg="cleaning up dead shim"
Feb 12 19:45:20.543832 env[1119]: time="2024-02-12T19:45:20.543790479Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:45:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4080 runtime=io.containerd.runc.v2\n"
Feb 12 19:45:20.759021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7907897f5dc732672979c1c5c9d86ffa07b99fca2878db5bdc44993aacfea7a-rootfs.mount: Deactivated successfully.
Feb 12 19:45:21.467638 kubelet[1955]: E0212 19:45:21.467613    1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:45:21.470687 env[1119]: time="2024-02-12T19:45:21.469925993Z" level=info msg="CreateContainer within sandbox \"9e9dad6a1563cd182dc9a9c231c3689addc21fd3d9b570fbf92a7bd2b1d824c0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 19:45:21.480970 env[1119]: time="2024-02-12T19:45:21.480924609Z" level=info msg="CreateContainer within sandbox \"9e9dad6a1563cd182dc9a9c231c3689addc21fd3d9b570fbf92a7bd2b1d824c0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0e6587157d521ea9b70b4737f2202278d8bb445dc849a36b10ee42b442a002f7\""
Feb 12 19:45:21.481396 env[1119]: time="2024-02-12T19:45:21.481359254Z" level=info msg="StartContainer for \"0e6587157d521ea9b70b4737f2202278d8bb445dc849a36b10ee42b442a002f7\""
Feb 12 19:45:21.495057 systemd[1]: Started cri-containerd-0e6587157d521ea9b70b4737f2202278d8bb445dc849a36b10ee42b442a002f7.scope.
Feb 12 19:45:21.512711 systemd[1]: cri-containerd-0e6587157d521ea9b70b4737f2202278d8bb445dc849a36b10ee42b442a002f7.scope: Deactivated successfully.
Feb 12 19:45:21.515466 env[1119]: time="2024-02-12T19:45:21.515415216Z" level=info msg="StartContainer for \"0e6587157d521ea9b70b4737f2202278d8bb445dc849a36b10ee42b442a002f7\" returns successfully"
Feb 12 19:45:21.533403 env[1119]: time="2024-02-12T19:45:21.533347715Z" level=info msg="shim disconnected" id=0e6587157d521ea9b70b4737f2202278d8bb445dc849a36b10ee42b442a002f7
Feb 12 19:45:21.533403 env[1119]: time="2024-02-12T19:45:21.533402509Z" level=warning msg="cleaning up after shim disconnected" id=0e6587157d521ea9b70b4737f2202278d8bb445dc849a36b10ee42b442a002f7 namespace=k8s.io
Feb 12 19:45:21.533616 env[1119]: time="2024-02-12T19:45:21.533412017Z" level=info msg="cleaning up dead shim"
Feb 12 19:45:21.538788 env[1119]: time="2024-02-12T19:45:21.538763565Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:45:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4136 runtime=io.containerd.runc.v2\n"
Feb 12 19:45:21.759132 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e6587157d521ea9b70b4737f2202278d8bb445dc849a36b10ee42b442a002f7-rootfs.mount: Deactivated successfully.
Feb 12 19:45:22.236094 kubelet[1955]: I0212 19:45:22.236068    1955 setters.go:552] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-12T19:45:22Z","lastTransitionTime":"2024-02-12T19:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 12 19:45:22.471702 kubelet[1955]: E0212 19:45:22.471677    1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:45:22.473484 env[1119]: time="2024-02-12T19:45:22.473451187Z" level=info msg="CreateContainer within sandbox \"9e9dad6a1563cd182dc9a9c231c3689addc21fd3d9b570fbf92a7bd2b1d824c0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 19:45:22.491409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1056922939.mount: Deactivated successfully.
Feb 12 19:45:22.492882 env[1119]: time="2024-02-12T19:45:22.492844567Z" level=info msg="CreateContainer within sandbox \"9e9dad6a1563cd182dc9a9c231c3689addc21fd3d9b570fbf92a7bd2b1d824c0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"848ed9c372b44941aae15578e1ca71b133bd446c2f479e8a79576af67d0036ba\""
Feb 12 19:45:22.493269 env[1119]: time="2024-02-12T19:45:22.493245599Z" level=info msg="StartContainer for \"848ed9c372b44941aae15578e1ca71b133bd446c2f479e8a79576af67d0036ba\""
Feb 12 19:45:22.507622 systemd[1]: Started cri-containerd-848ed9c372b44941aae15578e1ca71b133bd446c2f479e8a79576af67d0036ba.scope.
Feb 12 19:45:22.529293 env[1119]: time="2024-02-12T19:45:22.529240685Z" level=info msg="StartContainer for \"848ed9c372b44941aae15578e1ca71b133bd446c2f479e8a79576af67d0036ba\" returns successfully"
Feb 12 19:45:22.759175 systemd[1]: run-containerd-runc-k8s.io-848ed9c372b44941aae15578e1ca71b133bd446c2f479e8a79576af67d0036ba-runc.3A2j15.mount: Deactivated successfully.
Feb 12 19:45:22.773413 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 12 19:45:22.980134 kubelet[1955]: W0212 19:45:22.980091    1955 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24a4e627_17f6_47fc_986e_feaf8034186a.slice/cri-containerd-1cb71239c87bda13b2db03515cc866f3a4a95ea25ff7609e83de2e6f15a619b2.scope WatchSource:0}: task 1cb71239c87bda13b2db03515cc866f3a4a95ea25ff7609e83de2e6f15a619b2 not found: not found
Feb 12 19:45:23.475982 kubelet[1955]: E0212 19:45:23.475952    1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:45:24.848635 kubelet[1955]: E0212 19:45:24.848604    1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:45:25.191805 systemd-networkd[1018]: lxc_health: Link UP
Feb 12 19:45:25.198804 systemd-networkd[1018]: lxc_health: Gained carrier
Feb 12 19:45:25.199486 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 19:45:25.276796 kubelet[1955]: E0212 19:45:25.276760    1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:45:26.085665 kubelet[1955]: W0212 19:45:26.085630    1955 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24a4e627_17f6_47fc_986e_feaf8034186a.slice/cri-containerd-fd300e684ab74a699aa23a1b3e37003e6fe6fd1a497d950fd17f7608b5990326.scope WatchSource:0}: task fd300e684ab74a699aa23a1b3e37003e6fe6fd1a497d950fd17f7608b5990326 not found: not found
Feb 12 19:45:26.256538 systemd-networkd[1018]: lxc_health: Gained IPv6LL
Feb 12 19:45:26.849145 kubelet[1955]: E0212 19:45:26.849115    1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:45:26.861296 kubelet[1955]: I0212 19:45:26.861268    1955 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-w9wqq" podStartSLOduration=8.86123459 podCreationTimestamp="2024-02-12 19:45:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:23.486270971 +0000 UTC m=+93.321191263" watchObservedRunningTime="2024-02-12 19:45:26.86123459 +0000 UTC m=+96.696154882"
Feb 12 19:45:27.482326 kubelet[1955]: E0212 19:45:27.482285    1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:45:28.483573 kubelet[1955]: E0212 19:45:28.483542    1955 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:45:29.196740 kubelet[1955]: W0212 19:45:29.196699    1955 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24a4e627_17f6_47fc_986e_feaf8034186a.slice/cri-containerd-f7907897f5dc732672979c1c5c9d86ffa07b99fca2878db5bdc44993aacfea7a.scope WatchSource:0}: task f7907897f5dc732672979c1c5c9d86ffa07b99fca2878db5bdc44993aacfea7a not found: not found
Feb 12 19:45:31.211202 sshd[3742]: pam_unix(sshd:session): session closed for user core
Feb 12 19:45:31.213327 systemd[1]: sshd@25-10.0.0.136:22-10.0.0.1:47916.service: Deactivated successfully.
Feb 12 19:45:31.213981 systemd[1]: session-26.scope: Deactivated successfully.
Feb 12 19:45:31.214618 systemd-logind[1107]: Session 26 logged out. Waiting for processes to exit.
Feb 12 19:45:31.215340 systemd-logind[1107]: Removed session 26.
Feb 12 19:45:32.305894 kubelet[1955]: W0212 19:45:32.305857    1955 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24a4e627_17f6_47fc_986e_feaf8034186a.slice/cri-containerd-0e6587157d521ea9b70b4737f2202278d8bb445dc849a36b10ee42b442a002f7.scope WatchSource:0}: task 0e6587157d521ea9b70b4737f2202278d8bb445dc849a36b10ee42b442a002f7 not found: not found