Feb 12 19:32:30.792388 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 19:32:30.792407 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 19:32:30.792417 kernel: BIOS-provided physical RAM map:
Feb 12 19:32:30.792422 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 12 19:32:30.792428 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 12 19:32:30.792433 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 12 19:32:30.792439 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 12 19:32:30.792445 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 12 19:32:30.792450 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 12 19:32:30.792457 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 12 19:32:30.792463 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Feb 12 19:32:30.792468 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb 12 19:32:30.792474 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 12 19:32:30.792482 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 12 19:32:30.792491 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 12 19:32:30.792501 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 12 19:32:30.792509 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 12 19:32:30.792517 kernel: NX (Execute Disable) protection: active
Feb 12 19:32:30.792525 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable
Feb 12 19:32:30.792532 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable
Feb 12 19:32:30.792540 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable
Feb 12 19:32:30.792548 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable
Feb 12 19:32:30.792556 kernel: extended physical RAM map:
Feb 12 19:32:30.792562 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 12 19:32:30.792567 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 12 19:32:30.792575 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 12 19:32:30.792581 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 12 19:32:30.792587 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 12 19:32:30.792593 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 12 19:32:30.792600 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 12 19:32:30.792608 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b1aa017] usable
Feb 12 19:32:30.792616 kernel: reserve setup_data: [mem 0x000000009b1aa018-0x000000009b1e6e57] usable
Feb 12 19:32:30.792623 kernel: reserve setup_data: [mem 0x000000009b1e6e58-0x000000009b3f7017] usable
Feb 12 19:32:30.792631 kernel: reserve setup_data: [mem 0x000000009b3f7018-0x000000009b400c57] usable
Feb 12 19:32:30.792639 kernel: reserve setup_data: [mem 0x000000009b400c58-0x000000009c8eefff] usable
Feb 12 19:32:30.792647 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb 12 19:32:30.792656 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 12 19:32:30.792664 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 12 19:32:30.792669 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 12 19:32:30.792675 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 12 19:32:30.792687 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 12 19:32:30.792694 kernel: efi: EFI v2.70 by EDK II
Feb 12 19:32:30.792703 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018
Feb 12 19:32:30.792712 kernel: random: crng init done
Feb 12 19:32:30.792720 kernel: SMBIOS 2.8 present.
Feb 12 19:32:30.792729 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Feb 12 19:32:30.792737 kernel: Hypervisor detected: KVM
Feb 12 19:32:30.792745 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 19:32:30.792754 kernel: kvm-clock: cpu 0, msr 12faa001, primary cpu clock
Feb 12 19:32:30.792762 kernel: kvm-clock: using sched offset of 4244737039 cycles
Feb 12 19:32:30.792772 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 19:32:30.792780 kernel: tsc: Detected 2794.750 MHz processor
Feb 12 19:32:30.792790 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 19:32:30.792797 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 19:32:30.792803 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Feb 12 19:32:30.792810 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 19:32:30.792816 kernel: Using GB pages for direct mapping
Feb 12 19:32:30.792823 kernel: Secure boot disabled
Feb 12 19:32:30.792829 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:32:30.792836 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 12 19:32:30.792842 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Feb 12 19:32:30.792850 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:32:30.792857 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:32:30.792863 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 12 19:32:30.792869 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:32:30.792876 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:32:30.792882 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:32:30.792891 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 12 19:32:30.792899 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Feb 12 19:32:30.792907 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Feb 12 19:32:30.792917 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 12 19:32:30.792923 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Feb 12 19:32:30.792930 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Feb 12 19:32:30.792936 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Feb 12 19:32:30.792942 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Feb 12 19:32:30.792949 kernel: No NUMA configuration found
Feb 12 19:32:30.792956 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Feb 12 19:32:30.792962 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Feb 12 19:32:30.792969 kernel: Zone ranges:
Feb 12 19:32:30.792976 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 19:32:30.792983 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Feb 12 19:32:30.792989 kernel: Normal empty
Feb 12 19:32:30.792996 kernel: Movable zone start for each node
Feb 12 19:32:30.793002 kernel: Early memory node ranges
Feb 12 19:32:30.793008 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 12 19:32:30.793015 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 12 19:32:30.793021 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 12 19:32:30.793028 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Feb 12 19:32:30.793036 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Feb 12 19:32:30.793042 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Feb 12 19:32:30.793049 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Feb 12 19:32:30.793055 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 19:32:30.793061 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 12 19:32:30.793068 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 12 19:32:30.793074 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 19:32:30.793081 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Feb 12 19:32:30.793087 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Feb 12 19:32:30.793095 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Feb 12 19:32:30.793102 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 12 19:32:30.793108 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 19:32:30.793114 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 12 19:32:30.793121 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 12 19:32:30.793140 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 19:32:30.793146 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 19:32:30.793153 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 19:32:30.793159 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 19:32:30.793167 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 19:32:30.793173 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 12 19:32:30.793180 kernel: TSC deadline timer available
Feb 12 19:32:30.793186 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 12 19:32:30.793192 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 12 19:32:30.793199 kernel: kvm-guest: setup PV sched yield
Feb 12 19:32:30.793205 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Feb 12 19:32:30.793211 kernel: Booting paravirtualized kernel on KVM
Feb 12 19:32:30.793218 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 19:32:30.793225 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb 12 19:32:30.793241 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb 12 19:32:30.793250 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb 12 19:32:30.793263 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 12 19:32:30.793270 kernel: kvm-guest: setup async PF for cpu 0
Feb 12 19:32:30.793277 kernel: kvm-guest: stealtime: cpu 0, msr 9ae1c0c0
Feb 12 19:32:30.793284 kernel: kvm-guest: PV spinlocks enabled
Feb 12 19:32:30.793291 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 12 19:32:30.793297 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Feb 12 19:32:30.793304 kernel: Policy zone: DMA32
Feb 12 19:32:30.793312 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 19:32:30.793319 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:32:30.793327 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 19:32:30.793334 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 19:32:30.793341 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:32:30.793348 kernel: Memory: 2400436K/2567000K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 166304K reserved, 0K cma-reserved)
Feb 12 19:32:30.793356 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 12 19:32:30.793363 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 19:32:30.793370 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 19:32:30.793377 kernel: rcu: Hierarchical RCU implementation.
Feb 12 19:32:30.793384 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:32:30.793391 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 12 19:32:30.793398 kernel: Rude variant of Tasks RCU enabled.
Feb 12 19:32:30.793405 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 19:32:30.793412 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:32:30.793418 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 12 19:32:30.793427 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 12 19:32:30.793433 kernel: Console: colour dummy device 80x25
Feb 12 19:32:30.793440 kernel: printk: console [ttyS0] enabled
Feb 12 19:32:30.793447 kernel: ACPI: Core revision 20210730
Feb 12 19:32:30.793454 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 12 19:32:30.793461 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 19:32:30.793467 kernel: x2apic enabled
Feb 12 19:32:30.793474 kernel: Switched APIC routing to physical x2apic.
Feb 12 19:32:30.793481 kernel: kvm-guest: setup PV IPIs
Feb 12 19:32:30.793489 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 12 19:32:30.793495 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 12 19:32:30.793502 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 12 19:32:30.793509 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 12 19:32:30.793516 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 12 19:32:30.793523 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 12 19:32:30.793530 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 19:32:30.793536 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 19:32:30.793543 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 19:32:30.793554 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 19:32:30.793561 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 12 19:32:30.793567 kernel: RETBleed: Mitigation: untrained return thunk
Feb 12 19:32:30.793574 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 12 19:32:30.793581 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 12 19:32:30.793588 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 19:32:30.793595 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 19:32:30.793602 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 19:32:30.793609 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 12 19:32:30.793617 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 12 19:32:30.793624 kernel: Freeing SMP alternatives memory: 32K
Feb 12 19:32:30.793631 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:32:30.793637 kernel: LSM: Security Framework initializing
Feb 12 19:32:30.793644 kernel: SELinux: Initializing.
Feb 12 19:32:30.793651 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:32:30.793658 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:32:30.793664 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 12 19:32:30.793673 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 12 19:32:30.793680 kernel: ... version: 0
Feb 12 19:32:30.793686 kernel: ... bit width: 48
Feb 12 19:32:30.793693 kernel: ... generic registers: 6
Feb 12 19:32:30.793699 kernel: ... value mask: 0000ffffffffffff
Feb 12 19:32:30.793706 kernel: ... max period: 00007fffffffffff
Feb 12 19:32:30.793713 kernel: ... fixed-purpose events: 0
Feb 12 19:32:30.793722 kernel: ... event mask: 000000000000003f
Feb 12 19:32:30.793729 kernel: signal: max sigframe size: 1776
Feb 12 19:32:30.793735 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:32:30.793743 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:32:30.793750 kernel: x86: Booting SMP configuration:
Feb 12 19:32:30.793757 kernel: .... node #0, CPUs: #1
Feb 12 19:32:30.793763 kernel: kvm-clock: cpu 1, msr 12faa041, secondary cpu clock
Feb 12 19:32:30.793770 kernel: kvm-guest: setup async PF for cpu 1
Feb 12 19:32:30.793777 kernel: kvm-guest: stealtime: cpu 1, msr 9ae9c0c0
Feb 12 19:32:30.793784 kernel: #2
Feb 12 19:32:30.793791 kernel: kvm-clock: cpu 2, msr 12faa081, secondary cpu clock
Feb 12 19:32:30.793797 kernel: kvm-guest: setup async PF for cpu 2
Feb 12 19:32:30.793805 kernel: kvm-guest: stealtime: cpu 2, msr 9af1c0c0
Feb 12 19:32:30.793812 kernel: #3
Feb 12 19:32:30.793819 kernel: kvm-clock: cpu 3, msr 12faa0c1, secondary cpu clock
Feb 12 19:32:30.793825 kernel: kvm-guest: setup async PF for cpu 3
Feb 12 19:32:30.793832 kernel: kvm-guest: stealtime: cpu 3, msr 9af9c0c0
Feb 12 19:32:30.793839 kernel: smp: Brought up 1 node, 4 CPUs
Feb 12 19:32:30.793845 kernel: smpboot: Max logical packages: 1
Feb 12 19:32:30.793852 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 12 19:32:30.793859 kernel: devtmpfs: initialized
Feb 12 19:32:30.793867 kernel: x86/mm: Memory block size: 128MB
Feb 12 19:32:30.793874 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 12 19:32:30.793881 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 12 19:32:30.793888 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Feb 12 19:32:30.793895 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 12 19:32:30.793902 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 12 19:32:30.793908 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:32:30.793915 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 12 19:32:30.793922 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:32:30.793930 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:32:30.793937 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:32:30.793944 kernel: audit: type=2000 audit(1707766350.412:1): state=initialized audit_enabled=0 res=1
Feb 12 19:32:30.793950 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:32:30.793957 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 19:32:30.793964 kernel: cpuidle: using governor menu
Feb 12 19:32:30.793971 kernel: ACPI: bus type PCI registered
Feb 12 19:32:30.793978 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:32:30.793984 kernel: dca service started, version 1.12.1
Feb 12 19:32:30.793992 kernel: PCI: Using configuration type 1 for base access
Feb 12 19:32:30.793999 kernel: PCI: Using configuration type 1 for extended access
Feb 12 19:32:30.794006 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 19:32:30.794013 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 19:32:30.794019 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:32:30.794026 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:32:30.794033 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:32:30.794039 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:32:30.794046 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:32:30.794054 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:32:30.794061 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:32:30.794068 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:32:30.794075 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:32:30.794081 kernel: ACPI: Interpreter enabled
Feb 12 19:32:30.794088 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 12 19:32:30.794095 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 19:32:30.794104 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 19:32:30.794111 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 12 19:32:30.794121 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 19:32:30.794310 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 19:32:30.794322 kernel: acpiphp: Slot [3] registered
Feb 12 19:32:30.794329 kernel: acpiphp: Slot [4] registered
Feb 12 19:32:30.794336 kernel: acpiphp: Slot [5] registered
Feb 12 19:32:30.794342 kernel: acpiphp: Slot [6] registered
Feb 12 19:32:30.794349 kernel: acpiphp: Slot [7] registered
Feb 12 19:32:30.794356 kernel: acpiphp: Slot [8] registered
Feb 12 19:32:30.794362 kernel: acpiphp: Slot [9] registered
Feb 12 19:32:30.794371 kernel: acpiphp: Slot [10] registered
Feb 12 19:32:30.794378 kernel: acpiphp: Slot [11] registered
Feb 12 19:32:30.794385 kernel: acpiphp: Slot [12] registered
Feb 12 19:32:30.794391 kernel: acpiphp: Slot [13] registered
Feb 12 19:32:30.794398 kernel: acpiphp: Slot [14] registered
Feb 12 19:32:30.794405 kernel: acpiphp: Slot [15] registered
Feb 12 19:32:30.794411 kernel: acpiphp: Slot [16] registered
Feb 12 19:32:30.794418 kernel: acpiphp: Slot [17] registered
Feb 12 19:32:30.794424 kernel: acpiphp: Slot [18] registered
Feb 12 19:32:30.794432 kernel: acpiphp: Slot [19] registered
Feb 12 19:32:30.794439 kernel: acpiphp: Slot [20] registered
Feb 12 19:32:30.794446 kernel: acpiphp: Slot [21] registered
Feb 12 19:32:30.794452 kernel: acpiphp: Slot [22] registered
Feb 12 19:32:30.794459 kernel: acpiphp: Slot [23] registered
Feb 12 19:32:30.794466 kernel: acpiphp: Slot [24] registered
Feb 12 19:32:30.794472 kernel: acpiphp: Slot [25] registered
Feb 12 19:32:30.794479 kernel: acpiphp: Slot [26] registered
Feb 12 19:32:30.794485 kernel: acpiphp: Slot [27] registered
Feb 12 19:32:30.794492 kernel: acpiphp: Slot [28] registered
Feb 12 19:32:30.794500 kernel: acpiphp: Slot [29] registered
Feb 12 19:32:30.794507 kernel: acpiphp: Slot [30] registered
Feb 12 19:32:30.794514 kernel: acpiphp: Slot [31] registered
Feb 12 19:32:30.794520 kernel: PCI host bridge to bus 0000:00
Feb 12 19:32:30.794607 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 12 19:32:30.794670 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 12 19:32:30.794747 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 19:32:30.794811 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb 12 19:32:30.794873 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Feb 12 19:32:30.794933 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 19:32:30.795031 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 19:32:30.795115 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 12 19:32:30.795228 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 12 19:32:30.795314 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Feb 12 19:32:30.795406 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 12 19:32:30.795496 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 12 19:32:30.795589 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 12 19:32:30.795660 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 12 19:32:30.795749 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 12 19:32:30.795820 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 12 19:32:30.795890 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 12 19:32:30.795972 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb 12 19:32:30.796080 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 12 19:32:30.796180 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Feb 12 19:32:30.796302 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 12 19:32:30.796372 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Feb 12 19:32:30.796438 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 12 19:32:30.796523 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 19:32:30.796591 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf]
Feb 12 19:32:30.796667 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 12 19:32:30.796736 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Feb 12 19:32:30.796811 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 12 19:32:30.796879 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 12 19:32:30.796947 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 12 19:32:30.797017 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Feb 12 19:32:30.797100 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb 12 19:32:30.797184 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 12 19:32:30.797265 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Feb 12 19:32:30.797334 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Feb 12 19:32:30.797429 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 12 19:32:30.797440 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 19:32:30.797450 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 19:32:30.797460 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 19:32:30.797467 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 19:32:30.797474 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 19:32:30.797481 kernel: iommu: Default domain type: Translated
Feb 12 19:32:30.797488 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 19:32:30.797558 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 12 19:32:30.797626 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 12 19:32:30.797695 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 12 19:32:30.797706 kernel: vgaarb: loaded
Feb 12 19:32:30.797713 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:32:30.797720 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 19:32:30.797727 kernel: PTP clock support registered
Feb 12 19:32:30.797734 kernel: Registered efivars operations
Feb 12 19:32:30.797741 kernel: PCI: Using ACPI for IRQ routing
Feb 12 19:32:30.797747 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 19:32:30.797754 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 12 19:32:30.797761 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Feb 12 19:32:30.797769 kernel: e820: reserve RAM buffer [mem 0x9b1aa018-0x9bffffff]
Feb 12 19:32:30.797776 kernel: e820: reserve RAM buffer [mem 0x9b3f7018-0x9bffffff]
Feb 12 19:32:30.797783 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Feb 12 19:32:30.797789 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Feb 12 19:32:30.797796 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 12 19:32:30.797803 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 12 19:32:30.797810 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 19:32:30.797816 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:32:30.797823 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:32:30.797831 kernel: pnp: PnP ACPI init
Feb 12 19:32:30.797919 kernel: pnp 00:02: [dma 2]
Feb 12 19:32:30.797930 kernel: pnp: PnP ACPI: found 6 devices
Feb 12 19:32:30.797937 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 19:32:30.797944 kernel: NET: Registered PF_INET protocol family
Feb 12 19:32:30.797951 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 19:32:30.797958 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 19:32:30.797965 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:32:30.797974 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 19:32:30.797981 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 19:32:30.797988 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 19:32:30.797995 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:32:30.798002 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:32:30.798008 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:32:30.798015 kernel: NET: Registered PF_XDP protocol family
Feb 12 19:32:30.798086 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 12 19:32:30.798183 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 12 19:32:30.798385 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 12 19:32:30.798488 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 12 19:32:30.798551 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 19:32:30.798610 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb 12 19:32:30.798670 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Feb 12 19:32:30.798750 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 12 19:32:30.798825 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 19:32:30.798899 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 12 19:32:30.798910 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:32:30.798918 kernel: Initialise system trusted keyrings
Feb 12 19:32:30.798927 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 19:32:30.798934 kernel: Key type asymmetric registered
Feb 12 19:32:30.798942 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:32:30.798949 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:32:30.798957 kernel: io scheduler mq-deadline registered
Feb 12 19:32:30.798964 kernel: io scheduler kyber registered
Feb 12 19:32:30.798974 kernel: io scheduler bfq registered
Feb 12 19:32:30.798981 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 19:32:30.798989 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 12 19:32:30.798997 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 12 19:32:30.799004 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 12 19:32:30.799011 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:32:30.799019 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 19:32:30.799027 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 19:32:30.799034 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 19:32:30.799043 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 19:32:30.799144 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb 12 19:32:30.799160 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 19:32:30.799836 kernel: rtc_cmos 00:05: registered as rtc0
Feb 12 19:32:30.799909 kernel: rtc_cmos 00:05: setting system clock to 2024-02-12T19:32:30 UTC (1707766350)
Feb 12 19:32:30.801987 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 12 19:32:30.802003 kernel: efifb: probing for efifb
Feb 12 19:32:30.802012 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 12 19:32:30.802020 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 12 19:32:30.802028 kernel: efifb: scrolling: redraw
Feb 12 19:32:30.802035 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 12 19:32:30.802043 kernel: Console: switching to colour frame buffer device 160x50
Feb 12 19:32:30.802050 kernel: fb0: EFI VGA frame buffer device
Feb 12 19:32:30.802061 kernel: pstore: Registered efi as persistent store backend
Feb 12 19:32:30.802068 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:32:30.802076 kernel: Segment Routing with IPv6
Feb 12 19:32:30.802083 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:32:30.802090 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:32:30.802098 kernel: Key type dns_resolver registered
Feb 12 19:32:30.802105 kernel: IPI shorthand broadcast: enabled
Feb 12 19:32:30.802113 kernel: sched_clock: Marking stable (389270893, 89521172)->(506818094, -28026029)
Feb 12 19:32:30.802121 kernel: registered taskstats version 1
Feb 12 19:32:30.802167 kernel: Loading compiled-in X.509 certificates
Feb 12 19:32:30.802176 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 19:32:30.802184 kernel: Key type .fscrypt registered
Feb 12 19:32:30.802191 kernel: Key type fscrypt-provisioning registered
Feb 12 19:32:30.802198 kernel: pstore: Using crash dump compression: deflate
Feb 12 19:32:30.802206 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:32:30.802214 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:32:30.802223 kernel: ima: No architecture policies found
Feb 12 19:32:30.802238 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 19:32:30.802257 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 19:32:30.802265 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 19:32:30.802272 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 19:32:30.802280 kernel: Run /init as init process
Feb 12 19:32:30.802287 kernel: with arguments:
Feb 12 19:32:30.802295 kernel: /init
Feb 12 19:32:30.802302 kernel: with environment:
Feb 12 19:32:30.802311 kernel: HOME=/
Feb 12 19:32:30.802319 kernel: TERM=linux
Feb 12 19:32:30.802326 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:32:30.802341 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:32:30.802352 systemd[1]: Detected virtualization kvm.
Feb 12 19:32:30.802361 systemd[1]: Detected architecture x86-64.
Feb 12 19:32:30.802368 systemd[1]: Running in initrd.
Feb 12 19:32:30.802376 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:32:30.802383 systemd[1]: Hostname set to .
Feb 12 19:32:30.802396 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 19:32:30.802404 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:32:30.802412 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:32:30.802420 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:32:30.802427 systemd[1]: Reached target paths.target.
Feb 12 19:32:30.802435 systemd[1]: Reached target slices.target.
Feb 12 19:32:30.802443 systemd[1]: Reached target swap.target.
Feb 12 19:32:30.802450 systemd[1]: Reached target timers.target.
Feb 12 19:32:30.802462 systemd[1]: Listening on iscsid.socket.
Feb 12 19:32:30.802470 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:32:30.802478 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:32:30.802486 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:32:30.802494 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:32:30.802502 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:32:30.802509 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:32:30.802517 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:32:30.802525 systemd[1]: Reached target sockets.target.
Feb 12 19:32:30.802537 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:32:30.802545 systemd[1]: Finished network-cleanup.service.
Feb 12 19:32:30.802553 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:32:30.802561 systemd[1]: Starting systemd-journald.service...
Feb 12 19:32:30.802569 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:32:30.802577 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:32:30.802584 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:32:30.802592 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:32:30.802600 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:32:30.802611 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:32:30.802619 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 19:32:30.802628 kernel: audit: type=1130 audit(1707766350.799:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:30.802640 systemd-journald[200]: Journal started
Feb 12 19:32:30.802694 systemd-journald[200]: Runtime Journal (/run/log/journal/dfad7a90244f4866a3b70e6e240994fc) is 6.0M, max 48.4M, 42.4M free.
Feb 12 19:32:30.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:30.796314 systemd-modules-load[201]: Inserted module 'overlay'
Feb 12 19:32:30.804893 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:32:30.804908 kernel: audit: type=1130 audit(1707766350.804:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:30.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:30.807144 systemd[1]: Started systemd-journald.service.
Feb 12 19:32:30.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:30.808818 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 19:32:30.810185 kernel: audit: type=1130 audit(1707766350.807:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:30.818838 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 19:32:30.819945 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:32:30.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:30.820294 systemd-resolved[202]: Positive Trust Anchors:
Feb 12 19:32:30.822631 kernel: audit: type=1130 audit(1707766350.819:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:30.822645 kernel: Bridge firewalling registered
Feb 12 19:32:30.820304 systemd-resolved[202]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:32:30.820330 systemd-resolved[202]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:32:30.820630 systemd[1]: Starting dracut-cmdline.service...
Feb 12 19:32:30.821984 systemd-modules-load[201]: Inserted module 'br_netfilter'
Feb 12 19:32:30.822821 systemd-resolved[202]: Defaulting to hostname 'linux'.
Feb 12 19:32:30.829392 systemd[1]: Started systemd-resolved.service.
Feb 12 19:32:30.830511 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:32:30.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:30.833139 kernel: audit: type=1130 audit(1707766350.830:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:30.838642 dracut-cmdline[216]: dracut-dracut-053
Feb 12 19:32:30.840610 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 19:32:30.845267 kernel: SCSI subsystem initialized
Feb 12 19:32:30.855192 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 19:32:30.855214 kernel: device-mapper: uevent: version 1.0.3
Feb 12 19:32:30.856149 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 19:32:30.858730 systemd-modules-load[201]: Inserted module 'dm_multipath'
Feb 12 19:32:30.859445 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:32:30.862550 kernel: audit: type=1130 audit(1707766350.859:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:30.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:30.860278 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:32:30.867212 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:32:30.870203 kernel: audit: type=1130 audit(1707766350.867:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:30.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:30.900145 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 19:32:30.910146 kernel: iscsi: registered transport (tcp)
Feb 12 19:32:30.928471 kernel: iscsi: registered transport (qla4xxx)
Feb 12 19:32:30.928500 kernel: QLogic iSCSI HBA Driver
Feb 12 19:32:30.956892 systemd[1]: Finished dracut-cmdline.service.
Feb 12 19:32:30.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:30.958749 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 19:32:30.960964 kernel: audit: type=1130 audit(1707766350.957:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:31.003147 kernel: raid6: avx2x4 gen() 30169 MB/s
Feb 12 19:32:31.020146 kernel: raid6: avx2x4 xor() 7278 MB/s
Feb 12 19:32:31.037143 kernel: raid6: avx2x2 gen() 32518 MB/s
Feb 12 19:32:31.054142 kernel: raid6: avx2x2 xor() 19335 MB/s
Feb 12 19:32:31.071146 kernel: raid6: avx2x1 gen() 26657 MB/s
Feb 12 19:32:31.088142 kernel: raid6: avx2x1 xor() 15392 MB/s
Feb 12 19:32:31.105140 kernel: raid6: sse2x4 gen() 14881 MB/s
Feb 12 19:32:31.122145 kernel: raid6: sse2x4 xor() 7108 MB/s
Feb 12 19:32:31.139141 kernel: raid6: sse2x2 gen() 16281 MB/s
Feb 12 19:32:31.156143 kernel: raid6: sse2x2 xor() 9867 MB/s
Feb 12 19:32:31.173142 kernel: raid6: sse2x1 gen() 12382 MB/s
Feb 12 19:32:31.190565 kernel: raid6: sse2x1 xor() 7839 MB/s
Feb 12 19:32:31.190579 kernel: raid6: using algorithm avx2x2 gen() 32518 MB/s
Feb 12 19:32:31.190588 kernel: raid6: .... xor() 19335 MB/s, rmw enabled
Feb 12 19:32:31.190596 kernel: raid6: using avx2x2 recovery algorithm
Feb 12 19:32:31.202144 kernel: xor: automatically using best checksumming function avx
Feb 12 19:32:31.294155 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 12 19:32:31.302111 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 19:32:31.305310 kernel: audit: type=1130 audit(1707766351.302:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:31.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:31.305000 audit: BPF prog-id=7 op=LOAD
Feb 12 19:32:31.305000 audit: BPF prog-id=8 op=LOAD
Feb 12 19:32:31.305643 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:32:31.316947 systemd-udevd[398]: Using default interface naming scheme 'v252'.
Feb 12 19:32:31.320625 systemd[1]: Started systemd-udevd.service.
Feb 12 19:32:31.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:31.323543 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 19:32:31.331266 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Feb 12 19:32:31.352458 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 19:32:31.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:31.353606 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:32:31.387760 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:32:31.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:31.417154 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 19:32:31.419435 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 12 19:32:31.425355 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 19:32:31.425391 kernel: GPT:9289727 != 19775487
Feb 12 19:32:31.425400 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 19:32:31.426302 kernel: GPT:9289727 != 19775487
Feb 12 19:32:31.426324 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 19:32:31.427302 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:32:31.437152 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 12 19:32:31.437177 kernel: AES CTR mode by8 optimization enabled
Feb 12 19:32:31.441150 kernel: libata version 3.00 loaded.
Feb 12 19:32:31.445150 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (445)
Feb 12 19:32:31.445202 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 12 19:32:31.446426 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 19:32:31.446546 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 19:32:31.453853 kernel: scsi host0: ata_piix
Feb 12 19:32:31.447677 systemd[1]: Starting disk-uuid.service...
Feb 12 19:32:31.457073 disk-uuid[474]: Primary Header is updated.
Feb 12 19:32:31.457073 disk-uuid[474]: Secondary Entries is updated.
Feb 12 19:32:31.457073 disk-uuid[474]: Secondary Header is updated.
Feb 12 19:32:31.457440 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 19:32:31.462417 kernel: scsi host1: ata_piix
Feb 12 19:32:31.462556 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Feb 12 19:32:31.462567 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Feb 12 19:32:31.463146 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:32:31.464434 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 19:32:31.467145 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:32:31.468106 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 19:32:31.617168 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 12 19:32:31.617241 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 12 19:32:31.647142 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 12 19:32:31.647294 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 12 19:32:31.664158 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Feb 12 19:32:32.463805 disk-uuid[488]: The operation has completed successfully.
Feb 12 19:32:32.465369 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:32:32.484877 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 19:32:32.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:32.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:32.484997 systemd[1]: Finished disk-uuid.service.
Feb 12 19:32:32.500528 systemd[1]: Starting verity-setup.service...
Feb 12 19:32:32.512161 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 12 19:32:32.530270 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 19:32:32.533376 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 19:32:32.535336 systemd[1]: Finished verity-setup.service.
Feb 12 19:32:32.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:32.591961 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 19:32:32.593330 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 19:32:32.592175 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 19:32:32.592843 systemd[1]: Starting ignition-setup.service...
Feb 12 19:32:32.594515 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 19:32:32.600304 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 19:32:32.600340 kernel: BTRFS info (device vda6): using free space tree
Feb 12 19:32:32.600354 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 19:32:32.608812 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 19:32:32.618552 systemd[1]: Finished ignition-setup.service.
Feb 12 19:32:32.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:32.619453 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 19:32:32.654395 ignition[623]: Ignition 2.14.0
Feb 12 19:32:32.654405 ignition[623]: Stage: fetch-offline
Feb 12 19:32:32.654473 ignition[623]: no configs at "/usr/lib/ignition/base.d"
Feb 12 19:32:32.654483 ignition[623]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:32:32.654570 ignition[623]: parsed url from cmdline: ""
Feb 12 19:32:32.654573 ignition[623]: no config URL provided
Feb 12 19:32:32.654577 ignition[623]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 19:32:32.654583 ignition[623]: no config at "/usr/lib/ignition/user.ign"
Feb 12 19:32:32.654600 ignition[623]: op(1): [started] loading QEMU firmware config module
Feb 12 19:32:32.654604 ignition[623]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 12 19:32:32.657826 ignition[623]: op(1): [finished] loading QEMU firmware config module
Feb 12 19:32:32.679707 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 19:32:32.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:32.681000 audit: BPF prog-id=9 op=LOAD
Feb 12 19:32:32.682025 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:32:32.718236 ignition[623]: parsing config with SHA512: 1ef1ad5f9175a2abbee8b687231e743cebb1f5eff9685fc240282cc166b0ac61250c4e4e3242394c6c8a5166f5c134c76465ef6ecf34d5560687c618a7574577
Feb 12 19:32:32.742675 systemd-networkd[710]: lo: Link UP
Feb 12 19:32:32.742685 systemd-networkd[710]: lo: Gained carrier
Feb 12 19:32:32.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:32.743276 systemd-networkd[710]: Enumeration completed
Feb 12 19:32:32.743376 systemd[1]: Started systemd-networkd.service.
Feb 12 19:32:32.744069 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:32:32.749903 ignition[623]: fetch-offline: fetch-offline passed
Feb 12 19:32:32.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:32.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:32.744891 systemd[1]: Reached target network.target.
Feb 12 19:32:32.749953 ignition[623]: Ignition finished successfully
Feb 12 19:32:32.747035 systemd[1]: Starting iscsiuio.service...
Feb 12 19:32:32.747228 systemd-networkd[710]: eth0: Link UP
Feb 12 19:32:32.757287 iscsid[716]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:32:32.757287 iscsid[716]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 12 19:32:32.757287 iscsid[716]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 12 19:32:32.757287 iscsid[716]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 19:32:32.757287 iscsid[716]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 19:32:32.757287 iscsid[716]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:32:32.757287 iscsid[716]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 19:32:32.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:32.747231 systemd-networkd[710]: eth0: Gained carrier
Feb 12 19:32:32.760841 ignition[715]: Ignition 2.14.0
Feb 12 19:32:32.749369 unknown[623]: fetched base config from "system"
Feb 12 19:32:32.760846 ignition[715]: Stage: kargs
Feb 12 19:32:32.749376 unknown[623]: fetched user config from "qemu"
Feb 12 19:32:32.760921 ignition[715]: no configs at "/usr/lib/ignition/base.d"
Feb 12 19:32:32.751025 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 19:32:32.760928 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:32:32.751412 systemd[1]: Started iscsiuio.service.
Feb 12 19:32:32.763082 ignition[715]: kargs: kargs passed
Feb 12 19:32:32.751558 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 12 19:32:32.763124 ignition[715]: Ignition finished successfully
Feb 12 19:32:32.752087 systemd[1]: Starting ignition-kargs.service...
Feb 12 19:32:32.752859 systemd[1]: Starting iscsid.service...
Feb 12 19:32:32.757380 systemd[1]: Started iscsid.service.
Feb 12 19:32:32.762219 systemd-networkd[710]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 12 19:32:32.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:32.771787 systemd[1]: Starting dracut-initqueue.service...
Feb 12 19:32:32.776287 systemd[1]: Finished ignition-kargs.service.
Feb 12 19:32:32.780330 systemd[1]: Starting ignition-disks.service...
Feb 12 19:32:32.781626 systemd[1]: Finished dracut-initqueue.service.
Feb 12 19:32:32.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:32.783075 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 19:32:32.784466 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:32:32.785829 systemd[1]: Reached target remote-fs.target.
Feb 12 19:32:32.786216 ignition[731]: Ignition 2.14.0
Feb 12 19:32:32.786221 ignition[731]: Stage: disks
Feb 12 19:32:32.786295 ignition[731]: no configs at "/usr/lib/ignition/base.d"
Feb 12 19:32:32.786302 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:32:32.787383 ignition[731]: disks: disks passed
Feb 12 19:32:32.787414 ignition[731]: Ignition finished successfully
Feb 12 19:32:32.791320 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 19:32:32.792725 systemd[1]: Finished ignition-disks.service.
Feb 12 19:32:32.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:32.794187 systemd[1]: Reached target initrd-root-device.target.
Feb 12 19:32:32.795575 systemd[1]: Reached target local-fs-pre.target.
Feb 12 19:32:32.796853 systemd[1]: Reached target local-fs.target.
Feb 12 19:32:32.798060 systemd[1]: Reached target sysinit.target.
Feb 12 19:32:32.799281 systemd[1]: Reached target basic.target.
Feb 12 19:32:32.800642 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 19:32:32.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:32.802431 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 19:32:32.812753 systemd-fsck[744]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 12 19:32:32.817406 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 19:32:32.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:32.819309 systemd[1]: Mounting sysroot.mount...
Feb 12 19:32:32.826152 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 19:32:32.826715 systemd[1]: Mounted sysroot.mount.
Feb 12 19:32:32.828258 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 19:32:32.830356 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 19:32:32.831541 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 12 19:32:32.831578 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 19:32:32.832464 systemd[1]: Reached target ignition-diskful.target.
Feb 12 19:32:32.835447 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 19:32:32.836996 systemd[1]: Starting initrd-setup-root.service...
Feb 12 19:32:32.841528 initrd-setup-root[754]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 19:32:32.845587 initrd-setup-root[762]: cut: /sysroot/etc/group: No such file or directory
Feb 12 19:32:32.848898 initrd-setup-root[770]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 19:32:32.852411 initrd-setup-root[778]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 19:32:32.878935 systemd[1]: Finished initrd-setup-root.service.
Feb 12 19:32:32.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:32.880671 systemd[1]: Starting ignition-mount.service...
Feb 12 19:32:32.882186 systemd[1]: Starting sysroot-boot.service...
Feb 12 19:32:32.884987 bash[795]: umount: /sysroot/usr/share/oem: not mounted.
Feb 12 19:32:32.892249 ignition[796]: INFO : Ignition 2.14.0
Feb 12 19:32:32.892249 ignition[796]: INFO : Stage: mount
Feb 12 19:32:32.893308 ignition[796]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:32:32.893308 ignition[796]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:32:32.894151 ignition[796]: INFO : mount: mount passed
Feb 12 19:32:32.894151 ignition[796]: INFO : Ignition finished successfully
Feb 12 19:32:32.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:32.894081 systemd[1]: Finished ignition-mount.service.
Feb 12 19:32:32.901755 systemd[1]: Finished sysroot-boot.service.
Feb 12 19:32:32.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:33.542036 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 19:32:33.548155 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (805)
Feb 12 19:32:33.549593 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 19:32:33.549605 kernel: BTRFS info (device vda6): using free space tree
Feb 12 19:32:33.549615 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 19:32:33.552775 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 19:32:33.553930 systemd[1]: Starting ignition-files.service...
Feb 12 19:32:33.566473 ignition[825]: INFO : Ignition 2.14.0
Feb 12 19:32:33.566473 ignition[825]: INFO : Stage: files
Feb 12 19:32:33.567655 ignition[825]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:32:33.567655 ignition[825]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:32:33.569435 ignition[825]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 19:32:33.569435 ignition[825]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 19:32:33.569435 ignition[825]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 19:32:33.572294 ignition[825]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 19:32:33.572294 ignition[825]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 19:32:33.572294 ignition[825]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 19:32:33.572294 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 19:32:33.572294 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 12 19:32:33.570634 unknown[825]: wrote ssh authorized keys file for user: core
Feb 12 19:32:33.630038 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 12 19:32:33.693725 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 19:32:33.695054 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 12 19:32:33.695054 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 12 19:32:34.077705 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 12 19:32:34.184816 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 12 19:32:34.186783 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 12 19:32:34.186783 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 12 19:32:34.186783 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 12 19:32:34.287379 systemd-networkd[710]: eth0: Gained IPv6LL
Feb 12 19:32:34.494604 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 12 19:32:34.701985 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 12 19:32:34.704217 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 12 19:32:34.704217 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:32:34.704217 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:32:34.707827 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:32:34.707827 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl: attempt #1
Feb 12 19:32:34.981883 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 19:32:35.235463 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83
Feb 12 19:32:35.235463 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:32:35.238500 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:32:35.238500 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1
Feb 12 19:32:35.288553 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 12 19:32:35.803844 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560
Feb 12 19:32:35.806181 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:32:35.806181 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:32:35.806181 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1
Feb 12 19:32:35.852171 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 12 19:32:36.056298 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836
Feb 12 19:32:36.056298 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:32:36.059529 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 19:32:36.059529 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 12 19:32:36.561571 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 12 19:32:36.631979 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 19:32:36.633222 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 19:32:36.634435 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 19:32:36.635604 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:32:36.636771 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:32:36.637920 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:32:36.639148 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:32:36.639148 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:32:36.641447 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:32:36.642654 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:32:36.643908 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:32:36.645124 ignition[825]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Feb 12 19:32:36.646017 ignition[825]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 19:32:36.647494 ignition[825]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 19:32:36.647494 ignition[825]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Feb 12 19:32:36.647494 ignition[825]: INFO : files: op(12): [started] processing unit "prepare-cni-plugins.service"
Feb 12 19:32:36.650765 ignition[825]: INFO : files: op(12): op(13): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:32:36.650765 ignition[825]: INFO : files: op(12): op(13): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:32:36.650765 ignition[825]: INFO : files: op(12): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 19:32:36.650765 ignition[825]: INFO : files: op(14): [started] processing unit "prepare-critools.service"
Feb 12 19:32:36.650765 ignition[825]: INFO : files: op(14): op(15): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:32:36.656941 ignition[825]: INFO : files: op(14): op(15): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:32:36.656941 ignition[825]: INFO : files: op(14): [finished] processing unit "prepare-critools.service"
Feb 12 19:32:36.656941 ignition[825]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
Feb 12 19:32:36.656941 ignition[825]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:32:36.656941 ignition[825]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:32:36.656941 ignition[825]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
Feb 12 19:32:36.656941 ignition[825]: INFO : files: op(18): [started] setting preset to disabled for "coreos-metadata.service"
Feb 12 19:32:36.656941 ignition[825]: INFO : files: op(18): op(19): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 19:32:36.673703 ignition[825]: INFO : files: op(18): op(19): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 19:32:36.674904 ignition[825]: INFO : files: op(18): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 12 19:32:36.674904 ignition[825]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:32:36.674904 ignition[825]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:32:36.674904 ignition[825]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-critools.service"
Feb 12 19:32:36.674904 ignition[825]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 19:32:36.674904 ignition[825]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service"
Feb 12 19:32:36.674904 ignition[825]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service"
Feb 12 19:32:36.681666 ignition[825]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:32:36.681666 ignition[825]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:32:36.681666 ignition[825]: INFO : files: files passed
Feb 12 19:32:36.681666 ignition[825]: INFO : Ignition finished successfully
Feb 12 19:32:36.685835 systemd[1]: Finished ignition-files.service.
Feb 12 19:32:36.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.687392 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 19:32:36.691423 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 12 19:32:36.691455 kernel: audit: type=1130 audit(1707766356.686:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.689707 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 19:32:36.690378 systemd[1]: Starting ignition-quench.service...
Feb 12 19:32:36.698482 kernel: audit: type=1130 audit(1707766356.693:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.698496 kernel: audit: type=1131 audit(1707766356.693:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.692677 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 19:32:36.702937 kernel: audit: type=1130 audit(1707766356.698:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.703004 initrd-setup-root-after-ignition[850]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 12 19:32:36.692766 systemd[1]: Finished ignition-quench.service.
Feb 12 19:32:36.704667 initrd-setup-root-after-ignition[853]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 19:32:36.695505 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 19:32:36.698568 systemd[1]: Reached target ignition-complete.target.
Feb 12 19:32:36.702429 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 19:32:36.712809 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 19:32:36.712884 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 19:32:36.719517 kernel: audit: type=1130 audit(1707766356.713:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.719532 kernel: audit: type=1131 audit(1707766356.713:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.714141 systemd[1]: Reached target initrd-fs.target.
Feb 12 19:32:36.719512 systemd[1]: Reached target initrd.target.
Feb 12 19:32:36.720085 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 19:32:36.720699 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 19:32:36.730454 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 19:32:36.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.732410 systemd[1]: Starting initrd-cleanup.service...
Feb 12 19:32:36.734902 kernel: audit: type=1130 audit(1707766356.731:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.739587 systemd[1]: Stopped target nss-lookup.target.
Feb 12 19:32:36.740259 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 19:32:36.741384 systemd[1]: Stopped target timers.target.
Feb 12 19:32:36.742501 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 19:32:36.746845 kernel: audit: type=1131 audit(1707766356.743:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.742591 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 19:32:36.743629 systemd[1]: Stopped target initrd.target.
Feb 12 19:32:36.747004 systemd[1]: Stopped target basic.target.
Feb 12 19:32:36.748011 systemd[1]: Stopped target ignition-complete.target.
Feb 12 19:32:36.749110 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 19:32:36.750231 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 19:32:36.751381 systemd[1]: Stopped target remote-fs.target.
Feb 12 19:32:36.752512 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 19:32:36.753688 systemd[1]: Stopped target sysinit.target.
Feb 12 19:32:36.754720 systemd[1]: Stopped target local-fs.target.
Feb 12 19:32:36.755851 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 19:32:36.756906 systemd[1]: Stopped target swap.target.
Feb 12 19:32:36.761959 kernel: audit: type=1131 audit(1707766356.758:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.757885 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 19:32:36.757991 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 19:32:36.766253 kernel: audit: type=1131 audit(1707766356.763:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.759060 systemd[1]: Stopped target cryptsetup.target.
Feb 12 19:32:36.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.762009 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 19:32:36.762105 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 19:32:36.763294 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 19:32:36.763386 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 19:32:36.766370 systemd[1]: Stopped target paths.target.
Feb 12 19:32:36.767341 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 19:32:36.771171 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 19:32:36.771851 systemd[1]: Stopped target slices.target.
Feb 12 19:32:36.772895 systemd[1]: Stopped target sockets.target.
Feb 12 19:32:36.773925 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 19:32:36.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.774015 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 19:32:36.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.775118 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 19:32:36.775210 systemd[1]: Stopped ignition-files.service.
Feb 12 19:32:36.779161 iscsid[716]: iscsid shutting down.
Feb 12 19:32:36.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.777066 systemd[1]: Stopping ignition-mount.service...
Feb 12 19:32:36.778374 systemd[1]: Stopping iscsid.service...
Feb 12 19:32:36.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.784200 ignition[866]: INFO : Ignition 2.14.0
Feb 12 19:32:36.784200 ignition[866]: INFO : Stage: umount
Feb 12 19:32:36.784200 ignition[866]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:32:36.784200 ignition[866]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:32:36.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.779149 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 19:32:36.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.789366 ignition[866]: INFO : umount: umount passed
Feb 12 19:32:36.789366 ignition[866]: INFO : Ignition finished successfully
Feb 12 19:32:36.779263 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 19:32:36.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.780661 systemd[1]: Stopping sysroot-boot.service...
Feb 12 19:32:36.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.782819 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 19:32:36.782995 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 19:32:36.784172 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 19:32:36.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.784299 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 19:32:36.786584 systemd[1]: iscsid.service: Deactivated successfully.
Feb 12 19:32:36.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.786675 systemd[1]: Stopped iscsid.service.
Feb 12 19:32:36.788387 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 19:32:36.788460 systemd[1]: Stopped ignition-mount.service.
Feb 12 19:32:36.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.789614 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 19:32:36.789679 systemd[1]: Closed iscsid.socket.
Feb 12 19:32:36.790445 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 19:32:36.790482 systemd[1]: Stopped ignition-disks.service.
Feb 12 19:32:36.791596 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 19:32:36.791625 systemd[1]: Stopped ignition-kargs.service.
Feb 12 19:32:36.792646 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 19:32:36.792675 systemd[1]: Stopped ignition-setup.service.
Feb 12 19:32:36.793429 systemd[1]: Stopping iscsiuio.service...
Feb 12 19:32:36.795245 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 12 19:32:36.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.795622 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 19:32:36.795695 systemd[1]: Finished initrd-cleanup.service.
Feb 12 19:32:36.796416 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 12 19:32:36.796491 systemd[1]: Stopped iscsiuio.service.
Feb 12 19:32:36.797545 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 19:32:36.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.797610 systemd[1]: Stopped sysroot-boot.service.
Feb 12 19:32:36.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.799364 systemd[1]: Stopped target network.target.
Feb 12 19:32:36.800556 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 19:32:36.800582 systemd[1]: Closed iscsiuio.socket.
Feb 12 19:32:36.801969 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 19:32:36.802004 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 19:32:36.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.803342 systemd[1]: Stopping systemd-networkd.service...
Feb 12 19:32:36.804454 systemd[1]: Stopping systemd-resolved.service...
Feb 12 19:32:36.808173 systemd-networkd[710]: eth0: DHCPv6 lease lost
Feb 12 19:32:36.822000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 19:32:36.809183 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 19:32:36.809256 systemd[1]: Stopped systemd-networkd.service.
Feb 12 19:32:36.825000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 19:32:36.811328 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 19:32:36.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.811360 systemd[1]: Closed systemd-networkd.socket.
Feb 12 19:32:36.812681 systemd[1]: Stopping network-cleanup.service...
Feb 12 19:32:36.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.813824 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 19:32:36.813866 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 19:32:36.814930 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 19:32:36.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.814963 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 19:32:36.816197 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 19:32:36.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.816240 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 19:32:36.817405 systemd[1]: Stopping systemd-udevd.service...
Feb 12 19:32:36.819263 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 12 19:32:36.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.819705 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 19:32:36.819815 systemd[1]: Stopped systemd-resolved.service.
Feb 12 19:32:36.824308 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 19:32:36.824420 systemd[1]: Stopped systemd-udevd.service.
Feb 12 19:32:36.826223 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 19:32:36.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:32:36.826293 systemd[1]: Stopped network-cleanup.service.
Feb 12 19:32:36.827221 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 19:32:36.827256 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 19:32:36.828252 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 19:32:36.828278 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 19:32:36.829525 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 19:32:36.829565 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 19:32:36.830665 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 19:32:36.830705 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 19:32:36.831822 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 19:32:36.831864 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 19:32:36.833677 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 19:32:36.834754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 19:32:36.834805 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 19:32:36.838798 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 19:32:36.838862 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 19:32:36.839821 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 19:32:36.841591 systemd[1]: Starting initrd-switch-root.service...
Feb 12 19:32:36.856971 systemd[1]: Switching root.
Feb 12 19:32:36.874276 systemd-journald[200]: Journal stopped
Feb 12 19:32:39.689275 systemd-journald[200]: Received SIGTERM from PID 1 (systemd).
Feb 12 19:32:39.689324 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 12 19:32:39.689341 kernel: SELinux: Class anon_inode not defined in policy.
Feb 12 19:32:39.689351 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 19:32:39.689361 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 19:32:39.689382 kernel: SELinux: policy capability open_perms=1 Feb 12 19:32:39.689392 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 19:32:39.689402 kernel: SELinux: policy capability always_check_network=0 Feb 12 19:32:39.689414 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 19:32:39.689423 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 19:32:39.689433 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 19:32:39.689442 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 19:32:39.689454 systemd[1]: Successfully loaded SELinux policy in 34.723ms. Feb 12 19:32:39.689468 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.298ms. Feb 12 19:32:39.689484 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:32:39.689494 systemd[1]: Detected virtualization kvm. Feb 12 19:32:39.689505 systemd[1]: Detected architecture x86-64. Feb 12 19:32:39.689515 systemd[1]: Detected first boot. Feb 12 19:32:39.689525 systemd[1]: Initializing machine ID from VM UUID. Feb 12 19:32:39.689536 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 19:32:39.689548 systemd[1]: Populated /etc with preset unit settings. Feb 12 19:32:39.689558 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
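The `SELinux: policy capability ...=<0|1>` kernel lines above enumerate which capabilities the freshly loaded policy enables. A small illustrative helper, assuming input lines shaped exactly like the ones in this log:

```python
import re

def policy_capabilities(lines):
    """Collect 'SELinux: policy capability <name>=<0|1>' kernel lines
    into a {name: bool} map."""
    caps = {}
    for line in lines:
        m = re.search(r"policy capability (\w+)=([01])", line)
        if m:
            caps[m.group(1)] = m.group(2) == "1"
    return caps

log = [
    "kernel: SELinux: policy capability network_peer_controls=1",
    "kernel: SELinux: policy capability open_perms=1",
    "kernel: SELinux: policy capability always_check_network=0",
]
caps = policy_capabilities(log)
```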
Feb 12 19:32:39.689573 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:32:39.689584 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:32:39.689596 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 19:32:39.689609 systemd[1]: Stopped initrd-switch-root.service. Feb 12 19:32:39.689619 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 19:32:39.689629 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 19:32:39.689639 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 19:32:39.689649 systemd[1]: Created slice system-getty.slice. Feb 12 19:32:39.689663 systemd[1]: Created slice system-modprobe.slice. Feb 12 19:32:39.689673 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 19:32:39.689684 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 19:32:39.689694 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 19:32:39.689704 systemd[1]: Created slice user.slice. Feb 12 19:32:39.689714 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:32:39.689724 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 19:32:39.689734 systemd[1]: Set up automount boot.automount. Feb 12 19:32:39.689744 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 19:32:39.689758 systemd[1]: Stopped target initrd-switch-root.target. Feb 12 19:32:39.689768 systemd[1]: Stopped target initrd-fs.target. Feb 12 19:32:39.689778 systemd[1]: Stopped target initrd-root-fs.target. Feb 12 19:32:39.689788 systemd[1]: Reached target integritysetup.target. Feb 12 19:32:39.689798 systemd[1]: Reached target remote-cryptsetup.target. 
Feb 12 19:32:39.689809 systemd[1]: Reached target remote-fs.target. Feb 12 19:32:39.689819 systemd[1]: Reached target slices.target. Feb 12 19:32:39.689830 systemd[1]: Reached target swap.target. Feb 12 19:32:39.689844 systemd[1]: Reached target torcx.target. Feb 12 19:32:39.689854 systemd[1]: Reached target veritysetup.target. Feb 12 19:32:39.689864 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:32:39.689875 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:32:39.689885 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:32:39.689895 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:32:39.689905 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:32:39.689915 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:32:39.689925 systemd[1]: Mounting dev-hugepages.mount... Feb 12 19:32:39.689942 systemd[1]: Mounting dev-mqueue.mount... Feb 12 19:32:39.689951 systemd[1]: Mounting media.mount... Feb 12 19:32:39.689962 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:32:39.689972 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 19:32:39.689982 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:32:39.689992 systemd[1]: Mounting tmp.mount... Feb 12 19:32:39.690009 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:32:39.690019 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:32:39.690029 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:32:39.690044 systemd[1]: Starting modprobe@configfs.service... Feb 12 19:32:39.690054 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:32:39.690065 systemd[1]: Starting modprobe@drm.service... Feb 12 19:32:39.690075 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:32:39.690085 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:32:39.690095 systemd[1]: Starting modprobe@loop.service... 
Feb 12 19:32:39.690105 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:32:39.690147 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 12 19:32:39.690162 systemd[1]: Stopped systemd-fsck-root.service. Feb 12 19:32:39.690172 kernel: loop: module loaded Feb 12 19:32:39.690182 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 12 19:32:39.690192 systemd[1]: Stopped systemd-fsck-usr.service. Feb 12 19:32:39.690202 systemd[1]: Stopped systemd-journald.service. Feb 12 19:32:39.690212 systemd[1]: Starting systemd-journald.service... Feb 12 19:32:39.690225 kernel: fuse: init (API version 7.34) Feb 12 19:32:39.690235 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:32:39.690245 systemd[1]: Starting systemd-network-generator.service... Feb 12 19:32:39.690255 systemd[1]: Starting systemd-remount-fs.service... Feb 12 19:32:39.690265 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:32:39.690276 systemd[1]: verity-setup.service: Deactivated successfully. Feb 12 19:32:39.690285 systemd[1]: Stopped verity-setup.service. Feb 12 19:32:39.690296 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:32:39.690306 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:32:39.690322 systemd-journald[972]: Journal started Feb 12 19:32:39.690363 systemd-journald[972]: Runtime Journal (/run/log/journal/dfad7a90244f4866a3b70e6e240994fc) is 6.0M, max 48.4M, 42.4M free. 
Feb 12 19:32:36.927000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 19:32:37.512000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:32:37.512000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:32:37.512000 audit: BPF prog-id=10 op=LOAD Feb 12 19:32:37.512000 audit: BPF prog-id=10 op=UNLOAD Feb 12 19:32:37.512000 audit: BPF prog-id=11 op=LOAD Feb 12 19:32:37.512000 audit: BPF prog-id=11 op=UNLOAD Feb 12 19:32:37.543000 audit[899]: AVC avc: denied { associate } for pid=899 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 19:32:37.543000 audit[899]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001c58ac a1=c000146de0 a2=c00014fac0 a3=32 items=0 ppid=882 pid=899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:32:37.543000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:32:37.545000 audit[899]: AVC avc: denied { associate } for pid=899 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 19:32:37.545000 audit[899]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001c5985 a2=1ed a3=0 items=2 ppid=882 pid=899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:32:37.545000 audit: CWD cwd="/" Feb 12 19:32:37.545000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:37.545000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:37.545000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:32:39.577000 audit: BPF prog-id=12 op=LOAD Feb 12 19:32:39.577000 audit: BPF prog-id=3 op=UNLOAD Feb 12 19:32:39.577000 audit: BPF prog-id=13 op=LOAD Feb 12 19:32:39.577000 audit: BPF prog-id=14 op=LOAD Feb 12 19:32:39.577000 audit: BPF prog-id=4 op=UNLOAD Feb 12 19:32:39.577000 audit: BPF prog-id=5 op=UNLOAD Feb 12 19:32:39.578000 audit: BPF prog-id=15 op=LOAD Feb 12 19:32:39.578000 audit: BPF prog-id=12 op=UNLOAD Feb 12 19:32:39.578000 audit: BPF prog-id=16 op=LOAD Feb 12 19:32:39.578000 audit: BPF prog-id=17 op=LOAD Feb 12 19:32:39.578000 audit: BPF prog-id=13 op=UNLOAD Feb 12 19:32:39.578000 audit: BPF prog-id=14 op=UNLOAD Feb 12 19:32:39.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:32:39.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.588000 audit: BPF prog-id=15 op=UNLOAD Feb 12 19:32:39.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.668000 audit: BPF prog-id=18 op=LOAD Feb 12 19:32:39.669000 audit: BPF prog-id=19 op=LOAD Feb 12 19:32:39.669000 audit: BPF prog-id=20 op=LOAD Feb 12 19:32:39.669000 audit: BPF prog-id=16 op=UNLOAD Feb 12 19:32:39.669000 audit: BPF prog-id=17 op=UNLOAD Feb 12 19:32:39.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:32:39.685000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:32:39.685000 audit[972]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe1f96b0d0 a2=4000 a3=7ffe1f96b16c items=0 ppid=1 pid=972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:32:39.685000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:32:39.575929 systemd[1]: Queued start job for default target multi-user.target. Feb 12 19:32:37.542403 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:32:39.575939 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 12 19:32:37.542581 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:32:39.579598 systemd[1]: systemd-journald.service: Deactivated successfully. 
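The `PROCTITLE` audit records above hex-encode the NUL-separated argv of the audited process, and the kernel truncates the field, so the final argument can be cut short. Decoding the hex string quoted twice in the log recovers the torcx-generator command line:

```python
def decode_proctitle(hexstr):
    """Decode an audit PROCTITLE payload: hex-encoded argv with NUL
    separators; the record is truncated, so the last arg may be cut."""
    return bytes.fromhex(hexstr).decode("utf-8", errors="replace").split("\x00")

# The proctitle hex as it appears in the torcx-generator audit records above.
hexstr = ("2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F"
          "72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F"
          "67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72"
          "2E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61")
argv = decode_proctitle(hexstr)
```

The last element comes back as `/run/systemd/generator.la`, i.e. `generator.late` with its tail lost to the truncation.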
Feb 12 19:32:37.542595 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:32:37.542628 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:37Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 12 19:32:37.542637 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:37Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 12 19:32:37.542661 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:37Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 12 19:32:37.542672 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:37Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 12 19:32:37.542859 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:37Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 12 19:32:39.692170 systemd[1]: Started systemd-journald.service. Feb 12 19:32:39.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:32:37.542892 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:32:37.542903 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:32:37.543298 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 12 19:32:39.692384 systemd[1]: Mounted dev-mqueue.mount. Feb 12 19:32:37.543334 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 12 19:32:37.543354 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 12 19:32:37.543368 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 12 19:32:37.543385 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 12 19:32:37.543396 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 12 19:32:39.328014 
/usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:39Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:32:39.328278 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:39Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:32:39.328362 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:39Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:32:39.328503 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:39Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:32:39.328549 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:39Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 12 19:32:39.328610 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T19:32:39Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 12 19:32:39.693355 systemd[1]: Mounted media.mount. 
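The torcx-generator messages interleaved above use a logfmt-style `key=value` layout with optionally double-quoted values. A rough parser sketch (quoting and escaping simplified) for turning one of the `store skipped` lines into a dict:

```python
import re

def parse_logfmt(line):
    """Split a torcx-generator logfmt line into a dict; values are
    either bare tokens or double-quoted strings (no escape handling)."""
    return {k: (q if q else b)
            for k, q, b in re.findall(r'(\w+)=(?:"([^"]*)"|(\S+))', line)}

line = ('time="2024-02-12T19:32:37Z" level=info msg="store skipped" '
        'err="open /usr/share/oem/torcx/store: no such file or directory" '
        'path=/usr/share/oem/torcx/store')
rec = parse_logfmt(line)
```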
Feb 12 19:32:39.694005 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:32:39.694851 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:32:39.695684 systemd[1]: Mounted tmp.mount. Feb 12 19:32:39.696627 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:32:39.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.697495 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:32:39.697646 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:32:39.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.698498 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:32:39.698678 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:32:39.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.699467 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:32:39.699634 systemd[1]: Finished modprobe@drm.service. 
Feb 12 19:32:39.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.700391 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:32:39.700586 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:32:39.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.701522 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 19:32:39.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.702312 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:32:39.702469 systemd[1]: Finished modprobe@fuse.service. Feb 12 19:32:39.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:32:39.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.703239 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:32:39.703379 systemd[1]: Finished modprobe@loop.service. Feb 12 19:32:39.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.704428 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:32:39.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.705471 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:32:39.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.706552 systemd[1]: Finished systemd-remount-fs.service. Feb 12 19:32:39.707676 systemd[1]: Reached target network-pre.target. Feb 12 19:32:39.709405 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Feb 12 19:32:39.711102 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:32:39.711712 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:32:39.712959 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:32:39.714519 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:32:39.715222 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:32:39.716081 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:32:39.716649 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:32:39.718051 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:32:39.719903 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:32:39.722690 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:32:39.723446 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:32:39.724292 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:32:39.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.724989 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:32:39.726309 systemd-journald[972]: Time spent on flushing to /var/log/journal/dfad7a90244f4866a3b70e6e240994fc is 18.691ms for 1199 entries. Feb 12 19:32:39.726309 systemd-journald[972]: System Journal (/var/log/journal/dfad7a90244f4866a3b70e6e240994fc) is 8.0M, max 195.6M, 187.6M free. Feb 12 19:32:39.760602 systemd-journald[972]: Received client request to flush runtime journal. Feb 12 19:32:39.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:32:39.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:39.737424 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:32:39.761020 udevadm[1003]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 19:32:39.745664 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:32:39.747859 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:32:39.748773 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:32:39.761374 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:32:39.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:40.136236 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 19:32:40.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:40.137000 audit: BPF prog-id=21 op=LOAD Feb 12 19:32:40.137000 audit: BPF prog-id=22 op=LOAD Feb 12 19:32:40.137000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:32:40.137000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:32:40.138676 systemd[1]: Starting systemd-udevd.service... Feb 12 19:32:40.152960 systemd-udevd[1006]: Using default interface naming scheme 'v252'. 
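As a quick cross-check of the journald figures reported above: the flush took 18.691ms for 1199 entries (roughly 15.6µs per entry), and for both journals the reported free space is just the max size minus the current size:

```python
# Figures quoted verbatim from the systemd-journald messages in this log.
flush_ms, entries = 18.691, 1199
per_entry_us = flush_ms / entries * 1000   # microseconds per flushed entry

runtime_free = 48.4 - 6.0    # Runtime Journal: max 48.4M, current 6.0M -> 42.4M free
system_free = 195.6 - 8.0    # System Journal: max 195.6M, current 8.0M -> 187.6M free
```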
Feb 12 19:32:40.163376 systemd[1]: Started systemd-udevd.service. Feb 12 19:32:40.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:40.165000 audit: BPF prog-id=23 op=LOAD Feb 12 19:32:40.166408 systemd[1]: Starting systemd-networkd.service... Feb 12 19:32:40.172000 audit: BPF prog-id=24 op=LOAD Feb 12 19:32:40.172000 audit: BPF prog-id=25 op=LOAD Feb 12 19:32:40.172000 audit: BPF prog-id=26 op=LOAD Feb 12 19:32:40.173013 systemd[1]: Starting systemd-userdbd.service... Feb 12 19:32:40.193005 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 12 19:32:40.201160 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:32:40.208121 systemd[1]: Started systemd-userdbd.service. Feb 12 19:32:40.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:40.226151 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 12 19:32:40.231150 kernel: ACPI: button: Power Button [PWRF] Feb 12 19:32:40.247817 systemd-networkd[1018]: lo: Link UP Feb 12 19:32:40.247828 systemd-networkd[1018]: lo: Gained carrier Feb 12 19:32:40.248184 systemd-networkd[1018]: Enumeration completed Feb 12 19:32:40.248255 systemd[1]: Started systemd-networkd.service. Feb 12 19:32:40.248400 systemd-networkd[1018]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:32:40.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:32:40.246000 audit[1031]: AVC avc: denied { confidentiality } for pid=1031 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:32:40.249301 systemd-networkd[1018]: eth0: Link UP Feb 12 19:32:40.249304 systemd-networkd[1018]: eth0: Gained carrier Feb 12 19:32:40.246000 audit[1031]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=562dbe9d07d0 a1=32194 a2=7f80e17b3bc5 a3=5 items=108 ppid=1006 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:32:40.246000 audit: CWD cwd="/" Feb 12 19:32:40.246000 audit: PATH item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=1 name=(null) inode=12783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=2 name=(null) inode=12783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=3 name=(null) inode=12784 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=4 name=(null) inode=12783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=5 name=(null) inode=12785 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=6 name=(null) inode=12783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=7 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=8 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=9 name=(null) inode=12787 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=10 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=11 name=(null) inode=12788 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=12 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=13 name=(null) inode=12789 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=14 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=15 name=(null) inode=12790 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=16 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=17 name=(null) inode=12791 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=18 name=(null) inode=12783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=19 name=(null) inode=12792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=20 name=(null) inode=12792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=21 name=(null) inode=12793 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=22 name=(null) inode=12792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=23 name=(null) inode=12794 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=24 name=(null) inode=12792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=25 name=(null) inode=12795 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=26 name=(null) inode=12792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=27 name=(null) inode=12796 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=28 name=(null) inode=12792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=29 name=(null) inode=12797 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=30 name=(null) inode=12783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=31 name=(null) inode=12798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=32 name=(null) inode=12798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
19:32:40.246000 audit: PATH item=33 name=(null) inode=12799 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=34 name=(null) inode=12798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=35 name=(null) inode=12800 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=36 name=(null) inode=12798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=37 name=(null) inode=12801 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=38 name=(null) inode=12798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=39 name=(null) inode=12802 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=40 name=(null) inode=12798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=41 name=(null) inode=12803 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=42 
name=(null) inode=12783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=43 name=(null) inode=12804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=44 name=(null) inode=12804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=45 name=(null) inode=12805 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=46 name=(null) inode=12804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=47 name=(null) inode=12806 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=48 name=(null) inode=12804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=49 name=(null) inode=12807 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=50 name=(null) inode=12804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=51 name=(null) inode=12808 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=52 name=(null) inode=12804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=53 name=(null) inode=12809 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=54 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=55 name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=56 name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=57 name=(null) inode=12811 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=58 name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=59 name=(null) inode=12812 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=60 name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=61 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=62 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=63 name=(null) inode=12814 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=64 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=65 name=(null) inode=12815 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=66 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=67 name=(null) inode=12816 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=68 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=69 name=(null) inode=12817 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=70 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=71 name=(null) inode=12818 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=72 name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=73 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=74 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=75 name=(null) inode=12820 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=76 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=77 name=(null) inode=12821 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=78 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=79 name=(null) inode=12822 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=80 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=81 name=(null) inode=12823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=82 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=83 name=(null) inode=12824 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=84 name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=85 name=(null) inode=12825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=86 name=(null) inode=12825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=87 name=(null) inode=12826 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
19:32:40.246000 audit: PATH item=88 name=(null) inode=12825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=89 name=(null) inode=12827 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=90 name=(null) inode=12825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=91 name=(null) inode=12828 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=92 name=(null) inode=12825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=93 name=(null) inode=12829 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=94 name=(null) inode=12825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=95 name=(null) inode=12830 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=96 name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=97 
name=(null) inode=12831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=98 name=(null) inode=12831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=99 name=(null) inode=12832 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=100 name=(null) inode=12831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=101 name=(null) inode=12833 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=102 name=(null) inode=12831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=103 name=(null) inode=12834 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=104 name=(null) inode=12831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=105 name=(null) inode=12835 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=106 name=(null) inode=12831 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PATH item=107 name=(null) inode=12836 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:32:40.246000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 19:32:40.262271 systemd-networkd[1018]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 19:32:40.272204 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 19:32:40.275147 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 19:32:40.277150 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Feb 12 19:32:40.324910 kernel: kvm: Nested Virtualization enabled Feb 12 19:32:40.324994 kernel: SVM: kvm: Nested Paging enabled Feb 12 19:32:40.325024 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 12 19:32:40.325037 kernel: SVM: Virtual GIF supported Feb 12 19:32:40.339144 kernel: EDAC MC: Ver: 3.0.0 Feb 12 19:32:40.360587 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:32:40.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:40.362836 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:32:40.369605 lvm[1042]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:32:40.393941 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:32:40.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:32:40.395066 systemd[1]: Reached target cryptsetup.target. Feb 12 19:32:40.396766 systemd[1]: Starting lvm2-activation.service... Feb 12 19:32:40.399653 lvm[1043]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:32:40.427865 systemd[1]: Finished lvm2-activation.service. Feb 12 19:32:40.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:40.428671 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:32:40.429311 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:32:40.429332 systemd[1]: Reached target local-fs.target. Feb 12 19:32:40.429916 systemd[1]: Reached target machines.target. Feb 12 19:32:40.431600 systemd[1]: Starting ldconfig.service... Feb 12 19:32:40.432394 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 19:32:40.432437 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:32:40.433239 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:32:40.434754 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:32:40.436392 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:32:40.437870 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:32:40.437910 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:32:40.439253 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Feb 12 19:32:40.441686 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1045 (bootctl) Feb 12 19:32:40.442676 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:32:40.443944 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:32:40.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:40.455689 systemd-tmpfiles[1048]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:32:40.457595 systemd-tmpfiles[1048]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:32:40.459868 systemd-tmpfiles[1048]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:32:40.623687 systemd-fsck[1053]: fsck.fat 4.2 (2021-01-31) Feb 12 19:32:40.623687 systemd-fsck[1053]: /dev/vda1: 790 files, 115362/258078 clusters Feb 12 19:32:40.625369 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:32:40.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:40.627946 systemd[1]: Mounting boot.mount... Feb 12 19:32:40.664729 systemd[1]: Mounted boot.mount. Feb 12 19:32:40.676021 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:32:40.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:32:40.718188 ldconfig[1044]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:32:40.722818 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:32:40.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:40.724801 systemd[1]: Starting audit-rules.service... Feb 12 19:32:40.726205 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:32:40.727700 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:32:40.729000 audit: BPF prog-id=27 op=LOAD Feb 12 19:32:40.730049 systemd[1]: Starting systemd-resolved.service... Feb 12 19:32:40.732000 audit: BPF prog-id=28 op=LOAD Feb 12 19:32:40.732719 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:32:40.733973 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:32:40.735038 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:32:40.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:40.735859 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 19:32:40.740000 audit[1067]: SYSTEM_BOOT pid=1067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:32:40.742194 systemd[1]: Finished systemd-update-utmp.service. 
Feb 12 19:32:40.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:40.765448 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:32:40.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:32:40.856000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:32:40.856000 audit[1076]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdc90575f0 a2=420 a3=0 items=0 ppid=1056 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:32:40.856000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:32:40.856860 augenrules[1076]: No rules Feb 12 19:32:40.857160 systemd[1]: Finished audit-rules.service. Feb 12 19:32:40.867748 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:32:40.868526 systemd[1]: Reached target time-set.target. Feb 12 19:32:40.868745 systemd-timesyncd[1066]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 12 19:32:40.868787 systemd-timesyncd[1066]: Initial clock synchronization to Mon 2024-02-12 19:32:41.195946 UTC. Feb 12 19:32:40.869950 systemd[1]: Finished ldconfig.service. Feb 12 19:32:40.872094 systemd[1]: Starting systemd-update-done.service... Feb 12 19:32:40.874526 systemd-resolved[1060]: Positive Trust Anchors: Feb 12 19:32:40.874539 systemd-resolved[1060]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:32:40.874568 systemd-resolved[1060]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:32:40.878447 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:32:40.878870 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:32:40.883120 systemd[1]: Finished systemd-update-done.service. Feb 12 19:32:40.884334 systemd-resolved[1060]: Defaulting to hostname 'linux'. Feb 12 19:32:40.885760 systemd[1]: Started systemd-resolved.service. Feb 12 19:32:40.886383 systemd[1]: Reached target network.target. Feb 12 19:32:40.886912 systemd[1]: Reached target nss-lookup.target. Feb 12 19:32:40.887472 systemd[1]: Reached target sysinit.target. Feb 12 19:32:40.888064 systemd[1]: Started motdgen.path. Feb 12 19:32:40.888548 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:32:40.889393 systemd[1]: Started logrotate.timer. Feb 12 19:32:40.889957 systemd[1]: Started mdadm.timer. Feb 12 19:32:40.890418 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:32:40.890995 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:32:40.891023 systemd[1]: Reached target paths.target. Feb 12 19:32:40.891514 systemd[1]: Reached target timers.target. Feb 12 19:32:40.892263 systemd[1]: Listening on dbus.socket. Feb 12 19:32:40.893565 systemd[1]: Starting docker.socket... Feb 12 19:32:40.895918 systemd[1]: Listening on sshd.socket. 
Feb 12 19:32:40.896517 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:32:40.896827 systemd[1]: Listening on docker.socket. Feb 12 19:32:40.897400 systemd[1]: Reached target sockets.target. Feb 12 19:32:40.897923 systemd[1]: Reached target basic.target. Feb 12 19:32:40.898456 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:32:40.898478 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:32:40.899242 systemd[1]: Starting containerd.service... Feb 12 19:32:40.900394 systemd[1]: Starting dbus.service... Feb 12 19:32:40.901538 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:32:40.902866 systemd[1]: Starting extend-filesystems.service... Feb 12 19:32:40.903500 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:32:40.904338 systemd[1]: Starting motdgen.service... Feb 12 19:32:40.906008 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:32:40.907770 jq[1089]: false Feb 12 19:32:40.907430 systemd[1]: Starting prepare-critools.service... Feb 12 19:32:40.908761 systemd[1]: Starting prepare-helm.service... Feb 12 19:32:40.910224 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:32:40.911715 systemd[1]: Starting sshd-keygen.service... Feb 12 19:32:40.914536 systemd[1]: Starting systemd-logind.service... Feb 12 19:32:40.916204 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:32:40.916257 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Feb 12 19:32:40.916563 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 19:32:40.918295 systemd[1]: Starting update-engine.service... Feb 12 19:32:40.920689 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:32:40.921255 dbus-daemon[1088]: [system] SELinux support is enabled Feb 12 19:32:40.922438 systemd[1]: Started dbus.service. Feb 12 19:32:40.924260 jq[1108]: true Feb 12 19:32:40.925443 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:32:40.926149 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 19:32:40.926378 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:32:40.926601 extend-filesystems[1090]: Found sr0 Feb 12 19:32:40.926601 extend-filesystems[1090]: Found vda Feb 12 19:32:40.928449 extend-filesystems[1090]: Found vda1 Feb 12 19:32:40.928449 extend-filesystems[1090]: Found vda2 Feb 12 19:32:40.928449 extend-filesystems[1090]: Found vda3 Feb 12 19:32:40.928449 extend-filesystems[1090]: Found usr Feb 12 19:32:40.928449 extend-filesystems[1090]: Found vda4 Feb 12 19:32:40.928449 extend-filesystems[1090]: Found vda6 Feb 12 19:32:40.928449 extend-filesystems[1090]: Found vda7 Feb 12 19:32:40.928449 extend-filesystems[1090]: Found vda9 Feb 12 19:32:40.928449 extend-filesystems[1090]: Checking size of /dev/vda9 Feb 12 19:32:40.926767 systemd[1]: Finished motdgen.service. Feb 12 19:32:40.934652 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:32:40.934774 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:32:40.944662 tar[1115]: linux-amd64/helm Feb 12 19:32:40.937235 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Feb 12 19:32:40.944945 tar[1113]: ./ Feb 12 19:32:40.944945 tar[1113]: ./loopback Feb 12 19:32:40.945095 jq[1118]: true Feb 12 19:32:40.937258 systemd[1]: Reached target system-config.target. Feb 12 19:32:40.937955 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:32:40.945442 tar[1114]: crictl Feb 12 19:32:40.937974 systemd[1]: Reached target user-config.target. Feb 12 19:32:40.990545 env[1119]: time="2024-02-12T19:32:40.990504654Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:32:40.998352 update_engine[1106]: I0212 19:32:40.998111 1106 main.cc:92] Flatcar Update Engine starting Feb 12 19:32:41.001605 systemd[1]: Started update-engine.service. Feb 12 19:32:41.001933 update_engine[1106]: I0212 19:32:41.001907 1106 update_check_scheduler.cc:74] Next update check in 5m54s Feb 12 19:32:41.002436 systemd-logind[1103]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 19:32:41.002823 extend-filesystems[1090]: Resized partition /dev/vda9 Feb 12 19:32:41.023913 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 12 19:32:41.036808 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 12 19:32:41.036903 extend-filesystems[1147]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 19:32:41.058940 tar[1113]: ./bandwidth Feb 12 19:32:41.003661 systemd-logind[1103]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 19:32:41.059024 env[1119]: time="2024-02-12T19:32:41.032651138Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 19:32:41.059024 env[1119]: time="2024-02-12T19:32:41.048837425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 12 19:32:41.059024 env[1119]: time="2024-02-12T19:32:41.050016923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:32:41.059024 env[1119]: time="2024-02-12T19:32:41.050039512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:32:41.059024 env[1119]: time="2024-02-12T19:32:41.050233140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:32:41.059024 env[1119]: time="2024-02-12T19:32:41.050248293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 19:32:41.059024 env[1119]: time="2024-02-12T19:32:41.050259744Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:32:41.059024 env[1119]: time="2024-02-12T19:32:41.050268035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:32:41.059024 env[1119]: time="2024-02-12T19:32:41.050326684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:32:41.059024 env[1119]: time="2024-02-12T19:32:41.050515286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 12 19:32:41.059312 extend-filesystems[1147]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 19:32:41.059312 extend-filesystems[1147]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 19:32:41.059312 extend-filesystems[1147]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 12 19:32:41.062436 bash[1142]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:32:41.004461 systemd-logind[1103]: New seat seat0. Feb 12 19:32:41.062546 env[1119]: time="2024-02-12T19:32:41.050618906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:32:41.062546 env[1119]: time="2024-02-12T19:32:41.050632890Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 19:32:41.062546 env[1119]: time="2024-02-12T19:32:41.050673239Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:32:41.062546 env[1119]: time="2024-02-12T19:32:41.050685898Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:32:41.062630 extend-filesystems[1090]: Resized filesystem in /dev/vda9 Feb 12 19:32:41.004524 systemd[1]: Started locksmithd.service. Feb 12 19:32:41.028103 systemd[1]: Started systemd-logind.service. Feb 12 19:32:41.041547 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:32:41.041696 systemd[1]: Finished extend-filesystems.service. Feb 12 19:32:41.054669 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 19:32:41.069959 env[1119]: time="2024-02-12T19:32:41.068280915Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 12 19:32:41.069959 env[1119]: time="2024-02-12T19:32:41.068362280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 19:32:41.069959 env[1119]: time="2024-02-12T19:32:41.068378423Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:32:41.069959 env[1119]: time="2024-02-12T19:32:41.068437689Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:32:41.069959 env[1119]: time="2024-02-12T19:32:41.068452758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:32:41.069959 env[1119]: time="2024-02-12T19:32:41.068464866Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:32:41.069959 env[1119]: time="2024-02-12T19:32:41.068478558Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:32:41.069959 env[1119]: time="2024-02-12T19:32:41.068550537Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:32:41.069959 env[1119]: time="2024-02-12T19:32:41.068575566Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:32:41.069959 env[1119]: time="2024-02-12T19:32:41.068590051Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:32:41.069959 env[1119]: time="2024-02-12T19:32:41.068602451Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:32:41.069959 env[1119]: time="2024-02-12T19:32:41.068614131Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Feb 12 19:32:41.069959 env[1119]: time="2024-02-12T19:32:41.068785578Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:32:41.069959 env[1119]: time="2024-02-12T19:32:41.068869924Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:32:41.070263 env[1119]: time="2024-02-12T19:32:41.069221264Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:32:41.070263 env[1119]: time="2024-02-12T19:32:41.069250527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 19:32:41.070263 env[1119]: time="2024-02-12T19:32:41.069262395Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 19:32:41.070263 env[1119]: time="2024-02-12T19:32:41.069310147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:32:41.070263 env[1119]: time="2024-02-12T19:32:41.069321974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:32:41.070263 env[1119]: time="2024-02-12T19:32:41.069333696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 19:32:41.070263 env[1119]: time="2024-02-12T19:32:41.069344562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:32:41.070263 env[1119]: time="2024-02-12T19:32:41.069355491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:32:41.070263 env[1119]: time="2024-02-12T19:32:41.069367192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Feb 12 19:32:41.070263 env[1119]: time="2024-02-12T19:32:41.069378653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 19:32:41.070263 env[1119]: time="2024-02-12T19:32:41.069388623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:32:41.070263 env[1119]: time="2024-02-12T19:32:41.069401763Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:32:41.070263 env[1119]: time="2024-02-12T19:32:41.069510742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 19:32:41.070263 env[1119]: time="2024-02-12T19:32:41.069525207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 19:32:41.070263 env[1119]: time="2024-02-12T19:32:41.069535979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:32:41.070599 env[1119]: time="2024-02-12T19:32:41.069546377Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:32:41.070599 env[1119]: time="2024-02-12T19:32:41.069559777Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:32:41.070599 env[1119]: time="2024-02-12T19:32:41.069570352Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:32:41.070599 env[1119]: time="2024-02-12T19:32:41.069588216Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:32:41.070599 env[1119]: time="2024-02-12T19:32:41.069623819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 12 19:32:41.070703 env[1119]: time="2024-02-12T19:32:41.069846096Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:32:41.070703 env[1119]: time="2024-02-12T19:32:41.069904433Z" level=info msg="Connect containerd service" Feb 12 19:32:41.070703 env[1119]: time="2024-02-12T19:32:41.069949985Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:32:41.070703 env[1119]: time="2024-02-12T19:32:41.070485704Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:32:41.070703 env[1119]: time="2024-02-12T19:32:41.070679197Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 19:32:41.073782 env[1119]: time="2024-02-12T19:32:41.070710795Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 19:32:41.073782 env[1119]: time="2024-02-12T19:32:41.070763293Z" level=info msg="containerd successfully booted in 0.084693s" Feb 12 19:32:41.070807 systemd[1]: Started containerd.service. 
Feb 12 19:32:41.078186 env[1119]: time="2024-02-12T19:32:41.077802645Z" level=info msg="Start subscribing containerd event" Feb 12 19:32:41.078186 env[1119]: time="2024-02-12T19:32:41.077887721Z" level=info msg="Start recovering state" Feb 12 19:32:41.078186 env[1119]: time="2024-02-12T19:32:41.077954892Z" level=info msg="Start event monitor" Feb 12 19:32:41.078186 env[1119]: time="2024-02-12T19:32:41.077968606Z" level=info msg="Start snapshots syncer" Feb 12 19:32:41.078186 env[1119]: time="2024-02-12T19:32:41.077981986Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:32:41.078186 env[1119]: time="2024-02-12T19:32:41.078011854Z" level=info msg="Start streaming server" Feb 12 19:32:41.101000 tar[1113]: ./ptp Feb 12 19:32:41.142019 tar[1113]: ./vlan Feb 12 19:32:41.182839 tar[1113]: ./host-device Feb 12 19:32:41.226376 tar[1113]: ./tuning Feb 12 19:32:41.316081 tar[1113]: ./vrf Feb 12 19:32:41.355163 tar[1113]: ./sbr Feb 12 19:32:41.392207 tar[1113]: ./tap Feb 12 19:32:41.435833 tar[1113]: ./dhcp Feb 12 19:32:41.501954 tar[1115]: linux-amd64/LICENSE Feb 12 19:32:41.502228 tar[1115]: linux-amd64/README.md Feb 12 19:32:41.506527 systemd[1]: Finished prepare-helm.service. Feb 12 19:32:41.515649 locksmithd[1148]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:32:41.517878 systemd[1]: Finished prepare-critools.service. Feb 12 19:32:41.533991 tar[1113]: ./static Feb 12 19:32:41.555873 tar[1113]: ./firewall Feb 12 19:32:41.589441 tar[1113]: ./macvlan Feb 12 19:32:41.620119 tar[1113]: ./dummy Feb 12 19:32:41.649678 sshd_keygen[1111]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:32:41.650002 tar[1113]: ./bridge Feb 12 19:32:41.668357 systemd[1]: Finished sshd-keygen.service. Feb 12 19:32:41.670452 systemd[1]: Starting issuegen.service... Feb 12 19:32:41.675836 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:32:41.676004 systemd[1]: Finished issuegen.service. 
Feb 12 19:32:41.678063 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:32:41.683621 tar[1113]: ./ipvlan Feb 12 19:32:41.684090 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:32:41.685867 systemd[1]: Started getty@tty1.service. Feb 12 19:32:41.687372 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 19:32:41.688282 systemd[1]: Reached target getty.target. Feb 12 19:32:41.714253 tar[1113]: ./portmap Feb 12 19:32:41.742932 tar[1113]: ./host-local Feb 12 19:32:41.775782 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 19:32:41.776745 systemd[1]: Reached target multi-user.target. Feb 12 19:32:41.778375 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:32:41.784102 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:32:41.784233 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:32:41.785043 systemd[1]: Startup finished in 551ms (kernel) + 6.214s (initrd) + 4.893s (userspace) = 11.659s. Feb 12 19:32:42.159574 systemd-networkd[1018]: eth0: Gained IPv6LL Feb 12 19:32:45.887430 systemd[1]: Created slice system-sshd.slice. Feb 12 19:32:45.888415 systemd[1]: Started sshd@0-10.0.0.38:22-10.0.0.1:51948.service. Feb 12 19:32:45.926549 sshd[1177]: Accepted publickey for core from 10.0.0.1 port 51948 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:32:45.927693 sshd[1177]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:32:45.934325 systemd[1]: Created slice user-500.slice. Feb 12 19:32:45.935293 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:32:45.936847 systemd-logind[1103]: New session 1 of user core. Feb 12 19:32:45.942618 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:32:45.943811 systemd[1]: Starting user@500.service... 
Feb 12 19:32:45.945979 (systemd)[1180]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:32:46.008666 systemd[1180]: Queued start job for default target default.target. Feb 12 19:32:46.009108 systemd[1180]: Reached target paths.target. Feb 12 19:32:46.009130 systemd[1180]: Reached target sockets.target. Feb 12 19:32:46.009144 systemd[1180]: Reached target timers.target. Feb 12 19:32:46.009169 systemd[1180]: Reached target basic.target. Feb 12 19:32:46.009203 systemd[1180]: Reached target default.target. Feb 12 19:32:46.009227 systemd[1180]: Startup finished in 58ms. Feb 12 19:32:46.009274 systemd[1]: Started user@500.service. Feb 12 19:32:46.010118 systemd[1]: Started session-1.scope. Feb 12 19:32:46.061338 systemd[1]: Started sshd@1-10.0.0.38:22-10.0.0.1:51956.service. Feb 12 19:32:46.097933 sshd[1190]: Accepted publickey for core from 10.0.0.1 port 51956 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:32:46.098908 sshd[1190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:32:46.102168 systemd-logind[1103]: New session 2 of user core. Feb 12 19:32:46.102939 systemd[1]: Started session-2.scope. Feb 12 19:32:46.373613 sshd[1190]: pam_unix(sshd:session): session closed for user core Feb 12 19:32:46.375980 systemd[1]: sshd@1-10.0.0.38:22-10.0.0.1:51956.service: Deactivated successfully. Feb 12 19:32:46.376444 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 19:32:46.376859 systemd-logind[1103]: Session 2 logged out. Waiting for processes to exit. Feb 12 19:32:46.377789 systemd[1]: Started sshd@2-10.0.0.38:22-10.0.0.1:51966.service. Feb 12 19:32:46.378405 systemd-logind[1103]: Removed session 2. 
Feb 12 19:32:46.412903 sshd[1196]: Accepted publickey for core from 10.0.0.1 port 51966 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:32:46.414058 sshd[1196]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:32:46.417915 systemd-logind[1103]: New session 3 of user core. Feb 12 19:32:46.418929 systemd[1]: Started session-3.scope. Feb 12 19:32:46.470188 sshd[1196]: pam_unix(sshd:session): session closed for user core Feb 12 19:32:46.472907 systemd[1]: sshd@2-10.0.0.38:22-10.0.0.1:51966.service: Deactivated successfully. Feb 12 19:32:46.473517 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 19:32:46.474050 systemd-logind[1103]: Session 3 logged out. Waiting for processes to exit. Feb 12 19:32:46.475043 systemd[1]: Started sshd@3-10.0.0.38:22-10.0.0.1:51982.service. Feb 12 19:32:46.475807 systemd-logind[1103]: Removed session 3. Feb 12 19:32:46.510538 sshd[1203]: Accepted publickey for core from 10.0.0.1 port 51982 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:32:46.511554 sshd[1203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:32:46.514728 systemd-logind[1103]: New session 4 of user core. Feb 12 19:32:46.515625 systemd[1]: Started session-4.scope. Feb 12 19:32:46.568067 sshd[1203]: pam_unix(sshd:session): session closed for user core Feb 12 19:32:46.570652 systemd[1]: sshd@3-10.0.0.38:22-10.0.0.1:51982.service: Deactivated successfully. Feb 12 19:32:46.571139 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:32:46.571605 systemd-logind[1103]: Session 4 logged out. Waiting for processes to exit. Feb 12 19:32:46.572521 systemd[1]: Started sshd@4-10.0.0.38:22-10.0.0.1:51986.service. Feb 12 19:32:46.573041 systemd-logind[1103]: Removed session 4. 
Feb 12 19:32:46.607451 sshd[1209]: Accepted publickey for core from 10.0.0.1 port 51986 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:32:46.608282 sshd[1209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:32:46.610961 systemd-logind[1103]: New session 5 of user core. Feb 12 19:32:46.611615 systemd[1]: Started session-5.scope. Feb 12 19:32:46.665436 sudo[1212]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:32:46.665600 sudo[1212]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:32:47.189463 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:32:47.193937 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:32:47.194180 systemd[1]: Reached target network-online.target. Feb 12 19:32:47.195205 systemd[1]: Starting docker.service... Feb 12 19:32:47.223605 env[1229]: time="2024-02-12T19:32:47.223555355Z" level=info msg="Starting up" Feb 12 19:32:47.224704 env[1229]: time="2024-02-12T19:32:47.224669573Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:32:47.224704 env[1229]: time="2024-02-12T19:32:47.224693998Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:32:47.224775 env[1229]: time="2024-02-12T19:32:47.224714536Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:32:47.224775 env[1229]: time="2024-02-12T19:32:47.224724668Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:32:47.226018 env[1229]: time="2024-02-12T19:32:47.225989427Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:32:47.226018 env[1229]: time="2024-02-12T19:32:47.226008068Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:32:47.226127 env[1229]: 
time="2024-02-12T19:32:47.226023138Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:32:47.226127 env[1229]: time="2024-02-12T19:32:47.226039676Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:32:47.458790 env[1229]: time="2024-02-12T19:32:47.458685994Z" level=info msg="Loading containers: start." Feb 12 19:32:47.542168 kernel: Initializing XFRM netlink socket Feb 12 19:32:47.567346 env[1229]: time="2024-02-12T19:32:47.567302943Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 12 19:32:47.609833 systemd-networkd[1018]: docker0: Link UP Feb 12 19:32:47.618407 env[1229]: time="2024-02-12T19:32:47.618381589Z" level=info msg="Loading containers: done." Feb 12 19:32:47.630623 env[1229]: time="2024-02-12T19:32:47.630583867Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 19:32:47.630765 env[1229]: time="2024-02-12T19:32:47.630742479Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 19:32:47.630826 env[1229]: time="2024-02-12T19:32:47.630809918Z" level=info msg="Daemon has completed initialization" Feb 12 19:32:47.644979 systemd[1]: Started docker.service. Feb 12 19:32:47.648444 env[1229]: time="2024-02-12T19:32:47.648405292Z" level=info msg="API listen on /run/docker.sock" Feb 12 19:32:47.664514 systemd[1]: Reloading. 
Feb 12 19:32:47.719707 /usr/lib/systemd/system-generators/torcx-generator[1373]: time="2024-02-12T19:32:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:32:47.720044 /usr/lib/systemd/system-generators/torcx-generator[1373]: time="2024-02-12T19:32:47Z" level=info msg="torcx already run" Feb 12 19:32:47.774871 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:32:47.774889 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:32:47.792197 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:32:47.861635 systemd[1]: Started kubelet.service. Feb 12 19:32:47.908701 kubelet[1413]: E0212 19:32:47.908647 1413 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 12 19:32:47.910329 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:32:47.910473 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 12 19:32:48.509193 env[1119]: time="2024-02-12T19:32:48.509121239Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\"" Feb 12 19:32:49.202937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4142066433.mount: Deactivated successfully. Feb 12 19:32:51.353155 env[1119]: time="2024-02-12T19:32:51.353098034Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:51.355490 env[1119]: time="2024-02-12T19:32:51.355437117Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:51.357081 env[1119]: time="2024-02-12T19:32:51.357049009Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:51.358590 env[1119]: time="2024-02-12T19:32:51.358541863Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:51.359049 env[1119]: time="2024-02-12T19:32:51.359018735Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5\"" Feb 12 19:32:51.367159 env[1119]: time="2024-02-12T19:32:51.367110995Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\"" Feb 12 19:32:53.978955 env[1119]: time="2024-02-12T19:32:53.978885997Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 19:32:53.982042 env[1119]: time="2024-02-12T19:32:53.981994925Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:53.983608 env[1119]: time="2024-02-12T19:32:53.983558703Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:53.985387 env[1119]: time="2024-02-12T19:32:53.985359094Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:53.986069 env[1119]: time="2024-02-12T19:32:53.986037328Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0\"" Feb 12 19:32:54.002634 env[1119]: time="2024-02-12T19:32:54.002590211Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\"" Feb 12 19:32:55.736363 env[1119]: time="2024-02-12T19:32:55.736306884Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:55.738991 env[1119]: time="2024-02-12T19:32:55.738964697Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:55.741839 env[1119]: time="2024-02-12T19:32:55.741816165Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:55.743816 env[1119]: time="2024-02-12T19:32:55.743758994Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:55.744298 env[1119]: time="2024-02-12T19:32:55.744275656Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975\"" Feb 12 19:32:55.753463 env[1119]: time="2024-02-12T19:32:55.753431340Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 12 19:32:57.962702 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 19:32:57.962869 systemd[1]: Stopped kubelet.service. Feb 12 19:32:57.964252 systemd[1]: Started kubelet.service. Feb 12 19:32:58.022396 kubelet[1455]: E0212 19:32:58.022331 1455 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 12 19:32:58.026776 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:32:58.026924 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:32:58.328890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2232698346.mount: Deactivated successfully. 
Feb 12 19:32:58.863272 env[1119]: time="2024-02-12T19:32:58.863215039Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:58.864961 env[1119]: time="2024-02-12T19:32:58.864911033Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:58.866081 env[1119]: time="2024-02-12T19:32:58.866051441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:58.867412 env[1119]: time="2024-02-12T19:32:58.867387247Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:58.867649 env[1119]: time="2024-02-12T19:32:58.867620675Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 12 19:32:58.875463 env[1119]: time="2024-02-12T19:32:58.875423872Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 19:32:59.781520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2143855116.mount: Deactivated successfully. 
Feb 12 19:32:59.787686 env[1119]: time="2024-02-12T19:32:59.787636878Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:59.789619 env[1119]: time="2024-02-12T19:32:59.789570856Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:59.791145 env[1119]: time="2024-02-12T19:32:59.791109167Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:59.792681 env[1119]: time="2024-02-12T19:32:59.792621062Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:32:59.792981 env[1119]: time="2024-02-12T19:32:59.792949274Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 12 19:32:59.801305 env[1119]: time="2024-02-12T19:32:59.801280380Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Feb 12 19:33:00.538736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3225011750.mount: Deactivated successfully. 
Feb 12 19:33:06.914499 env[1119]: time="2024-02-12T19:33:06.914415988Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:06.916607 env[1119]: time="2024-02-12T19:33:06.916580740Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:06.918233 env[1119]: time="2024-02-12T19:33:06.918186484Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:06.920161 env[1119]: time="2024-02-12T19:33:06.920119647Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:06.920871 env[1119]: time="2024-02-12T19:33:06.920841937Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\"" Feb 12 19:33:06.945989 env[1119]: time="2024-02-12T19:33:06.945939106Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 12 19:33:08.127774 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 12 19:33:08.127975 systemd[1]: Stopped kubelet.service. Feb 12 19:33:08.130236 systemd[1]: Started kubelet.service. 
Feb 12 19:33:08.225759 kubelet[1481]: E0212 19:33:08.225704 1481 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 12 19:33:08.227622 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:33:08.227733 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:33:08.349806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233930180.mount: Deactivated successfully. Feb 12 19:33:08.932309 env[1119]: time="2024-02-12T19:33:08.932248746Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:08.933926 env[1119]: time="2024-02-12T19:33:08.933899652Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:08.935454 env[1119]: time="2024-02-12T19:33:08.935405444Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:08.936817 env[1119]: time="2024-02-12T19:33:08.936796173Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:08.937313 env[1119]: time="2024-02-12T19:33:08.937274711Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference 
\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 12 19:33:11.398758 systemd[1]: Stopped kubelet.service. Feb 12 19:33:11.412663 systemd[1]: Reloading. Feb 12 19:33:11.469737 /usr/lib/systemd/system-generators/torcx-generator[1588]: time="2024-02-12T19:33:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:33:11.469767 /usr/lib/systemd/system-generators/torcx-generator[1588]: time="2024-02-12T19:33:11Z" level=info msg="torcx already run" Feb 12 19:33:11.530171 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:33:11.530187 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:33:11.546651 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:33:11.619397 systemd[1]: Started kubelet.service. Feb 12 19:33:11.663949 kubelet[1626]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:33:11.663949 kubelet[1626]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 12 19:33:11.663949 kubelet[1626]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:33:11.663949 kubelet[1626]: I0212 19:33:11.663895 1626 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:33:12.014902 kubelet[1626]: I0212 19:33:12.014781 1626 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 12 19:33:12.014902 kubelet[1626]: I0212 19:33:12.014828 1626 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:33:12.015087 kubelet[1626]: I0212 19:33:12.015067 1626 server.go:837] "Client rotation is on, will bootstrap in background" Feb 12 19:33:12.018322 kubelet[1626]: I0212 19:33:12.018284 1626 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:33:12.019258 kubelet[1626]: E0212 19:33:12.019238 1626 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.38:6443: connect: connection refused Feb 12 19:33:12.022005 kubelet[1626]: I0212 19:33:12.021980 1626 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:33:12.022208 kubelet[1626]: I0212 19:33:12.022188 1626 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:33:12.022303 kubelet[1626]: I0212 19:33:12.022281 1626 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:33:12.022410 kubelet[1626]: I0212 19:33:12.022308 1626 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:33:12.022410 kubelet[1626]: I0212 19:33:12.022320 1626 container_manager_linux.go:302] "Creating device plugin manager" Feb 12 19:33:12.022465 kubelet[1626]: I0212 19:33:12.022425 1626 state_mem.go:36] "Initialized new in-memory state store" Feb 12 
19:33:12.028527 kubelet[1626]: I0212 19:33:12.028511 1626 kubelet.go:405] "Attempting to sync node with API server" Feb 12 19:33:12.028570 kubelet[1626]: I0212 19:33:12.028532 1626 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:33:12.028570 kubelet[1626]: I0212 19:33:12.028551 1626 kubelet.go:309] "Adding apiserver pod source" Feb 12 19:33:12.028570 kubelet[1626]: I0212 19:33:12.028564 1626 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:33:12.029099 kubelet[1626]: I0212 19:33:12.029084 1626 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:33:12.029099 kubelet[1626]: W0212 19:33:12.029078 1626 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 12 19:33:12.029210 kubelet[1626]: E0212 19:33:12.029126 1626 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 12 19:33:12.029440 kubelet[1626]: W0212 19:33:12.029411 1626 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 12 19:33:12.029651 kubelet[1626]: W0212 19:33:12.029590 1626 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 12 19:33:12.029651 kubelet[1626]: E0212 19:33:12.029648 1626 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 12 19:33:12.029837 kubelet[1626]: I0212 19:33:12.029814 1626 server.go:1168] "Started kubelet" Feb 12 19:33:12.029926 kubelet[1626]: I0212 19:33:12.029903 1626 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:33:12.030151 kubelet[1626]: I0212 19:33:12.030120 1626 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 19:33:12.030733 kubelet[1626]: I0212 19:33:12.030716 1626 server.go:461] "Adding debug handlers to kubelet server" Feb 12 19:33:12.031677 kubelet[1626]: E0212 19:33:12.031664 1626 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:33:12.031760 kubelet[1626]: E0212 19:33:12.031746 1626 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:33:12.031912 kubelet[1626]: E0212 19:33:12.031803 1626 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b3347c44494f00", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 33, 12, 29794048, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 33, 12, 29794048, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.38:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.38:6443: connect: connection refused'(may retry after sleeping) Feb 12 19:33:12.032506 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 12 19:33:12.032599 kubelet[1626]: I0212 19:33:12.032576 1626 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:33:12.032765 kubelet[1626]: I0212 19:33:12.032730 1626 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 12 19:33:12.032875 kubelet[1626]: I0212 19:33:12.032853 1626 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 12 19:33:12.033182 kubelet[1626]: W0212 19:33:12.033116 1626 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 12 19:33:12.033182 kubelet[1626]: E0212 19:33:12.033182 1626 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 12 19:33:12.033273 kubelet[1626]: E0212 19:33:12.033227 1626 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 19:33:12.033396 kubelet[1626]: E0212 19:33:12.033373 1626 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="200ms" Feb 12 19:33:12.053985 kubelet[1626]: I0212 19:33:12.053959 1626 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 19:33:12.054983 kubelet[1626]: I0212 19:33:12.054951 1626 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 19:33:12.054983 kubelet[1626]: I0212 19:33:12.054983 1626 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 12 19:33:12.055077 kubelet[1626]: I0212 19:33:12.055015 1626 kubelet.go:2257] "Starting kubelet main sync loop" Feb 12 19:33:12.055077 kubelet[1626]: E0212 19:33:12.055066 1626 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 19:33:12.060195 kubelet[1626]: W0212 19:33:12.060091 1626 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 12 19:33:12.060195 kubelet[1626]: E0212 19:33:12.060145 1626 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 12 19:33:12.060918 kubelet[1626]: I0212 19:33:12.060900 1626 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:33:12.060918 kubelet[1626]: I0212 19:33:12.060918 1626 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:33:12.060986 kubelet[1626]: I0212 19:33:12.060930 1626 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:33:12.063881 kubelet[1626]: I0212 19:33:12.063852 1626 policy_none.go:49] "None policy: Start" Feb 12 19:33:12.064301 kubelet[1626]: I0212 19:33:12.064288 1626 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:33:12.064394 kubelet[1626]: I0212 19:33:12.064369 1626 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:33:12.068762 systemd[1]: Created slice kubepods.slice. 
Feb 12 19:33:12.071906 systemd[1]: Created slice kubepods-burstable.slice. Feb 12 19:33:12.074112 systemd[1]: Created slice kubepods-besteffort.slice. Feb 12 19:33:12.083692 kubelet[1626]: I0212 19:33:12.083669 1626 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:33:12.083992 kubelet[1626]: I0212 19:33:12.083878 1626 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:33:12.084791 kubelet[1626]: E0212 19:33:12.084739 1626 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 12 19:33:12.134227 kubelet[1626]: I0212 19:33:12.134199 1626 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:33:12.134508 kubelet[1626]: E0212 19:33:12.134486 1626 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Feb 12 19:33:12.155602 kubelet[1626]: I0212 19:33:12.155566 1626 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:33:12.156354 kubelet[1626]: I0212 19:33:12.156297 1626 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:33:12.157519 kubelet[1626]: I0212 19:33:12.157502 1626 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:33:12.164992 systemd[1]: Created slice kubepods-burstable-pod134ec8d112799097378cb0af0ada69a6.slice. Feb 12 19:33:12.181094 systemd[1]: Created slice kubepods-burstable-pod7709ea05d7cdf82b0d7e594b61a10331.slice. Feb 12 19:33:12.188659 systemd[1]: Created slice kubepods-burstable-pod2b0e94b38682f4e439413801d3cc54db.slice. 
Feb 12 19:33:12.234309 kubelet[1626]: E0212 19:33:12.234286 1626 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="400ms" Feb 12 19:33:12.334662 kubelet[1626]: I0212 19:33:12.334547 1626 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b0e94b38682f4e439413801d3cc54db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2b0e94b38682f4e439413801d3cc54db\") " pod="kube-system/kube-scheduler-localhost" Feb 12 19:33:12.334662 kubelet[1626]: I0212 19:33:12.334589 1626 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/134ec8d112799097378cb0af0ada69a6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"134ec8d112799097378cb0af0ada69a6\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:33:12.334662 kubelet[1626]: I0212 19:33:12.334613 1626 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:33:12.334662 kubelet[1626]: I0212 19:33:12.334630 1626 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:33:12.335445 kubelet[1626]: I0212 19:33:12.334648 1626 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:33:12.335733 kubelet[1626]: I0212 19:33:12.335657 1626 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:33:12.335997 kubelet[1626]: I0212 19:33:12.335952 1626 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/134ec8d112799097378cb0af0ada69a6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"134ec8d112799097378cb0af0ada69a6\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:33:12.336061 kubelet[1626]: I0212 19:33:12.336026 1626 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/134ec8d112799097378cb0af0ada69a6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"134ec8d112799097378cb0af0ada69a6\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:33:12.336061 kubelet[1626]: I0212 19:33:12.336048 1626 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:33:12.336115 kubelet[1626]: I0212 19:33:12.336066 1626 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:33:12.336115 kubelet[1626]: E0212 19:33:12.336069 1626 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Feb 12 19:33:12.479847 kubelet[1626]: E0212 19:33:12.479797 1626 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:33:12.480573 env[1119]: time="2024-02-12T19:33:12.480537179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:134ec8d112799097378cb0af0ada69a6,Namespace:kube-system,Attempt:0,}" Feb 12 19:33:12.487753 kubelet[1626]: E0212 19:33:12.487730 1626 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:33:12.488183 env[1119]: time="2024-02-12T19:33:12.488147475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7709ea05d7cdf82b0d7e594b61a10331,Namespace:kube-system,Attempt:0,}" Feb 12 19:33:12.490376 kubelet[1626]: E0212 19:33:12.490348 1626 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:33:12.490796 env[1119]: time="2024-02-12T19:33:12.490759811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2b0e94b38682f4e439413801d3cc54db,Namespace:kube-system,Attempt:0,}" Feb 12 19:33:12.634909 kubelet[1626]: E0212 19:33:12.634876 1626 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="800ms" Feb 12 19:33:12.737125 kubelet[1626]: I0212 19:33:12.737063 1626 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:33:12.737623 kubelet[1626]: E0212 19:33:12.737395 1626 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Feb 12 19:33:13.296650 kubelet[1626]: W0212 19:33:13.296548 1626 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 12 19:33:13.296650 kubelet[1626]: E0212 19:33:13.296622 1626 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 12 19:33:13.425549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1619301754.mount: Deactivated successfully. 
Feb 12 19:33:13.426420 kubelet[1626]: W0212 19:33:13.426361 1626 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 12 19:33:13.426486 kubelet[1626]: E0212 19:33:13.426431 1626 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 12 19:33:13.430877 env[1119]: time="2024-02-12T19:33:13.430836706Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:13.432269 env[1119]: time="2024-02-12T19:33:13.432217294Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:13.433002 env[1119]: time="2024-02-12T19:33:13.432969972Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:13.433761 env[1119]: time="2024-02-12T19:33:13.433719082Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:13.436115 kubelet[1626]: E0212 19:33:13.436081 1626 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="1.6s" Feb 12 19:33:13.436198 env[1119]: 
time="2024-02-12T19:33:13.436154536Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:13.437426 env[1119]: time="2024-02-12T19:33:13.437397392Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:13.438728 env[1119]: time="2024-02-12T19:33:13.438691848Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:13.441301 env[1119]: time="2024-02-12T19:33:13.441279254Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:13.442591 env[1119]: time="2024-02-12T19:33:13.442557857Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:13.444073 env[1119]: time="2024-02-12T19:33:13.444033082Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:13.447777 env[1119]: time="2024-02-12T19:33:13.447735669Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:13.448338 env[1119]: time="2024-02-12T19:33:13.448316933Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:33:13.450682 kubelet[1626]: W0212 19:33:13.450624 1626 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 12 19:33:13.450682 kubelet[1626]: E0212 19:33:13.450682 1626 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 12 19:33:13.470743 env[1119]: time="2024-02-12T19:33:13.470096242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:33:13.470743 env[1119]: time="2024-02-12T19:33:13.470144934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:33:13.470743 env[1119]: time="2024-02-12T19:33:13.470154630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:33:13.470743 env[1119]: time="2024-02-12T19:33:13.470250028Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c99b0ccd76e2789982d390a62fe084dbf28f46d797473ba24e002497ef6d56f8 pid=1666 runtime=io.containerd.runc.v2 Feb 12 19:33:13.472952 env[1119]: time="2024-02-12T19:33:13.472890208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:33:13.473010 env[1119]: time="2024-02-12T19:33:13.472963356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:33:13.473010 env[1119]: time="2024-02-12T19:33:13.472994360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:33:13.473319 env[1119]: time="2024-02-12T19:33:13.473220021Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c20473290042c575687b0d25196a2dd5b614e5dd18f95865b6232b6801a867f pid=1674 runtime=io.containerd.runc.v2 Feb 12 19:33:13.482497 env[1119]: time="2024-02-12T19:33:13.481933759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:33:13.482497 env[1119]: time="2024-02-12T19:33:13.482007298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:33:13.482497 env[1119]: time="2024-02-12T19:33:13.482030230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:33:13.482497 env[1119]: time="2024-02-12T19:33:13.482154196Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba304006413dc5d6aef3addd8cbac001aae01cbc11dc3438aa946860622bfce4 pid=1708 runtime=io.containerd.runc.v2 Feb 12 19:33:13.482373 systemd[1]: Started cri-containerd-c99b0ccd76e2789982d390a62fe084dbf28f46d797473ba24e002497ef6d56f8.scope. Feb 12 19:33:13.519329 systemd[1]: Started cri-containerd-4c20473290042c575687b0d25196a2dd5b614e5dd18f95865b6232b6801a867f.scope. 
Feb 12 19:33:13.523570 systemd[1]: Started cri-containerd-ba304006413dc5d6aef3addd8cbac001aae01cbc11dc3438aa946860622bfce4.scope. Feb 12 19:33:13.539244 kubelet[1626]: I0212 19:33:13.539216 1626 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:33:13.539496 kubelet[1626]: E0212 19:33:13.539478 1626 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Feb 12 19:33:13.598814 env[1119]: time="2024-02-12T19:33:13.597943969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7709ea05d7cdf82b0d7e594b61a10331,Namespace:kube-system,Attempt:0,} returns sandbox id \"c99b0ccd76e2789982d390a62fe084dbf28f46d797473ba24e002497ef6d56f8\"" Feb 12 19:33:13.600267 kubelet[1626]: E0212 19:33:13.600078 1626 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:33:13.602843 env[1119]: time="2024-02-12T19:33:13.602818620Z" level=info msg="CreateContainer within sandbox \"c99b0ccd76e2789982d390a62fe084dbf28f46d797473ba24e002497ef6d56f8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 12 19:33:13.609719 kubelet[1626]: W0212 19:33:13.609605 1626 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 12 19:33:13.609719 kubelet[1626]: E0212 19:33:13.609683 1626 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: 
connect: connection refused Feb 12 19:33:13.621448 env[1119]: time="2024-02-12T19:33:13.621390704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:134ec8d112799097378cb0af0ada69a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c20473290042c575687b0d25196a2dd5b614e5dd18f95865b6232b6801a867f\"" Feb 12 19:33:13.622105 kubelet[1626]: E0212 19:33:13.622081 1626 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:33:13.624388 env[1119]: time="2024-02-12T19:33:13.624336050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2b0e94b38682f4e439413801d3cc54db,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba304006413dc5d6aef3addd8cbac001aae01cbc11dc3438aa946860622bfce4\"" Feb 12 19:33:13.624456 env[1119]: time="2024-02-12T19:33:13.624334175Z" level=info msg="CreateContainer within sandbox \"4c20473290042c575687b0d25196a2dd5b614e5dd18f95865b6232b6801a867f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 12 19:33:13.625057 kubelet[1626]: E0212 19:33:13.625037 1626 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:33:13.625793 env[1119]: time="2024-02-12T19:33:13.625759765Z" level=info msg="CreateContainer within sandbox \"c99b0ccd76e2789982d390a62fe084dbf28f46d797473ba24e002497ef6d56f8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"17898a212b9c569569fa8426ceffabd1a1efe026a6282df7f9278152526b7d16\"" Feb 12 19:33:13.626347 env[1119]: time="2024-02-12T19:33:13.626324004Z" level=info msg="StartContainer for \"17898a212b9c569569fa8426ceffabd1a1efe026a6282df7f9278152526b7d16\"" Feb 12 19:33:13.627098 env[1119]: time="2024-02-12T19:33:13.627072170Z" level=info 
msg="CreateContainer within sandbox \"ba304006413dc5d6aef3addd8cbac001aae01cbc11dc3438aa946860622bfce4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 12 19:33:13.639966 systemd[1]: Started cri-containerd-17898a212b9c569569fa8426ceffabd1a1efe026a6282df7f9278152526b7d16.scope. Feb 12 19:33:13.652188 env[1119]: time="2024-02-12T19:33:13.652120652Z" level=info msg="CreateContainer within sandbox \"ba304006413dc5d6aef3addd8cbac001aae01cbc11dc3438aa946860622bfce4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a695112fe0514a69d4e21c7d9644ce6c6556d4cb0a85819eed06f56157804429\"" Feb 12 19:33:13.652949 env[1119]: time="2024-02-12T19:33:13.652898428Z" level=info msg="StartContainer for \"a695112fe0514a69d4e21c7d9644ce6c6556d4cb0a85819eed06f56157804429\"" Feb 12 19:33:13.654547 env[1119]: time="2024-02-12T19:33:13.654504939Z" level=info msg="CreateContainer within sandbox \"4c20473290042c575687b0d25196a2dd5b614e5dd18f95865b6232b6801a867f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3714c7708bfba383475167a83e2f7f3d2509c37f0dee4fbc5ecefff4c1109a65\"" Feb 12 19:33:13.654859 env[1119]: time="2024-02-12T19:33:13.654824484Z" level=info msg="StartContainer for \"3714c7708bfba383475167a83e2f7f3d2509c37f0dee4fbc5ecefff4c1109a65\"" Feb 12 19:33:13.674591 systemd[1]: Started cri-containerd-a695112fe0514a69d4e21c7d9644ce6c6556d4cb0a85819eed06f56157804429.scope. Feb 12 19:33:13.688754 systemd[1]: Started cri-containerd-3714c7708bfba383475167a83e2f7f3d2509c37f0dee4fbc5ecefff4c1109a65.scope. 
Feb 12 19:33:13.693840 env[1119]: time="2024-02-12T19:33:13.690345196Z" level=info msg="StartContainer for \"17898a212b9c569569fa8426ceffabd1a1efe026a6282df7f9278152526b7d16\" returns successfully" Feb 12 19:33:13.738449 env[1119]: time="2024-02-12T19:33:13.738402858Z" level=info msg="StartContainer for \"a695112fe0514a69d4e21c7d9644ce6c6556d4cb0a85819eed06f56157804429\" returns successfully" Feb 12 19:33:13.739161 env[1119]: time="2024-02-12T19:33:13.739108038Z" level=info msg="StartContainer for \"3714c7708bfba383475167a83e2f7f3d2509c37f0dee4fbc5ecefff4c1109a65\" returns successfully" Feb 12 19:33:14.067367 kubelet[1626]: E0212 19:33:14.067321 1626 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:33:14.069907 kubelet[1626]: E0212 19:33:14.069882 1626 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:33:14.072149 kubelet[1626]: E0212 19:33:14.072106 1626 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:33:15.076409 kubelet[1626]: E0212 19:33:15.076384 1626 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:33:15.141618 kubelet[1626]: I0212 19:33:15.141579 1626 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:33:15.164440 kubelet[1626]: E0212 19:33:15.164173 1626 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 12 19:33:15.164440 kubelet[1626]: I0212 19:33:15.164344 1626 kubelet_node_status.go:73] "Successfully 
registered node" node="localhost" Feb 12 19:33:16.031103 kubelet[1626]: I0212 19:33:16.031039 1626 apiserver.go:52] "Watching apiserver" Feb 12 19:33:16.033010 kubelet[1626]: I0212 19:33:16.032988 1626 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 12 19:33:16.057149 kubelet[1626]: I0212 19:33:16.057082 1626 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:33:16.991371 kubelet[1626]: E0212 19:33:16.991325 1626 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:33:17.071914 kubelet[1626]: E0212 19:33:17.071881 1626 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:33:17.078494 kubelet[1626]: E0212 19:33:17.078473 1626 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:33:17.078786 kubelet[1626]: E0212 19:33:17.078768 1626 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:33:19.025736 systemd[1]: Reloading. 
Feb 12 19:33:19.088281 /usr/lib/systemd/system-generators/torcx-generator[1926]: time="2024-02-12T19:33:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:33:19.088310 /usr/lib/systemd/system-generators/torcx-generator[1926]: time="2024-02-12T19:33:19Z" level=info msg="torcx already run" Feb 12 19:33:19.152825 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:33:19.152843 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:33:19.169409 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:33:19.254421 kubelet[1626]: I0212 19:33:19.254390 1626 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:33:19.254429 systemd[1]: Stopping kubelet.service... Feb 12 19:33:19.271328 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 19:33:19.271492 systemd[1]: Stopped kubelet.service. Feb 12 19:33:19.272900 systemd[1]: Started kubelet.service. Feb 12 19:33:19.320284 kubelet[1964]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:33:19.320284 kubelet[1964]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Feb 12 19:33:19.320284 kubelet[1964]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:33:19.320284 kubelet[1964]: I0212 19:33:19.320241 1964 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:33:19.324417 kubelet[1964]: I0212 19:33:19.324388 1964 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 12 19:33:19.324417 kubelet[1964]: I0212 19:33:19.324414 1964 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:33:19.324634 kubelet[1964]: I0212 19:33:19.324616 1964 server.go:837] "Client rotation is on, will bootstrap in background" Feb 12 19:33:19.327536 kubelet[1964]: I0212 19:33:19.327507 1964 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 12 19:33:19.328835 kubelet[1964]: I0212 19:33:19.328794 1964 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:33:19.331842 kubelet[1964]: I0212 19:33:19.331815 1964 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:33:19.331983 kubelet[1964]: I0212 19:33:19.331963 1964 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:33:19.332032 kubelet[1964]: I0212 19:33:19.332021 1964 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:33:19.332112 kubelet[1964]: I0212 19:33:19.332039 1964 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:33:19.332112 kubelet[1964]: I0212 19:33:19.332049 1964 container_manager_linux.go:302] "Creating device plugin manager" Feb 12 19:33:19.332112 kubelet[1964]: I0212 19:33:19.332069 1964 state_mem.go:36] "Initialized new in-memory state store" Feb 12 
19:33:19.334800 kubelet[1964]: I0212 19:33:19.334783 1964 kubelet.go:405] "Attempting to sync node with API server" Feb 12 19:33:19.334800 kubelet[1964]: I0212 19:33:19.334801 1964 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:33:19.334909 kubelet[1964]: I0212 19:33:19.334827 1964 kubelet.go:309] "Adding apiserver pod source" Feb 12 19:33:19.334909 kubelet[1964]: I0212 19:33:19.334838 1964 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:33:19.335984 kubelet[1964]: I0212 19:33:19.335930 1964 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:33:19.336689 kubelet[1964]: I0212 19:33:19.336667 1964 server.go:1168] "Started kubelet" Feb 12 19:33:19.337167 kubelet[1964]: I0212 19:33:19.337150 1964 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:33:19.338341 kubelet[1964]: I0212 19:33:19.338324 1964 server.go:461] "Adding debug handlers to kubelet server" Feb 12 19:33:19.339205 kubelet[1964]: I0212 19:33:19.339187 1964 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:33:19.340092 kubelet[1964]: I0212 19:33:19.340063 1964 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 19:33:19.341978 kubelet[1964]: I0212 19:33:19.341895 1964 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 12 19:33:19.342503 kubelet[1964]: I0212 19:33:19.342463 1964 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 12 19:33:19.358531 kubelet[1964]: E0212 19:33:19.356998 1964 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:33:19.358531 kubelet[1964]: E0212 19:33:19.357033 1964 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:33:19.363152 kubelet[1964]: I0212 19:33:19.363107 1964 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 19:33:19.365387 kubelet[1964]: I0212 19:33:19.365337 1964 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 19:33:19.365387 kubelet[1964]: I0212 19:33:19.365359 1964 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 12 19:33:19.365387 kubelet[1964]: I0212 19:33:19.365377 1964 kubelet.go:2257] "Starting kubelet main sync loop" Feb 12 19:33:19.365606 kubelet[1964]: E0212 19:33:19.365432 1964 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 19:33:19.403956 kubelet[1964]: I0212 19:33:19.403925 1964 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:33:19.403956 kubelet[1964]: I0212 19:33:19.403943 1964 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:33:19.403956 kubelet[1964]: I0212 19:33:19.403956 1964 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:33:19.404339 kubelet[1964]: I0212 19:33:19.404085 1964 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 19:33:19.404339 kubelet[1964]: I0212 19:33:19.404097 1964 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 12 19:33:19.404339 kubelet[1964]: I0212 19:33:19.404102 1964 policy_none.go:49] "None policy: Start" Feb 12 19:33:19.404674 kubelet[1964]: I0212 19:33:19.404610 1964 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:33:19.404674 kubelet[1964]: I0212 19:33:19.404622 1964 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:33:19.404896 kubelet[1964]: I0212 19:33:19.404715 1964 state_mem.go:75] "Updated machine memory state" Feb 12 19:33:19.409123 kubelet[1964]: I0212 19:33:19.409093 1964 
manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:33:19.410442 kubelet[1964]: I0212 19:33:19.410411 1964 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:33:19.424612 sudo[1995]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 12 19:33:19.424768 sudo[1995]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 12 19:33:19.444456 kubelet[1964]: I0212 19:33:19.444429 1964 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 19:33:19.451105 kubelet[1964]: I0212 19:33:19.451064 1964 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 12 19:33:19.451250 kubelet[1964]: I0212 19:33:19.451224 1964 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 12 19:33:19.465747 kubelet[1964]: I0212 19:33:19.465710 1964 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:33:19.465931 kubelet[1964]: I0212 19:33:19.465798 1964 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:33:19.465931 kubelet[1964]: I0212 19:33:19.465835 1964 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:33:19.470120 kubelet[1964]: E0212 19:33:19.470099 1964 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 12 19:33:19.470723 kubelet[1964]: E0212 19:33:19.470709 1964 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 12 19:33:19.644380 kubelet[1964]: I0212 19:33:19.644353 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/134ec8d112799097378cb0af0ada69a6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"134ec8d112799097378cb0af0ada69a6\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:33:19.644512 kubelet[1964]: I0212 19:33:19.644398 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:33:19.644512 kubelet[1964]: I0212 19:33:19.644425 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/134ec8d112799097378cb0af0ada69a6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"134ec8d112799097378cb0af0ada69a6\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:33:19.644512 kubelet[1964]: I0212 19:33:19.644445 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/134ec8d112799097378cb0af0ada69a6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"134ec8d112799097378cb0af0ada69a6\") " pod="kube-system/kube-apiserver-localhost" Feb 12 19:33:19.644593 kubelet[1964]: I0212 19:33:19.644527 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 19:33:19.644593 kubelet[1964]: I0212 19:33:19.644575 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:33:19.644593 kubelet[1964]: I0212 19:33:19.644593 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:33:19.644665 kubelet[1964]: I0212 19:33:19.644613 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 19:33:19.644665 kubelet[1964]: I0212 19:33:19.644634 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b0e94b38682f4e439413801d3cc54db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2b0e94b38682f4e439413801d3cc54db\") " pod="kube-system/kube-scheduler-localhost"
Feb 12 19:33:19.771573 kubelet[1964]: E0212 19:33:19.771549 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:19.771840 kubelet[1964]: E0212 19:33:19.771772 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:19.772123 kubelet[1964]: E0212 19:33:19.772102 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:19.881698 sudo[1995]: pam_unix(sudo:session): session closed for user root
Feb 12 19:33:20.336295 kubelet[1964]: I0212 19:33:20.336248 1964 apiserver.go:52] "Watching apiserver"
Feb 12 19:33:20.343393 kubelet[1964]: I0212 19:33:20.343357 1964 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Feb 12 19:33:20.347488 kubelet[1964]: I0212 19:33:20.347454 1964 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 19:33:20.377799 kubelet[1964]: E0212 19:33:20.377772 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:20.377975 kubelet[1964]: E0212 19:33:20.377960 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:20.408653 kubelet[1964]: E0212 19:33:20.408635 1964 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 12 19:33:20.409154 kubelet[1964]: E0212 19:33:20.409115 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:20.463182 kubelet[1964]: I0212 19:33:20.463144 1964 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.463081184 podCreationTimestamp="2024-02-12 19:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:33:20.462573997 +0000 UTC m=+1.186649225" watchObservedRunningTime="2024-02-12 19:33:20.463081184 +0000 UTC m=+1.187156412"
Feb 12 19:33:20.484021 kubelet[1964]: I0212 19:33:20.483969 1964 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.48393604 podCreationTimestamp="2024-02-12 19:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:33:20.483875482 +0000 UTC m=+1.207950710" watchObservedRunningTime="2024-02-12 19:33:20.48393604 +0000 UTC m=+1.208011268"
Feb 12 19:33:20.496341 kubelet[1964]: I0212 19:33:20.496301 1964 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.496253894 podCreationTimestamp="2024-02-12 19:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:33:20.495611417 +0000 UTC m=+1.219686645" watchObservedRunningTime="2024-02-12 19:33:20.496253894 +0000 UTC m=+1.220329112"
Feb 12 19:33:20.890516 sudo[1212]: pam_unix(sudo:session): session closed for user root
Feb 12 19:33:20.895710 sshd[1209]: pam_unix(sshd:session): session closed for user core
Feb 12 19:33:20.898415 systemd[1]: sshd@4-10.0.0.38:22-10.0.0.1:51986.service: Deactivated successfully.
Feb 12 19:33:20.899282 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 19:33:20.899476 systemd[1]: session-5.scope: Consumed 3.676s CPU time.
Feb 12 19:33:20.900114 systemd-logind[1103]: Session 5 logged out. Waiting for processes to exit.
Feb 12 19:33:20.900820 systemd-logind[1103]: Removed session 5.
Feb 12 19:33:21.379669 kubelet[1964]: E0212 19:33:21.379648 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:22.616389 kubelet[1964]: E0212 19:33:22.616345 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:22.905446 kubelet[1964]: E0212 19:33:22.905410 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:26.130087 update_engine[1106]: I0212 19:33:26.130025 1106 update_attempter.cc:509] Updating boot flags...
Feb 12 19:33:28.186780 kubelet[1964]: E0212 19:33:28.186731 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:28.387988 kubelet[1964]: E0212 19:33:28.387948 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:31.061222 kubelet[1964]: I0212 19:33:31.061191 1964 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 12 19:33:31.061734 env[1119]: time="2024-02-12T19:33:31.061688147Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 12 19:33:31.061980 kubelet[1964]: I0212 19:33:31.061904 1964 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 12 19:33:32.163402 kubelet[1964]: I0212 19:33:32.163371 1964 topology_manager.go:212] "Topology Admit Handler"
Feb 12 19:33:32.167997 systemd[1]: Created slice kubepods-besteffort-podf2a1ad13_a09e_4e69_96cc_ff6e19516d31.slice.
Feb 12 19:33:32.179826 kubelet[1964]: I0212 19:33:32.179790 1964 topology_manager.go:212] "Topology Admit Handler"
Feb 12 19:33:32.187070 systemd[1]: Created slice kubepods-besteffort-pod9a238514_9519_4c45_b560_2ba21dc304df.slice.
Feb 12 19:33:32.194375 kubelet[1964]: I0212 19:33:32.194344 1964 topology_manager.go:212] "Topology Admit Handler"
Feb 12 19:33:32.204169 systemd[1]: Created slice kubepods-burstable-podde82b679_77a0_48d7_9057_90f424cf4bb8.slice.
Feb 12 19:33:32.223203 kubelet[1964]: I0212 19:33:32.223155 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a238514-9519-4c45-b560-2ba21dc304df-lib-modules\") pod \"kube-proxy-z7f9n\" (UID: \"9a238514-9519-4c45-b560-2ba21dc304df\") " pod="kube-system/kube-proxy-z7f9n"
Feb 12 19:33:32.223203 kubelet[1964]: I0212 19:33:32.223200 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-cilium-cgroup\") pod \"cilium-c6pzf\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") " pod="kube-system/cilium-c6pzf"
Feb 12 19:33:32.223383 kubelet[1964]: I0212 19:33:32.223223 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-lib-modules\") pod \"cilium-c6pzf\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") " pod="kube-system/cilium-c6pzf"
Feb 12 19:33:32.223383 kubelet[1964]: I0212 19:33:32.223251 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-host-proc-sys-net\") pod \"cilium-c6pzf\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") " pod="kube-system/cilium-c6pzf"
Feb 12 19:33:32.223383 kubelet[1964]: I0212 19:33:32.223276 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-cilium-run\") pod \"cilium-c6pzf\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") " pod="kube-system/cilium-c6pzf"
Feb 12 19:33:32.223383 kubelet[1964]: I0212 19:33:32.223301 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-bpf-maps\") pod \"cilium-c6pzf\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") " pod="kube-system/cilium-c6pzf"
Feb 12 19:33:32.223383 kubelet[1964]: I0212 19:33:32.223335 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-cni-path\") pod \"cilium-c6pzf\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") " pod="kube-system/cilium-c6pzf"
Feb 12 19:33:32.223497 kubelet[1964]: I0212 19:33:32.223389 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de82b679-77a0-48d7-9057-90f424cf4bb8-cilium-config-path\") pod \"cilium-c6pzf\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") " pod="kube-system/cilium-c6pzf"
Feb 12 19:33:32.223497 kubelet[1964]: I0212 19:33:32.223414 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-hostproc\") pod \"cilium-c6pzf\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") " pod="kube-system/cilium-c6pzf"
Feb 12 19:33:32.223497 kubelet[1964]: I0212 19:33:32.223431 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-etc-cni-netd\") pod \"cilium-c6pzf\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") " pod="kube-system/cilium-c6pzf"
Feb 12 19:33:32.223497 kubelet[1964]: I0212 19:33:32.223449 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de82b679-77a0-48d7-9057-90f424cf4bb8-hubble-tls\") pod \"cilium-c6pzf\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") " pod="kube-system/cilium-c6pzf"
Feb 12 19:33:32.223497 kubelet[1964]: I0212 19:33:32.223479 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2a1ad13-a09e-4e69-96cc-ff6e19516d31-cilium-config-path\") pod \"cilium-operator-574c4bb98d-mcg9n\" (UID: \"f2a1ad13-a09e-4e69-96cc-ff6e19516d31\") " pod="kube-system/cilium-operator-574c4bb98d-mcg9n"
Feb 12 19:33:32.223616 kubelet[1964]: I0212 19:33:32.223495 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de82b679-77a0-48d7-9057-90f424cf4bb8-clustermesh-secrets\") pod \"cilium-c6pzf\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") " pod="kube-system/cilium-c6pzf"
Feb 12 19:33:32.223616 kubelet[1964]: I0212 19:33:32.223515 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-host-proc-sys-kernel\") pod \"cilium-c6pzf\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") " pod="kube-system/cilium-c6pzf"
Feb 12 19:33:32.223616 kubelet[1964]: I0212 19:33:32.223530 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9a238514-9519-4c45-b560-2ba21dc304df-kube-proxy\") pod \"kube-proxy-z7f9n\" (UID: \"9a238514-9519-4c45-b560-2ba21dc304df\") " pod="kube-system/kube-proxy-z7f9n"
Feb 12 19:33:32.223616 kubelet[1964]: I0212 19:33:32.223544 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a238514-9519-4c45-b560-2ba21dc304df-xtables-lock\") pod \"kube-proxy-z7f9n\" (UID: \"9a238514-9519-4c45-b560-2ba21dc304df\") " pod="kube-system/kube-proxy-z7f9n"
Feb 12 19:33:32.223616 kubelet[1964]: I0212 19:33:32.223592 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-xtables-lock\") pod \"cilium-c6pzf\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") " pod="kube-system/cilium-c6pzf"
Feb 12 19:33:32.223726 kubelet[1964]: I0212 19:33:32.223633 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6rnn\" (UniqueName: \"kubernetes.io/projected/de82b679-77a0-48d7-9057-90f424cf4bb8-kube-api-access-s6rnn\") pod \"cilium-c6pzf\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") " pod="kube-system/cilium-c6pzf"
Feb 12 19:33:32.223726 kubelet[1964]: I0212 19:33:32.223672 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t2gc\" (UniqueName: \"kubernetes.io/projected/f2a1ad13-a09e-4e69-96cc-ff6e19516d31-kube-api-access-2t2gc\") pod \"cilium-operator-574c4bb98d-mcg9n\" (UID: \"f2a1ad13-a09e-4e69-96cc-ff6e19516d31\") " pod="kube-system/cilium-operator-574c4bb98d-mcg9n"
Feb 12 19:33:32.223726 kubelet[1964]: I0212 19:33:32.223704 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l8kj\" (UniqueName: \"kubernetes.io/projected/9a238514-9519-4c45-b560-2ba21dc304df-kube-api-access-7l8kj\") pod \"kube-proxy-z7f9n\" (UID: \"9a238514-9519-4c45-b560-2ba21dc304df\") " pod="kube-system/kube-proxy-z7f9n"
Feb 12 19:33:32.478415 kubelet[1964]: E0212 19:33:32.478304 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:32.479026 env[1119]: time="2024-02-12T19:33:32.478975922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-mcg9n,Uid:f2a1ad13-a09e-4e69-96cc-ff6e19516d31,Namespace:kube-system,Attempt:0,}"
Feb 12 19:33:32.493491 kubelet[1964]: E0212 19:33:32.493464 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:32.493934 env[1119]: time="2024-02-12T19:33:32.493764745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z7f9n,Uid:9a238514-9519-4c45-b560-2ba21dc304df,Namespace:kube-system,Attempt:0,}"
Feb 12 19:33:32.506216 kubelet[1964]: E0212 19:33:32.506192 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:32.506623 env[1119]: time="2024-02-12T19:33:32.506590770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c6pzf,Uid:de82b679-77a0-48d7-9057-90f424cf4bb8,Namespace:kube-system,Attempt:0,}"
Feb 12 19:33:32.621932 kubelet[1964]: E0212 19:33:32.621906 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:32.716176 env[1119]: time="2024-02-12T19:33:32.714473760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:33:32.716176 env[1119]: time="2024-02-12T19:33:32.714517265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:33:32.716176 env[1119]: time="2024-02-12T19:33:32.714531225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:33:32.716176 env[1119]: time="2024-02-12T19:33:32.714662652Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f2874c63b810eb998d094b7cf258b91e9afc8ac1e8e8402ed9d33cc15d05230 pid=2074 runtime=io.containerd.runc.v2
Feb 12 19:33:32.720398 env[1119]: time="2024-02-12T19:33:32.720326848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:33:32.720531 env[1119]: time="2024-02-12T19:33:32.720393865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:33:32.720531 env[1119]: time="2024-02-12T19:33:32.720415993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:33:32.720605 env[1119]: time="2024-02-12T19:33:32.720576384Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360 pid=2102 runtime=io.containerd.runc.v2
Feb 12 19:33:32.721900 env[1119]: time="2024-02-12T19:33:32.721851370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:33:32.721900 env[1119]: time="2024-02-12T19:33:32.721881475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:33:32.721900 env[1119]: time="2024-02-12T19:33:32.721890676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:33:32.722025 env[1119]: time="2024-02-12T19:33:32.721985292Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/32f6631c0280aabed505cce43be6b2092776a87686358b2f49075cba35223dcd pid=2111 runtime=io.containerd.runc.v2
Feb 12 19:33:32.728622 systemd[1]: Started cri-containerd-8f2874c63b810eb998d094b7cf258b91e9afc8ac1e8e8402ed9d33cc15d05230.scope.
Feb 12 19:33:32.735900 systemd[1]: Started cri-containerd-32f6631c0280aabed505cce43be6b2092776a87686358b2f49075cba35223dcd.scope.
Feb 12 19:33:32.736692 systemd[1]: Started cri-containerd-7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360.scope.
Feb 12 19:33:32.770327 env[1119]: time="2024-02-12T19:33:32.770263079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c6pzf,Uid:de82b679-77a0-48d7-9057-90f424cf4bb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360\""
Feb 12 19:33:32.771093 kubelet[1964]: E0212 19:33:32.771070 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:32.774421 env[1119]: time="2024-02-12T19:33:32.774346741Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 12 19:33:32.783028 env[1119]: time="2024-02-12T19:33:32.782989235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z7f9n,Uid:9a238514-9519-4c45-b560-2ba21dc304df,Namespace:kube-system,Attempt:0,} returns sandbox id \"32f6631c0280aabed505cce43be6b2092776a87686358b2f49075cba35223dcd\""
Feb 12 19:33:32.783572 kubelet[1964]: E0212 19:33:32.783557 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:32.786370 env[1119]: time="2024-02-12T19:33:32.786331037Z" level=info msg="CreateContainer within sandbox \"32f6631c0280aabed505cce43be6b2092776a87686358b2f49075cba35223dcd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 12 19:33:32.790572 env[1119]: time="2024-02-12T19:33:32.790518436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-mcg9n,Uid:f2a1ad13-a09e-4e69-96cc-ff6e19516d31,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f2874c63b810eb998d094b7cf258b91e9afc8ac1e8e8402ed9d33cc15d05230\""
Feb 12 19:33:32.791040 kubelet[1964]: E0212 19:33:32.791023 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:32.807962 env[1119]: time="2024-02-12T19:33:32.807898772Z" level=info msg="CreateContainer within sandbox \"32f6631c0280aabed505cce43be6b2092776a87686358b2f49075cba35223dcd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6d2d4d41d6a489d5868213e9fe1a26d5b56debed976766d58a01adaf093ba137\""
Feb 12 19:33:32.808590 env[1119]: time="2024-02-12T19:33:32.808559976Z" level=info msg="StartContainer for \"6d2d4d41d6a489d5868213e9fe1a26d5b56debed976766d58a01adaf093ba137\""
Feb 12 19:33:32.821199 systemd[1]: Started cri-containerd-6d2d4d41d6a489d5868213e9fe1a26d5b56debed976766d58a01adaf093ba137.scope.
Feb 12 19:33:32.847201 env[1119]: time="2024-02-12T19:33:32.847149202Z" level=info msg="StartContainer for \"6d2d4d41d6a489d5868213e9fe1a26d5b56debed976766d58a01adaf093ba137\" returns successfully"
Feb 12 19:33:32.911031 kubelet[1964]: E0212 19:33:32.911007 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:33.396665 kubelet[1964]: E0212 19:33:33.396642 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:33.403300 kubelet[1964]: I0212 19:33:33.403258 1964 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-z7f9n" podStartSLOduration=1.403218811 podCreationTimestamp="2024-02-12 19:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:33:33.402839839 +0000 UTC m=+14.126915077" watchObservedRunningTime="2024-02-12 19:33:33.403218811 +0000 UTC m=+14.127294059"
Feb 12 19:33:39.978428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1541002819.mount: Deactivated successfully.
Feb 12 19:33:45.101191 env[1119]: time="2024-02-12T19:33:45.101101850Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:33:45.103422 env[1119]: time="2024-02-12T19:33:45.103373423Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:33:45.105034 env[1119]: time="2024-02-12T19:33:45.105002060Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:33:45.105603 env[1119]: time="2024-02-12T19:33:45.105565443Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 12 19:33:45.106705 env[1119]: time="2024-02-12T19:33:45.106668426Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 12 19:33:45.108612 env[1119]: time="2024-02-12T19:33:45.108566570Z" level=info msg="CreateContainer within sandbox \"7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 19:33:45.123243 env[1119]: time="2024-02-12T19:33:45.123177370Z" level=info msg="CreateContainer within sandbox \"7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d\""
Feb 12 19:33:45.123797 env[1119]: time="2024-02-12T19:33:45.123755152Z" level=info msg="StartContainer for \"634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d\""
Feb 12 19:33:45.140763 systemd[1]: run-containerd-runc-k8s.io-634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d-runc.XPExiM.mount: Deactivated successfully.
Feb 12 19:33:45.142063 systemd[1]: Started cri-containerd-634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d.scope.
Feb 12 19:33:45.166187 env[1119]: time="2024-02-12T19:33:45.166122485Z" level=info msg="StartContainer for \"634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d\" returns successfully"
Feb 12 19:33:45.174309 systemd[1]: cri-containerd-634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d.scope: Deactivated successfully.
Feb 12 19:33:45.416791 kubelet[1964]: E0212 19:33:45.415807 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:45.587308 env[1119]: time="2024-02-12T19:33:45.587262182Z" level=info msg="shim disconnected" id=634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d
Feb 12 19:33:45.587308 env[1119]: time="2024-02-12T19:33:45.587312175Z" level=warning msg="cleaning up after shim disconnected" id=634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d namespace=k8s.io
Feb 12 19:33:45.587490 env[1119]: time="2024-02-12T19:33:45.587320422Z" level=info msg="cleaning up dead shim"
Feb 12 19:33:45.595221 env[1119]: time="2024-02-12T19:33:45.595177409Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:33:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2399 runtime=io.containerd.runc.v2\n"
Feb 12 19:33:45.601195 systemd[1]: Started sshd@5-10.0.0.38:22-10.0.0.1:33996.service.
Feb 12 19:33:45.639170 sshd[2413]: Accepted publickey for core from 10.0.0.1 port 33996 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:33:45.640217 sshd[2413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:33:45.644434 systemd[1]: Started session-6.scope.
Feb 12 19:33:45.644766 systemd-logind[1103]: New session 6 of user core.
Feb 12 19:33:45.765276 sshd[2413]: pam_unix(sshd:session): session closed for user core
Feb 12 19:33:45.767694 systemd[1]: sshd@5-10.0.0.38:22-10.0.0.1:33996.service: Deactivated successfully.
Feb 12 19:33:45.768397 systemd[1]: session-6.scope: Deactivated successfully.
Feb 12 19:33:45.769106 systemd-logind[1103]: Session 6 logged out. Waiting for processes to exit.
Feb 12 19:33:45.769816 systemd-logind[1103]: Removed session 6.
Feb 12 19:33:46.119955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d-rootfs.mount: Deactivated successfully.
Feb 12 19:33:46.427969 kubelet[1964]: E0212 19:33:46.427932 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:46.429990 env[1119]: time="2024-02-12T19:33:46.429932549Z" level=info msg="CreateContainer within sandbox \"7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 19:33:46.450279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2331992634.mount: Deactivated successfully.
Feb 12 19:33:46.453326 env[1119]: time="2024-02-12T19:33:46.453281307Z" level=info msg="CreateContainer within sandbox \"7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf\""
Feb 12 19:33:46.453843 env[1119]: time="2024-02-12T19:33:46.453814813Z" level=info msg="StartContainer for \"8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf\""
Feb 12 19:33:46.472810 systemd[1]: Started cri-containerd-8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf.scope.
Feb 12 19:33:46.563111 env[1119]: time="2024-02-12T19:33:46.563060318Z" level=info msg="StartContainer for \"8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf\" returns successfully"
Feb 12 19:33:46.570828 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 19:33:46.571067 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 19:33:46.571371 systemd[1]: Stopping systemd-sysctl.service...
Feb 12 19:33:46.572930 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:33:46.573166 systemd[1]: cri-containerd-8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf.scope: Deactivated successfully.
Feb 12 19:33:46.582924 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:33:46.600597 env[1119]: time="2024-02-12T19:33:46.600538292Z" level=info msg="shim disconnected" id=8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf
Feb 12 19:33:46.600597 env[1119]: time="2024-02-12T19:33:46.600592954Z" level=warning msg="cleaning up after shim disconnected" id=8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf namespace=k8s.io
Feb 12 19:33:46.600597 env[1119]: time="2024-02-12T19:33:46.600604217Z" level=info msg="cleaning up dead shim"
Feb 12 19:33:46.606640 env[1119]: time="2024-02-12T19:33:46.606589622Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:33:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2477 runtime=io.containerd.runc.v2\n"
Feb 12 19:33:47.118977 systemd[1]: run-containerd-runc-k8s.io-8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf-runc.Eic7OX.mount: Deactivated successfully.
Feb 12 19:33:47.119081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf-rootfs.mount: Deactivated successfully.
Feb 12 19:33:47.431627 kubelet[1964]: E0212 19:33:47.430603 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:47.433999 env[1119]: time="2024-02-12T19:33:47.433962256Z" level=info msg="CreateContainer within sandbox \"7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 19:33:47.609355 env[1119]: time="2024-02-12T19:33:47.609306706Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:33:47.616824 env[1119]: time="2024-02-12T19:33:47.616777733Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:33:47.620512 env[1119]: time="2024-02-12T19:33:47.620480725Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:33:47.620738 env[1119]: time="2024-02-12T19:33:47.620691206Z" level=info msg="CreateContainer within sandbox \"7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a\""
Feb 12 19:33:47.620979 env[1119]: time="2024-02-12T19:33:47.620926148Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 12 19:33:47.621535 env[1119]: time="2024-02-12T19:33:47.621266626Z" level=info msg="StartContainer for \"de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a\""
Feb 12 19:33:47.622326 env[1119]: time="2024-02-12T19:33:47.622241455Z" level=info msg="CreateContainer within sandbox \"8f2874c63b810eb998d094b7cf258b91e9afc8ac1e8e8402ed9d33cc15d05230\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 12 19:33:47.638448 systemd[1]: Started cri-containerd-de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a.scope.
Feb 12 19:33:47.639413 env[1119]: time="2024-02-12T19:33:47.639205364Z" level=info msg="CreateContainer within sandbox \"8f2874c63b810eb998d094b7cf258b91e9afc8ac1e8e8402ed9d33cc15d05230\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6\""
Feb 12 19:33:47.641783 env[1119]: time="2024-02-12T19:33:47.641481080Z" level=info msg="StartContainer for \"c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6\""
Feb 12 19:33:47.661710 systemd[1]: Started cri-containerd-c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6.scope.
Feb 12 19:33:47.666310 systemd[1]: cri-containerd-de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a.scope: Deactivated successfully.
Feb 12 19:33:47.847863 env[1119]: time="2024-02-12T19:33:47.847748116Z" level=info msg="StartContainer for \"de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a\" returns successfully"
Feb 12 19:33:47.849863 env[1119]: time="2024-02-12T19:33:47.849806577Z" level=info msg="StartContainer for \"c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6\" returns successfully"
Feb 12 19:33:47.877420 env[1119]: time="2024-02-12T19:33:47.877350257Z" level=info msg="shim disconnected" id=de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a
Feb 12 19:33:47.877420 env[1119]: time="2024-02-12T19:33:47.877417475Z" level=warning msg="cleaning up after shim disconnected" id=de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a namespace=k8s.io
Feb 12 19:33:47.877604 env[1119]: time="2024-02-12T19:33:47.877428878Z" level=info msg="cleaning up dead shim"
Feb 12 19:33:47.886289 env[1119]: time="2024-02-12T19:33:47.886257018Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:33:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2573 runtime=io.containerd.runc.v2\n"
Feb 12 19:33:48.120415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount36446293.mount: Deactivated successfully.
Feb 12 19:33:48.120525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a-rootfs.mount: Deactivated successfully.
Feb 12 19:33:48.433693 kubelet[1964]: E0212 19:33:48.433576 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:48.435458 kubelet[1964]: E0212 19:33:48.435432 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:48.438033 env[1119]: time="2024-02-12T19:33:48.438003542Z" level=info msg="CreateContainer within sandbox \"7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 19:33:48.459049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3012229647.mount: Deactivated successfully.
Feb 12 19:33:48.471673 env[1119]: time="2024-02-12T19:33:48.471616421Z" level=info msg="CreateContainer within sandbox \"7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806\""
Feb 12 19:33:48.472557 env[1119]: time="2024-02-12T19:33:48.472537886Z" level=info msg="StartContainer for \"545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806\""
Feb 12 19:33:48.485613 systemd[1]: Started cri-containerd-545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806.scope.
Feb 12 19:33:48.511464 systemd[1]: cri-containerd-545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806.scope: Deactivated successfully.
Feb 12 19:33:48.513194 env[1119]: time="2024-02-12T19:33:48.513163968Z" level=info msg="StartContainer for \"545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806\" returns successfully"
Feb 12 19:33:48.547396 env[1119]: time="2024-02-12T19:33:48.547343936Z" level=info msg="shim disconnected" id=545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806
Feb 12 19:33:48.547396 env[1119]: time="2024-02-12T19:33:48.547391794Z" level=warning msg="cleaning up after shim disconnected" id=545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806 namespace=k8s.io
Feb 12 19:33:48.547396 env[1119]: time="2024-02-12T19:33:48.547402065Z" level=info msg="cleaning up dead shim"
Feb 12 19:33:48.559837 env[1119]: time="2024-02-12T19:33:48.559789167Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:33:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2631 runtime=io.containerd.runc.v2\n"
Feb 12 19:33:49.119150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806-rootfs.mount: Deactivated successfully.
Feb 12 19:33:49.439385 kubelet[1964]: E0212 19:33:49.439359 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:49.439772 kubelet[1964]: E0212 19:33:49.439422 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:49.441937 env[1119]: time="2024-02-12T19:33:49.441889599Z" level=info msg="CreateContainer within sandbox \"7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 19:33:49.486150 kubelet[1964]: I0212 19:33:49.486092 1964 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-mcg9n" podStartSLOduration=2.656671466 podCreationTimestamp="2024-02-12 19:33:32 +0000 UTC" firstStartedPulling="2024-02-12 19:33:32.791812934 +0000 UTC m=+13.515888162" lastFinishedPulling="2024-02-12 19:33:47.62119054 +0000 UTC m=+28.345265769" observedRunningTime="2024-02-12 19:33:48.463685653 +0000 UTC m=+29.187760881" watchObservedRunningTime="2024-02-12 19:33:49.486049073 +0000 UTC m=+30.210124301"
Feb 12 19:33:49.515040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount187922477.mount: Deactivated successfully.
Feb 12 19:33:49.518360 env[1119]: time="2024-02-12T19:33:49.518302867Z" level=info msg="CreateContainer within sandbox \"7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60\""
Feb 12 19:33:49.518911 env[1119]: time="2024-02-12T19:33:49.518883853Z" level=info msg="StartContainer for \"73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60\""
Feb 12 19:33:49.535419 systemd[1]: Started cri-containerd-73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60.scope.
Feb 12 19:33:49.644909 env[1119]: time="2024-02-12T19:33:49.644841190Z" level=info msg="StartContainer for \"73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60\" returns successfully"
Feb 12 19:33:49.725429 kubelet[1964]: I0212 19:33:49.725277 1964 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 12 19:33:49.776776 kubelet[1964]: I0212 19:33:49.776732 1964 topology_manager.go:212] "Topology Admit Handler"
Feb 12 19:33:49.783257 systemd[1]: Created slice kubepods-burstable-pod3bbd94ed_ecd9_4bdf_91a2_d417dc535cfa.slice.
Feb 12 19:33:49.792821 kubelet[1964]: I0212 19:33:49.792782 1964 topology_manager.go:212] "Topology Admit Handler"
Feb 12 19:33:49.798549 systemd[1]: Created slice kubepods-burstable-podce76abd9_ee18_4a61_83a5_ca216bf2ba6c.slice.
Feb 12 19:33:49.845483 kubelet[1964]: I0212 19:33:49.845450 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9f5z\" (UniqueName: \"kubernetes.io/projected/3bbd94ed-ecd9-4bdf-91a2-d417dc535cfa-kube-api-access-d9f5z\") pod \"coredns-5d78c9869d-cz69v\" (UID: \"3bbd94ed-ecd9-4bdf-91a2-d417dc535cfa\") " pod="kube-system/coredns-5d78c9869d-cz69v"
Feb 12 19:33:49.845635 kubelet[1964]: I0212 19:33:49.845498 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce76abd9-ee18-4a61-83a5-ca216bf2ba6c-config-volume\") pod \"coredns-5d78c9869d-qd52v\" (UID: \"ce76abd9-ee18-4a61-83a5-ca216bf2ba6c\") " pod="kube-system/coredns-5d78c9869d-qd52v"
Feb 12 19:33:49.845635 kubelet[1964]: I0212 19:33:49.845517 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlpnx\" (UniqueName: \"kubernetes.io/projected/ce76abd9-ee18-4a61-83a5-ca216bf2ba6c-kube-api-access-qlpnx\") pod \"coredns-5d78c9869d-qd52v\" (UID: \"ce76abd9-ee18-4a61-83a5-ca216bf2ba6c\") " pod="kube-system/coredns-5d78c9869d-qd52v"
Feb 12 19:33:49.845635 kubelet[1964]: I0212 19:33:49.845534 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bbd94ed-ecd9-4bdf-91a2-d417dc535cfa-config-volume\") pod \"coredns-5d78c9869d-cz69v\" (UID: \"3bbd94ed-ecd9-4bdf-91a2-d417dc535cfa\") " pod="kube-system/coredns-5d78c9869d-cz69v"
Feb 12 19:33:50.088928 kubelet[1964]: E0212 19:33:50.088794 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:50.089391 env[1119]: time="2024-02-12T19:33:50.089318270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-cz69v,Uid:3bbd94ed-ecd9-4bdf-91a2-d417dc535cfa,Namespace:kube-system,Attempt:0,}"
Feb 12 19:33:50.105143 kubelet[1964]: E0212 19:33:50.105080 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:50.105738 env[1119]: time="2024-02-12T19:33:50.105669796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-qd52v,Uid:ce76abd9-ee18-4a61-83a5-ca216bf2ba6c,Namespace:kube-system,Attempt:0,}"
Feb 12 19:33:50.443487 kubelet[1964]: E0212 19:33:50.443455 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:50.456378 kubelet[1964]: I0212 19:33:50.456211 1964 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-c6pzf" podStartSLOduration=6.123667024 podCreationTimestamp="2024-02-12 19:33:32 +0000 UTC" firstStartedPulling="2024-02-12 19:33:32.773358772 +0000 UTC m=+13.497434000" lastFinishedPulling="2024-02-12 19:33:45.105867976 +0000 UTC m=+25.829943204" observedRunningTime="2024-02-12 19:33:50.454750616 +0000 UTC m=+31.178825834" watchObservedRunningTime="2024-02-12 19:33:50.456176228 +0000 UTC m=+31.180251456"
Feb 12 19:33:50.771724 systemd[1]: Started sshd@6-10.0.0.38:22-10.0.0.1:36280.service.
Feb 12 19:33:50.810004 sshd[2805]: Accepted publickey for core from 10.0.0.1 port 36280 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:33:50.811202 sshd[2805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:33:50.814904 systemd-logind[1103]: New session 7 of user core.
Feb 12 19:33:50.815689 systemd[1]: Started session-7.scope.
Feb 12 19:33:50.929401 sshd[2805]: pam_unix(sshd:session): session closed for user core
Feb 12 19:33:50.931382 systemd[1]: sshd@6-10.0.0.38:22-10.0.0.1:36280.service: Deactivated successfully.
Feb 12 19:33:50.932071 systemd[1]: session-7.scope: Deactivated successfully.
Feb 12 19:33:50.932653 systemd-logind[1103]: Session 7 logged out. Waiting for processes to exit.
Feb 12 19:33:50.933424 systemd-logind[1103]: Removed session 7.
Feb 12 19:33:51.444994 kubelet[1964]: E0212 19:33:51.444957 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:51.500471 systemd-networkd[1018]: cilium_host: Link UP
Feb 12 19:33:51.500608 systemd-networkd[1018]: cilium_net: Link UP
Feb 12 19:33:51.500612 systemd-networkd[1018]: cilium_net: Gained carrier
Feb 12 19:33:51.500750 systemd-networkd[1018]: cilium_host: Gained carrier
Feb 12 19:33:51.506331 systemd-networkd[1018]: cilium_host: Gained IPv6LL
Feb 12 19:33:51.507159 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 12 19:33:51.578027 systemd-networkd[1018]: cilium_vxlan: Link UP
Feb 12 19:33:51.578038 systemd-networkd[1018]: cilium_vxlan: Gained carrier
Feb 12 19:33:51.777158 kernel: NET: Registered PF_ALG protocol family
Feb 12 19:33:51.935278 systemd-networkd[1018]: cilium_net: Gained IPv6LL
Feb 12 19:33:52.298062 systemd-networkd[1018]: lxc_health: Link UP
Feb 12 19:33:52.306033 systemd-networkd[1018]: lxc_health: Gained carrier
Feb 12 19:33:52.306207 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 19:33:52.446655 kubelet[1964]: E0212 19:33:52.446620 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:52.658253 systemd-networkd[1018]: lxc8776b1df39ea: Link UP
Feb 12 19:33:52.665373 systemd-networkd[1018]: lxc95be97b9f472: Link UP
Feb 12 19:33:52.672159 kernel: eth0: renamed from tmp3d4fd
Feb 12 19:33:52.676698 kernel: eth0: renamed from tmpb5b4b
Feb 12 19:33:52.680929 systemd-networkd[1018]: lxc95be97b9f472: Gained carrier
Feb 12 19:33:52.682163 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc95be97b9f472: link becomes ready
Feb 12 19:33:52.685829 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 19:33:52.685934 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8776b1df39ea: link becomes ready
Feb 12 19:33:52.686052 systemd-networkd[1018]: lxc8776b1df39ea: Gained carrier
Feb 12 19:33:53.201339 systemd-networkd[1018]: cilium_vxlan: Gained IPv6LL
Feb 12 19:33:53.448810 kubelet[1964]: E0212 19:33:53.448782 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:54.033474 systemd-networkd[1018]: lxc_health: Gained IPv6LL
Feb 12 19:33:54.095256 systemd-networkd[1018]: lxc95be97b9f472: Gained IPv6LL
Feb 12 19:33:54.351368 systemd-networkd[1018]: lxc8776b1df39ea: Gained IPv6LL
Feb 12 19:33:54.451300 kubelet[1964]: E0212 19:33:54.451266 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:55.452571 kubelet[1964]: E0212 19:33:55.452531 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:55.934324 systemd[1]: Started sshd@7-10.0.0.38:22-10.0.0.1:36286.service.
Feb 12 19:33:55.971112 sshd[3209]: Accepted publickey for core from 10.0.0.1 port 36286 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:33:55.972514 sshd[3209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:33:55.977211 systemd[1]: Started session-8.scope.
Feb 12 19:33:55.978555 systemd-logind[1103]: New session 8 of user core.
Feb 12 19:33:56.030913 env[1119]: time="2024-02-12T19:33:56.030815431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:33:56.030913 env[1119]: time="2024-02-12T19:33:56.030864028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:33:56.031451 env[1119]: time="2024-02-12T19:33:56.030873818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:33:56.031802 env[1119]: time="2024-02-12T19:33:56.031564118Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b5b4b8a3393dd0bb52a14aa0982b354ef5029d97e5c9ae07259124c36f7a536f pid=3223 runtime=io.containerd.runc.v2
Feb 12 19:33:56.032369 env[1119]: time="2024-02-12T19:33:56.032246743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:33:56.032523 env[1119]: time="2024-02-12T19:33:56.032471485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:33:56.032625 env[1119]: time="2024-02-12T19:33:56.032601918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:33:56.038901 env[1119]: time="2024-02-12T19:33:56.035439423Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d4fd7c64a8e3c691196ef639b5ddb9c809cf7fe4507c1672774add65c332509 pid=3232 runtime=io.containerd.runc.v2
Feb 12 19:33:56.043911 systemd[1]: Started cri-containerd-b5b4b8a3393dd0bb52a14aa0982b354ef5029d97e5c9ae07259124c36f7a536f.scope.
Feb 12 19:33:56.057921 systemd[1]: Started cri-containerd-3d4fd7c64a8e3c691196ef639b5ddb9c809cf7fe4507c1672774add65c332509.scope.
Feb 12 19:33:56.060621 systemd-resolved[1060]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 12 19:33:56.071511 systemd-resolved[1060]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 12 19:33:56.086112 env[1119]: time="2024-02-12T19:33:56.086062898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-qd52v,Uid:ce76abd9-ee18-4a61-83a5-ca216bf2ba6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5b4b8a3393dd0bb52a14aa0982b354ef5029d97e5c9ae07259124c36f7a536f\""
Feb 12 19:33:56.086574 kubelet[1964]: E0212 19:33:56.086555 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:56.090848 env[1119]: time="2024-02-12T19:33:56.090814808Z" level=info msg="CreateContainer within sandbox \"b5b4b8a3393dd0bb52a14aa0982b354ef5029d97e5c9ae07259124c36f7a536f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 19:33:56.107992 env[1119]: time="2024-02-12T19:33:56.107954194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-cz69v,Uid:3bbd94ed-ecd9-4bdf-91a2-d417dc535cfa,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d4fd7c64a8e3c691196ef639b5ddb9c809cf7fe4507c1672774add65c332509\""
Feb 12 19:33:56.109182 kubelet[1964]: E0212 19:33:56.108870 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:56.111018 env[1119]: time="2024-02-12T19:33:56.110891489Z" level=info msg="CreateContainer within sandbox \"3d4fd7c64a8e3c691196ef639b5ddb9c809cf7fe4507c1672774add65c332509\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 19:33:56.111237 env[1119]: time="2024-02-12T19:33:56.111171522Z" level=info msg="CreateContainer within sandbox \"b5b4b8a3393dd0bb52a14aa0982b354ef5029d97e5c9ae07259124c36f7a536f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f358880f40bed28b24a8f8a26a0030143bf85f479a7506eeb2563b55e6a22065\""
Feb 12 19:33:56.112431 env[1119]: time="2024-02-12T19:33:56.112378633Z" level=info msg="StartContainer for \"f358880f40bed28b24a8f8a26a0030143bf85f479a7506eeb2563b55e6a22065\""
Feb 12 19:33:56.124562 env[1119]: time="2024-02-12T19:33:56.124506497Z" level=info msg="CreateContainer within sandbox \"3d4fd7c64a8e3c691196ef639b5ddb9c809cf7fe4507c1672774add65c332509\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"229462c54f73f3932411c30bee98342dcd6278ed6381faba952fd21afa7ef012\""
Feb 12 19:33:56.125288 env[1119]: time="2024-02-12T19:33:56.125260616Z" level=info msg="StartContainer for \"229462c54f73f3932411c30bee98342dcd6278ed6381faba952fd21afa7ef012\""
Feb 12 19:33:56.128994 systemd[1]: Started cri-containerd-f358880f40bed28b24a8f8a26a0030143bf85f479a7506eeb2563b55e6a22065.scope.
Feb 12 19:33:56.132265 sshd[3209]: pam_unix(sshd:session): session closed for user core
Feb 12 19:33:56.134625 systemd[1]: sshd@7-10.0.0.38:22-10.0.0.1:36286.service: Deactivated successfully.
Feb 12 19:33:56.135521 systemd[1]: session-8.scope: Deactivated successfully.
Feb 12 19:33:56.136222 systemd-logind[1103]: Session 8 logged out. Waiting for processes to exit.
Feb 12 19:33:56.137115 systemd-logind[1103]: Removed session 8.
Feb 12 19:33:56.150138 systemd[1]: Started cri-containerd-229462c54f73f3932411c30bee98342dcd6278ed6381faba952fd21afa7ef012.scope.
Feb 12 19:33:56.165442 env[1119]: time="2024-02-12T19:33:56.165385531Z" level=info msg="StartContainer for \"f358880f40bed28b24a8f8a26a0030143bf85f479a7506eeb2563b55e6a22065\" returns successfully"
Feb 12 19:33:56.177448 env[1119]: time="2024-02-12T19:33:56.177385275Z" level=info msg="StartContainer for \"229462c54f73f3932411c30bee98342dcd6278ed6381faba952fd21afa7ef012\" returns successfully"
Feb 12 19:33:56.456479 kubelet[1964]: E0212 19:33:56.456280 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:56.458285 kubelet[1964]: E0212 19:33:56.458260 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:56.465328 kubelet[1964]: I0212 19:33:56.465288 1964 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-cz69v" podStartSLOduration=24.464734575 podCreationTimestamp="2024-02-12 19:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:33:56.463944043 +0000 UTC m=+37.188019261" watchObservedRunningTime="2024-02-12 19:33:56.464734575 +0000 UTC m=+37.188809803"
Feb 12 19:33:57.034999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2430814221.mount: Deactivated successfully.
Feb 12 19:33:57.460147 kubelet[1964]: E0212 19:33:57.460108 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:57.460541 kubelet[1964]: E0212 19:33:57.460271 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:58.462430 kubelet[1964]: E0212 19:33:58.462401 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:33:58.462806 kubelet[1964]: E0212 19:33:58.462476 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:34:01.135630 systemd[1]: Started sshd@8-10.0.0.38:22-10.0.0.1:35084.service.
Feb 12 19:34:01.171820 sshd[3389]: Accepted publickey for core from 10.0.0.1 port 35084 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:34:01.172818 sshd[3389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:34:01.175929 systemd-logind[1103]: New session 9 of user core.
Feb 12 19:34:01.176855 systemd[1]: Started session-9.scope.
Feb 12 19:34:01.304264 sshd[3389]: pam_unix(sshd:session): session closed for user core
Feb 12 19:34:01.306677 systemd[1]: sshd@8-10.0.0.38:22-10.0.0.1:35084.service: Deactivated successfully.
Feb 12 19:34:01.307373 systemd[1]: session-9.scope: Deactivated successfully.
Feb 12 19:34:01.308111 systemd-logind[1103]: Session 9 logged out. Waiting for processes to exit.
Feb 12 19:34:01.308765 systemd-logind[1103]: Removed session 9.
Feb 12 19:34:06.308753 systemd[1]: Started sshd@9-10.0.0.38:22-10.0.0.1:41258.service.
Feb 12 19:34:06.345553 sshd[3406]: Accepted publickey for core from 10.0.0.1 port 41258 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:34:06.346574 sshd[3406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:34:06.349478 systemd-logind[1103]: New session 10 of user core.
Feb 12 19:34:06.350282 systemd[1]: Started session-10.scope.
Feb 12 19:34:06.455496 sshd[3406]: pam_unix(sshd:session): session closed for user core
Feb 12 19:34:06.459015 systemd[1]: sshd@9-10.0.0.38:22-10.0.0.1:41258.service: Deactivated successfully.
Feb 12 19:34:06.459705 systemd[1]: session-10.scope: Deactivated successfully.
Feb 12 19:34:06.460245 systemd-logind[1103]: Session 10 logged out. Waiting for processes to exit.
Feb 12 19:34:06.461459 systemd[1]: Started sshd@10-10.0.0.38:22-10.0.0.1:41268.service.
Feb 12 19:34:06.462281 systemd-logind[1103]: Removed session 10.
Feb 12 19:34:06.497623 sshd[3420]: Accepted publickey for core from 10.0.0.1 port 41268 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:34:06.498754 sshd[3420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:34:06.501789 systemd-logind[1103]: New session 11 of user core.
Feb 12 19:34:06.502696 systemd[1]: Started session-11.scope.
Feb 12 19:34:07.249800 sshd[3420]: pam_unix(sshd:session): session closed for user core
Feb 12 19:34:07.252850 systemd[1]: Started sshd@11-10.0.0.38:22-10.0.0.1:41274.service.
Feb 12 19:34:07.258011 systemd[1]: sshd@10-10.0.0.38:22-10.0.0.1:41268.service: Deactivated successfully.
Feb 12 19:34:07.258877 systemd[1]: session-11.scope: Deactivated successfully.
Feb 12 19:34:07.259521 systemd-logind[1103]: Session 11 logged out. Waiting for processes to exit.
Feb 12 19:34:07.261110 systemd-logind[1103]: Removed session 11.
Feb 12 19:34:07.296447 sshd[3431]: Accepted publickey for core from 10.0.0.1 port 41274 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:34:07.297837 sshd[3431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:34:07.301338 systemd-logind[1103]: New session 12 of user core.
Feb 12 19:34:07.302439 systemd[1]: Started session-12.scope.
Feb 12 19:34:07.403543 sshd[3431]: pam_unix(sshd:session): session closed for user core
Feb 12 19:34:07.405367 systemd[1]: sshd@11-10.0.0.38:22-10.0.0.1:41274.service: Deactivated successfully.
Feb 12 19:34:07.406074 systemd[1]: session-12.scope: Deactivated successfully.
Feb 12 19:34:07.406593 systemd-logind[1103]: Session 12 logged out. Waiting for processes to exit.
Feb 12 19:34:07.407247 systemd-logind[1103]: Removed session 12.
Feb 12 19:34:12.409191 systemd[1]: Started sshd@12-10.0.0.38:22-10.0.0.1:41288.service.
Feb 12 19:34:12.447481 sshd[3445]: Accepted publickey for core from 10.0.0.1 port 41288 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:34:12.448705 sshd[3445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:34:12.452014 systemd-logind[1103]: New session 13 of user core.
Feb 12 19:34:12.452930 systemd[1]: Started session-13.scope.
Feb 12 19:34:12.567308 sshd[3445]: pam_unix(sshd:session): session closed for user core
Feb 12 19:34:12.569941 systemd[1]: sshd@12-10.0.0.38:22-10.0.0.1:41288.service: Deactivated successfully.
Feb 12 19:34:12.570668 systemd[1]: session-13.scope: Deactivated successfully.
Feb 12 19:34:12.571164 systemd-logind[1103]: Session 13 logged out. Waiting for processes to exit.
Feb 12 19:34:12.571716 systemd-logind[1103]: Removed session 13.
Feb 12 19:34:17.572490 systemd[1]: Started sshd@13-10.0.0.38:22-10.0.0.1:33326.service.
Feb 12 19:34:17.610385 sshd[3458]: Accepted publickey for core from 10.0.0.1 port 33326 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:34:17.611652 sshd[3458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:34:17.615189 systemd-logind[1103]: New session 14 of user core.
Feb 12 19:34:17.616016 systemd[1]: Started session-14.scope.
Feb 12 19:34:17.716650 sshd[3458]: pam_unix(sshd:session): session closed for user core
Feb 12 19:34:17.719957 systemd[1]: sshd@13-10.0.0.38:22-10.0.0.1:33326.service: Deactivated successfully.
Feb 12 19:34:17.720591 systemd[1]: session-14.scope: Deactivated successfully.
Feb 12 19:34:17.721146 systemd-logind[1103]: Session 14 logged out. Waiting for processes to exit.
Feb 12 19:34:17.722399 systemd[1]: Started sshd@14-10.0.0.38:22-10.0.0.1:33328.service.
Feb 12 19:34:17.723153 systemd-logind[1103]: Removed session 14.
Feb 12 19:34:17.757647 sshd[3471]: Accepted publickey for core from 10.0.0.1 port 33328 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:34:17.758857 sshd[3471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:34:17.762365 systemd-logind[1103]: New session 15 of user core.
Feb 12 19:34:17.763257 systemd[1]: Started session-15.scope.
Feb 12 19:34:18.413035 sshd[3471]: pam_unix(sshd:session): session closed for user core
Feb 12 19:34:18.415972 systemd[1]: sshd@14-10.0.0.38:22-10.0.0.1:33328.service: Deactivated successfully.
Feb 12 19:34:18.416537 systemd[1]: session-15.scope: Deactivated successfully.
Feb 12 19:34:18.417174 systemd-logind[1103]: Session 15 logged out. Waiting for processes to exit.
Feb 12 19:34:18.418306 systemd[1]: Started sshd@15-10.0.0.38:22-10.0.0.1:33332.service.
Feb 12 19:34:18.419569 systemd-logind[1103]: Removed session 15.
Feb 12 19:34:18.457840 sshd[3482]: Accepted publickey for core from 10.0.0.1 port 33332 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:34:18.458969 sshd[3482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:34:18.462447 systemd-logind[1103]: New session 16 of user core.
Feb 12 19:34:18.463246 systemd[1]: Started session-16.scope.
Feb 12 19:34:19.442262 sshd[3482]: pam_unix(sshd:session): session closed for user core
Feb 12 19:34:19.444930 systemd[1]: sshd@15-10.0.0.38:22-10.0.0.1:33332.service: Deactivated successfully.
Feb 12 19:34:19.445477 systemd[1]: session-16.scope: Deactivated successfully.
Feb 12 19:34:19.446071 systemd-logind[1103]: Session 16 logged out. Waiting for processes to exit.
Feb 12 19:34:19.447679 systemd[1]: Started sshd@16-10.0.0.38:22-10.0.0.1:33338.service.
Feb 12 19:34:19.448651 systemd-logind[1103]: Removed session 16.
Feb 12 19:34:19.485743 sshd[3502]: Accepted publickey for core from 10.0.0.1 port 33338 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:34:19.487044 sshd[3502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:34:19.491561 systemd-logind[1103]: New session 17 of user core.
Feb 12 19:34:19.491717 systemd[1]: Started session-17.scope.
Feb 12 19:34:19.786890 sshd[3502]: pam_unix(sshd:session): session closed for user core
Feb 12 19:34:19.789677 systemd[1]: sshd@16-10.0.0.38:22-10.0.0.1:33338.service: Deactivated successfully.
Feb 12 19:34:19.790306 systemd[1]: session-17.scope: Deactivated successfully.
Feb 12 19:34:19.790807 systemd-logind[1103]: Session 17 logged out. Waiting for processes to exit.
Feb 12 19:34:19.792118 systemd[1]: Started sshd@17-10.0.0.38:22-10.0.0.1:33350.service.
Feb 12 19:34:19.792973 systemd-logind[1103]: Removed session 17.
Feb 12 19:34:19.827098 sshd[3514]: Accepted publickey for core from 10.0.0.1 port 33350 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:34:19.828298 sshd[3514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:34:19.831529 systemd-logind[1103]: New session 18 of user core.
Feb 12 19:34:19.832384 systemd[1]: Started session-18.scope.
Feb 12 19:34:19.971500 sshd[3514]: pam_unix(sshd:session): session closed for user core
Feb 12 19:34:19.973782 systemd[1]: sshd@17-10.0.0.38:22-10.0.0.1:33350.service: Deactivated successfully.
Feb 12 19:34:19.974532 systemd[1]: session-18.scope: Deactivated successfully.
Feb 12 19:34:19.975026 systemd-logind[1103]: Session 18 logged out. Waiting for processes to exit.
Feb 12 19:34:19.975693 systemd-logind[1103]: Removed session 18.
Feb 12 19:34:24.975850 systemd[1]: Started sshd@18-10.0.0.38:22-10.0.0.1:33360.service.
Feb 12 19:34:25.015761 sshd[3527]: Accepted publickey for core from 10.0.0.1 port 33360 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:34:25.017190 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:34:25.020732 systemd-logind[1103]: New session 19 of user core.
Feb 12 19:34:25.021545 systemd[1]: Started session-19.scope.
Feb 12 19:34:25.117158 sshd[3527]: pam_unix(sshd:session): session closed for user core
Feb 12 19:34:25.119562 systemd[1]: sshd@18-10.0.0.38:22-10.0.0.1:33360.service: Deactivated successfully.
Feb 12 19:34:25.120452 systemd[1]: session-19.scope: Deactivated successfully.
Feb 12 19:34:25.121153 systemd-logind[1103]: Session 19 logged out. Waiting for processes to exit.
Feb 12 19:34:25.121960 systemd-logind[1103]: Removed session 19.
Feb 12 19:34:30.121122 systemd[1]: Started sshd@19-10.0.0.38:22-10.0.0.1:40006.service.
Feb 12 19:34:30.157711 sshd[3546]: Accepted publickey for core from 10.0.0.1 port 40006 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:34:30.158753 sshd[3546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:34:30.161751 systemd-logind[1103]: New session 20 of user core.
Feb 12 19:34:30.162439 systemd[1]: Started session-20.scope.
Feb 12 19:34:30.261224 sshd[3546]: pam_unix(sshd:session): session closed for user core
Feb 12 19:34:30.263309 systemd[1]: sshd@19-10.0.0.38:22-10.0.0.1:40006.service: Deactivated successfully.
Feb 12 19:34:30.264053 systemd[1]: session-20.scope: Deactivated successfully.
Feb 12 19:34:30.264618 systemd-logind[1103]: Session 20 logged out. Waiting for processes to exit.
Feb 12 19:34:30.265296 systemd-logind[1103]: Removed session 20.
Feb 12 19:34:35.266037 systemd[1]: Started sshd@20-10.0.0.38:22-10.0.0.1:40020.service.
Feb 12 19:34:35.301972 sshd[3562]: Accepted publickey for core from 10.0.0.1 port 40020 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:34:35.303201 sshd[3562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:34:35.306310 systemd-logind[1103]: New session 21 of user core.
Feb 12 19:34:35.307069 systemd[1]: Started session-21.scope.
Feb 12 19:34:35.409150 sshd[3562]: pam_unix(sshd:session): session closed for user core
Feb 12 19:34:35.411902 systemd[1]: sshd@20-10.0.0.38:22-10.0.0.1:40020.service: Deactivated successfully.
Feb 12 19:34:35.412675 systemd[1]: session-21.scope: Deactivated successfully.
Feb 12 19:34:35.413247 systemd-logind[1103]: Session 21 logged out. Waiting for processes to exit.
Feb 12 19:34:35.413848 systemd-logind[1103]: Removed session 21.
Feb 12 19:34:37.366355 kubelet[1964]: E0212 19:34:37.366309 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:34:40.366311 kubelet[1964]: E0212 19:34:40.366263 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:34:40.413724 systemd[1]: Started sshd@21-10.0.0.38:22-10.0.0.1:58406.service.
Feb 12 19:34:40.448855 sshd[3575]: Accepted publickey for core from 10.0.0.1 port 58406 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:34:40.449824 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:34:40.452961 systemd-logind[1103]: New session 22 of user core.
Feb 12 19:34:40.453821 systemd[1]: Started session-22.scope.
Feb 12 19:34:40.553694 sshd[3575]: pam_unix(sshd:session): session closed for user core
Feb 12 19:34:40.557431 systemd[1]: Started sshd@22-10.0.0.38:22-10.0.0.1:58420.service.
Feb 12 19:34:40.557884 systemd[1]: sshd@21-10.0.0.38:22-10.0.0.1:58406.service: Deactivated successfully.
Feb 12 19:34:40.558494 systemd[1]: session-22.scope: Deactivated successfully.
Feb 12 19:34:40.559121 systemd-logind[1103]: Session 22 logged out. Waiting for processes to exit.
Feb 12 19:34:40.559860 systemd-logind[1103]: Removed session 22.
Feb 12 19:34:40.592967 sshd[3587]: Accepted publickey for core from 10.0.0.1 port 58420 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk
Feb 12 19:34:40.593985 sshd[3587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:34:40.597256 systemd-logind[1103]: New session 23 of user core.
Feb 12 19:34:40.598056 systemd[1]: Started session-23.scope.
Feb 12 19:34:41.366634 kubelet[1964]: E0212 19:34:41.366606 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:34:41.995083 kubelet[1964]: I0212 19:34:41.995033 1964 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-qd52v" podStartSLOduration=69.994982839 podCreationTimestamp="2024-02-12 19:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:33:56.48883553 +0000 UTC m=+37.212910758" watchObservedRunningTime="2024-02-12 19:34:41.994982839 +0000 UTC m=+82.719058067"
Feb 12 19:34:42.003423 env[1119]: time="2024-02-12T19:34:42.003378023Z" level=info msg="StopContainer for \"c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6\" with timeout 30 (s)"
Feb 12 19:34:42.004292 env[1119]: time="2024-02-12T19:34:42.004262273Z" level=info msg="Stop container \"c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6\" with signal terminated"
Feb 12 19:34:42.014477 systemd[1]: cri-containerd-c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6.scope: Deactivated successfully.
Feb 12 19:34:42.023323 env[1119]: time="2024-02-12T19:34:42.023265472Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 19:34:42.028599 env[1119]: time="2024-02-12T19:34:42.028563388Z" level=info msg="StopContainer for \"73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60\" with timeout 1 (s)"
Feb 12 19:34:42.031234 env[1119]: time="2024-02-12T19:34:42.028941321Z" level=info msg="Stop container \"73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60\" with signal terminated"
Feb 12 19:34:42.030654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6-rootfs.mount: Deactivated successfully.
Feb 12 19:34:42.035289 systemd-networkd[1018]: lxc_health: Link DOWN
Feb 12 19:34:42.035293 systemd-networkd[1018]: lxc_health: Lost carrier
Feb 12 19:34:42.044254 env[1119]: time="2024-02-12T19:34:42.044202594Z" level=info msg="shim disconnected" id=c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6
Feb 12 19:34:42.044254 env[1119]: time="2024-02-12T19:34:42.044245165Z" level=warning msg="cleaning up after shim disconnected" id=c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6 namespace=k8s.io
Feb 12 19:34:42.044254 env[1119]: time="2024-02-12T19:34:42.044253641Z" level=info msg="cleaning up dead shim"
Feb 12 19:34:42.052496 env[1119]: time="2024-02-12T19:34:42.052449951Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:34:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3640 runtime=io.containerd.runc.v2\n"
Feb 12 19:34:42.055359 env[1119]: time="2024-02-12T19:34:42.055319340Z" level=info msg="StopContainer for \"c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6\" returns successfully"
Feb 12 19:34:42.056145 env[1119]: time="2024-02-12T19:34:42.056091077Z" level=info msg="StopPodSandbox for \"8f2874c63b810eb998d094b7cf258b91e9afc8ac1e8e8402ed9d33cc15d05230\""
Feb 12 19:34:42.056265 env[1119]: time="2024-02-12T19:34:42.056239046Z" level=info msg="Container to stop \"c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:34:42.057778 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f2874c63b810eb998d094b7cf258b91e9afc8ac1e8e8402ed9d33cc15d05230-shm.mount: Deactivated successfully.
Feb 12 19:34:42.062542 systemd[1]: cri-containerd-73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60.scope: Deactivated successfully.
Feb 12 19:34:42.062758 systemd[1]: cri-containerd-73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60.scope: Consumed 6.063s CPU time.
Feb 12 19:34:42.063169 systemd[1]: cri-containerd-8f2874c63b810eb998d094b7cf258b91e9afc8ac1e8e8402ed9d33cc15d05230.scope: Deactivated successfully.
Feb 12 19:34:42.078668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f2874c63b810eb998d094b7cf258b91e9afc8ac1e8e8402ed9d33cc15d05230-rootfs.mount: Deactivated successfully.
Feb 12 19:34:42.084169 env[1119]: time="2024-02-12T19:34:42.083072314Z" level=info msg="shim disconnected" id=8f2874c63b810eb998d094b7cf258b91e9afc8ac1e8e8402ed9d33cc15d05230
Feb 12 19:34:42.084169 env[1119]: time="2024-02-12T19:34:42.083125254Z" level=warning msg="cleaning up after shim disconnected" id=8f2874c63b810eb998d094b7cf258b91e9afc8ac1e8e8402ed9d33cc15d05230 namespace=k8s.io
Feb 12 19:34:42.084169 env[1119]: time="2024-02-12T19:34:42.083141575Z" level=info msg="cleaning up dead shim"
Feb 12 19:34:42.083690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60-rootfs.mount: Deactivated successfully.
Feb 12 19:34:42.084456 env[1119]: time="2024-02-12T19:34:42.084395172Z" level=info msg="shim disconnected" id=73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60
Feb 12 19:34:42.084456 env[1119]: time="2024-02-12T19:34:42.084436991Z" level=warning msg="cleaning up after shim disconnected" id=73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60 namespace=k8s.io
Feb 12 19:34:42.084456 env[1119]: time="2024-02-12T19:34:42.084444776Z" level=info msg="cleaning up dead shim"
Feb 12 19:34:42.091705 env[1119]: time="2024-02-12T19:34:42.091640836Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:34:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3686 runtime=io.containerd.runc.v2\n"
Feb 12 19:34:42.096285 env[1119]: time="2024-02-12T19:34:42.096252035Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:34:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3685 runtime=io.containerd.runc.v2\n"
Feb 12 19:34:42.096645 env[1119]: time="2024-02-12T19:34:42.096621974Z" level=info msg="TearDown network for sandbox \"8f2874c63b810eb998d094b7cf258b91e9afc8ac1e8e8402ed9d33cc15d05230\" successfully"
Feb 12 19:34:42.096732 env[1119]: time="2024-02-12T19:34:42.096712194Z" level=info msg="StopPodSandbox for \"8f2874c63b810eb998d094b7cf258b91e9afc8ac1e8e8402ed9d33cc15d05230\" returns successfully"
Feb 12 19:34:42.166250 env[1119]: time="2024-02-12T19:34:42.166195101Z" level=info msg="StopContainer for \"73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60\" returns successfully"
Feb 12 19:34:42.166609 env[1119]: time="2024-02-12T19:34:42.166573766Z" level=info msg="StopPodSandbox for \"7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360\""
Feb 12 19:34:42.166829 env[1119]: time="2024-02-12T19:34:42.166620894Z" level=info msg="Container to stop \"8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:34:42.166829 env[1119]: time="2024-02-12T19:34:42.166634080Z" level=info msg="Container to stop \"545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:34:42.166829 env[1119]: time="2024-02-12T19:34:42.166643227Z" level=info msg="Container to stop \"73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:34:42.166829 env[1119]: time="2024-02-12T19:34:42.166652635Z" level=info msg="Container to stop \"634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:34:42.166829 env[1119]: time="2024-02-12T19:34:42.166661261Z" level=info msg="Container to stop \"de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:34:42.171732 systemd[1]: cri-containerd-7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360.scope: Deactivated successfully.
Feb 12 19:34:42.208046 env[1119]: time="2024-02-12T19:34:42.207987089Z" level=info msg="shim disconnected" id=7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360
Feb 12 19:34:42.208046 env[1119]: time="2024-02-12T19:34:42.208036693Z" level=warning msg="cleaning up after shim disconnected" id=7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360 namespace=k8s.io
Feb 12 19:34:42.208046 env[1119]: time="2024-02-12T19:34:42.208045359Z" level=info msg="cleaning up dead shim"
Feb 12 19:34:42.213964 env[1119]: time="2024-02-12T19:34:42.213921967Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:34:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3727 runtime=io.containerd.runc.v2\n"
Feb 12 19:34:42.214335 env[1119]: time="2024-02-12T19:34:42.214304850Z" level=info msg="TearDown network for sandbox \"7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360\" successfully"
Feb 12 19:34:42.214388 env[1119]: time="2024-02-12T19:34:42.214334707Z" level=info msg="StopPodSandbox for \"7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360\" returns successfully"
Feb 12 19:34:42.305993 kubelet[1964]: I0212 19:34:42.304515 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2a1ad13-a09e-4e69-96cc-ff6e19516d31-cilium-config-path\") pod \"f2a1ad13-a09e-4e69-96cc-ff6e19516d31\" (UID: \"f2a1ad13-a09e-4e69-96cc-ff6e19516d31\") "
Feb 12 19:34:42.305993 kubelet[1964]: I0212 19:34:42.304559 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-host-proc-sys-net\") pod \"de82b679-77a0-48d7-9057-90f424cf4bb8\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") "
Feb 12 19:34:42.305993 kubelet[1964]: I0212 19:34:42.304575 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-cilium-run\") pod \"de82b679-77a0-48d7-9057-90f424cf4bb8\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") "
Feb 12 19:34:42.305993 kubelet[1964]: I0212 19:34:42.304591 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-xtables-lock\") pod \"de82b679-77a0-48d7-9057-90f424cf4bb8\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") "
Feb 12 19:34:42.305993 kubelet[1964]: I0212 19:34:42.304608 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-bpf-maps\") pod \"de82b679-77a0-48d7-9057-90f424cf4bb8\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") "
Feb 12 19:34:42.305993 kubelet[1964]: I0212 19:34:42.304621 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-cni-path\") pod \"de82b679-77a0-48d7-9057-90f424cf4bb8\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") "
Feb 12 19:34:42.306335 kubelet[1964]: I0212 19:34:42.304639 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6rnn\" (UniqueName: \"kubernetes.io/projected/de82b679-77a0-48d7-9057-90f424cf4bb8-kube-api-access-s6rnn\") pod \"de82b679-77a0-48d7-9057-90f424cf4bb8\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") "
Feb 12 19:34:42.306335 kubelet[1964]: I0212 19:34:42.304655 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-cilium-cgroup\") pod \"de82b679-77a0-48d7-9057-90f424cf4bb8\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") "
Feb 12 19:34:42.306335 kubelet[1964]: I0212 19:34:42.304672 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de82b679-77a0-48d7-9057-90f424cf4bb8-clustermesh-secrets\") pod \"de82b679-77a0-48d7-9057-90f424cf4bb8\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") "
Feb 12 19:34:42.306335 kubelet[1964]: I0212 19:34:42.304686 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-host-proc-sys-kernel\") pod \"de82b679-77a0-48d7-9057-90f424cf4bb8\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") "
Feb 12 19:34:42.306335 kubelet[1964]: I0212 19:34:42.304704 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-etc-cni-netd\") pod \"de82b679-77a0-48d7-9057-90f424cf4bb8\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") "
Feb 12 19:34:42.306335 kubelet[1964]: I0212 19:34:42.304731 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t2gc\" (UniqueName: \"kubernetes.io/projected/f2a1ad13-a09e-4e69-96cc-ff6e19516d31-kube-api-access-2t2gc\") pod \"f2a1ad13-a09e-4e69-96cc-ff6e19516d31\" (UID: \"f2a1ad13-a09e-4e69-96cc-ff6e19516d31\") "
Feb 12 19:34:42.306530 kubelet[1964]: I0212 19:34:42.304747 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-hostproc\") pod \"de82b679-77a0-48d7-9057-90f424cf4bb8\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") "
Feb 12 19:34:42.306530 kubelet[1964]: I0212 19:34:42.304748 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "de82b679-77a0-48d7-9057-90f424cf4bb8" (UID: "de82b679-77a0-48d7-9057-90f424cf4bb8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:34:42.306530 kubelet[1964]: I0212 19:34:42.304782 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-hostproc" (OuterVolumeSpecName: "hostproc") pod "de82b679-77a0-48d7-9057-90f424cf4bb8" (UID: "de82b679-77a0-48d7-9057-90f424cf4bb8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:34:42.306530 kubelet[1964]: I0212 19:34:42.304792 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "de82b679-77a0-48d7-9057-90f424cf4bb8" (UID: "de82b679-77a0-48d7-9057-90f424cf4bb8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:34:42.306530 kubelet[1964]: I0212 19:34:42.304814 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "de82b679-77a0-48d7-9057-90f424cf4bb8" (UID: "de82b679-77a0-48d7-9057-90f424cf4bb8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:34:42.306690 kubelet[1964]: W0212 19:34:42.304746 1964 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/f2a1ad13-a09e-4e69-96cc-ff6e19516d31/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 19:34:42.306690 kubelet[1964]: I0212 19:34:42.305326 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "de82b679-77a0-48d7-9057-90f424cf4bb8" (UID: "de82b679-77a0-48d7-9057-90f424cf4bb8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:34:42.306690 kubelet[1964]: I0212 19:34:42.305363 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "de82b679-77a0-48d7-9057-90f424cf4bb8" (UID: "de82b679-77a0-48d7-9057-90f424cf4bb8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:34:42.306690 kubelet[1964]: I0212 19:34:42.305380 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "de82b679-77a0-48d7-9057-90f424cf4bb8" (UID: "de82b679-77a0-48d7-9057-90f424cf4bb8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:34:42.306690 kubelet[1964]: I0212 19:34:42.305393 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "de82b679-77a0-48d7-9057-90f424cf4bb8" (UID: "de82b679-77a0-48d7-9057-90f424cf4bb8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:34:42.306849 kubelet[1964]: I0212 19:34:42.304831 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-cni-path" (OuterVolumeSpecName: "cni-path") pod "de82b679-77a0-48d7-9057-90f424cf4bb8" (UID: "de82b679-77a0-48d7-9057-90f424cf4bb8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:34:42.307393 kubelet[1964]: I0212 19:34:42.307366 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de82b679-77a0-48d7-9057-90f424cf4bb8-kube-api-access-s6rnn" (OuterVolumeSpecName: "kube-api-access-s6rnn") pod "de82b679-77a0-48d7-9057-90f424cf4bb8" (UID: "de82b679-77a0-48d7-9057-90f424cf4bb8"). InnerVolumeSpecName "kube-api-access-s6rnn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:34:42.307560 kubelet[1964]: I0212 19:34:42.307437 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de82b679-77a0-48d7-9057-90f424cf4bb8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "de82b679-77a0-48d7-9057-90f424cf4bb8" (UID: "de82b679-77a0-48d7-9057-90f424cf4bb8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 19:34:42.307736 kubelet[1964]: I0212 19:34:42.307708 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2a1ad13-a09e-4e69-96cc-ff6e19516d31-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f2a1ad13-a09e-4e69-96cc-ff6e19516d31" (UID: "f2a1ad13-a09e-4e69-96cc-ff6e19516d31"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 19:34:42.309536 kubelet[1964]: I0212 19:34:42.309511 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a1ad13-a09e-4e69-96cc-ff6e19516d31-kube-api-access-2t2gc" (OuterVolumeSpecName: "kube-api-access-2t2gc") pod "f2a1ad13-a09e-4e69-96cc-ff6e19516d31" (UID: "f2a1ad13-a09e-4e69-96cc-ff6e19516d31"). InnerVolumeSpecName "kube-api-access-2t2gc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:34:42.405237 kubelet[1964]: I0212 19:34:42.405170 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de82b679-77a0-48d7-9057-90f424cf4bb8-hubble-tls\") pod \"de82b679-77a0-48d7-9057-90f424cf4bb8\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") "
Feb 12 19:34:42.405237 kubelet[1964]: I0212 19:34:42.405230 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-lib-modules\") pod \"de82b679-77a0-48d7-9057-90f424cf4bb8\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") "
Feb 12 19:34:42.405679 kubelet[1964]: I0212 19:34:42.405267 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de82b679-77a0-48d7-9057-90f424cf4bb8-cilium-config-path\") pod \"de82b679-77a0-48d7-9057-90f424cf4bb8\" (UID: \"de82b679-77a0-48d7-9057-90f424cf4bb8\") "
Feb 12 19:34:42.405679 kubelet[1964]: I0212 19:34:42.405313 1964 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 12 19:34:42.405679 kubelet[1964]: I0212 19:34:42.405310 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "de82b679-77a0-48d7-9057-90f424cf4bb8" (UID: "de82b679-77a0-48d7-9057-90f424cf4bb8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:34:42.405679 kubelet[1964]: I0212 19:34:42.405330 1964 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 12 19:34:42.405679 kubelet[1964]: I0212 19:34:42.405408 1964 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2t2gc\" (UniqueName: \"kubernetes.io/projected/f2a1ad13-a09e-4e69-96cc-ff6e19516d31-kube-api-access-2t2gc\") on node \"localhost\" DevicePath \"\""
Feb 12 19:34:42.405679 kubelet[1964]: I0212 19:34:42.405422 1964 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2a1ad13-a09e-4e69-96cc-ff6e19516d31-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 12 19:34:42.405679 kubelet[1964]: I0212 19:34:42.405432 1964 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 12 19:34:42.405881 kubelet[1964]: I0212 19:34:42.405442 1964 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 12 19:34:42.405881 kubelet[1964]: I0212 19:34:42.405463 1964 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 12 19:34:42.405881 kubelet[1964]: I0212 19:34:42.405472 1964 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 12 19:34:42.405881 kubelet[1964]: I0212 19:34:42.405491 1964 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 12 19:34:42.405881 kubelet[1964]: I0212 19:34:42.405501 1964 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-s6rnn\" (UniqueName: \"kubernetes.io/projected/de82b679-77a0-48d7-9057-90f424cf4bb8-kube-api-access-s6rnn\") on node \"localhost\" DevicePath \"\""
Feb 12 19:34:42.405881 kubelet[1964]: I0212 19:34:42.405511 1964 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 12 19:34:42.405881 kubelet[1964]: I0212 19:34:42.405519 1964 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 12 19:34:42.405881 kubelet[1964]: I0212 19:34:42.405539 1964 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de82b679-77a0-48d7-9057-90f424cf4bb8-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 12 19:34:42.406308 kubelet[1964]: W0212 19:34:42.405562 1964 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/de82b679-77a0-48d7-9057-90f424cf4bb8/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 19:34:42.407757 kubelet[1964]: I0212 19:34:42.407722 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de82b679-77a0-48d7-9057-90f424cf4bb8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "de82b679-77a0-48d7-9057-90f424cf4bb8" (UID: "de82b679-77a0-48d7-9057-90f424cf4bb8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 19:34:42.407885 kubelet[1964]: I0212 19:34:42.407846 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de82b679-77a0-48d7-9057-90f424cf4bb8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "de82b679-77a0-48d7-9057-90f424cf4bb8" (UID: "de82b679-77a0-48d7-9057-90f424cf4bb8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:34:42.506256 kubelet[1964]: I0212 19:34:42.506206 1964 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de82b679-77a0-48d7-9057-90f424cf4bb8-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 12 19:34:42.506256 kubelet[1964]: I0212 19:34:42.506244 1964 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de82b679-77a0-48d7-9057-90f424cf4bb8-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 12 19:34:42.506256 kubelet[1964]: I0212 19:34:42.506253 1964 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de82b679-77a0-48d7-9057-90f424cf4bb8-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 12 19:34:42.537766 kubelet[1964]: I0212 19:34:42.537735 1964 scope.go:115] "RemoveContainer" containerID="c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6"
Feb 12 19:34:42.539143 env[1119]: time="2024-02-12T19:34:42.539102275Z" level=info msg="RemoveContainer for \"c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6\""
Feb 12 19:34:42.542600 systemd[1]: Removed slice kubepods-besteffort-podf2a1ad13_a09e_4e69_96cc_ff6e19516d31.slice.
Feb 12 19:34:42.546193 systemd[1]: Removed slice kubepods-burstable-podde82b679_77a0_48d7_9057_90f424cf4bb8.slice.
Feb 12 19:34:42.546308 systemd[1]: kubepods-burstable-podde82b679_77a0_48d7_9057_90f424cf4bb8.slice: Consumed 6.155s CPU time.
Feb 12 19:34:42.547315 env[1119]: time="2024-02-12T19:34:42.547257737Z" level=info msg="RemoveContainer for \"c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6\" returns successfully"
Feb 12 19:34:42.547968 kubelet[1964]: I0212 19:34:42.547941 1964 scope.go:115] "RemoveContainer" containerID="c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6"
Feb 12 19:34:42.548364 env[1119]: time="2024-02-12T19:34:42.548203233Z" level=error msg="ContainerStatus for \"c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6\": not found"
Feb 12 19:34:42.548565 kubelet[1964]: E0212 19:34:42.548539 1964 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6\": not found" containerID="c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6"
Feb 12 19:34:42.548627 kubelet[1964]: I0212 19:34:42.548585 1964 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6} err="failed to get container status \"c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1d0312abfaa259226b4b715c53212e4403c3d1b34004ba522383400dd5e17d6\": not found"
Feb 12 19:34:42.548627 kubelet[1964]: I0212 19:34:42.548599 1964 scope.go:115] "RemoveContainer" containerID="73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60"
Feb 12 19:34:42.550309 env[1119]: time="2024-02-12T19:34:42.550283382Z" level=info msg="RemoveContainer for \"73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60\""
Feb 12 19:34:42.554036 env[1119]: time="2024-02-12T19:34:42.554002786Z" level=info msg="RemoveContainer for \"73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60\" returns successfully"
Feb 12 19:34:42.554257 kubelet[1964]: I0212 19:34:42.554235 1964 scope.go:115] "RemoveContainer" containerID="545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806"
Feb 12 19:34:42.555502 env[1119]: time="2024-02-12T19:34:42.555464767Z" level=info msg="RemoveContainer for \"545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806\""
Feb 12 19:34:42.559782 env[1119]: time="2024-02-12T19:34:42.559702230Z" level=info msg="RemoveContainer for \"545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806\" returns successfully"
Feb 12 19:34:42.561667 kubelet[1964]: I0212 19:34:42.561636 1964 scope.go:115] "RemoveContainer" containerID="de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a"
Feb 12 19:34:42.562801 env[1119]: time="2024-02-12T19:34:42.562767460Z" level=info msg="RemoveContainer for \"de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a\""
Feb 12 19:34:42.565506 env[1119]: time="2024-02-12T19:34:42.565465986Z" level=info msg="RemoveContainer for \"de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a\" returns successfully"
Feb 12 19:34:42.565708 kubelet[1964]: I0212 19:34:42.565679 1964 scope.go:115] "RemoveContainer" containerID="8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf"
Feb 12 19:34:42.566590 env[1119]: time="2024-02-12T19:34:42.566560303Z" level=info msg="RemoveContainer for \"8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf\""
Feb 12 19:34:42.569362 env[1119]: time="2024-02-12T19:34:42.569338449Z" level=info msg="RemoveContainer for \"8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf\" returns successfully"
Feb 12 19:34:42.569510 kubelet[1964]: I0212 19:34:42.569490 1964 scope.go:115] "RemoveContainer" containerID="634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d"
Feb 12 19:34:42.570370 env[1119]: time="2024-02-12T19:34:42.570352795Z" level=info msg="RemoveContainer for \"634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d\""
Feb 12 19:34:42.572974 env[1119]: time="2024-02-12T19:34:42.572947586Z" level=info msg="RemoveContainer for \"634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d\" returns successfully"
Feb 12 19:34:42.573187 kubelet[1964]: I0212 19:34:42.573115 1964 scope.go:115] "RemoveContainer" containerID="73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60"
Feb 12 19:34:42.573342 env[1119]: time="2024-02-12T19:34:42.573277358Z" level=error msg="ContainerStatus for \"73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60\": not found"
Feb 12 19:34:42.573430 kubelet[1964]: E0212 19:34:42.573404 1964 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60\": not found" containerID="73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60"
Feb 12 19:34:42.573504 kubelet[1964]: I0212 19:34:42.573439 1964 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60} err="failed to get container status \"73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60\": rpc error: code = NotFound desc = an error
occurred when try to find container \"73568447414767471fb163bce791d477abae2210400d3435a6adb89eb51dfc60\": not found" Feb 12 19:34:42.573504 kubelet[1964]: I0212 19:34:42.573447 1964 scope.go:115] "RemoveContainer" containerID="545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806" Feb 12 19:34:42.573603 env[1119]: time="2024-02-12T19:34:42.573563228Z" level=error msg="ContainerStatus for \"545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806\": not found" Feb 12 19:34:42.573680 kubelet[1964]: E0212 19:34:42.573661 1964 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806\": not found" containerID="545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806" Feb 12 19:34:42.573730 kubelet[1964]: I0212 19:34:42.573680 1964 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806} err="failed to get container status \"545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806\": rpc error: code = NotFound desc = an error occurred when try to find container \"545017aa5134329ab9da75d863322ac988536de921d9ec5ec03ee1428efe9806\": not found" Feb 12 19:34:42.573730 kubelet[1964]: I0212 19:34:42.573693 1964 scope.go:115] "RemoveContainer" containerID="de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a" Feb 12 19:34:42.573836 env[1119]: time="2024-02-12T19:34:42.573801458Z" level=error msg="ContainerStatus for \"de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a\": not found" Feb 12 19:34:42.573909 kubelet[1964]: E0212 19:34:42.573893 1964 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a\": not found" containerID="de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a" Feb 12 19:34:42.573909 kubelet[1964]: I0212 19:34:42.573912 1964 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a} err="failed to get container status \"de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a\": rpc error: code = NotFound desc = an error occurred when try to find container \"de44dbbd5522f645a17e53c34f3a7523a74a8ac8e29b2b0c34fb81838ee4306a\": not found" Feb 12 19:34:42.573996 kubelet[1964]: I0212 19:34:42.573919 1964 scope.go:115] "RemoveContainer" containerID="8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf" Feb 12 19:34:42.574091 env[1119]: time="2024-02-12T19:34:42.574047654Z" level=error msg="ContainerStatus for \"8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf\": not found" Feb 12 19:34:42.574177 kubelet[1964]: E0212 19:34:42.574164 1964 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf\": not found" containerID="8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf" Feb 12 19:34:42.574227 kubelet[1964]: I0212 19:34:42.574181 1964 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={Type:containerd ID:8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf} err="failed to get container status \"8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f2807b216d51e48974d9a7e91a93c307c2ef8019e010b6c72507e9b106d5daf\": not found" Feb 12 19:34:42.574227 kubelet[1964]: I0212 19:34:42.574188 1964 scope.go:115] "RemoveContainer" containerID="634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d" Feb 12 19:34:42.574359 env[1119]: time="2024-02-12T19:34:42.574310229Z" level=error msg="ContainerStatus for \"634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d\": not found" Feb 12 19:34:42.574425 kubelet[1964]: E0212 19:34:42.574418 1964 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d\": not found" containerID="634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d" Feb 12 19:34:42.574457 kubelet[1964]: I0212 19:34:42.574434 1964 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d} err="failed to get container status \"634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d\": rpc error: code = NotFound desc = an error occurred when try to find container \"634fe1b911cdbd9f264c8becd8ceb8c0a16fc5b57d127a58e84727ca2c6b075d\": not found" Feb 12 19:34:43.007643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360-rootfs.mount: Deactivated successfully. 
Feb 12 19:34:43.007727 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a616c699c9a4867f6e0c811cd27abb3290921407580f5038715cdc5a6f6b360-shm.mount: Deactivated successfully. Feb 12 19:34:43.007776 systemd[1]: var-lib-kubelet-pods-de82b679\x2d77a0\x2d48d7\x2d9057\x2d90f424cf4bb8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds6rnn.mount: Deactivated successfully. Feb 12 19:34:43.007826 systemd[1]: var-lib-kubelet-pods-f2a1ad13\x2da09e\x2d4e69\x2d96cc\x2dff6e19516d31-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2t2gc.mount: Deactivated successfully. Feb 12 19:34:43.007879 systemd[1]: var-lib-kubelet-pods-de82b679\x2d77a0\x2d48d7\x2d9057\x2d90f424cf4bb8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:34:43.007930 systemd[1]: var-lib-kubelet-pods-de82b679\x2d77a0\x2d48d7\x2d9057\x2d90f424cf4bb8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:34:43.368955 kubelet[1964]: I0212 19:34:43.368865 1964 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=de82b679-77a0-48d7-9057-90f424cf4bb8 path="/var/lib/kubelet/pods/de82b679-77a0-48d7-9057-90f424cf4bb8/volumes" Feb 12 19:34:43.369441 kubelet[1964]: I0212 19:34:43.369418 1964 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=f2a1ad13-a09e-4e69-96cc-ff6e19516d31 path="/var/lib/kubelet/pods/f2a1ad13-a09e-4e69-96cc-ff6e19516d31/volumes" Feb 12 19:34:43.893722 sshd[3587]: pam_unix(sshd:session): session closed for user core Feb 12 19:34:43.896991 systemd[1]: Started sshd@23-10.0.0.38:22-10.0.0.1:58436.service. Feb 12 19:34:43.897413 systemd[1]: sshd@22-10.0.0.38:22-10.0.0.1:58420.service: Deactivated successfully. Feb 12 19:34:43.898147 systemd[1]: session-23.scope: Deactivated successfully. Feb 12 19:34:43.898782 systemd-logind[1103]: Session 23 logged out. Waiting for processes to exit. Feb 12 19:34:43.899863 systemd-logind[1103]: Removed session 23. 
Feb 12 19:34:43.934964 sshd[3745]: Accepted publickey for core from 10.0.0.1 port 58436 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:34:43.935973 sshd[3745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:34:43.939339 systemd-logind[1103]: New session 24 of user core. Feb 12 19:34:43.940332 systemd[1]: Started session-24.scope. Feb 12 19:34:44.429156 kubelet[1964]: E0212 19:34:44.429115 1964 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:34:44.454313 sshd[3745]: pam_unix(sshd:session): session closed for user core Feb 12 19:34:44.457042 systemd[1]: sshd@23-10.0.0.38:22-10.0.0.1:58436.service: Deactivated successfully. Feb 12 19:34:44.457580 systemd[1]: session-24.scope: Deactivated successfully. Feb 12 19:34:44.458912 systemd[1]: Started sshd@24-10.0.0.38:22-10.0.0.1:58444.service. Feb 12 19:34:44.459895 systemd-logind[1103]: Session 24 logged out. Waiting for processes to exit. Feb 12 19:34:44.460640 systemd-logind[1103]: Removed session 24. 
Feb 12 19:34:44.466588 kubelet[1964]: I0212 19:34:44.466553 1964 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:34:44.466818 kubelet[1964]: E0212 19:34:44.466803 1964 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de82b679-77a0-48d7-9057-90f424cf4bb8" containerName="apply-sysctl-overwrites" Feb 12 19:34:44.466896 kubelet[1964]: E0212 19:34:44.466883 1964 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de82b679-77a0-48d7-9057-90f424cf4bb8" containerName="clean-cilium-state" Feb 12 19:34:44.466971 kubelet[1964]: E0212 19:34:44.466955 1964 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de82b679-77a0-48d7-9057-90f424cf4bb8" containerName="cilium-agent" Feb 12 19:34:44.467053 kubelet[1964]: E0212 19:34:44.467038 1964 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de82b679-77a0-48d7-9057-90f424cf4bb8" containerName="mount-cgroup" Feb 12 19:34:44.467156 kubelet[1964]: E0212 19:34:44.467119 1964 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de82b679-77a0-48d7-9057-90f424cf4bb8" containerName="mount-bpf-fs" Feb 12 19:34:44.467239 kubelet[1964]: E0212 19:34:44.467223 1964 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2a1ad13-a09e-4e69-96cc-ff6e19516d31" containerName="cilium-operator" Feb 12 19:34:44.467355 kubelet[1964]: I0212 19:34:44.467338 1964 memory_manager.go:346] "RemoveStaleState removing state" podUID="de82b679-77a0-48d7-9057-90f424cf4bb8" containerName="cilium-agent" Feb 12 19:34:44.467438 kubelet[1964]: I0212 19:34:44.467422 1964 memory_manager.go:346] "RemoveStaleState removing state" podUID="f2a1ad13-a09e-4e69-96cc-ff6e19516d31" containerName="cilium-operator" Feb 12 19:34:44.474048 systemd[1]: Created slice kubepods-burstable-pod9c952ae3_b2dd_4573_bdf1_18414a92275d.slice. 
Feb 12 19:34:44.499538 sshd[3759]: Accepted publickey for core from 10.0.0.1 port 58444 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:34:44.500786 sshd[3759]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:34:44.504365 systemd-logind[1103]: New session 25 of user core. Feb 12 19:34:44.505166 systemd[1]: Started session-25.scope. Feb 12 19:34:44.615309 kubelet[1964]: I0212 19:34:44.615258 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-cilium-run\") pod \"cilium-bxtdv\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " pod="kube-system/cilium-bxtdv" Feb 12 19:34:44.615309 kubelet[1964]: I0212 19:34:44.615308 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-hostproc\") pod \"cilium-bxtdv\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " pod="kube-system/cilium-bxtdv" Feb 12 19:34:44.615549 kubelet[1964]: I0212 19:34:44.615343 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c952ae3-b2dd-4573-bdf1-18414a92275d-cilium-config-path\") pod \"cilium-bxtdv\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " pod="kube-system/cilium-bxtdv" Feb 12 19:34:44.615549 kubelet[1964]: I0212 19:34:44.615365 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kczgm\" (UniqueName: \"kubernetes.io/projected/9c952ae3-b2dd-4573-bdf1-18414a92275d-kube-api-access-kczgm\") pod \"cilium-bxtdv\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " pod="kube-system/cilium-bxtdv" Feb 12 19:34:44.615549 kubelet[1964]: I0212 19:34:44.615389 1964 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9c952ae3-b2dd-4573-bdf1-18414a92275d-cilium-ipsec-secrets\") pod \"cilium-bxtdv\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " pod="kube-system/cilium-bxtdv" Feb 12 19:34:44.615549 kubelet[1964]: I0212 19:34:44.615432 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-xtables-lock\") pod \"cilium-bxtdv\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " pod="kube-system/cilium-bxtdv" Feb 12 19:34:44.615549 kubelet[1964]: I0212 19:34:44.615453 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c952ae3-b2dd-4573-bdf1-18414a92275d-hubble-tls\") pod \"cilium-bxtdv\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " pod="kube-system/cilium-bxtdv" Feb 12 19:34:44.615725 kubelet[1964]: I0212 19:34:44.615532 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-host-proc-sys-net\") pod \"cilium-bxtdv\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " pod="kube-system/cilium-bxtdv" Feb 12 19:34:44.615725 kubelet[1964]: I0212 19:34:44.615641 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-cilium-cgroup\") pod \"cilium-bxtdv\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " pod="kube-system/cilium-bxtdv" Feb 12 19:34:44.615725 kubelet[1964]: I0212 19:34:44.615691 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-etc-cni-netd\") pod \"cilium-bxtdv\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " pod="kube-system/cilium-bxtdv" Feb 12 19:34:44.615725 kubelet[1964]: I0212 19:34:44.615722 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-lib-modules\") pod \"cilium-bxtdv\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " pod="kube-system/cilium-bxtdv" Feb 12 19:34:44.615866 kubelet[1964]: I0212 19:34:44.615751 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c952ae3-b2dd-4573-bdf1-18414a92275d-clustermesh-secrets\") pod \"cilium-bxtdv\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " pod="kube-system/cilium-bxtdv" Feb 12 19:34:44.615866 kubelet[1964]: I0212 19:34:44.615790 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-cni-path\") pod \"cilium-bxtdv\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " pod="kube-system/cilium-bxtdv" Feb 12 19:34:44.615866 kubelet[1964]: I0212 19:34:44.615816 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-host-proc-sys-kernel\") pod \"cilium-bxtdv\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " pod="kube-system/cilium-bxtdv" Feb 12 19:34:44.615866 kubelet[1964]: I0212 19:34:44.615858 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-bpf-maps\") pod \"cilium-bxtdv\" (UID: 
\"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " pod="kube-system/cilium-bxtdv" Feb 12 19:34:44.620363 sshd[3759]: pam_unix(sshd:session): session closed for user core Feb 12 19:34:44.622765 systemd[1]: sshd@24-10.0.0.38:22-10.0.0.1:58444.service: Deactivated successfully. Feb 12 19:34:44.623265 systemd[1]: session-25.scope: Deactivated successfully. Feb 12 19:34:44.624697 systemd[1]: Started sshd@25-10.0.0.38:22-10.0.0.1:58446.service. Feb 12 19:34:44.625515 systemd-logind[1103]: Session 25 logged out. Waiting for processes to exit. Feb 12 19:34:44.626570 systemd-logind[1103]: Removed session 25. Feb 12 19:34:44.661771 sshd[3772]: Accepted publickey for core from 10.0.0.1 port 58446 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:34:44.662939 sshd[3772]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:34:44.666373 systemd-logind[1103]: New session 26 of user core. Feb 12 19:34:44.667102 systemd[1]: Started session-26.scope. Feb 12 19:34:44.777033 kubelet[1964]: E0212 19:34:44.776919 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:34:44.777474 env[1119]: time="2024-02-12T19:34:44.777413658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxtdv,Uid:9c952ae3-b2dd-4573-bdf1-18414a92275d,Namespace:kube-system,Attempt:0,}" Feb 12 19:34:44.791544 env[1119]: time="2024-02-12T19:34:44.791464247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:34:44.791544 env[1119]: time="2024-02-12T19:34:44.791511707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:34:44.791544 env[1119]: time="2024-02-12T19:34:44.791524482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:34:44.791747 env[1119]: time="2024-02-12T19:34:44.791689984Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72399b472b61fe4e1922eabd57c15255325895afb805e3d68633011d89e2342b pid=3792 runtime=io.containerd.runc.v2 Feb 12 19:34:44.803258 systemd[1]: Started cri-containerd-72399b472b61fe4e1922eabd57c15255325895afb805e3d68633011d89e2342b.scope. Feb 12 19:34:44.824919 env[1119]: time="2024-02-12T19:34:44.824883692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxtdv,Uid:9c952ae3-b2dd-4573-bdf1-18414a92275d,Namespace:kube-system,Attempt:0,} returns sandbox id \"72399b472b61fe4e1922eabd57c15255325895afb805e3d68633011d89e2342b\"" Feb 12 19:34:44.825776 kubelet[1964]: E0212 19:34:44.825760 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:34:44.829611 env[1119]: time="2024-02-12T19:34:44.829560703Z" level=info msg="CreateContainer within sandbox \"72399b472b61fe4e1922eabd57c15255325895afb805e3d68633011d89e2342b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:34:44.840150 env[1119]: time="2024-02-12T19:34:44.840110288Z" level=info msg="CreateContainer within sandbox \"72399b472b61fe4e1922eabd57c15255325895afb805e3d68633011d89e2342b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2425df19678053ed34fd10d7cc7d9059105684c30de72612afc72dc3734a5ca7\"" Feb 12 19:34:44.840555 env[1119]: time="2024-02-12T19:34:44.840492661Z" level=info msg="StartContainer for \"2425df19678053ed34fd10d7cc7d9059105684c30de72612afc72dc3734a5ca7\"" Feb 12 19:34:44.854779 systemd[1]: Started cri-containerd-2425df19678053ed34fd10d7cc7d9059105684c30de72612afc72dc3734a5ca7.scope. 
Feb 12 19:34:44.865443 systemd[1]: cri-containerd-2425df19678053ed34fd10d7cc7d9059105684c30de72612afc72dc3734a5ca7.scope: Deactivated successfully. Feb 12 19:34:44.865667 systemd[1]: Stopped cri-containerd-2425df19678053ed34fd10d7cc7d9059105684c30de72612afc72dc3734a5ca7.scope. Feb 12 19:34:44.878422 env[1119]: time="2024-02-12T19:34:44.878362117Z" level=info msg="shim disconnected" id=2425df19678053ed34fd10d7cc7d9059105684c30de72612afc72dc3734a5ca7 Feb 12 19:34:44.878422 env[1119]: time="2024-02-12T19:34:44.878406381Z" level=warning msg="cleaning up after shim disconnected" id=2425df19678053ed34fd10d7cc7d9059105684c30de72612afc72dc3734a5ca7 namespace=k8s.io Feb 12 19:34:44.878422 env[1119]: time="2024-02-12T19:34:44.878415237Z" level=info msg="cleaning up dead shim" Feb 12 19:34:44.885651 env[1119]: time="2024-02-12T19:34:44.885606327Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:34:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3849 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T19:34:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2425df19678053ed34fd10d7cc7d9059105684c30de72612afc72dc3734a5ca7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 19:34:44.885902 env[1119]: time="2024-02-12T19:34:44.885816383Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" Feb 12 19:34:44.888060 env[1119]: time="2024-02-12T19:34:44.888011568Z" level=error msg="Failed to pipe stdout of container \"2425df19678053ed34fd10d7cc7d9059105684c30de72612afc72dc3734a5ca7\"" error="reading from a closed fifo" Feb 12 19:34:44.888170 env[1119]: time="2024-02-12T19:34:44.888094896Z" level=error msg="Failed to pipe stderr of container \"2425df19678053ed34fd10d7cc7d9059105684c30de72612afc72dc3734a5ca7\"" error="reading from a closed fifo" Feb 12 19:34:44.890431 env[1119]: time="2024-02-12T19:34:44.890367356Z" level=error 
msg="StartContainer for \"2425df19678053ed34fd10d7cc7d9059105684c30de72612afc72dc3734a5ca7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 19:34:44.890627 kubelet[1964]: E0212 19:34:44.890607 1964 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2425df19678053ed34fd10d7cc7d9059105684c30de72612afc72dc3734a5ca7" Feb 12 19:34:44.890794 kubelet[1964]: E0212 19:34:44.890723 1964 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 19:34:44.890794 kubelet[1964]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 19:34:44.890794 kubelet[1964]: rm /hostbin/cilium-mount Feb 12 19:34:44.890867 kubelet[1964]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-kczgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-bxtdv_kube-system(9c952ae3-b2dd-4573-bdf1-18414a92275d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 19:34:44.890867 kubelet[1964]: E0212 19:34:44.890763 1964 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: 
unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-bxtdv" podUID=9c952ae3-b2dd-4573-bdf1-18414a92275d Feb 12 19:34:45.547630 env[1119]: time="2024-02-12T19:34:45.547591915Z" level=info msg="StopPodSandbox for \"72399b472b61fe4e1922eabd57c15255325895afb805e3d68633011d89e2342b\"" Feb 12 19:34:45.547799 env[1119]: time="2024-02-12T19:34:45.547643382Z" level=info msg="Container to stop \"2425df19678053ed34fd10d7cc7d9059105684c30de72612afc72dc3734a5ca7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:34:45.552304 systemd[1]: cri-containerd-72399b472b61fe4e1922eabd57c15255325895afb805e3d68633011d89e2342b.scope: Deactivated successfully. Feb 12 19:34:45.574235 env[1119]: time="2024-02-12T19:34:45.574184277Z" level=info msg="shim disconnected" id=72399b472b61fe4e1922eabd57c15255325895afb805e3d68633011d89e2342b Feb 12 19:34:45.574510 env[1119]: time="2024-02-12T19:34:45.574473736Z" level=warning msg="cleaning up after shim disconnected" id=72399b472b61fe4e1922eabd57c15255325895afb805e3d68633011d89e2342b namespace=k8s.io Feb 12 19:34:45.574510 env[1119]: time="2024-02-12T19:34:45.574490137Z" level=info msg="cleaning up dead shim" Feb 12 19:34:45.580492 env[1119]: time="2024-02-12T19:34:45.580438395Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:34:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3878 runtime=io.containerd.runc.v2\n" Feb 12 19:34:45.580770 env[1119]: time="2024-02-12T19:34:45.580736760Z" level=info msg="TearDown network for sandbox \"72399b472b61fe4e1922eabd57c15255325895afb805e3d68633011d89e2342b\" successfully" Feb 12 19:34:45.580770 env[1119]: time="2024-02-12T19:34:45.580767768Z" level=info msg="StopPodSandbox for \"72399b472b61fe4e1922eabd57c15255325895afb805e3d68633011d89e2342b\" returns successfully" Feb 12 19:34:45.719616 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-72399b472b61fe4e1922eabd57c15255325895afb805e3d68633011d89e2342b-rootfs.mount: Deactivated successfully. Feb 12 19:34:45.719717 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72399b472b61fe4e1922eabd57c15255325895afb805e3d68633011d89e2342b-shm.mount: Deactivated successfully. Feb 12 19:34:45.721683 kubelet[1964]: I0212 19:34:45.721655 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9c952ae3-b2dd-4573-bdf1-18414a92275d-cilium-ipsec-secrets\") pod \"9c952ae3-b2dd-4573-bdf1-18414a92275d\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " Feb 12 19:34:45.721976 kubelet[1964]: I0212 19:34:45.721706 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-cilium-run\") pod \"9c952ae3-b2dd-4573-bdf1-18414a92275d\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " Feb 12 19:34:45.721976 kubelet[1964]: I0212 19:34:45.721726 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-xtables-lock\") pod \"9c952ae3-b2dd-4573-bdf1-18414a92275d\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " Feb 12 19:34:45.721976 kubelet[1964]: I0212 19:34:45.721741 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-host-proc-sys-net\") pod \"9c952ae3-b2dd-4573-bdf1-18414a92275d\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " Feb 12 19:34:45.721976 kubelet[1964]: I0212 19:34:45.721761 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/9c952ae3-b2dd-4573-bdf1-18414a92275d-cilium-config-path\") pod \"9c952ae3-b2dd-4573-bdf1-18414a92275d\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " Feb 12 19:34:45.721976 kubelet[1964]: I0212 19:34:45.721778 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c952ae3-b2dd-4573-bdf1-18414a92275d-clustermesh-secrets\") pod \"9c952ae3-b2dd-4573-bdf1-18414a92275d\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " Feb 12 19:34:45.721976 kubelet[1964]: I0212 19:34:45.721794 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-lib-modules\") pod \"9c952ae3-b2dd-4573-bdf1-18414a92275d\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " Feb 12 19:34:45.721976 kubelet[1964]: I0212 19:34:45.721813 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-etc-cni-netd\") pod \"9c952ae3-b2dd-4573-bdf1-18414a92275d\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " Feb 12 19:34:45.721976 kubelet[1964]: I0212 19:34:45.721805 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9c952ae3-b2dd-4573-bdf1-18414a92275d" (UID: "9c952ae3-b2dd-4573-bdf1-18414a92275d"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:34:45.721976 kubelet[1964]: I0212 19:34:45.721839 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c952ae3-b2dd-4573-bdf1-18414a92275d-hubble-tls\") pod \"9c952ae3-b2dd-4573-bdf1-18414a92275d\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " Feb 12 19:34:45.721976 kubelet[1964]: I0212 19:34:45.721865 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-hostproc\") pod \"9c952ae3-b2dd-4573-bdf1-18414a92275d\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " Feb 12 19:34:45.721976 kubelet[1964]: I0212 19:34:45.721862 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9c952ae3-b2dd-4573-bdf1-18414a92275d" (UID: "9c952ae3-b2dd-4573-bdf1-18414a92275d"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:34:45.721976 kubelet[1964]: I0212 19:34:45.721888 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-bpf-maps\") pod \"9c952ae3-b2dd-4573-bdf1-18414a92275d\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " Feb 12 19:34:45.721976 kubelet[1964]: I0212 19:34:45.721906 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-cni-path\") pod \"9c952ae3-b2dd-4573-bdf1-18414a92275d\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " Feb 12 19:34:45.721976 kubelet[1964]: I0212 19:34:45.721922 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-host-proc-sys-kernel\") pod \"9c952ae3-b2dd-4573-bdf1-18414a92275d\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " Feb 12 19:34:45.721976 kubelet[1964]: I0212 19:34:45.721938 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kczgm\" (UniqueName: \"kubernetes.io/projected/9c952ae3-b2dd-4573-bdf1-18414a92275d-kube-api-access-kczgm\") pod \"9c952ae3-b2dd-4573-bdf1-18414a92275d\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " Feb 12 19:34:45.721976 kubelet[1964]: I0212 19:34:45.721954 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-cilium-cgroup\") pod \"9c952ae3-b2dd-4573-bdf1-18414a92275d\" (UID: \"9c952ae3-b2dd-4573-bdf1-18414a92275d\") " Feb 12 19:34:45.722532 kubelet[1964]: I0212 19:34:45.721987 1964 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 12 19:34:45.722532 kubelet[1964]: I0212 19:34:45.721996 1964 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 12 19:34:45.722532 kubelet[1964]: I0212 19:34:45.722024 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9c952ae3-b2dd-4573-bdf1-18414a92275d" (UID: "9c952ae3-b2dd-4573-bdf1-18414a92275d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:34:45.722532 kubelet[1964]: I0212 19:34:45.722054 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9c952ae3-b2dd-4573-bdf1-18414a92275d" (UID: "9c952ae3-b2dd-4573-bdf1-18414a92275d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:34:45.722532 kubelet[1964]: I0212 19:34:45.722068 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9c952ae3-b2dd-4573-bdf1-18414a92275d" (UID: "9c952ae3-b2dd-4573-bdf1-18414a92275d"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:34:45.722532 kubelet[1964]: I0212 19:34:45.722124 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9c952ae3-b2dd-4573-bdf1-18414a92275d" (UID: "9c952ae3-b2dd-4573-bdf1-18414a92275d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:34:45.722532 kubelet[1964]: W0212 19:34:45.722242 1964 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/9c952ae3-b2dd-4573-bdf1-18414a92275d/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:34:45.722728 kubelet[1964]: I0212 19:34:45.722667 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-hostproc" (OuterVolumeSpecName: "hostproc") pod "9c952ae3-b2dd-4573-bdf1-18414a92275d" (UID: "9c952ae3-b2dd-4573-bdf1-18414a92275d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:34:45.722728 kubelet[1964]: I0212 19:34:45.722715 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9c952ae3-b2dd-4573-bdf1-18414a92275d" (UID: "9c952ae3-b2dd-4573-bdf1-18414a92275d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:34:45.722780 kubelet[1964]: I0212 19:34:45.722729 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-cni-path" (OuterVolumeSpecName: "cni-path") pod "9c952ae3-b2dd-4573-bdf1-18414a92275d" (UID: "9c952ae3-b2dd-4573-bdf1-18414a92275d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:34:45.722780 kubelet[1964]: I0212 19:34:45.722743 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9c952ae3-b2dd-4573-bdf1-18414a92275d" (UID: "9c952ae3-b2dd-4573-bdf1-18414a92275d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:34:45.724210 kubelet[1964]: I0212 19:34:45.723975 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c952ae3-b2dd-4573-bdf1-18414a92275d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9c952ae3-b2dd-4573-bdf1-18414a92275d" (UID: "9c952ae3-b2dd-4573-bdf1-18414a92275d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:34:45.724609 kubelet[1964]: I0212 19:34:45.724588 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c952ae3-b2dd-4573-bdf1-18414a92275d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9c952ae3-b2dd-4573-bdf1-18414a92275d" (UID: "9c952ae3-b2dd-4573-bdf1-18414a92275d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:34:45.725350 kubelet[1964]: I0212 19:34:45.725312 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c952ae3-b2dd-4573-bdf1-18414a92275d-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "9c952ae3-b2dd-4573-bdf1-18414a92275d" (UID: "9c952ae3-b2dd-4573-bdf1-18414a92275d"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:34:45.725392 systemd[1]: var-lib-kubelet-pods-9c952ae3\x2db2dd\x2d4573\x2dbdf1\x2d18414a92275d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:34:45.726005 kubelet[1964]: I0212 19:34:45.725974 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c952ae3-b2dd-4573-bdf1-18414a92275d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9c952ae3-b2dd-4573-bdf1-18414a92275d" (UID: "9c952ae3-b2dd-4573-bdf1-18414a92275d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:34:45.726734 systemd[1]: var-lib-kubelet-pods-9c952ae3\x2db2dd\x2d4573\x2dbdf1\x2d18414a92275d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:34:45.726803 systemd[1]: var-lib-kubelet-pods-9c952ae3\x2db2dd\x2d4573\x2dbdf1\x2d18414a92275d-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 19:34:45.727404 kubelet[1964]: I0212 19:34:45.727370 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c952ae3-b2dd-4573-bdf1-18414a92275d-kube-api-access-kczgm" (OuterVolumeSpecName: "kube-api-access-kczgm") pod "9c952ae3-b2dd-4573-bdf1-18414a92275d" (UID: "9c952ae3-b2dd-4573-bdf1-18414a92275d"). InnerVolumeSpecName "kube-api-access-kczgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:34:45.728352 systemd[1]: var-lib-kubelet-pods-9c952ae3\x2db2dd\x2d4573\x2dbdf1\x2d18414a92275d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkczgm.mount: Deactivated successfully. 
Feb 12 19:34:45.822464 kubelet[1964]: I0212 19:34:45.822334 1964 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 12 19:34:45.822464 kubelet[1964]: I0212 19:34:45.822371 1964 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 12 19:34:45.822464 kubelet[1964]: I0212 19:34:45.822385 1964 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 12 19:34:45.822464 kubelet[1964]: I0212 19:34:45.822398 1964 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kczgm\" (UniqueName: \"kubernetes.io/projected/9c952ae3-b2dd-4573-bdf1-18414a92275d-kube-api-access-kczgm\") on node \"localhost\" DevicePath \"\"" Feb 12 19:34:45.822464 kubelet[1964]: I0212 19:34:45.822407 1964 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9c952ae3-b2dd-4573-bdf1-18414a92275d-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Feb 12 19:34:45.822464 kubelet[1964]: I0212 19:34:45.822415 1964 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 12 19:34:45.822464 kubelet[1964]: I0212 19:34:45.822423 1964 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c952ae3-b2dd-4573-bdf1-18414a92275d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 12 19:34:45.822464 kubelet[1964]: I0212 19:34:45.822431 1964 
reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c952ae3-b2dd-4573-bdf1-18414a92275d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 12 19:34:45.822464 kubelet[1964]: I0212 19:34:45.822438 1964 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 12 19:34:45.822464 kubelet[1964]: I0212 19:34:45.822454 1964 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 12 19:34:45.822464 kubelet[1964]: I0212 19:34:45.822463 1964 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c952ae3-b2dd-4573-bdf1-18414a92275d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 12 19:34:45.822464 kubelet[1964]: I0212 19:34:45.822473 1964 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 12 19:34:45.822464 kubelet[1964]: I0212 19:34:45.822480 1964 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c952ae3-b2dd-4573-bdf1-18414a92275d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 12 19:34:46.550262 kubelet[1964]: I0212 19:34:46.550238 1964 scope.go:115] "RemoveContainer" containerID="2425df19678053ed34fd10d7cc7d9059105684c30de72612afc72dc3734a5ca7" Feb 12 19:34:46.551267 env[1119]: time="2024-02-12T19:34:46.551224850Z" level=info msg="RemoveContainer for \"2425df19678053ed34fd10d7cc7d9059105684c30de72612afc72dc3734a5ca7\"" Feb 12 19:34:46.554052 systemd[1]: Removed slice kubepods-burstable-pod9c952ae3_b2dd_4573_bdf1_18414a92275d.slice. 
Feb 12 19:34:46.627848 kubelet[1964]: I0212 19:34:46.627804 1964 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:34:46.628058 kubelet[1964]: E0212 19:34:46.627871 1964 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9c952ae3-b2dd-4573-bdf1-18414a92275d" containerName="mount-cgroup" Feb 12 19:34:46.628058 kubelet[1964]: I0212 19:34:46.627899 1964 memory_manager.go:346] "RemoveStaleState removing state" podUID="9c952ae3-b2dd-4573-bdf1-18414a92275d" containerName="mount-cgroup" Feb 12 19:34:46.628418 env[1119]: time="2024-02-12T19:34:46.628369098Z" level=info msg="RemoveContainer for \"2425df19678053ed34fd10d7cc7d9059105684c30de72612afc72dc3734a5ca7\" returns successfully" Feb 12 19:34:46.632711 systemd[1]: Created slice kubepods-burstable-pod4ac40c3b_8969_4ef1_8534_acc0a3cc7519.slice. Feb 12 19:34:46.827875 kubelet[1964]: I0212 19:34:46.827766 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ac40c3b-8969-4ef1-8534-acc0a3cc7519-host-proc-sys-net\") pod \"cilium-fz262\" (UID: \"4ac40c3b-8969-4ef1-8534-acc0a3cc7519\") " pod="kube-system/cilium-fz262" Feb 12 19:34:46.827875 kubelet[1964]: I0212 19:34:46.827860 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ac40c3b-8969-4ef1-8534-acc0a3cc7519-cilium-config-path\") pod \"cilium-fz262\" (UID: \"4ac40c3b-8969-4ef1-8534-acc0a3cc7519\") " pod="kube-system/cilium-fz262" Feb 12 19:34:46.828223 kubelet[1964]: I0212 19:34:46.827893 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ac40c3b-8969-4ef1-8534-acc0a3cc7519-hubble-tls\") pod \"cilium-fz262\" (UID: \"4ac40c3b-8969-4ef1-8534-acc0a3cc7519\") " pod="kube-system/cilium-fz262" Feb 12 19:34:46.828223 kubelet[1964]: I0212 
19:34:46.827918 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ac40c3b-8969-4ef1-8534-acc0a3cc7519-cilium-cgroup\") pod \"cilium-fz262\" (UID: \"4ac40c3b-8969-4ef1-8534-acc0a3cc7519\") " pod="kube-system/cilium-fz262" Feb 12 19:34:46.828223 kubelet[1964]: I0212 19:34:46.827942 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ac40c3b-8969-4ef1-8534-acc0a3cc7519-cni-path\") pod \"cilium-fz262\" (UID: \"4ac40c3b-8969-4ef1-8534-acc0a3cc7519\") " pod="kube-system/cilium-fz262" Feb 12 19:34:46.828223 kubelet[1964]: I0212 19:34:46.827970 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ac40c3b-8969-4ef1-8534-acc0a3cc7519-lib-modules\") pod \"cilium-fz262\" (UID: \"4ac40c3b-8969-4ef1-8534-acc0a3cc7519\") " pod="kube-system/cilium-fz262" Feb 12 19:34:46.828223 kubelet[1964]: I0212 19:34:46.827992 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4ac40c3b-8969-4ef1-8534-acc0a3cc7519-cilium-ipsec-secrets\") pod \"cilium-fz262\" (UID: \"4ac40c3b-8969-4ef1-8534-acc0a3cc7519\") " pod="kube-system/cilium-fz262" Feb 12 19:34:46.828223 kubelet[1964]: I0212 19:34:46.828014 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ac40c3b-8969-4ef1-8534-acc0a3cc7519-hostproc\") pod \"cilium-fz262\" (UID: \"4ac40c3b-8969-4ef1-8534-acc0a3cc7519\") " pod="kube-system/cilium-fz262" Feb 12 19:34:46.828223 kubelet[1964]: I0212 19:34:46.828037 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/4ac40c3b-8969-4ef1-8534-acc0a3cc7519-etc-cni-netd\") pod \"cilium-fz262\" (UID: \"4ac40c3b-8969-4ef1-8534-acc0a3cc7519\") " pod="kube-system/cilium-fz262" Feb 12 19:34:46.828223 kubelet[1964]: I0212 19:34:46.828077 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ac40c3b-8969-4ef1-8534-acc0a3cc7519-clustermesh-secrets\") pod \"cilium-fz262\" (UID: \"4ac40c3b-8969-4ef1-8534-acc0a3cc7519\") " pod="kube-system/cilium-fz262" Feb 12 19:34:46.828223 kubelet[1964]: I0212 19:34:46.828118 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ac40c3b-8969-4ef1-8534-acc0a3cc7519-cilium-run\") pod \"cilium-fz262\" (UID: \"4ac40c3b-8969-4ef1-8534-acc0a3cc7519\") " pod="kube-system/cilium-fz262" Feb 12 19:34:46.828223 kubelet[1964]: I0212 19:34:46.828161 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ac40c3b-8969-4ef1-8534-acc0a3cc7519-bpf-maps\") pod \"cilium-fz262\" (UID: \"4ac40c3b-8969-4ef1-8534-acc0a3cc7519\") " pod="kube-system/cilium-fz262" Feb 12 19:34:46.828223 kubelet[1964]: I0212 19:34:46.828192 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ac40c3b-8969-4ef1-8534-acc0a3cc7519-xtables-lock\") pod \"cilium-fz262\" (UID: \"4ac40c3b-8969-4ef1-8534-acc0a3cc7519\") " pod="kube-system/cilium-fz262" Feb 12 19:34:46.828223 kubelet[1964]: I0212 19:34:46.828218 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ac40c3b-8969-4ef1-8534-acc0a3cc7519-host-proc-sys-kernel\") pod \"cilium-fz262\" (UID: 
\"4ac40c3b-8969-4ef1-8534-acc0a3cc7519\") " pod="kube-system/cilium-fz262" Feb 12 19:34:46.828578 kubelet[1964]: I0212 19:34:46.828255 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6chh\" (UniqueName: \"kubernetes.io/projected/4ac40c3b-8969-4ef1-8534-acc0a3cc7519-kube-api-access-z6chh\") pod \"cilium-fz262\" (UID: \"4ac40c3b-8969-4ef1-8534-acc0a3cc7519\") " pod="kube-system/cilium-fz262" Feb 12 19:34:47.235390 kubelet[1964]: E0212 19:34:47.235346 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:34:47.235901 env[1119]: time="2024-02-12T19:34:47.235843628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fz262,Uid:4ac40c3b-8969-4ef1-8534-acc0a3cc7519,Namespace:kube-system,Attempt:0,}" Feb 12 19:34:47.246322 env[1119]: time="2024-02-12T19:34:47.246257342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:34:47.246322 env[1119]: time="2024-02-12T19:34:47.246302096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:34:47.246322 env[1119]: time="2024-02-12T19:34:47.246312967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:34:47.246522 env[1119]: time="2024-02-12T19:34:47.246435299Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c42f98cea7e6aa38e0a84c0e7535f984b70e0ab19ba96090d966f965e40418c6 pid=3905 runtime=io.containerd.runc.v2 Feb 12 19:34:47.256678 systemd[1]: Started cri-containerd-c42f98cea7e6aa38e0a84c0e7535f984b70e0ab19ba96090d966f965e40418c6.scope. 
Feb 12 19:34:47.272855 env[1119]: time="2024-02-12T19:34:47.272789002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fz262,Uid:4ac40c3b-8969-4ef1-8534-acc0a3cc7519,Namespace:kube-system,Attempt:0,} returns sandbox id \"c42f98cea7e6aa38e0a84c0e7535f984b70e0ab19ba96090d966f965e40418c6\"" Feb 12 19:34:47.273474 kubelet[1964]: E0212 19:34:47.273453 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:34:47.275693 env[1119]: time="2024-02-12T19:34:47.275649681Z" level=info msg="CreateContainer within sandbox \"c42f98cea7e6aa38e0a84c0e7535f984b70e0ab19ba96090d966f965e40418c6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:34:47.287571 env[1119]: time="2024-02-12T19:34:47.287530378Z" level=info msg="CreateContainer within sandbox \"c42f98cea7e6aa38e0a84c0e7535f984b70e0ab19ba96090d966f965e40418c6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"662806f17772db025869183e95628b1c4a688c59b0103e2c4023443f0664cfc1\"" Feb 12 19:34:47.288902 env[1119]: time="2024-02-12T19:34:47.288874408Z" level=info msg="StartContainer for \"662806f17772db025869183e95628b1c4a688c59b0103e2c4023443f0664cfc1\"" Feb 12 19:34:47.304335 systemd[1]: Started cri-containerd-662806f17772db025869183e95628b1c4a688c59b0103e2c4023443f0664cfc1.scope. Feb 12 19:34:47.327349 env[1119]: time="2024-02-12T19:34:47.327306092Z" level=info msg="StartContainer for \"662806f17772db025869183e95628b1c4a688c59b0103e2c4023443f0664cfc1\" returns successfully" Feb 12 19:34:47.333683 systemd[1]: cri-containerd-662806f17772db025869183e95628b1c4a688c59b0103e2c4023443f0664cfc1.scope: Deactivated successfully. 
Feb 12 19:34:47.354914 env[1119]: time="2024-02-12T19:34:47.354861295Z" level=info msg="shim disconnected" id=662806f17772db025869183e95628b1c4a688c59b0103e2c4023443f0664cfc1 Feb 12 19:34:47.354914 env[1119]: time="2024-02-12T19:34:47.354905519Z" level=warning msg="cleaning up after shim disconnected" id=662806f17772db025869183e95628b1c4a688c59b0103e2c4023443f0664cfc1 namespace=k8s.io Feb 12 19:34:47.355213 env[1119]: time="2024-02-12T19:34:47.354924034Z" level=info msg="cleaning up dead shim" Feb 12 19:34:47.361066 env[1119]: time="2024-02-12T19:34:47.361028438Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:34:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3989 runtime=io.containerd.runc.v2\n" Feb 12 19:34:47.368874 kubelet[1964]: I0212 19:34:47.368846 1964 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=9c952ae3-b2dd-4573-bdf1-18414a92275d path="/var/lib/kubelet/pods/9c952ae3-b2dd-4573-bdf1-18414a92275d/volumes" Feb 12 19:34:47.554040 kubelet[1964]: E0212 19:34:47.553926 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:34:47.556820 env[1119]: time="2024-02-12T19:34:47.556781711Z" level=info msg="CreateContainer within sandbox \"c42f98cea7e6aa38e0a84c0e7535f984b70e0ab19ba96090d966f965e40418c6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:34:47.573875 env[1119]: time="2024-02-12T19:34:47.573791922Z" level=info msg="CreateContainer within sandbox \"c42f98cea7e6aa38e0a84c0e7535f984b70e0ab19ba96090d966f965e40418c6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ea41095c3a5be85ef66ace153eacbc8b06de9e1564abaebf5ffb0ee21be47c57\"" Feb 12 19:34:47.574484 env[1119]: time="2024-02-12T19:34:47.574423932Z" level=info msg="StartContainer for 
\"ea41095c3a5be85ef66ace153eacbc8b06de9e1564abaebf5ffb0ee21be47c57\"" Feb 12 19:34:47.587818 systemd[1]: Started cri-containerd-ea41095c3a5be85ef66ace153eacbc8b06de9e1564abaebf5ffb0ee21be47c57.scope. Feb 12 19:34:47.615213 systemd[1]: cri-containerd-ea41095c3a5be85ef66ace153eacbc8b06de9e1564abaebf5ffb0ee21be47c57.scope: Deactivated successfully. Feb 12 19:34:47.650938 env[1119]: time="2024-02-12T19:34:47.650835673Z" level=info msg="StartContainer for \"ea41095c3a5be85ef66ace153eacbc8b06de9e1564abaebf5ffb0ee21be47c57\" returns successfully" Feb 12 19:34:47.671489 env[1119]: time="2024-02-12T19:34:47.671428934Z" level=info msg="shim disconnected" id=ea41095c3a5be85ef66ace153eacbc8b06de9e1564abaebf5ffb0ee21be47c57 Feb 12 19:34:47.671489 env[1119]: time="2024-02-12T19:34:47.671488897Z" level=warning msg="cleaning up after shim disconnected" id=ea41095c3a5be85ef66ace153eacbc8b06de9e1564abaebf5ffb0ee21be47c57 namespace=k8s.io Feb 12 19:34:47.671489 env[1119]: time="2024-02-12T19:34:47.671498686Z" level=info msg="cleaning up dead shim" Feb 12 19:34:47.678074 env[1119]: time="2024-02-12T19:34:47.678038607Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:34:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4053 runtime=io.containerd.runc.v2\n" Feb 12 19:34:47.984084 kubelet[1964]: W0212 19:34:47.984029 1964 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c952ae3_b2dd_4573_bdf1_18414a92275d.slice/cri-containerd-2425df19678053ed34fd10d7cc7d9059105684c30de72612afc72dc3734a5ca7.scope WatchSource:0}: container "2425df19678053ed34fd10d7cc7d9059105684c30de72612afc72dc3734a5ca7" in namespace "k8s.io": not found Feb 12 19:34:48.557259 kubelet[1964]: E0212 19:34:48.557235 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:34:48.558590 
env[1119]: time="2024-02-12T19:34:48.558556651Z" level=info msg="CreateContainer within sandbox \"c42f98cea7e6aa38e0a84c0e7535f984b70e0ab19ba96090d966f965e40418c6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:34:48.825479 env[1119]: time="2024-02-12T19:34:48.825209826Z" level=info msg="CreateContainer within sandbox \"c42f98cea7e6aa38e0a84c0e7535f984b70e0ab19ba96090d966f965e40418c6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ce096e1b65e00d79c40c873541ffe22869caddee4756a144a149b89a9bdf232b\"" Feb 12 19:34:48.826039 env[1119]: time="2024-02-12T19:34:48.826010978Z" level=info msg="StartContainer for \"ce096e1b65e00d79c40c873541ffe22869caddee4756a144a149b89a9bdf232b\"" Feb 12 19:34:48.843581 systemd[1]: Started cri-containerd-ce096e1b65e00d79c40c873541ffe22869caddee4756a144a149b89a9bdf232b.scope. Feb 12 19:34:48.868788 env[1119]: time="2024-02-12T19:34:48.868737373Z" level=info msg="StartContainer for \"ce096e1b65e00d79c40c873541ffe22869caddee4756a144a149b89a9bdf232b\" returns successfully" Feb 12 19:34:48.869605 systemd[1]: cri-containerd-ce096e1b65e00d79c40c873541ffe22869caddee4756a144a149b89a9bdf232b.scope: Deactivated successfully. 
Feb 12 19:34:48.891685 env[1119]: time="2024-02-12T19:34:48.891628293Z" level=info msg="shim disconnected" id=ce096e1b65e00d79c40c873541ffe22869caddee4756a144a149b89a9bdf232b
Feb 12 19:34:48.891685 env[1119]: time="2024-02-12T19:34:48.891683106Z" level=warning msg="cleaning up after shim disconnected" id=ce096e1b65e00d79c40c873541ffe22869caddee4756a144a149b89a9bdf232b namespace=k8s.io
Feb 12 19:34:48.891685 env[1119]: time="2024-02-12T19:34:48.891691463Z" level=info msg="cleaning up dead shim"
Feb 12 19:34:48.898410 env[1119]: time="2024-02-12T19:34:48.898358597Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:34:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4110 runtime=io.containerd.runc.v2\n"
Feb 12 19:34:48.933580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce096e1b65e00d79c40c873541ffe22869caddee4756a144a149b89a9bdf232b-rootfs.mount: Deactivated successfully.
Feb 12 19:34:49.430430 kubelet[1964]: E0212 19:34:49.430379 1964 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 19:34:49.560456 kubelet[1964]: E0212 19:34:49.560418 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:34:49.562597 env[1119]: time="2024-02-12T19:34:49.562550087Z" level=info msg="CreateContainer within sandbox \"c42f98cea7e6aa38e0a84c0e7535f984b70e0ab19ba96090d966f965e40418c6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 19:34:49.576269 env[1119]: time="2024-02-12T19:34:49.576215536Z" level=info msg="CreateContainer within sandbox \"c42f98cea7e6aa38e0a84c0e7535f984b70e0ab19ba96090d966f965e40418c6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eac4c70f5f278c3c9eede1dd3bfb9d23d77a016a57d64663dac020cccd5cf256\""
Feb 12 19:34:49.576664 env[1119]: time="2024-02-12T19:34:49.576636236Z" level=info msg="StartContainer for \"eac4c70f5f278c3c9eede1dd3bfb9d23d77a016a57d64663dac020cccd5cf256\""
Feb 12 19:34:49.590365 systemd[1]: Started cri-containerd-eac4c70f5f278c3c9eede1dd3bfb9d23d77a016a57d64663dac020cccd5cf256.scope.
Feb 12 19:34:49.606935 systemd[1]: cri-containerd-eac4c70f5f278c3c9eede1dd3bfb9d23d77a016a57d64663dac020cccd5cf256.scope: Deactivated successfully.
Feb 12 19:34:49.609408 env[1119]: time="2024-02-12T19:34:49.609356508Z" level=info msg="StartContainer for \"eac4c70f5f278c3c9eede1dd3bfb9d23d77a016a57d64663dac020cccd5cf256\" returns successfully"
Feb 12 19:34:49.626630 env[1119]: time="2024-02-12T19:34:49.626574861Z" level=info msg="shim disconnected" id=eac4c70f5f278c3c9eede1dd3bfb9d23d77a016a57d64663dac020cccd5cf256
Feb 12 19:34:49.626630 env[1119]: time="2024-02-12T19:34:49.626615518Z" level=warning msg="cleaning up after shim disconnected" id=eac4c70f5f278c3c9eede1dd3bfb9d23d77a016a57d64663dac020cccd5cf256 namespace=k8s.io
Feb 12 19:34:49.626630 env[1119]: time="2024-02-12T19:34:49.626625246Z" level=info msg="cleaning up dead shim"
Feb 12 19:34:49.632226 env[1119]: time="2024-02-12T19:34:49.632182681Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:34:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4164 runtime=io.containerd.runc.v2\n"
Feb 12 19:34:49.933409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eac4c70f5f278c3c9eede1dd3bfb9d23d77a016a57d64663dac020cccd5cf256-rootfs.mount: Deactivated successfully.
Feb 12 19:34:50.564610 kubelet[1964]: E0212 19:34:50.564577 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:34:50.566737 env[1119]: time="2024-02-12T19:34:50.566673016Z" level=info msg="CreateContainer within sandbox \"c42f98cea7e6aa38e0a84c0e7535f984b70e0ab19ba96090d966f965e40418c6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 19:34:50.582371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3762909975.mount: Deactivated successfully.
Feb 12 19:34:50.583962 env[1119]: time="2024-02-12T19:34:50.583902768Z" level=info msg="CreateContainer within sandbox \"c42f98cea7e6aa38e0a84c0e7535f984b70e0ab19ba96090d966f965e40418c6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"74668bdef5127da631c04e502992b4ef7aa5c89e98fb04007fb2e44b6a89ef27\""
Feb 12 19:34:50.584472 env[1119]: time="2024-02-12T19:34:50.584440972Z" level=info msg="StartContainer for \"74668bdef5127da631c04e502992b4ef7aa5c89e98fb04007fb2e44b6a89ef27\""
Feb 12 19:34:50.600075 systemd[1]: Started cri-containerd-74668bdef5127da631c04e502992b4ef7aa5c89e98fb04007fb2e44b6a89ef27.scope.
Feb 12 19:34:50.625046 env[1119]: time="2024-02-12T19:34:50.624979900Z" level=info msg="StartContainer for \"74668bdef5127da631c04e502992b4ef7aa5c89e98fb04007fb2e44b6a89ef27\" returns successfully"
Feb 12 19:34:50.863159 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 12 19:34:50.953475 kubelet[1964]: I0212 19:34:50.953424 1964 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 19:34:50.953341287 +0000 UTC m=+91.677416515 LastTransitionTime:2024-02-12 19:34:50.953341287 +0000 UTC m=+91.677416515 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 12 19:34:51.091422 kubelet[1964]: W0212 19:34:51.091376 1964 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ac40c3b_8969_4ef1_8534_acc0a3cc7519.slice/cri-containerd-662806f17772db025869183e95628b1c4a688c59b0103e2c4023443f0664cfc1.scope WatchSource:0}: task 662806f17772db025869183e95628b1c4a688c59b0103e2c4023443f0664cfc1 not found: not found
Feb 12 19:34:51.569037 kubelet[1964]: E0212 19:34:51.569004 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:34:51.580038 kubelet[1964]: I0212 19:34:51.579998 1964 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-fz262" podStartSLOduration=5.579960911 podCreationTimestamp="2024-02-12 19:34:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:34:51.579699193 +0000 UTC m=+92.303774421" watchObservedRunningTime="2024-02-12 19:34:51.579960911 +0000 UTC m=+92.304036139"
Feb 12 19:34:53.237043 kubelet[1964]: E0212 19:34:53.237006 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:34:53.339749 systemd-networkd[1018]: lxc_health: Link UP
Feb 12 19:34:53.347166 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 19:34:53.347215 systemd-networkd[1018]: lxc_health: Gained carrier
Feb 12 19:34:53.370773 kubelet[1964]: E0212 19:34:53.370733 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:34:54.198255 kubelet[1964]: W0212 19:34:54.198203 1964 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ac40c3b_8969_4ef1_8534_acc0a3cc7519.slice/cri-containerd-ea41095c3a5be85ef66ace153eacbc8b06de9e1564abaebf5ffb0ee21be47c57.scope WatchSource:0}: task ea41095c3a5be85ef66ace153eacbc8b06de9e1564abaebf5ffb0ee21be47c57 not found: not found
Feb 12 19:34:54.833451 systemd-networkd[1018]: lxc_health: Gained IPv6LL
Feb 12 19:34:55.240741 kubelet[1964]: E0212 19:34:55.240701 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:34:55.576441 kubelet[1964]: E0212 19:34:55.576300 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:34:56.577585 kubelet[1964]: E0212 19:34:56.577551 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:34:57.305634 kubelet[1964]: W0212 19:34:57.305567 1964 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ac40c3b_8969_4ef1_8534_acc0a3cc7519.slice/cri-containerd-ce096e1b65e00d79c40c873541ffe22869caddee4756a144a149b89a9bdf232b.scope WatchSource:0}: task ce096e1b65e00d79c40c873541ffe22869caddee4756a144a149b89a9bdf232b not found: not found
Feb 12 19:34:59.366491 kubelet[1964]: E0212 19:34:59.366452 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:34:59.590433 sshd[3772]: pam_unix(sshd:session): session closed for user core
Feb 12 19:34:59.593047 systemd[1]: sshd@25-10.0.0.38:22-10.0.0.1:58446.service: Deactivated successfully.
Feb 12 19:34:59.593838 systemd[1]: session-26.scope: Deactivated successfully.
Feb 12 19:34:59.594377 systemd-logind[1103]: Session 26 logged out. Waiting for processes to exit.
Feb 12 19:34:59.594998 systemd-logind[1103]: Removed session 26.
Feb 12 19:35:00.417552 kubelet[1964]: W0212 19:35:00.417478 1964 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ac40c3b_8969_4ef1_8534_acc0a3cc7519.slice/cri-containerd-eac4c70f5f278c3c9eede1dd3bfb9d23d77a016a57d64663dac020cccd5cf256.scope WatchSource:0}: task eac4c70f5f278c3c9eede1dd3bfb9d23d77a016a57d64663dac020cccd5cf256 not found: not found