Feb 9 00:46:58.811350 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 9 00:46:58.811369 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 00:46:58.811378 kernel: BIOS-provided physical RAM map:
Feb 9 00:46:58.811384 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 00:46:58.811398 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 9 00:46:58.811404 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 9 00:46:58.811411 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 9 00:46:58.811417 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 9 00:46:58.811422 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 9 00:46:58.811429 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 9 00:46:58.811435 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Feb 9 00:46:58.811441 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb 9 00:46:58.811447 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 9 00:46:58.811453 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 9 00:46:58.811460 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 9 00:46:58.811467 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 9 00:46:58.811473 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 9 00:46:58.811479 kernel: NX (Execute Disable) protection: active
Feb 9 00:46:58.811485 kernel: e820: update [mem 0x9b3fa018-0x9b403c57] usable ==> usable
Feb 9 00:46:58.811492 kernel: e820: update [mem 0x9b3fa018-0x9b403c57] usable ==> usable
Feb 9 00:46:58.811498 kernel: e820: update [mem 0x9b3bd018-0x9b3f9e57] usable ==> usable
Feb 9 00:46:58.811504 kernel: e820: update [mem 0x9b3bd018-0x9b3f9e57] usable ==> usable
Feb 9 00:46:58.811510 kernel: extended physical RAM map:
Feb 9 00:46:58.811516 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 00:46:58.811522 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 9 00:46:58.811529 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 9 00:46:58.811535 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 9 00:46:58.811541 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 9 00:46:58.811548 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 9 00:46:58.811554 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 9 00:46:58.811560 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b3bd017] usable
Feb 9 00:46:58.811566 kernel: reserve setup_data: [mem 0x000000009b3bd018-0x000000009b3f9e57] usable
Feb 9 00:46:58.811572 kernel: reserve setup_data: [mem 0x000000009b3f9e58-0x000000009b3fa017] usable
Feb 9 00:46:58.811578 kernel: reserve setup_data: [mem 0x000000009b3fa018-0x000000009b403c57] usable
Feb 9 00:46:58.811584 kernel: reserve setup_data: [mem 0x000000009b403c58-0x000000009c8eefff] usable
Feb 9 00:46:58.811590 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb 9 00:46:58.811598 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 9 00:46:58.811604 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 9 00:46:58.811610 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 9 00:46:58.811616 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 9 00:46:58.811626 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 9 00:46:58.811632 kernel: efi: EFI v2.70 by EDK II
Feb 9 00:46:58.811639 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018
Feb 9 00:46:58.811647 kernel: random: crng init done
Feb 9 00:46:58.811654 kernel: SMBIOS 2.8 present.
Feb 9 00:46:58.811660 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Feb 9 00:46:58.811667 kernel: Hypervisor detected: KVM
Feb 9 00:46:58.811674 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 9 00:46:58.811680 kernel: kvm-clock: cpu 0, msr 24faa001, primary cpu clock
Feb 9 00:46:58.811687 kernel: kvm-clock: using sched offset of 3992011086 cycles
Feb 9 00:46:58.811694 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 9 00:46:58.811701 kernel: tsc: Detected 2794.750 MHz processor
Feb 9 00:46:58.811709 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 00:46:58.811716 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 00:46:58.811723 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Feb 9 00:46:58.811730 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 00:46:58.811737 kernel: Using GB pages for direct mapping
Feb 9 00:46:58.811744 kernel: Secure boot disabled
Feb 9 00:46:58.811750 kernel: ACPI: Early table checksum verification disabled
Feb 9 00:46:58.811757 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 9 00:46:58.811764 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Feb 9 00:46:58.811772 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:46:58.811779 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:46:58.811786 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 9 00:46:58.811793 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:46:58.811799 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:46:58.811806 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:46:58.811813 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 9 00:46:58.811820 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Feb 9 00:46:58.811826 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Feb 9 00:46:58.811834 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 9 00:46:58.811841 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Feb 9 00:46:58.811848 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Feb 9 00:46:58.811855 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Feb 9 00:46:58.811861 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Feb 9 00:46:58.811868 kernel: No NUMA configuration found
Feb 9 00:46:58.811875 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Feb 9 00:46:58.811882 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Feb 9 00:46:58.811889 kernel: Zone ranges:
Feb 9 00:46:58.811896 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 00:46:58.811903 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Feb 9 00:46:58.811919 kernel: Normal empty
Feb 9 00:46:58.811926 kernel: Movable zone start for each node
Feb 9 00:46:58.811932 kernel: Early memory node ranges
Feb 9 00:46:58.811939 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 9 00:46:58.811946 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 9 00:46:58.811953 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 9 00:46:58.811959 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Feb 9 00:46:58.811967 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Feb 9 00:46:58.811974 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Feb 9 00:46:58.811981 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Feb 9 00:46:58.811988 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 00:46:58.811995 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 9 00:46:58.812001 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 9 00:46:58.812008 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 00:46:58.812015 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Feb 9 00:46:58.812022 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Feb 9 00:46:58.812029 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Feb 9 00:46:58.812036 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 9 00:46:58.812043 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 9 00:46:58.812050 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 9 00:46:58.812057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 00:46:58.812063 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 9 00:46:58.812070 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 00:46:58.812077 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 9 00:46:58.812084 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 9 00:46:58.812092 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 00:46:58.812099 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 00:46:58.812105 kernel: TSC deadline timer available
Feb 9 00:46:58.812112 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 9 00:46:58.812119 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 9 00:46:58.812125 kernel: kvm-guest: setup PV sched yield
Feb 9 00:46:58.812132 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Feb 9 00:46:58.812139 kernel: Booting paravirtualized kernel on KVM
Feb 9 00:46:58.812146 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 00:46:58.812153 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb 9 00:46:58.812161 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb 9 00:46:58.812168 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb 9 00:46:58.812179 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 9 00:46:58.812188 kernel: kvm-guest: setup async PF for cpu 0
Feb 9 00:46:58.812196 kernel: kvm-guest: stealtime: cpu 0, msr 9b01c0c0
Feb 9 00:46:58.812204 kernel: kvm-guest: PV spinlocks enabled
Feb 9 00:46:58.812212 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 00:46:58.812221 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Feb 9 00:46:58.812228 kernel: Policy zone: DMA32
Feb 9 00:46:58.812236 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 00:46:58.812244 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 00:46:58.812252 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 00:46:58.812260 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 00:46:58.812267 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 00:46:58.812274 kernel: Memory: 2400512K/2567000K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 166228K reserved, 0K cma-reserved)
Feb 9 00:46:58.812284 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 00:46:58.812293 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 00:46:58.812301 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 00:46:58.812310 kernel: rcu: Hierarchical RCU implementation.
Feb 9 00:46:58.812319 kernel: rcu: RCU event tracing is enabled.
Feb 9 00:46:58.812328 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 00:46:58.812335 kernel: Rude variant of Tasks RCU enabled.
Feb 9 00:46:58.812342 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 00:46:58.812349 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 00:46:58.812356 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 00:46:58.812365 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 9 00:46:58.812372 kernel: Console: colour dummy device 80x25
Feb 9 00:46:58.812381 kernel: printk: console [ttyS0] enabled
Feb 9 00:46:58.812396 kernel: ACPI: Core revision 20210730
Feb 9 00:46:58.812404 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 9 00:46:58.812411 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 00:46:58.812419 kernel: x2apic enabled
Feb 9 00:46:58.812426 kernel: Switched APIC routing to physical x2apic.
Feb 9 00:46:58.812433 kernel: kvm-guest: setup PV IPIs
Feb 9 00:46:58.812441 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 00:46:58.812448 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 9 00:46:58.812456 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 9 00:46:58.812463 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 9 00:46:58.812470 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 9 00:46:58.812478 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 9 00:46:58.812485 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 00:46:58.812492 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 00:46:58.812499 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 00:46:58.812508 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 00:46:58.812515 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 9 00:46:58.812522 kernel: RETBleed: Mitigation: untrained return thunk
Feb 9 00:46:58.812529 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 00:46:58.812536 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 00:46:58.812544 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 00:46:58.812551 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 00:46:58.812560 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 00:46:58.812569 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 00:46:58.812577 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 9 00:46:58.812584 kernel: Freeing SMP alternatives memory: 32K
Feb 9 00:46:58.812591 kernel: pid_max: default: 32768 minimum: 301
Feb 9 00:46:58.812598 kernel: LSM: Security Framework initializing
Feb 9 00:46:58.812606 kernel: SELinux: Initializing.
Feb 9 00:46:58.812613 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 00:46:58.812620 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 00:46:58.812627 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 9 00:46:58.812635 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 9 00:46:58.812643 kernel: ... version: 0
Feb 9 00:46:58.812650 kernel: ... bit width: 48
Feb 9 00:46:58.812657 kernel: ... generic registers: 6
Feb 9 00:46:58.812664 kernel: ... value mask: 0000ffffffffffff
Feb 9 00:46:58.812671 kernel: ... max period: 00007fffffffffff
Feb 9 00:46:58.812678 kernel: ... fixed-purpose events: 0
Feb 9 00:46:58.812685 kernel: ... event mask: 000000000000003f
Feb 9 00:46:58.812692 kernel: signal: max sigframe size: 1776
Feb 9 00:46:58.812699 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 00:46:58.812708 kernel: smp: Bringing up secondary CPUs ...
Feb 9 00:46:58.812715 kernel: x86: Booting SMP configuration:
Feb 9 00:46:58.812722 kernel: .... node #0, CPUs: #1
Feb 9 00:46:58.812729 kernel: kvm-clock: cpu 1, msr 24faa041, secondary cpu clock
Feb 9 00:46:58.812737 kernel: kvm-guest: setup async PF for cpu 1
Feb 9 00:46:58.812744 kernel: kvm-guest: stealtime: cpu 1, msr 9b09c0c0
Feb 9 00:46:58.812751 kernel: #2
Feb 9 00:46:58.812758 kernel: kvm-clock: cpu 2, msr 24faa081, secondary cpu clock
Feb 9 00:46:58.812765 kernel: kvm-guest: setup async PF for cpu 2
Feb 9 00:46:58.812774 kernel: kvm-guest: stealtime: cpu 2, msr 9b11c0c0
Feb 9 00:46:58.812781 kernel: #3
Feb 9 00:46:58.812788 kernel: kvm-clock: cpu 3, msr 24faa0c1, secondary cpu clock
Feb 9 00:46:58.812795 kernel: kvm-guest: setup async PF for cpu 3
Feb 9 00:46:58.812802 kernel: kvm-guest: stealtime: cpu 3, msr 9b19c0c0
Feb 9 00:46:58.812809 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 00:46:58.812816 kernel: smpboot: Max logical packages: 1
Feb 9 00:46:58.812824 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 9 00:46:58.812831 kernel: devtmpfs: initialized
Feb 9 00:46:58.812839 kernel: x86/mm: Memory block size: 128MB
Feb 9 00:46:58.812846 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 9 00:46:58.812853 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 9 00:46:58.812861 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Feb 9 00:46:58.812868 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 9 00:46:58.812875 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 9 00:46:58.812883 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 00:46:58.812890 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 00:46:58.812897 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 00:46:58.812914 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 00:46:58.812922 kernel: audit: initializing netlink subsys (disabled)
Feb 9 00:46:58.812929 kernel: audit: type=2000 audit(1707439617.337:1): state=initialized audit_enabled=0 res=1
Feb 9 00:46:58.812936 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 00:46:58.812943 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 00:46:58.812950 kernel: cpuidle: using governor menu
Feb 9 00:46:58.812957 kernel: ACPI: bus type PCI registered
Feb 9 00:46:58.812965 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 00:46:58.812972 kernel: dca service started, version 1.12.1
Feb 9 00:46:58.812981 kernel: PCI: Using configuration type 1 for base access
Feb 9 00:46:58.812988 kernel: PCI: Using configuration type 1 for extended access
Feb 9 00:46:58.812995 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 00:46:58.813003 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 00:46:58.813010 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 00:46:58.813017 kernel: ACPI: Added _OSI(Module Device)
Feb 9 00:46:58.813024 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 00:46:58.813031 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 00:46:58.813038 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 00:46:58.813046 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 00:46:58.813054 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 00:46:58.813061 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 00:46:58.813068 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 00:46:58.813075 kernel: ACPI: Interpreter enabled
Feb 9 00:46:58.813082 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 9 00:46:58.813089 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 00:46:58.813097 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 00:46:58.813104 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 9 00:46:58.813112 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 00:46:58.813230 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 00:46:58.813243 kernel: acpiphp: Slot [3] registered
Feb 9 00:46:58.813250 kernel: acpiphp: Slot [4] registered
Feb 9 00:46:58.813257 kernel: acpiphp: Slot [5] registered
Feb 9 00:46:58.813264 kernel: acpiphp: Slot [6] registered
Feb 9 00:46:58.813271 kernel: acpiphp: Slot [7] registered
Feb 9 00:46:58.813278 kernel: acpiphp: Slot [8] registered
Feb 9 00:46:58.813287 kernel: acpiphp: Slot [9] registered
Feb 9 00:46:58.813294 kernel: acpiphp: Slot [10] registered
Feb 9 00:46:58.813302 kernel: acpiphp: Slot [11] registered
Feb 9 00:46:58.813309 kernel: acpiphp: Slot [12] registered
Feb 9 00:46:58.813316 kernel: acpiphp: Slot [13] registered
Feb 9 00:46:58.813323 kernel: acpiphp: Slot [14] registered
Feb 9 00:46:58.813330 kernel: acpiphp: Slot [15] registered
Feb 9 00:46:58.813337 kernel: acpiphp: Slot [16] registered
Feb 9 00:46:58.813344 kernel: acpiphp: Slot [17] registered
Feb 9 00:46:58.813351 kernel: acpiphp: Slot [18] registered
Feb 9 00:46:58.813359 kernel: acpiphp: Slot [19] registered
Feb 9 00:46:58.813366 kernel: acpiphp: Slot [20] registered
Feb 9 00:46:58.813373 kernel: acpiphp: Slot [21] registered
Feb 9 00:46:58.813380 kernel: acpiphp: Slot [22] registered
Feb 9 00:46:58.813387 kernel: acpiphp: Slot [23] registered
Feb 9 00:46:58.813402 kernel: acpiphp: Slot [24] registered
Feb 9 00:46:58.813410 kernel: acpiphp: Slot [25] registered
Feb 9 00:46:58.813417 kernel: acpiphp: Slot [26] registered
Feb 9 00:46:58.813424 kernel: acpiphp: Slot [27] registered
Feb 9 00:46:58.813432 kernel: acpiphp: Slot [28] registered
Feb 9 00:46:58.813439 kernel: acpiphp: Slot [29] registered
Feb 9 00:46:58.813446 kernel: acpiphp: Slot [30] registered
Feb 9 00:46:58.813453 kernel: acpiphp: Slot [31] registered
Feb 9 00:46:58.813460 kernel: PCI host bridge to bus 0000:00
Feb 9 00:46:58.813539 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 00:46:58.813603 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 00:46:58.813665 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 00:46:58.813728 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb 9 00:46:58.813788 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Feb 9 00:46:58.813849 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 00:46:58.813942 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 9 00:46:58.814021 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 9 00:46:58.814102 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 9 00:46:58.814175 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Feb 9 00:46:58.814243 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 9 00:46:58.814311 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 9 00:46:58.814379 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 9 00:46:58.814456 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 9 00:46:58.814532 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 9 00:46:58.814601 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 9 00:46:58.814672 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 9 00:46:58.814746 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb 9 00:46:58.814817 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 9 00:46:58.814884 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Feb 9 00:46:58.815053 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 9 00:46:58.815129 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Feb 9 00:46:58.815199 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 00:46:58.815278 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 00:46:58.815348 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf]
Feb 9 00:46:58.815429 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 9 00:46:58.815497 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Feb 9 00:46:58.815572 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 9 00:46:58.815642 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 9 00:46:58.815714 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 9 00:46:58.815788 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Feb 9 00:46:58.815864 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb 9 00:46:58.815996 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 9 00:46:58.816067 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Feb 9 00:46:58.816135 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Feb 9 00:46:58.816204 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 9 00:46:58.816215 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 9 00:46:58.816227 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 9 00:46:58.816234 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 9 00:46:58.816242 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 9 00:46:58.816249 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 9 00:46:58.816256 kernel: iommu: Default domain type: Translated
Feb 9 00:46:58.816263 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 00:46:58.816329 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 9 00:46:58.816406 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 00:46:58.816475 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 9 00:46:58.816487 kernel: vgaarb: loaded
Feb 9 00:46:58.816494 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 00:46:58.816501 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 00:46:58.816509 kernel: PTP clock support registered
Feb 9 00:46:58.816516 kernel: Registered efivars operations
Feb 9 00:46:58.816523 kernel: PCI: Using ACPI for IRQ routing
Feb 9 00:46:58.816530 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 00:46:58.816537 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 9 00:46:58.816544 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Feb 9 00:46:58.816552 kernel: e820: reserve RAM buffer [mem 0x9b3bd018-0x9bffffff]
Feb 9 00:46:58.816559 kernel: e820: reserve RAM buffer [mem 0x9b3fa018-0x9bffffff]
Feb 9 00:46:58.816565 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Feb 9 00:46:58.816573 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Feb 9 00:46:58.816579 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 9 00:46:58.816587 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 9 00:46:58.816594 kernel: clocksource: Switched to clocksource kvm-clock
Feb 9 00:46:58.816601 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 00:46:58.816610 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 00:46:58.816617 kernel: pnp: PnP ACPI init
Feb 9 00:46:58.816693 kernel: pnp 00:02: [dma 2]
Feb 9 00:46:58.816704 kernel: pnp: PnP ACPI: found 6 devices
Feb 9 00:46:58.816712 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 00:46:58.816719 kernel: NET: Registered PF_INET protocol family
Feb 9 00:46:58.816726 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 00:46:58.816734 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 00:46:58.816741 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 00:46:58.816750 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 00:46:58.816757 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 00:46:58.816765 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 00:46:58.816772 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 00:46:58.816779 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 00:46:58.816786 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 00:46:58.816794 kernel: NET: Registered PF_XDP protocol family
Feb 9 00:46:58.816863 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 9 00:46:58.817429 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 9 00:46:58.817500 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 00:46:58.817564 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 00:46:58.817628 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 00:46:58.817688 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb 9 00:46:58.817748 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Feb 9 00:46:58.817826 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 9 00:46:58.824769 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 9 00:46:58.825094 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 9 00:46:58.825110 kernel: PCI: CLS 0 bytes, default 64
Feb 9 00:46:58.825118 kernel: Initialise system trusted keyrings
Feb 9 00:46:58.825126 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 00:46:58.825134 kernel: Key type asymmetric registered
Feb 9 00:46:58.825142 kernel: Asymmetric key parser 'x509' registered
Feb 9 00:46:58.825149 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 00:46:58.825157 kernel: io scheduler mq-deadline registered
Feb 9 00:46:58.825168 kernel: io scheduler kyber registered
Feb 9 00:46:58.825176 kernel: io scheduler bfq registered
Feb 9 00:46:58.825183 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 00:46:58.825191 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 9 00:46:58.825199 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 9 00:46:58.825206 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 9 00:46:58.825214 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 00:46:58.825222 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 00:46:58.825230 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 9 00:46:58.825239 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 9 00:46:58.825246 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 9 00:46:58.825328 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb 9 00:46:58.825342 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 9 00:46:58.825419 kernel: rtc_cmos 00:05: registered as rtc0
Feb 9 00:46:58.825566 kernel: rtc_cmos 00:05: setting system clock to 2024-02-09T00:46:58 UTC (1707439618)
Feb 9 00:46:58.825639 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 9 00:46:58.825649 kernel: efifb: probing for efifb
Feb 9 00:46:58.825656 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 9 00:46:58.825664 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 9 00:46:58.825672 kernel: efifb: scrolling: redraw
Feb 9 00:46:58.825680 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 00:46:58.825687 kernel: Console: switching to colour frame buffer device 160x50
Feb 9 00:46:58.825695 kernel: fb0: EFI VGA frame buffer device
Feb 9 00:46:58.825706 kernel: pstore: Registered efi as persistent store backend
Feb 9 00:46:58.825714 kernel: NET: Registered PF_INET6 protocol family
Feb 9 00:46:58.825721 kernel: Segment Routing with IPv6
Feb 9 00:46:58.825729 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 00:46:58.825737 kernel: NET: Registered PF_PACKET protocol family
Feb 9 00:46:58.825745 kernel: Key type dns_resolver registered
Feb 9 00:46:58.825753 kernel: IPI shorthand broadcast: enabled
Feb 9 00:46:58.825760 kernel: sched_clock: Marking stable (378127643, 145293355)->(566972172, -43551174)
Feb 9 00:46:58.825768 kernel: registered taskstats version 1
Feb 9 00:46:58.825777 kernel: Loading compiled-in X.509 certificates
Feb 9 00:46:58.825785 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 9 00:46:58.825792 kernel: Key type .fscrypt registered
Feb 9 00:46:58.825800 kernel: Key type fscrypt-provisioning registered
Feb 9 00:46:58.825808 kernel: pstore: Using crash dump compression: deflate
Feb 9 00:46:58.825816 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 00:46:58.825824 kernel: ima: Allocated hash algorithm: sha1 Feb 9 00:46:58.825831 kernel: ima: No architecture policies found Feb 9 00:46:58.825839 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 00:46:58.825847 kernel: Write protecting the kernel read-only data: 28672k Feb 9 00:46:58.825855 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 00:46:58.825863 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 00:46:58.825870 kernel: Run /init as init process Feb 9 00:46:58.825878 kernel: with arguments: Feb 9 00:46:58.825886 kernel: /init Feb 9 00:46:58.825893 kernel: with environment: Feb 9 00:46:58.825900 kernel: HOME=/ Feb 9 00:46:58.827317 kernel: TERM=linux Feb 9 00:46:58.827341 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 00:46:58.827352 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 00:46:58.827362 systemd[1]: Detected virtualization kvm. Feb 9 00:46:58.827371 systemd[1]: Detected architecture x86-64. Feb 9 00:46:58.827378 systemd[1]: Running in initrd. Feb 9 00:46:58.827386 systemd[1]: No hostname configured, using default hostname. Feb 9 00:46:58.827403 systemd[1]: Hostname set to . Feb 9 00:46:58.827413 systemd[1]: Initializing machine ID from VM UUID. Feb 9 00:46:58.827421 systemd[1]: Queued start job for default target initrd.target. Feb 9 00:46:58.827429 systemd[1]: Started systemd-ask-password-console.path. Feb 9 00:46:58.827437 systemd[1]: Reached target cryptsetup.target. Feb 9 00:46:58.827445 systemd[1]: Reached target paths.target. Feb 9 00:46:58.827452 systemd[1]: Reached target slices.target. Feb 9 00:46:58.827460 systemd[1]: Reached target swap.target. 
Feb 9 00:46:58.827468 systemd[1]: Reached target timers.target. Feb 9 00:46:58.827477 systemd[1]: Listening on iscsid.socket. Feb 9 00:46:58.827485 systemd[1]: Listening on iscsiuio.socket. Feb 9 00:46:58.827493 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 00:46:58.827501 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 00:46:58.827509 systemd[1]: Listening on systemd-journald.socket. Feb 9 00:46:58.827517 systemd[1]: Listening on systemd-networkd.socket. Feb 9 00:46:58.827525 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 00:46:58.827533 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 00:46:58.827541 systemd[1]: Reached target sockets.target. Feb 9 00:46:58.827550 systemd[1]: Starting kmod-static-nodes.service... Feb 9 00:46:58.827558 systemd[1]: Finished network-cleanup.service. Feb 9 00:46:58.827566 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 00:46:58.827574 systemd[1]: Starting systemd-journald.service... Feb 9 00:46:58.827582 systemd[1]: Starting systemd-modules-load.service... Feb 9 00:46:58.827590 systemd[1]: Starting systemd-resolved.service... Feb 9 00:46:58.827597 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 00:46:58.827605 systemd[1]: Finished kmod-static-nodes.service. Feb 9 00:46:58.827613 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 00:46:58.827622 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 00:46:58.827630 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 00:46:58.827638 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 00:46:58.827646 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 00:46:58.827655 kernel: audit: type=1130 audit(1707439618.821:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:46:58.827663 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 00:46:58.827671 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 00:46:58.827682 systemd-journald[198]: Journal started Feb 9 00:46:58.827727 systemd-journald[198]: Runtime Journal (/run/log/journal/6fb2d7a5757a4d678a49894bfd98d0bd) is 6.0M, max 48.4M, 42.4M free. Feb 9 00:46:58.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:46:58.801828 systemd-modules-load[199]: Inserted module 'overlay' Feb 9 00:46:58.831443 kernel: audit: type=1130 audit(1707439618.827:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:46:58.831460 systemd[1]: Started systemd-journald.service. Feb 9 00:46:58.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:46:58.832453 systemd-modules-load[199]: Inserted module 'br_netfilter' Feb 9 00:46:58.833261 kernel: Bridge firewalling registered Feb 9 00:46:58.833257 systemd[1]: Starting dracut-cmdline.service... Feb 9 00:46:58.836220 kernel: audit: type=1130 audit(1707439618.832:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:46:58.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:46:58.833992 systemd-resolved[200]: Positive Trust Anchors: Feb 9 00:46:58.833999 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 00:46:58.841217 kernel: audit: type=1130 audit(1707439618.836:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:46:58.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:46:58.834025 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 00:46:58.836153 systemd-resolved[200]: Defaulting to hostname 'linux'. Feb 9 00:46:58.836794 systemd[1]: Started systemd-resolved.service. Feb 9 00:46:58.837884 systemd[1]: Reached target nss-lookup.target. 
Feb 9 00:46:58.851941 kernel: SCSI subsystem initialized Feb 9 00:46:58.853306 dracut-cmdline[218]: dracut-dracut-053 Feb 9 00:46:58.855776 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 00:46:58.862001 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 00:46:58.862034 kernel: device-mapper: uevent: version 1.0.3 Feb 9 00:46:58.862929 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 00:46:58.865676 systemd-modules-load[199]: Inserted module 'dm_multipath' Feb 9 00:46:58.866578 systemd[1]: Finished systemd-modules-load.service. Feb 9 00:46:58.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:46:58.868821 systemd[1]: Starting systemd-sysctl.service... Feb 9 00:46:58.879054 kernel: audit: type=1130 audit(1707439618.866:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:46:58.883178 systemd[1]: Finished systemd-sysctl.service. Feb 9 00:46:58.909653 kernel: audit: type=1130 audit(1707439618.882:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:46:58.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:46:58.935938 kernel: Loading iSCSI transport class v2.0-870. Feb 9 00:46:58.954944 kernel: iscsi: registered transport (tcp) Feb 9 00:46:58.974986 kernel: iscsi: registered transport (qla4xxx) Feb 9 00:46:58.975071 kernel: QLogic iSCSI HBA Driver Feb 9 00:46:59.016574 systemd[1]: Finished dracut-cmdline.service. Feb 9 00:46:59.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:46:59.018473 systemd[1]: Starting dracut-pre-udev.service... Feb 9 00:46:59.027451 kernel: audit: type=1130 audit(1707439619.016:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:46:59.199985 kernel: raid6: avx2x4 gen() 17825 MB/s Feb 9 00:46:59.233453 kernel: raid6: avx2x4 xor() 4994 MB/s Feb 9 00:46:59.250444 kernel: raid6: avx2x2 gen() 13719 MB/s Feb 9 00:46:59.290449 kernel: raid6: avx2x2 xor() 7500 MB/s Feb 9 00:46:59.309444 kernel: raid6: avx2x1 gen() 11833 MB/s Feb 9 00:46:59.326443 kernel: raid6: avx2x1 xor() 6290 MB/s Feb 9 00:46:59.344983 kernel: raid6: sse2x4 gen() 4844 MB/s Feb 9 00:46:59.365458 kernel: raid6: sse2x4 xor() 3301 MB/s Feb 9 00:46:59.385982 kernel: raid6: sse2x2 gen() 7480 MB/s Feb 9 00:46:59.411414 kernel: raid6: sse2x2 xor() 5102 MB/s Feb 9 00:46:59.436414 kernel: raid6: sse2x1 gen() 5239 MB/s Feb 9 00:46:59.466180 kernel: raid6: sse2x1 xor() 3206 MB/s Feb 9 00:46:59.466260 kernel: raid6: using algorithm avx2x4 gen() 17825 MB/s Feb 9 00:46:59.466277 kernel: raid6: .... 
xor() 4994 MB/s, rmw enabled Feb 9 00:46:59.466291 kernel: raid6: using avx2x2 recovery algorithm Feb 9 00:46:59.512400 kernel: xor: automatically using best checksumming function avx Feb 9 00:47:00.642324 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 00:47:00.685504 systemd[1]: Finished dracut-pre-udev.service. Feb 9 00:47:00.692238 kernel: audit: type=1130 audit(1707439620.686:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:00.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:00.695000 audit: BPF prog-id=7 op=LOAD Feb 9 00:47:00.700654 kernel: audit: type=1334 audit(1707439620.695:10): prog-id=7 op=LOAD Feb 9 00:47:00.700000 audit: BPF prog-id=8 op=LOAD Feb 9 00:47:00.713186 systemd[1]: Starting systemd-udevd.service... Feb 9 00:47:00.788565 systemd-udevd[400]: Using default interface naming scheme 'v252'. Feb 9 00:47:00.817500 systemd[1]: Started systemd-udevd.service. Feb 9 00:47:00.823971 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 00:47:00.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:00.846329 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Feb 9 00:47:00.923157 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 00:47:00.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:00.929346 systemd[1]: Starting systemd-udev-trigger.service... 
Feb 9 00:47:01.037496 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 00:47:01.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:01.177532 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 00:47:01.222390 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 9 00:47:01.234283 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 00:47:01.234364 kernel: GPT:9289727 != 19775487 Feb 9 00:47:01.234376 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 00:47:01.234388 kernel: GPT:9289727 != 19775487 Feb 9 00:47:01.234398 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 00:47:01.236867 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 00:47:01.249363 kernel: libata version 3.00 loaded. Feb 9 00:47:01.341937 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 9 00:47:01.343073 kernel: scsi host0: ata_piix Feb 9 00:47:01.344485 kernel: scsi host1: ata_piix Feb 9 00:47:01.344648 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 9 00:47:01.347873 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 9 00:47:01.357943 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 00:47:01.424626 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 00:47:01.526746 kernel: AES CTR mode by8 optimization enabled Feb 9 00:47:01.526791 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450) Feb 9 00:47:01.526806 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 9 00:47:01.522080 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Feb 9 00:47:01.536021 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 9 00:47:01.541290 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 00:47:01.551925 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 00:47:01.569935 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 00:47:01.573700 systemd[1]: Starting disk-uuid.service... Feb 9 00:47:01.645701 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 9 00:47:01.646052 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 00:47:01.663429 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 9 00:47:01.774627 disk-uuid[519]: Primary Header is updated. Feb 9 00:47:01.774627 disk-uuid[519]: Secondary Entries is updated. Feb 9 00:47:01.774627 disk-uuid[519]: Secondary Header is updated. Feb 9 00:47:01.791941 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 00:47:01.819383 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 00:47:02.862062 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 00:47:02.864209 disk-uuid[532]: The operation has completed successfully. Feb 9 00:47:03.008931 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 00:47:03.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:03.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:03.009052 systemd[1]: Finished disk-uuid.service. Feb 9 00:47:03.027602 systemd[1]: Starting verity-setup.service... Feb 9 00:47:03.083030 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 9 00:47:03.353247 systemd[1]: Found device dev-mapper-usr.device. 
Feb 9 00:47:03.371158 systemd[1]: Mounting sysusr-usr.mount... Feb 9 00:47:03.373196 systemd[1]: Finished verity-setup.service. Feb 9 00:47:03.382841 kernel: kauditd_printk_skb: 6 callbacks suppressed Feb 9 00:47:03.382924 kernel: audit: type=1130 audit(1707439623.379:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:03.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:03.561941 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 00:47:03.565547 systemd[1]: Mounted sysusr-usr.mount. Feb 9 00:47:03.566501 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 00:47:03.569493 systemd[1]: Starting ignition-setup.service... Feb 9 00:47:03.590200 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 00:47:03.615146 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 00:47:03.615178 kernel: BTRFS info (device vda6): using free space tree Feb 9 00:47:03.615192 kernel: BTRFS info (device vda6): has skinny extents Feb 9 00:47:03.631216 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 00:47:03.673410 kernel: audit: type=1130 audit(1707439623.662:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:03.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:03.668064 systemd[1]: Finished ignition-setup.service. 
Feb 9 00:47:03.679903 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 00:47:03.775952 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 00:47:03.798368 kernel: audit: type=1130 audit(1707439623.789:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:03.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:03.799954 kernel: audit: type=1334 audit(1707439623.798:20): prog-id=9 op=LOAD Feb 9 00:47:03.798000 audit: BPF prog-id=9 op=LOAD Feb 9 00:47:03.804325 systemd[1]: Starting systemd-networkd.service... Feb 9 00:47:03.874387 ignition[643]: Ignition 2.14.0 Feb 9 00:47:03.874425 ignition[643]: Stage: fetch-offline Feb 9 00:47:03.875339 ignition[643]: no configs at "/usr/lib/ignition/base.d" Feb 9 00:47:03.875355 ignition[643]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 00:47:03.875504 ignition[643]: parsed url from cmdline: "" Feb 9 00:47:03.875509 ignition[643]: no config URL provided Feb 9 00:47:03.875517 ignition[643]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 00:47:03.875527 ignition[643]: no config at "/usr/lib/ignition/user.ign" Feb 9 00:47:03.875563 ignition[643]: op(1): [started] loading QEMU firmware config module Feb 9 00:47:03.875570 ignition[643]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 9 00:47:03.895770 ignition[643]: op(1): [finished] loading QEMU firmware config module Feb 9 00:47:03.922125 systemd-networkd[707]: lo: Link UP Feb 9 00:47:03.927915 systemd-networkd[707]: lo: Gained carrier Feb 9 00:47:03.937046 systemd-networkd[707]: Enumeration completed Feb 9 00:47:03.966385 kernel: audit: type=1130 audit(1707439623.944:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel 
msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:03.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:03.937204 systemd[1]: Started systemd-networkd.service. Feb 9 00:47:03.942328 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 00:47:03.943658 systemd-networkd[707]: eth0: Link UP Feb 9 00:47:03.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:03.943662 systemd-networkd[707]: eth0: Gained carrier Feb 9 00:47:04.005564 kernel: audit: type=1130 audit(1707439623.985:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:04.005599 kernel: audit: type=1130 audit(1707439623.999:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:03.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:03.945172 systemd[1]: Reached target network.target. Feb 9 00:47:04.007039 iscsid[715]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 00:47:04.007039 iscsid[715]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 00:47:04.007039 iscsid[715]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 00:47:04.007039 iscsid[715]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 00:47:04.007039 iscsid[715]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 00:47:04.007039 iscsid[715]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 00:47:04.007039 iscsid[715]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 00:47:03.977696 systemd[1]: Starting iscsiuio.service... Feb 9 00:47:04.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:04.051936 kernel: audit: type=1130 audit(1707439624.048:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:03.987154 systemd[1]: Started iscsiuio.service. Feb 9 00:47:03.993510 systemd[1]: Starting iscsid.service... Feb 9 00:47:03.999384 systemd[1]: Started iscsid.service. Feb 9 00:47:04.006234 systemd[1]: Starting dracut-initqueue.service... Feb 9 00:47:04.045563 systemd[1]: Finished dracut-initqueue.service. Feb 9 00:47:04.067212 ignition[643]: parsing config with SHA512: e76ea88e9f4bea2c56f6d8f93397535a06c02cc468cd8bab3190e35182de4149d4e00c8174eb0b24a04ac99d71d1a69a7033e0040fd503d353201068fe078c9f Feb 9 00:47:04.048654 systemd[1]: Reached target remote-fs-pre.target. Feb 9 00:47:04.052822 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 00:47:04.055127 systemd[1]: Reached target remote-fs.target. 
Feb 9 00:47:04.065506 systemd[1]: Starting dracut-pre-mount.service... Feb 9 00:47:04.092593 kernel: audit: type=1130 audit(1707439624.085:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:04.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:04.086831 systemd[1]: Finished dracut-pre-mount.service. Feb 9 00:47:04.114079 systemd-networkd[707]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 00:47:04.213113 unknown[643]: fetched base config from "system" Feb 9 00:47:04.214548 ignition[643]: fetch-offline: fetch-offline passed Feb 9 00:47:04.213133 unknown[643]: fetched user config from "qemu" Feb 9 00:47:04.214779 ignition[643]: Ignition finished successfully Feb 9 00:47:04.225738 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 00:47:04.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:04.260359 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 00:47:04.285802 systemd[1]: Starting ignition-kargs.service... Feb 9 00:47:04.298124 kernel: audit: type=1130 audit(1707439624.258:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:47:04.352390 ignition[729]: Ignition 2.14.0 Feb 9 00:47:04.352421 ignition[729]: Stage: kargs Feb 9 00:47:04.352598 ignition[729]: no configs at "/usr/lib/ignition/base.d" Feb 9 00:47:04.352610 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 00:47:04.361465 ignition[729]: kargs: kargs passed Feb 9 00:47:04.361552 ignition[729]: Ignition finished successfully Feb 9 00:47:04.392618 systemd[1]: Finished ignition-kargs.service. Feb 9 00:47:04.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:04.395783 systemd[1]: Starting ignition-disks.service... Feb 9 00:47:04.406229 ignition[735]: Ignition 2.14.0 Feb 9 00:47:04.406257 ignition[735]: Stage: disks Feb 9 00:47:04.406397 ignition[735]: no configs at "/usr/lib/ignition/base.d" Feb 9 00:47:04.406408 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 00:47:04.407952 ignition[735]: disks: disks passed Feb 9 00:47:04.407995 ignition[735]: Ignition finished successfully Feb 9 00:47:04.437771 systemd[1]: Finished ignition-disks.service. Feb 9 00:47:04.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:04.438329 systemd[1]: Reached target initrd-root-device.target. Feb 9 00:47:04.450555 systemd[1]: Reached target local-fs-pre.target. Feb 9 00:47:04.451977 systemd[1]: Reached target local-fs.target. Feb 9 00:47:04.452773 systemd[1]: Reached target sysinit.target. Feb 9 00:47:04.453581 systemd[1]: Reached target basic.target. Feb 9 00:47:04.457807 systemd[1]: Starting systemd-fsck-root.service... 
Feb 9 00:47:04.500495 systemd-fsck[742]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 00:47:04.554106 systemd[1]: Finished systemd-fsck-root.service. Feb 9 00:47:04.558429 systemd[1]: Mounting sysroot.mount... Feb 9 00:47:04.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:04.674905 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 00:47:04.675407 systemd[1]: Mounted sysroot.mount. Feb 9 00:47:04.676329 systemd[1]: Reached target initrd-root-fs.target. Feb 9 00:47:04.688067 systemd[1]: Mounting sysroot-usr.mount... Feb 9 00:47:04.692795 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 00:47:04.694101 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 00:47:04.694145 systemd[1]: Reached target ignition-diskful.target. Feb 9 00:47:04.707756 systemd[1]: Mounted sysroot-usr.mount. Feb 9 00:47:04.718618 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 00:47:04.732823 systemd[1]: Starting initrd-setup-root.service... 
Feb 9 00:47:04.746691 initrd-setup-root[753]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 00:47:04.772284 initrd-setup-root[761]: cut: /sysroot/etc/group: No such file or directory Feb 9 00:47:04.791953 initrd-setup-root[769]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 00:47:04.800642 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (748) Feb 9 00:47:04.807421 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 00:47:04.807501 kernel: BTRFS info (device vda6): using free space tree Feb 9 00:47:04.807520 kernel: BTRFS info (device vda6): has skinny extents Feb 9 00:47:04.835312 initrd-setup-root[781]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 00:47:04.921065 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 00:47:05.120134 systemd[1]: Finished initrd-setup-root.service. Feb 9 00:47:05.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:05.142330 systemd[1]: Starting ignition-mount.service... Feb 9 00:47:05.152000 systemd[1]: Starting sysroot-boot.service... Feb 9 00:47:05.155635 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 00:47:05.155756 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Feb 9 00:47:05.204996 ignition[813]: INFO : Ignition 2.14.0 Feb 9 00:47:05.214462 ignition[813]: INFO : Stage: mount Feb 9 00:47:05.214462 ignition[813]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 00:47:05.214462 ignition[813]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 00:47:05.214462 ignition[813]: INFO : mount: mount passed Feb 9 00:47:05.214462 ignition[813]: INFO : Ignition finished successfully Feb 9 00:47:05.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:05.215394 systemd[1]: Finished ignition-mount.service. Feb 9 00:47:05.221169 systemd[1]: Starting ignition-files.service... Feb 9 00:47:05.246668 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 00:47:05.303105 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (821) Feb 9 00:47:05.317881 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 00:47:05.317969 kernel: BTRFS info (device vda6): using free space tree Feb 9 00:47:05.317983 kernel: BTRFS info (device vda6): has skinny extents Feb 9 00:47:05.343162 systemd[1]: Finished sysroot-boot.service. Feb 9 00:47:05.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:05.359596 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 00:47:05.394422 ignition[842]: INFO : Ignition 2.14.0 Feb 9 00:47:05.394422 ignition[842]: INFO : Stage: files Feb 9 00:47:05.394422 ignition[842]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 00:47:05.394422 ignition[842]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 00:47:05.408320 ignition[842]: DEBUG : files: compiled without relabeling support, skipping Feb 9 00:47:05.426625 ignition[842]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 00:47:05.426625 ignition[842]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 00:47:05.488542 ignition[842]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 00:47:05.490242 ignition[842]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 00:47:05.500018 unknown[842]: wrote ssh authorized keys file for user: core Feb 9 00:47:05.526399 ignition[842]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 00:47:05.526399 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 00:47:05.526399 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 00:47:05.730517 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 00:47:05.894165 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 00:47:05.894165 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 00:47:05.893925 systemd-networkd[707]: eth0: Gained IPv6LL Feb 9 00:47:05.905080 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 00:47:06.289933 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 00:47:06.716060 ignition[842]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 00:47:06.725460 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 00:47:06.725460 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 00:47:06.725460 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 00:47:07.033518 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 00:47:07.249433 ignition[842]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 00:47:07.249433 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 00:47:07.249433 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 00:47:07.286225 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 00:47:07.286225 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file 
"/sysroot/opt/bin/kubeadm" Feb 9 00:47:07.286225 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 00:47:07.396480 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 00:47:07.929374 ignition[842]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 00:47:07.929374 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 00:47:07.929374 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 00:47:07.929374 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 9 00:47:07.984007 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 00:47:08.495692 ignition[842]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 9 00:47:08.495692 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 00:47:08.495692 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 00:47:08.495692 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 00:47:08.529618 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 9 00:47:08.628188 kernel: hrtimer: 
interrupt took 4313588 ns Feb 9 00:47:09.849165 ignition[842]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 00:47:09.849165 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 00:47:09.849165 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 00:47:09.849165 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 00:47:09.849165 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 00:47:09.849165 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 9 00:47:10.286584 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 9 00:47:10.577869 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 00:47:10.577869 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh" Feb 9 00:47:10.577869 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 00:47:10.577869 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 00:47:10.577869 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 00:47:10.577869 ignition[842]: INFO : files: 
createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 00:47:10.577869 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 00:47:10.577869 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 00:47:10.577869 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 00:47:10.639225 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 00:47:10.639225 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 00:47:10.639225 ignition[842]: INFO : files: op(11): [started] processing unit "containerd.service" Feb 9 00:47:10.639225 ignition[842]: INFO : files: op(11): op(12): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 00:47:10.639225 ignition[842]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 00:47:10.639225 ignition[842]: INFO : files: op(11): [finished] processing unit "containerd.service" Feb 9 00:47:10.639225 ignition[842]: INFO : files: op(13): [started] processing unit "prepare-cni-plugins.service" Feb 9 00:47:10.639225 ignition[842]: INFO : files: op(13): op(14): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 00:47:10.639225 ignition[842]: INFO : files: op(13): op(14): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 00:47:10.639225 ignition[842]: INFO 
: files: op(13): [finished] processing unit "prepare-cni-plugins.service" Feb 9 00:47:10.639225 ignition[842]: INFO : files: op(15): [started] processing unit "prepare-critools.service" Feb 9 00:47:10.639225 ignition[842]: INFO : files: op(15): op(16): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 00:47:10.639225 ignition[842]: INFO : files: op(15): op(16): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 00:47:10.639225 ignition[842]: INFO : files: op(15): [finished] processing unit "prepare-critools.service" Feb 9 00:47:10.639225 ignition[842]: INFO : files: op(17): [started] processing unit "prepare-helm.service" Feb 9 00:47:10.639225 ignition[842]: INFO : files: op(17): op(18): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 00:47:10.639225 ignition[842]: INFO : files: op(17): op(18): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 00:47:10.763251 ignition[842]: INFO : files: op(17): [finished] processing unit "prepare-helm.service" Feb 9 00:47:10.763251 ignition[842]: INFO : files: op(19): [started] processing unit "coreos-metadata.service" Feb 9 00:47:10.763251 ignition[842]: INFO : files: op(19): op(1a): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 00:47:10.763251 ignition[842]: INFO : files: op(19): op(1a): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 00:47:10.763251 ignition[842]: INFO : files: op(19): [finished] processing unit "coreos-metadata.service" Feb 9 00:47:10.763251 ignition[842]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-helm.service" Feb 9 00:47:10.763251 ignition[842]: INFO : files: op(1b): [finished] setting preset to enabled for 
"prepare-helm.service" Feb 9 00:47:10.763251 ignition[842]: INFO : files: op(1c): [started] setting preset to disabled for "coreos-metadata.service" Feb 9 00:47:10.763251 ignition[842]: INFO : files: op(1c): op(1d): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 00:47:11.097199 ignition[842]: INFO : files: op(1c): op(1d): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 00:47:11.097199 ignition[842]: INFO : files: op(1c): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 00:47:11.097199 ignition[842]: INFO : files: op(1e): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 00:47:11.097199 ignition[842]: INFO : files: op(1e): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 00:47:11.097199 ignition[842]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-critools.service" Feb 9 00:47:11.097199 ignition[842]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 00:47:11.097199 ignition[842]: INFO : files: createResultFile: createFiles: op(20): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 00:47:11.097199 ignition[842]: INFO : files: createResultFile: createFiles: op(20): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 00:47:11.097199 ignition[842]: INFO : files: files passed Feb 9 00:47:11.097199 ignition[842]: INFO : Ignition finished successfully Feb 9 00:47:11.160374 kernel: kauditd_printk_skb: 6 callbacks suppressed Feb 9 00:47:11.160419 kernel: audit: type=1130 audit(1707439631.145:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:47:11.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.142402 systemd[1]: Finished ignition-files.service. Feb 9 00:47:11.153738 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 00:47:11.177152 kernel: audit: type=1130 audit(1707439631.169:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.177185 kernel: audit: type=1131 audit(1707439631.169:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.154507 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 00:47:11.179262 initrd-setup-root-after-ignition[868]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 00:47:11.156735 systemd[1]: Starting ignition-quench.service... 
Feb 9 00:47:11.182441 initrd-setup-root-after-ignition[870]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 00:47:11.189738 kernel: audit: type=1130 audit(1707439631.184:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.165636 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 00:47:11.165753 systemd[1]: Finished ignition-quench.service. Feb 9 00:47:11.182257 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 00:47:11.184333 systemd[1]: Reached target ignition-complete.target. Feb 9 00:47:11.200226 systemd[1]: Starting initrd-parse-etc.service... Feb 9 00:47:11.283765 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 00:47:11.284227 systemd[1]: Finished initrd-parse-etc.service. Feb 9 00:47:11.306255 kernel: audit: type=1130 audit(1707439631.285:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.306293 kernel: audit: type=1131 audit(1707439631.285:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:47:11.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.286129 systemd[1]: Reached target initrd-fs.target. Feb 9 00:47:11.286940 systemd[1]: Reached target initrd.target. Feb 9 00:47:11.287717 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 00:47:11.302877 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 00:47:11.343541 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 00:47:11.346202 systemd[1]: Starting initrd-cleanup.service... Feb 9 00:47:11.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.360230 kernel: audit: type=1130 audit(1707439631.342:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.361040 systemd[1]: Stopped target nss-lookup.target. Feb 9 00:47:11.362112 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 00:47:11.363115 systemd[1]: Stopped target timers.target. Feb 9 00:47:11.363989 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 00:47:11.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.364161 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 00:47:11.392277 kernel: audit: type=1131 audit(1707439631.380:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:47:11.383040 systemd[1]: Stopped target initrd.target. Feb 9 00:47:11.393891 systemd[1]: Stopped target basic.target. Feb 9 00:47:11.401482 systemd[1]: Stopped target ignition-complete.target. Feb 9 00:47:11.402506 systemd[1]: Stopped target ignition-diskful.target. Feb 9 00:47:11.406555 systemd[1]: Stopped target initrd-root-device.target. Feb 9 00:47:11.409171 systemd[1]: Stopped target remote-fs.target. Feb 9 00:47:11.421242 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 00:47:11.424459 systemd[1]: Stopped target sysinit.target. Feb 9 00:47:11.426068 systemd[1]: Stopped target local-fs.target. Feb 9 00:47:11.428688 systemd[1]: Stopped target local-fs-pre.target. Feb 9 00:47:11.430894 systemd[1]: Stopped target swap.target. Feb 9 00:47:11.432373 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 00:47:11.436757 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 00:47:11.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.441359 systemd[1]: Stopped target cryptsetup.target. Feb 9 00:47:11.448741 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 00:47:11.454686 kernel: audit: type=1131 audit(1707439631.440:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.453010 systemd[1]: Stopped dracut-initqueue.service. Feb 9 00:47:11.461172 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 00:47:11.466157 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 00:47:11.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:47:11.472710 systemd[1]: Stopped target paths.target. Feb 9 00:47:11.474644 kernel: audit: type=1131 audit(1707439631.460:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.473614 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 00:47:11.480168 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 00:47:11.482228 systemd[1]: Stopped target slices.target. Feb 9 00:47:11.509435 systemd[1]: Stopped target sockets.target. Feb 9 00:47:11.509827 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 00:47:11.514363 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 00:47:11.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.526615 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 00:47:11.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.526773 systemd[1]: Stopped ignition-files.service. Feb 9 00:47:11.529842 systemd[1]: Stopping ignition-mount.service... Feb 9 00:47:11.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.540279 systemd[1]: Stopping iscsid.service... 
Feb 9 00:47:11.551615 iscsid[715]: iscsid shutting down. Feb 9 00:47:11.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.540918 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 00:47:11.567431 ignition[883]: INFO : Ignition 2.14.0 Feb 9 00:47:11.567431 ignition[883]: INFO : Stage: umount Feb 9 00:47:11.567431 ignition[883]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 00:47:11.567431 ignition[883]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 00:47:11.567431 ignition[883]: INFO : umount: umount passed Feb 9 00:47:11.567431 ignition[883]: INFO : Ignition finished successfully Feb 9 00:47:11.541132 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 00:47:11.561675 systemd[1]: Stopping sysroot-boot.service... Feb 9 00:47:11.562916 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 00:47:11.563168 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 00:47:11.564946 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 00:47:11.565123 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 00:47:11.594867 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 00:47:11.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.597654 systemd[1]: Stopped iscsid.service. Feb 9 00:47:11.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.605565 systemd[1]: ignition-mount.service: Deactivated successfully. 
Feb 9 00:47:11.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.605684 systemd[1]: Stopped ignition-mount.service. Feb 9 00:47:11.606979 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 00:47:11.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.607092 systemd[1]: Closed iscsid.socket. Feb 9 00:47:11.607938 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 00:47:11.608073 systemd[1]: Stopped ignition-disks.service. Feb 9 00:47:11.611470 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 00:47:11.611540 systemd[1]: Stopped ignition-kargs.service. Feb 9 00:47:11.613255 systemd[1]: ignition-setup.service: Deactivated successfully. 
Feb 9 00:47:11.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.613307 systemd[1]: Stopped ignition-setup.service. Feb 9 00:47:11.615335 systemd[1]: Stopping iscsiuio.service... Feb 9 00:47:11.620928 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 00:47:11.621036 systemd[1]: Finished initrd-cleanup.service. Feb 9 00:47:11.637534 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 00:47:11.638198 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 00:47:11.638307 systemd[1]: Stopped iscsiuio.service. Feb 9 00:47:11.652723 systemd[1]: Stopped target network.target. Feb 9 00:47:11.688301 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 00:47:11.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.688364 systemd[1]: Closed iscsiuio.socket. Feb 9 00:47:11.691811 systemd[1]: Stopping systemd-networkd.service... Feb 9 00:47:11.698382 systemd[1]: Stopping systemd-resolved.service... Feb 9 00:47:11.706626 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 00:47:11.706736 systemd[1]: Stopped sysroot-boot.service. Feb 9 00:47:11.713450 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 00:47:11.713534 systemd[1]: Stopped initrd-setup-root.service. Feb 9 00:47:11.723800 systemd-networkd[707]: eth0: DHCPv6 lease lost Feb 9 00:47:11.729434 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 00:47:11.729587 systemd[1]: Stopped systemd-resolved.service. 
Feb 9 00:47:11.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.798650 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 00:47:11.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.804000 audit: BPF prog-id=6 op=UNLOAD Feb 9 00:47:11.798800 systemd[1]: Stopped systemd-networkd.service. Feb 9 00:47:11.804992 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 00:47:11.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.805046 systemd[1]: Closed systemd-networkd.socket. Feb 9 00:47:11.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.812874 systemd[1]: Stopping network-cleanup.service... Feb 9 00:47:11.816538 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 00:47:11.816646 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 00:47:11.822003 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 00:47:11.841000 audit: BPF prog-id=9 op=UNLOAD Feb 9 00:47:11.822094 systemd[1]: Stopped systemd-sysctl.service. Feb 9 00:47:11.834083 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 00:47:11.834160 systemd[1]: Stopped systemd-modules-load.service. 
Feb 9 00:47:11.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.850133 systemd[1]: Stopping systemd-udevd.service... Feb 9 00:47:11.852864 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 00:47:11.863515 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 00:47:11.864841 systemd[1]: Stopped systemd-udevd.service. Feb 9 00:47:11.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.868979 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 00:47:11.869066 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 00:47:11.869448 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 00:47:11.869487 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 00:47:11.879955 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 00:47:11.880042 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 00:47:11.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.902583 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 00:47:11.902662 systemd[1]: Stopped dracut-cmdline.service. Feb 9 00:47:11.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.922819 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 9 00:47:11.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.922904 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 00:47:11.928795 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 00:47:11.929098 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 00:47:11.929163 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 00:47:11.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:11.929585 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 00:47:11.929687 systemd[1]: Stopped network-cleanup.service. Feb 9 00:47:11.941402 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 00:47:11.941530 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 00:47:11.949268 systemd[1]: Reached target initrd-switch-root.target. 
Feb 9 00:47:11.991000 audit: BPF prog-id=8 op=UNLOAD Feb 9 00:47:11.991000 audit: BPF prog-id=7 op=UNLOAD Feb 9 00:47:11.998000 audit: BPF prog-id=5 op=UNLOAD Feb 9 00:47:11.958646 systemd[1]: Starting initrd-switch-root.service... Feb 9 00:47:11.989213 systemd[1]: Switching root. Feb 9 00:47:12.004000 audit: BPF prog-id=4 op=UNLOAD Feb 9 00:47:12.010000 audit: BPF prog-id=3 op=UNLOAD Feb 9 00:47:12.056134 systemd-journald[198]: Received SIGTERM from PID 1 (n/a). Feb 9 00:47:12.056219 systemd-journald[198]: Journal stopped Feb 9 00:47:18.985272 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 00:47:18.985342 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 00:47:18.985358 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 00:47:18.985373 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 00:47:18.985394 kernel: SELinux: policy capability open_perms=1 Feb 9 00:47:18.985412 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 00:47:18.985426 kernel: SELinux: policy capability always_check_network=0 Feb 9 00:47:18.985440 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 00:47:18.985454 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 00:47:18.985468 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 00:47:18.985482 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 00:47:18.985497 systemd[1]: Successfully loaded SELinux policy in 143.117ms. Feb 9 00:47:18.985528 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.096ms. 
Feb 9 00:47:18.985547 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 00:47:18.985563 systemd[1]: Detected virtualization kvm. Feb 9 00:47:18.985577 systemd[1]: Detected architecture x86-64. Feb 9 00:47:18.985592 systemd[1]: Detected first boot. Feb 9 00:47:18.985606 systemd[1]: Initializing machine ID from VM UUID. Feb 9 00:47:18.985622 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 00:47:18.985636 systemd[1]: Populated /etc with preset unit settings. Feb 9 00:47:18.985658 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 00:47:18.985674 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 00:47:18.985691 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 00:47:18.985710 systemd[1]: Queued start job for default target multi-user.target. Feb 9 00:47:18.985724 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 00:47:18.985739 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 00:47:18.985753 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 00:47:18.985767 systemd[1]: Created slice system-getty.slice. Feb 9 00:47:18.985785 systemd[1]: Created slice system-modprobe.slice. Feb 9 00:47:18.985800 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Feb 9 00:47:18.985815 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 00:47:18.985830 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 00:47:18.985844 systemd[1]: Created slice user.slice. Feb 9 00:47:18.985871 systemd[1]: Started systemd-ask-password-console.path. Feb 9 00:47:18.985886 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 00:47:18.985901 systemd[1]: Set up automount boot.automount. Feb 9 00:47:18.985934 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 00:47:18.985954 systemd[1]: Reached target integritysetup.target. Feb 9 00:47:18.985968 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 00:47:18.985982 systemd[1]: Reached target remote-fs.target. Feb 9 00:47:18.985996 systemd[1]: Reached target slices.target. Feb 9 00:47:18.986012 systemd[1]: Reached target swap.target. Feb 9 00:47:18.986031 systemd[1]: Reached target torcx.target. Feb 9 00:47:18.986046 systemd[1]: Reached target veritysetup.target. Feb 9 00:47:18.986061 systemd[1]: Listening on systemd-coredump.socket. Feb 9 00:47:18.986077 systemd[1]: Listening on systemd-initctl.socket. Feb 9 00:47:18.986092 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 00:47:18.986107 kernel: kauditd_printk_skb: 47 callbacks suppressed Feb 9 00:47:18.986123 kernel: audit: type=1400 audit(1707439638.855:83): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 00:47:18.986141 kernel: audit: type=1335 audit(1707439638.855:84): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 00:47:18.986156 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 00:47:18.986171 systemd[1]: Listening on systemd-journald.socket. Feb 9 00:47:18.986187 systemd[1]: Listening on systemd-networkd.socket. 
Feb 9 00:47:18.986202 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 00:47:18.986218 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 00:47:18.986236 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 00:47:18.986255 systemd[1]: Mounting dev-hugepages.mount... Feb 9 00:47:18.986275 systemd[1]: Mounting dev-mqueue.mount... Feb 9 00:47:18.986300 systemd[1]: Mounting media.mount... Feb 9 00:47:18.986319 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 00:47:18.986338 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 00:47:18.986355 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 00:47:18.986373 systemd[1]: Mounting tmp.mount... Feb 9 00:47:18.986392 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 00:47:18.986409 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 00:47:18.986424 systemd[1]: Starting kmod-static-nodes.service... Feb 9 00:47:18.986439 systemd[1]: Starting modprobe@configfs.service... Feb 9 00:47:18.986456 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 00:47:18.986470 systemd[1]: Starting modprobe@drm.service... Feb 9 00:47:18.986485 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 00:47:18.986500 systemd[1]: Starting modprobe@fuse.service... Feb 9 00:47:18.986515 systemd[1]: Starting modprobe@loop.service... Feb 9 00:47:18.986531 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 00:47:18.986546 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 00:47:18.986562 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 00:47:18.986576 systemd[1]: Starting systemd-journald.service... 
Feb 9 00:47:18.986596 kernel: loop: module loaded Feb 9 00:47:18.986611 kernel: fuse: init (API version 7.34) Feb 9 00:47:18.986625 systemd[1]: Starting systemd-modules-load.service... Feb 9 00:47:18.986642 systemd[1]: Starting systemd-network-generator.service... Feb 9 00:47:18.986658 systemd[1]: Starting systemd-remount-fs.service... Feb 9 00:47:18.986672 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 00:47:18.986688 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 00:47:18.986703 systemd[1]: Mounted dev-hugepages.mount. Feb 9 00:47:18.986717 systemd[1]: Mounted dev-mqueue.mount. Feb 9 00:47:18.986734 systemd[1]: Mounted media.mount. Feb 9 00:47:18.986749 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 00:47:18.986763 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 00:47:18.986778 systemd[1]: Mounted tmp.mount. Feb 9 00:47:18.986793 kernel: audit: type=1305 audit(1707439638.982:85): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 00:47:18.986808 kernel: audit: type=1300 audit(1707439638.982:85): arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd37253de0 a2=4000 a3=7ffd37253e7c items=0 ppid=1 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 00:47:18.986827 systemd-journald[1034]: Journal started Feb 9 00:47:18.986898 systemd-journald[1034]: Runtime Journal (/run/log/journal/6fb2d7a5757a4d678a49894bfd98d0bd) is 6.0M, max 48.4M, 42.4M free. 
Feb 9 00:47:18.855000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 00:47:18.855000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 00:47:18.982000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 00:47:18.982000 audit[1034]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd37253de0 a2=4000 a3=7ffd37253e7c items=0 ppid=1 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 00:47:18.991335 kernel: audit: type=1327 audit(1707439638.982:85): proctitle="/usr/lib/systemd/systemd-journald" Feb 9 00:47:18.982000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 00:47:18.993067 systemd[1]: Started systemd-journald.service. Feb 9 00:47:18.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:18.996944 kernel: audit: type=1130 audit(1707439638.992:86): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:18.997188 systemd[1]: Finished kmod-static-nodes.service. Feb 9 00:47:18.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 00:47:18.998460 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 00:47:18.998787 systemd[1]: Finished modprobe@configfs.service. Feb 9 00:47:19.001991 kernel: audit: type=1130 audit(1707439638.997:87): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.002048 kernel: audit: type=1130 audit(1707439639.001:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.002296 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 00:47:19.002745 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 00:47:19.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.007770 kernel: audit: type=1131 audit(1707439639.001:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.009287 systemd[1]: Finished flatcar-tmpfiles.service. 
Feb 9 00:47:19.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.011934 kernel: audit: type=1130 audit(1707439639.007:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.013208 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 00:47:19.013489 systemd[1]: Finished modprobe@drm.service. Feb 9 00:47:19.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.014507 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 00:47:19.014788 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 00:47:19.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:47:19.015898 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 00:47:19.016263 systemd[1]: Finished modprobe@fuse.service. Feb 9 00:47:19.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.017308 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 00:47:19.017538 systemd[1]: Finished modprobe@loop.service. Feb 9 00:47:19.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.019031 systemd[1]: Finished systemd-modules-load.service. Feb 9 00:47:19.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.020494 systemd[1]: Finished systemd-network-generator.service. Feb 9 00:47:19.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.022188 systemd[1]: Finished systemd-remount-fs.service. 
Feb 9 00:47:19.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.024168 systemd[1]: Reached target network-pre.target. Feb 9 00:47:19.027039 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 00:47:19.029524 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 00:47:19.030466 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 00:47:19.032514 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 00:47:19.037446 systemd[1]: Starting systemd-journal-flush.service... Feb 9 00:47:19.038423 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 00:47:19.040494 systemd[1]: Starting systemd-random-seed.service... Feb 9 00:47:19.044130 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 00:47:19.045057 systemd-journald[1034]: Time spent on flushing to /var/log/journal/6fb2d7a5757a4d678a49894bfd98d0bd is 25.005ms for 1161 entries. Feb 9 00:47:19.045057 systemd-journald[1034]: System Journal (/var/log/journal/6fb2d7a5757a4d678a49894bfd98d0bd) is 8.0M, max 195.6M, 187.6M free. Feb 9 00:47:19.091626 systemd-journald[1034]: Received client request to flush runtime journal. Feb 9 00:47:19.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:47:19.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.045759 systemd[1]: Starting systemd-sysctl.service... Feb 9 00:47:19.049200 systemd[1]: Starting systemd-sysusers.service... Feb 9 00:47:19.054715 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 00:47:19.094171 udevadm[1078]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 00:47:19.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.055750 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 00:47:19.056705 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 00:47:19.057881 systemd[1]: Finished systemd-random-seed.service. Feb 9 00:47:19.058877 systemd[1]: Reached target first-boot-complete.target. Feb 9 00:47:19.061473 systemd[1]: Starting systemd-udev-settle.service... Feb 9 00:47:19.068228 systemd[1]: Finished systemd-sysctl.service. Feb 9 00:47:19.082760 systemd[1]: Finished systemd-sysusers.service. Feb 9 00:47:19.085398 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 00:47:19.093116 systemd[1]: Finished systemd-journal-flush.service. Feb 9 00:47:19.111767 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Feb 9 00:47:19.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.570838 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 00:47:19.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.573294 systemd[1]: Starting systemd-udevd.service... Feb 9 00:47:19.591895 systemd-udevd[1088]: Using default interface naming scheme 'v252'. Feb 9 00:47:19.605619 systemd[1]: Started systemd-udevd.service. Feb 9 00:47:19.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.607855 systemd[1]: Starting systemd-networkd.service... Feb 9 00:47:19.621303 systemd[1]: Starting systemd-userdbd.service... Feb 9 00:47:19.630869 systemd[1]: Found device dev-ttyS0.device. Feb 9 00:47:19.664000 audit[1106]: AVC avc: denied { confidentiality } for pid=1106 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 00:47:19.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.668352 systemd[1]: Started systemd-userdbd.service. 
Feb 9 00:47:19.672927 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 00:47:19.677942 kernel: ACPI: button: Power Button [PWRF] Feb 9 00:47:19.664000 audit[1106]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55af85935c90 a1=32194 a2=7fc6df31abc5 a3=5 items=108 ppid=1088 pid=1106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 00:47:19.664000 audit: CWD cwd="/" Feb 9 00:47:19.664000 audit: PATH item=0 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=1 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=2 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=3 name=(null) inode=15428 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=4 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=5 name=(null) inode=15429 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=6 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=7 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=8 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=9 name=(null) inode=15431 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=10 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=11 name=(null) inode=15432 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=12 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=13 name=(null) inode=15433 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=14 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=15 name=(null) inode=15434 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=16 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=17 name=(null) inode=15435 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=18 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=19 name=(null) inode=15436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=20 name=(null) inode=15436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=21 name=(null) inode=15437 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=22 name=(null) inode=15436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=23 name=(null) inode=15438 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=24 name=(null) inode=15436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH 
item=25 name=(null) inode=15439 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=26 name=(null) inode=15436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=27 name=(null) inode=15440 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=28 name=(null) inode=15436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=29 name=(null) inode=15441 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=30 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=31 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=32 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=33 name=(null) inode=15443 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=34 name=(null) inode=15442 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=35 name=(null) inode=15444 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=36 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=37 name=(null) inode=15445 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=38 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=39 name=(null) inode=15446 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=40 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=41 name=(null) inode=15447 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=42 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=43 name=(null) inode=15448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=44 name=(null) inode=15448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=45 name=(null) inode=15449 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=46 name=(null) inode=15448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=47 name=(null) inode=15450 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=48 name=(null) inode=15448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=49 name=(null) inode=15451 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=50 name=(null) inode=15448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=51 name=(null) inode=15452 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=52 name=(null) inode=15448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=53 name=(null) inode=15453 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=54 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=55 name=(null) inode=15454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=56 name=(null) inode=15454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=57 name=(null) inode=15455 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=58 name=(null) inode=15454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=59 name=(null) inode=15456 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=60 name=(null) inode=15454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=61 name=(null) inode=15457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 
9 00:47:19.664000 audit: PATH item=62 name=(null) inode=15457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=63 name=(null) inode=15458 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=64 name=(null) inode=15457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=65 name=(null) inode=15459 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=66 name=(null) inode=15457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=67 name=(null) inode=15460 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=68 name=(null) inode=15457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=69 name=(null) inode=15461 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=70 name=(null) inode=15457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=71 name=(null) 
inode=15462 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=72 name=(null) inode=15454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=73 name=(null) inode=15463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=74 name=(null) inode=15463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=75 name=(null) inode=15464 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=76 name=(null) inode=15463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=77 name=(null) inode=15465 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=78 name=(null) inode=15463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=79 name=(null) inode=15466 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=80 name=(null) inode=15463 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=81 name=(null) inode=15467 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=82 name=(null) inode=15463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=83 name=(null) inode=15468 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=84 name=(null) inode=15454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=85 name=(null) inode=15469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=86 name=(null) inode=15469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=87 name=(null) inode=15470 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=88 name=(null) inode=15469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=89 name=(null) inode=15471 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=90 name=(null) inode=15469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=91 name=(null) inode=15472 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=92 name=(null) inode=15469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=93 name=(null) inode=15473 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=94 name=(null) inode=15469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=95 name=(null) inode=15474 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=96 name=(null) inode=15454 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=97 name=(null) inode=15475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=98 name=(null) inode=15475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=99 name=(null) inode=15476 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=100 name=(null) inode=15475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=101 name=(null) inode=15477 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=102 name=(null) inode=15475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=103 name=(null) inode=15478 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=104 name=(null) inode=15475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=105 name=(null) inode=15479 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=106 name=(null) inode=15475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:47:19.664000 audit: PATH item=107 name=(null) inode=15480 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 
00:47:19.664000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 00:47:19.730995 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Feb 9 00:47:19.734101 systemd-networkd[1097]: lo: Link UP Feb 9 00:47:19.734116 systemd-networkd[1097]: lo: Gained carrier Feb 9 00:47:19.734582 systemd-networkd[1097]: Enumeration completed Feb 9 00:47:19.734708 systemd-networkd[1097]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 00:47:19.734710 systemd[1]: Started systemd-networkd.service. Feb 9 00:47:19.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.736249 systemd-networkd[1097]: eth0: Link UP Feb 9 00:47:19.736264 systemd-networkd[1097]: eth0: Gained carrier Feb 9 00:47:19.743954 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 9 00:47:19.745783 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 00:47:19.751079 systemd-networkd[1097]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 00:47:19.751927 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 00:47:19.788167 kernel: kvm: Nested Virtualization enabled Feb 9 00:47:19.788243 kernel: SVM: kvm: Nested Paging enabled Feb 9 00:47:19.788257 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 9 00:47:19.788270 kernel: SVM: Virtual GIF supported Feb 9 00:47:19.803931 kernel: EDAC MC: Ver: 3.0.0 Feb 9 00:47:19.824279 systemd[1]: Finished systemd-udev-settle.service. Feb 9 00:47:19.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.826056 systemd[1]: Starting lvm2-activation-early.service... 
Feb 9 00:47:19.833741 lvm[1125]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 00:47:19.864872 systemd[1]: Finished lvm2-activation-early.service. Feb 9 00:47:19.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.865759 systemd[1]: Reached target cryptsetup.target. Feb 9 00:47:19.867511 systemd[1]: Starting lvm2-activation.service... Feb 9 00:47:19.871279 lvm[1127]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 00:47:19.893862 systemd[1]: Finished lvm2-activation.service. Feb 9 00:47:19.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.894616 systemd[1]: Reached target local-fs-pre.target. Feb 9 00:47:19.895241 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 00:47:19.895265 systemd[1]: Reached target local-fs.target. Feb 9 00:47:19.895824 systemd[1]: Reached target machines.target. Feb 9 00:47:19.897500 systemd[1]: Starting ldconfig.service... Feb 9 00:47:19.898271 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 00:47:19.898324 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 00:47:19.899287 systemd[1]: Starting systemd-boot-update.service... Feb 9 00:47:19.900751 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 00:47:19.902776 systemd[1]: Starting systemd-machine-id-commit.service... 
Feb 9 00:47:19.903697 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 00:47:19.903736 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 00:47:19.904637 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 00:47:19.905992 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1130 (bootctl) Feb 9 00:47:19.906970 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 00:47:19.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.911238 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 00:47:19.921800 systemd-tmpfiles[1133]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 00:47:19.922520 systemd-tmpfiles[1133]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 00:47:19.924122 systemd-tmpfiles[1133]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 00:47:19.939393 systemd-fsck[1139]: fsck.fat 4.2 (2021-01-31) Feb 9 00:47:19.939393 systemd-fsck[1139]: /dev/vda1: 790 files, 115355/258078 clusters Feb 9 00:47:19.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:19.941077 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 00:47:19.943556 systemd[1]: Mounting boot.mount... Feb 9 00:47:19.951587 systemd[1]: Mounted boot.mount. Feb 9 00:47:19.964215 systemd[1]: Finished systemd-boot-update.service. 
Feb 9 00:47:19.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:20.009753 ldconfig[1129]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 00:47:20.572041 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 00:47:20.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:20.574525 systemd[1]: Starting audit-rules.service... Feb 9 00:47:20.576501 systemd[1]: Starting clean-ca-certificates.service... Feb 9 00:47:20.578956 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 00:47:20.582291 systemd[1]: Starting systemd-resolved.service... Feb 9 00:47:20.585417 systemd[1]: Starting systemd-timesyncd.service... Feb 9 00:47:20.587729 systemd[1]: Starting systemd-update-utmp.service... Feb 9 00:47:20.590020 systemd[1]: Finished ldconfig.service. Feb 9 00:47:20.591000 audit[1153]: SYSTEM_BOOT pid=1153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 00:47:20.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:20.594276 systemd[1]: Finished clean-ca-certificates.service. Feb 9 00:47:20.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:47:20.598231 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 00:47:20.599255 systemd[1]: Finished systemd-update-utmp.service. Feb 9 00:47:20.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:20.603007 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 00:47:20.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:20.605882 systemd[1]: Starting systemd-update-done.service... Feb 9 00:47:20.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:47:20.615315 systemd[1]: Finished systemd-update-done.service. Feb 9 00:47:20.619216 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 00:47:20.621016 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 00:47:20.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:47:20.627000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 00:47:20.627000 audit[1173]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdb6169500 a2=420 a3=0 items=0 ppid=1146 pid=1173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 00:47:20.627000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 00:47:20.628439 augenrules[1173]: No rules Feb 9 00:47:20.629674 systemd[1]: Finished audit-rules.service. Feb 9 00:47:20.658551 systemd[1]: Started systemd-timesyncd.service. Feb 9 00:47:19.972064 systemd-timesyncd[1152]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 00:47:19.993995 systemd-journald[1034]: Time jumped backwards, rotating. Feb 9 00:47:19.972115 systemd-timesyncd[1152]: Initial clock synchronization to Fri 2024-02-09 00:47:19.971986 UTC. Feb 9 00:47:19.972357 systemd[1]: Reached target time-set.target. Feb 9 00:47:19.994488 systemd-resolved[1150]: Positive Trust Anchors: Feb 9 00:47:19.994503 systemd-resolved[1150]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 00:47:19.994540 systemd-resolved[1150]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 00:47:20.002504 systemd-resolved[1150]: Defaulting to hostname 'linux'. 
Feb 9 00:47:20.004408 systemd[1]: Started systemd-resolved.service. Feb 9 00:47:20.005146 systemd[1]: Reached target network.target. Feb 9 00:47:20.005711 systemd[1]: Reached target nss-lookup.target. Feb 9 00:47:20.006318 systemd[1]: Reached target sysinit.target. Feb 9 00:47:20.006974 systemd[1]: Started motdgen.path. Feb 9 00:47:20.007547 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 00:47:20.008522 systemd[1]: Started logrotate.timer. Feb 9 00:47:20.009269 systemd[1]: Started mdadm.timer. Feb 9 00:47:20.009898 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 00:47:20.010563 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 00:47:20.010596 systemd[1]: Reached target paths.target. Feb 9 00:47:20.011187 systemd[1]: Reached target timers.target. Feb 9 00:47:20.012079 systemd[1]: Listening on dbus.socket. Feb 9 00:47:20.013702 systemd[1]: Starting docker.socket... Feb 9 00:47:20.015175 systemd[1]: Listening on sshd.socket. Feb 9 00:47:20.015791 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 00:47:20.016026 systemd[1]: Listening on docker.socket. Feb 9 00:47:20.016606 systemd[1]: Reached target sockets.target. Feb 9 00:47:20.017214 systemd[1]: Reached target basic.target. Feb 9 00:47:20.017873 systemd[1]: System is tainted: cgroupsv1 Feb 9 00:47:20.017908 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 00:47:20.017924 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 00:47:20.018781 systemd[1]: Starting containerd.service... Feb 9 00:47:20.020140 systemd[1]: Starting dbus.service... Feb 9 00:47:20.021500 systemd[1]: Starting enable-oem-cloudinit.service... 
Feb 9 00:47:20.023300 systemd[1]: Starting extend-filesystems.service... Feb 9 00:47:20.024139 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 00:47:20.025481 systemd[1]: Starting motdgen.service... Feb 9 00:47:20.026901 jq[1186]: false Feb 9 00:47:20.027116 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 00:47:20.028892 systemd[1]: Starting prepare-critools.service... Feb 9 00:47:20.030660 systemd[1]: Starting prepare-helm.service... Feb 9 00:47:20.032319 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 00:47:20.034020 systemd[1]: Starting sshd-keygen.service... Feb 9 00:47:20.036364 systemd[1]: Starting systemd-logind.service... Feb 9 00:47:20.036908 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 00:47:20.036950 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 00:47:20.037946 systemd[1]: Starting update-engine.service... Feb 9 00:47:20.040893 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 00:47:20.044484 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 00:47:20.048268 jq[1205]: true Feb 9 00:47:20.048852 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 00:47:20.051924 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 00:47:20.052215 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 9 00:47:20.058704 tar[1215]: linux-amd64/helm Feb 9 00:47:20.061478 tar[1213]: ./ Feb 9 00:47:20.061478 tar[1213]: ./macvlan Feb 9 00:47:20.066729 tar[1214]: crictl Feb 9 00:47:20.067205 extend-filesystems[1187]: Found sr0 Feb 9 00:47:20.068232 extend-filesystems[1187]: Found vda Feb 9 00:47:20.068844 extend-filesystems[1187]: Found vda1 Feb 9 00:47:20.069612 extend-filesystems[1187]: Found vda2 Feb 9 00:47:20.070248 extend-filesystems[1187]: Found vda3 Feb 9 00:47:20.070844 extend-filesystems[1187]: Found usr Feb 9 00:47:20.071507 extend-filesystems[1187]: Found vda4 Feb 9 00:47:20.072104 extend-filesystems[1187]: Found vda6 Feb 9 00:47:20.072754 extend-filesystems[1187]: Found vda7 Feb 9 00:47:20.073450 extend-filesystems[1187]: Found vda9 Feb 9 00:47:20.074038 extend-filesystems[1187]: Checking size of /dev/vda9 Feb 9 00:47:20.076340 jq[1218]: true Feb 9 00:47:20.077749 systemd[1]: Started dbus.service. Feb 9 00:47:20.076845 dbus-daemon[1185]: [system] SELinux support is enabled Feb 9 00:47:20.080881 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 00:47:20.081167 systemd[1]: Finished motdgen.service. Feb 9 00:47:20.081968 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 00:47:20.081994 systemd[1]: Reached target system-config.target. Feb 9 00:47:20.082727 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 00:47:20.082746 systemd[1]: Reached target user-config.target. 
Feb 9 00:47:20.105318 extend-filesystems[1187]: Resized partition /dev/vda9 Feb 9 00:47:20.108444 extend-filesystems[1251]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 00:47:20.112418 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 00:47:20.123890 update_engine[1202]: I0209 00:47:20.123598 1202 main.cc:92] Flatcar Update Engine starting Feb 9 00:47:20.131268 update_engine[1202]: I0209 00:47:20.125567 1202 update_check_scheduler.cc:74] Next update check in 10m19s Feb 9 00:47:20.125530 systemd[1]: Started update-engine.service. Feb 9 00:47:20.128352 systemd[1]: Started locksmithd.service. Feb 9 00:47:20.131738 systemd-logind[1200]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 00:47:20.131761 systemd-logind[1200]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 00:47:20.132683 systemd-logind[1200]: New seat seat0. Feb 9 00:47:20.139474 env[1221]: time="2024-02-09T00:47:20.139427090Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 00:47:20.140322 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 00:47:20.140386 bash[1250]: Updated "/home/core/.ssh/authorized_keys" Feb 9 00:47:20.141213 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 00:47:20.143357 systemd[1]: Started systemd-logind.service. Feb 9 00:47:20.164212 extend-filesystems[1251]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 00:47:20.164212 extend-filesystems[1251]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 00:47:20.164212 extend-filesystems[1251]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 00:47:20.167486 extend-filesystems[1187]: Resized filesystem in /dev/vda9 Feb 9 00:47:20.164588 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 00:47:20.164888 systemd[1]: Finished extend-filesystems.service. 
Feb 9 00:47:20.193659 env[1221]: time="2024-02-09T00:47:20.193601768Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 00:47:20.193825 env[1221]: time="2024-02-09T00:47:20.193795892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 00:47:20.197199 tar[1213]: ./static Feb 9 00:47:20.204540 env[1221]: time="2024-02-09T00:47:20.204498699Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 00:47:20.204586 env[1221]: time="2024-02-09T00:47:20.204539536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 00:47:20.204882 env[1221]: time="2024-02-09T00:47:20.204848405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 00:47:20.204935 env[1221]: time="2024-02-09T00:47:20.204880155Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 00:47:20.204935 env[1221]: time="2024-02-09T00:47:20.204898709Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 00:47:20.204935 env[1221]: time="2024-02-09T00:47:20.204913497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 00:47:20.205035 env[1221]: time="2024-02-09T00:47:20.205006021Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 00:47:20.205360 env[1221]: time="2024-02-09T00:47:20.205332132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 00:47:20.205558 env[1221]: time="2024-02-09T00:47:20.205525164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 00:47:20.205558 env[1221]: time="2024-02-09T00:47:20.205554940Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 00:47:20.205646 env[1221]: time="2024-02-09T00:47:20.205617768Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 00:47:20.205646 env[1221]: time="2024-02-09T00:47:20.205642794Z" level=info msg="metadata content store policy set" policy=shared Feb 9 00:47:20.210738 env[1221]: time="2024-02-09T00:47:20.210706890Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 00:47:20.210784 env[1221]: time="2024-02-09T00:47:20.210746003Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 00:47:20.210784 env[1221]: time="2024-02-09T00:47:20.210764508Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 00:47:20.210822 env[1221]: time="2024-02-09T00:47:20.210799573Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 00:47:20.210841 env[1221]: time="2024-02-09T00:47:20.210819892Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 9 00:47:20.210861 env[1221]: time="2024-02-09T00:47:20.210838436Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 00:47:20.210861 env[1221]: time="2024-02-09T00:47:20.210855128Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 00:47:20.210912 env[1221]: time="2024-02-09T00:47:20.210873432Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 00:47:20.210912 env[1221]: time="2024-02-09T00:47:20.210890594Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 00:47:20.210950 env[1221]: time="2024-02-09T00:47:20.210908297Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 00:47:20.210950 env[1221]: time="2024-02-09T00:47:20.210926051Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 00:47:20.210950 env[1221]: time="2024-02-09T00:47:20.210942231Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 00:47:20.211079 env[1221]: time="2024-02-09T00:47:20.211050424Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 00:47:20.211182 env[1221]: time="2024-02-09T00:47:20.211152625Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 00:47:20.211587 env[1221]: time="2024-02-09T00:47:20.211557905Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 00:47:20.211644 env[1221]: time="2024-02-09T00:47:20.211599163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Feb 9 00:47:20.211644 env[1221]: time="2024-02-09T00:47:20.211620402Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 00:47:20.211692 env[1221]: time="2024-02-09T00:47:20.211674925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 00:47:20.211718 env[1221]: time="2024-02-09T00:47:20.211695233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 00:47:20.211718 env[1221]: time="2024-02-09T00:47:20.211714088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 00:47:20.211764 env[1221]: time="2024-02-09T00:47:20.211730439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 00:47:20.211764 env[1221]: time="2024-02-09T00:47:20.211749985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 00:47:20.211812 env[1221]: time="2024-02-09T00:47:20.211766276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 00:47:20.211812 env[1221]: time="2024-02-09T00:47:20.211783418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 00:47:20.211812 env[1221]: time="2024-02-09T00:47:20.211799438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 00:47:20.211883 env[1221]: time="2024-02-09T00:47:20.211817211Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 00:47:20.211991 env[1221]: time="2024-02-09T00:47:20.211961793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Feb 9 00:47:20.212040 env[1221]: time="2024-02-09T00:47:20.211992270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 00:47:20.212040 env[1221]: time="2024-02-09T00:47:20.212010063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 00:47:20.212040 env[1221]: time="2024-02-09T00:47:20.212025452Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 00:47:20.212120 env[1221]: time="2024-02-09T00:47:20.212044017Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 00:47:20.212120 env[1221]: time="2024-02-09T00:47:20.212059025Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 00:47:20.212120 env[1221]: time="2024-02-09T00:47:20.212080966Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 00:47:20.212206 env[1221]: time="2024-02-09T00:47:20.212120901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 00:47:20.212461 env[1221]: time="2024-02-09T00:47:20.212389004Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 00:47:20.212461 env[1221]: time="2024-02-09T00:47:20.212464545Z" level=info msg="Connect containerd service" Feb 9 00:47:20.213385 env[1221]: time="2024-02-09T00:47:20.212505863Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 00:47:20.213385 env[1221]: time="2024-02-09T00:47:20.213250529Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 00:47:20.213599 env[1221]: time="2024-02-09T00:47:20.213568255Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 00:47:20.213649 env[1221]: time="2024-02-09T00:47:20.213624560Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 00:47:20.213709 env[1221]: time="2024-02-09T00:47:20.213682108Z" level=info msg="containerd successfully booted in 0.075862s" Feb 9 00:47:20.213844 systemd[1]: Started containerd.service. 
Feb 9 00:47:20.229430 env[1221]: time="2024-02-09T00:47:20.229324478Z" level=info msg="Start subscribing containerd event" Feb 9 00:47:20.229430 env[1221]: time="2024-02-09T00:47:20.229407052Z" level=info msg="Start recovering state" Feb 9 00:47:20.229552 env[1221]: time="2024-02-09T00:47:20.229482804Z" level=info msg="Start event monitor" Feb 9 00:47:20.229552 env[1221]: time="2024-02-09T00:47:20.229499376Z" level=info msg="Start snapshots syncer" Feb 9 00:47:20.229552 env[1221]: time="2024-02-09T00:47:20.229510707Z" level=info msg="Start cni network conf syncer for default" Feb 9 00:47:20.229552 env[1221]: time="2024-02-09T00:47:20.229519664Z" level=info msg="Start streaming server" Feb 9 00:47:20.239759 tar[1213]: ./vlan Feb 9 00:47:20.283920 tar[1213]: ./portmap Feb 9 00:47:20.305356 locksmithd[1252]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 00:47:20.320662 tar[1213]: ./host-local Feb 9 00:47:20.354102 tar[1213]: ./vrf Feb 9 00:47:20.383572 tar[1213]: ./bridge Feb 9 00:47:20.419352 tar[1213]: ./tuning Feb 9 00:47:20.445858 tar[1213]: ./firewall Feb 9 00:47:20.479989 tar[1213]: ./host-device Feb 9 00:47:20.510386 tar[1213]: ./sbr Feb 9 00:47:20.518794 tar[1215]: linux-amd64/LICENSE Feb 9 00:47:20.518925 tar[1215]: linux-amd64/README.md Feb 9 00:47:20.522950 systemd[1]: Finished prepare-helm.service. Feb 9 00:47:20.538705 tar[1213]: ./loopback Feb 9 00:47:20.546550 systemd[1]: Finished prepare-critools.service. Feb 9 00:47:20.563898 tar[1213]: ./dhcp Feb 9 00:47:20.616439 systemd-networkd[1097]: eth0: Gained IPv6LL Feb 9 00:47:20.630316 tar[1213]: ./ptp Feb 9 00:47:20.658976 tar[1213]: ./ipvlan Feb 9 00:47:20.686188 tar[1213]: ./bandwidth Feb 9 00:47:20.720330 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 00:47:21.053716 sshd_keygen[1228]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 00:47:21.070809 systemd[1]: Finished sshd-keygen.service. 
Feb 9 00:47:21.072930 systemd[1]: Starting issuegen.service... Feb 9 00:47:21.077047 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 00:47:21.077220 systemd[1]: Finished issuegen.service. Feb 9 00:47:21.079062 systemd[1]: Starting systemd-user-sessions.service... Feb 9 00:47:21.084269 systemd[1]: Finished systemd-user-sessions.service. Feb 9 00:47:21.086317 systemd[1]: Started getty@tty1.service. Feb 9 00:47:21.087874 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 00:47:21.088850 systemd[1]: Reached target getty.target. Feb 9 00:47:21.089533 systemd[1]: Reached target multi-user.target. Feb 9 00:47:21.091560 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 00:47:21.097480 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 00:47:21.097705 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 00:47:21.099699 systemd[1]: Startup finished in 14.530s (kernel) + 9.121s (userspace) = 23.651s. Feb 9 00:47:29.344518 systemd[1]: Created slice system-sshd.slice. Feb 9 00:47:29.345833 systemd[1]: Started sshd@0-10.0.0.69:22-10.0.0.1:47236.service. Feb 9 00:47:29.378021 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 47236 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:47:29.379536 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:47:29.387044 systemd[1]: Created slice user-500.slice. Feb 9 00:47:29.387883 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 00:47:29.389434 systemd-logind[1200]: New session 1 of user core. Feb 9 00:47:29.395207 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 00:47:29.396359 systemd[1]: Starting user@500.service... Feb 9 00:47:29.399017 (systemd)[1301]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:47:29.484484 systemd[1301]: Queued start job for default target default.target. 
Feb 9 00:47:29.484708 systemd[1301]: Reached target paths.target. Feb 9 00:47:29.484728 systemd[1301]: Reached target sockets.target. Feb 9 00:47:29.484744 systemd[1301]: Reached target timers.target. Feb 9 00:47:29.484759 systemd[1301]: Reached target basic.target. Feb 9 00:47:29.484808 systemd[1301]: Reached target default.target. Feb 9 00:47:29.484837 systemd[1301]: Startup finished in 80ms. Feb 9 00:47:29.484902 systemd[1]: Started user@500.service. Feb 9 00:47:29.485740 systemd[1]: Started session-1.scope. Feb 9 00:47:29.535102 systemd[1]: Started sshd@1-10.0.0.69:22-10.0.0.1:47252.service. Feb 9 00:47:29.564380 sshd[1310]: Accepted publickey for core from 10.0.0.1 port 47252 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:47:29.565568 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:47:29.568718 systemd-logind[1200]: New session 2 of user core. Feb 9 00:47:29.569500 systemd[1]: Started session-2.scope. Feb 9 00:47:29.622615 sshd[1310]: pam_unix(sshd:session): session closed for user core Feb 9 00:47:29.625228 systemd[1]: Started sshd@2-10.0.0.69:22-10.0.0.1:47258.service. Feb 9 00:47:29.625718 systemd[1]: sshd@1-10.0.0.69:22-10.0.0.1:47252.service: Deactivated successfully. Feb 9 00:47:29.626671 systemd-logind[1200]: Session 2 logged out. Waiting for processes to exit. Feb 9 00:47:29.626676 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 00:47:29.627677 systemd-logind[1200]: Removed session 2. Feb 9 00:47:29.654621 sshd[1316]: Accepted publickey for core from 10.0.0.1 port 47258 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:47:29.655534 sshd[1316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:47:29.658597 systemd-logind[1200]: New session 3 of user core. Feb 9 00:47:29.659306 systemd[1]: Started session-3.scope. 
Feb 9 00:47:29.724478 sshd[1316]: pam_unix(sshd:session): session closed for user core Feb 9 00:47:29.726747 systemd[1]: Started sshd@3-10.0.0.69:22-10.0.0.1:47260.service. Feb 9 00:47:29.727151 systemd[1]: sshd@2-10.0.0.69:22-10.0.0.1:47258.service: Deactivated successfully. Feb 9 00:47:29.727899 systemd-logind[1200]: Session 3 logged out. Waiting for processes to exit. Feb 9 00:47:29.727959 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 00:47:29.728929 systemd-logind[1200]: Removed session 3. Feb 9 00:47:29.757977 sshd[1323]: Accepted publickey for core from 10.0.0.1 port 47260 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:47:29.759070 sshd[1323]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:47:29.762150 systemd-logind[1200]: New session 4 of user core. Feb 9 00:47:29.762854 systemd[1]: Started session-4.scope. Feb 9 00:47:29.814832 sshd[1323]: pam_unix(sshd:session): session closed for user core Feb 9 00:47:29.816791 systemd[1]: Started sshd@4-10.0.0.69:22-10.0.0.1:47264.service. Feb 9 00:47:29.817712 systemd[1]: sshd@3-10.0.0.69:22-10.0.0.1:47260.service: Deactivated successfully. Feb 9 00:47:29.818486 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 00:47:29.818620 systemd-logind[1200]: Session 4 logged out. Waiting for processes to exit. Feb 9 00:47:29.819396 systemd-logind[1200]: Removed session 4. Feb 9 00:47:29.848854 sshd[1329]: Accepted publickey for core from 10.0.0.1 port 47264 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:47:29.849887 sshd[1329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:47:29.853140 systemd-logind[1200]: New session 5 of user core. Feb 9 00:47:29.853817 systemd[1]: Started session-5.scope. 
Feb 9 00:47:29.908028 sudo[1335]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 00:47:29.908191 sudo[1335]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 00:47:30.668221 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 00:47:30.672695 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 00:47:30.672926 systemd[1]: Reached target network-online.target. Feb 9 00:47:30.673900 systemd[1]: Starting docker.service... Feb 9 00:47:30.706670 env[1353]: time="2024-02-09T00:47:30.706619243Z" level=info msg="Starting up" Feb 9 00:47:30.707850 env[1353]: time="2024-02-09T00:47:30.707832618Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 00:47:30.707850 env[1353]: time="2024-02-09T00:47:30.707848067Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 00:47:30.707930 env[1353]: time="2024-02-09T00:47:30.707864297Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 00:47:30.707930 env[1353]: time="2024-02-09T00:47:30.707876320Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 00:47:30.709581 env[1353]: time="2024-02-09T00:47:30.709556901Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 00:47:30.709581 env[1353]: time="2024-02-09T00:47:30.709573241Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 00:47:30.709668 env[1353]: time="2024-02-09T00:47:30.709588059Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 00:47:30.709668 env[1353]: time="2024-02-09T00:47:30.709603879Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 00:47:30.714274 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4219522178-merged.mount: 
Deactivated successfully. Feb 9 00:47:31.487323 env[1353]: time="2024-02-09T00:47:31.487274060Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 00:47:31.487323 env[1353]: time="2024-02-09T00:47:31.487314706Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 00:47:31.487537 env[1353]: time="2024-02-09T00:47:31.487445772Z" level=info msg="Loading containers: start." Feb 9 00:47:31.575326 kernel: Initializing XFRM netlink socket Feb 9 00:47:31.607123 env[1353]: time="2024-02-09T00:47:31.607077191Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 00:47:31.660364 systemd-networkd[1097]: docker0: Link UP Feb 9 00:47:31.669356 env[1353]: time="2024-02-09T00:47:31.669315628Z" level=info msg="Loading containers: done." Feb 9 00:47:31.679410 env[1353]: time="2024-02-09T00:47:31.679366153Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 00:47:31.679561 env[1353]: time="2024-02-09T00:47:31.679527495Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 00:47:31.679619 env[1353]: time="2024-02-09T00:47:31.679601614Z" level=info msg="Daemon has completed initialization" Feb 9 00:47:31.695346 systemd[1]: Started docker.service. Feb 9 00:47:31.698599 env[1353]: time="2024-02-09T00:47:31.698556545Z" level=info msg="API listen on /run/docker.sock" Feb 9 00:47:31.715362 systemd[1]: Reloading. 
Feb 9 00:47:31.762472 /usr/lib/systemd/system-generators/torcx-generator[1495]: time="2024-02-09T00:47:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 00:47:31.763951 /usr/lib/systemd/system-generators/torcx-generator[1495]: time="2024-02-09T00:47:31Z" level=info msg="torcx already run" Feb 9 00:47:31.840829 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 00:47:31.840845 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 00:47:31.858333 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 00:47:31.941801 systemd[1]: Started kubelet.service. Feb 9 00:47:31.994628 kubelet[1540]: E0209 00:47:31.994559 1540 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 00:47:31.997559 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 00:47:31.997718 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 00:47:32.291527 env[1221]: time="2024-02-09T00:47:32.291479665Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 00:47:33.388256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1714974154.mount: Deactivated successfully. 
Feb 9 00:47:35.282086 env[1221]: time="2024-02-09T00:47:35.282024977Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:47:35.284005 env[1221]: time="2024-02-09T00:47:35.283983038Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:47:35.285876 env[1221]: time="2024-02-09T00:47:35.285818049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:47:35.288909 env[1221]: time="2024-02-09T00:47:35.288859191Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:47:35.290065 env[1221]: time="2024-02-09T00:47:35.289992556Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 00:47:35.299900 env[1221]: time="2024-02-09T00:47:35.299834460Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 00:47:37.746778 env[1221]: time="2024-02-09T00:47:37.746712826Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:47:37.749917 env[1221]: time="2024-02-09T00:47:37.749875526Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}"
Feb 9 00:47:37.753700 env[1221]: time="2024-02-09T00:47:37.753645695Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:37.755943 env[1221]: time="2024-02-09T00:47:37.755922284Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:37.756764 env[1221]: time="2024-02-09T00:47:37.756738594Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\""
Feb 9 00:47:37.765424 env[1221]: time="2024-02-09T00:47:37.765389455Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\""
Feb 9 00:47:39.526798 env[1221]: time="2024-02-09T00:47:39.526724798Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:39.529860 env[1221]: time="2024-02-09T00:47:39.529799303Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:39.532304 env[1221]: time="2024-02-09T00:47:39.532251230Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:39.534503 env[1221]: time="2024-02-09T00:47:39.534469630Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:39.535219 env[1221]: time="2024-02-09T00:47:39.535181835Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\""
Feb 9 00:47:39.546046 env[1221]: time="2024-02-09T00:47:39.546001121Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb 9 00:47:40.599105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1170426560.mount: Deactivated successfully.
Feb 9 00:47:41.191664 env[1221]: time="2024-02-09T00:47:41.191594979Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:41.193571 env[1221]: time="2024-02-09T00:47:41.193523515Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:41.195052 env[1221]: time="2024-02-09T00:47:41.195020803Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:41.196437 env[1221]: time="2024-02-09T00:47:41.196401081Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:41.196916 env[1221]: time="2024-02-09T00:47:41.196887553Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\""
Feb 9 00:47:41.205413 env[1221]: time="2024-02-09T00:47:41.205371350Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 9 00:47:41.690837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1646164944.mount: Deactivated successfully.
Feb 9 00:47:41.698468 env[1221]: time="2024-02-09T00:47:41.698303798Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:41.700590 env[1221]: time="2024-02-09T00:47:41.700529271Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:41.702071 env[1221]: time="2024-02-09T00:47:41.702036527Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:41.708154 env[1221]: time="2024-02-09T00:47:41.708086581Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:41.708659 env[1221]: time="2024-02-09T00:47:41.708630461Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 9 00:47:41.717215 env[1221]: time="2024-02-09T00:47:41.717176745Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\""
Feb 9 00:47:42.054125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 9 00:47:42.054335 systemd[1]: Stopped kubelet.service.
Feb 9 00:47:42.056007 systemd[1]: Started kubelet.service.
Feb 9 00:47:42.100709 kubelet[1598]: E0209 00:47:42.100622 1598 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 00:47:42.104055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 00:47:42.104201 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 00:47:42.488669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3944602794.mount: Deactivated successfully.
Feb 9 00:47:47.417698 env[1221]: time="2024-02-09T00:47:47.417641405Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:47.421743 env[1221]: time="2024-02-09T00:47:47.421716516Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:47.424587 env[1221]: time="2024-02-09T00:47:47.424538437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:47.428825 env[1221]: time="2024-02-09T00:47:47.428780801Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:47.429556 env[1221]: time="2024-02-09T00:47:47.429503135Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\""
Feb 9 00:47:47.439914 env[1221]: time="2024-02-09T00:47:47.439857911Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\""
Feb 9 00:47:48.421686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3703721670.mount: Deactivated successfully.
Feb 9 00:47:49.695378 env[1221]: time="2024-02-09T00:47:49.695311556Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:49.697136 env[1221]: time="2024-02-09T00:47:49.697107884Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:49.700848 env[1221]: time="2024-02-09T00:47:49.700799846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:49.702607 env[1221]: time="2024-02-09T00:47:49.702572911Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:49.703044 env[1221]: time="2024-02-09T00:47:49.703010201Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\""
Feb 9 00:47:51.383182 systemd[1]: Stopped kubelet.service.
Feb 9 00:47:51.396225 systemd[1]: Reloading.
Feb 9 00:47:51.454027 /usr/lib/systemd/system-generators/torcx-generator[1701]: time="2024-02-09T00:47:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 00:47:51.454055 /usr/lib/systemd/system-generators/torcx-generator[1701]: time="2024-02-09T00:47:51Z" level=info msg="torcx already run"
Feb 9 00:47:51.530599 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 00:47:51.530621 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 00:47:51.555010 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 00:47:51.658637 systemd[1]: Started kubelet.service.
Feb 9 00:47:51.710298 kubelet[1748]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 00:47:51.710298 kubelet[1748]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 00:47:51.710744 kubelet[1748]: I0209 00:47:51.710381 1748 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 00:47:51.712772 kubelet[1748]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 00:47:51.712772 kubelet[1748]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 00:47:52.012374 kubelet[1748]: I0209 00:47:52.012258 1748 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 9 00:47:52.012374 kubelet[1748]: I0209 00:47:52.012283 1748 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 00:47:52.012548 kubelet[1748]: I0209 00:47:52.012532 1748 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 9 00:47:52.014564 kubelet[1748]: I0209 00:47:52.014534 1748 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 00:47:52.015272 kubelet[1748]: E0209 00:47:52.015260 1748 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.69:6443: connect: connection refused
Feb 9 00:47:52.018103 kubelet[1748]: I0209 00:47:52.018085 1748 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 00:47:52.018641 kubelet[1748]: I0209 00:47:52.018626 1748 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 00:47:52.018808 kubelet[1748]: I0209 00:47:52.018794 1748 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 00:47:52.018939 kubelet[1748]: I0209 00:47:52.018924 1748 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 00:47:52.019017 kubelet[1748]: I0209 00:47:52.019003 1748 container_manager_linux.go:308] "Creating device plugin manager"
Feb 9 00:47:52.019180 kubelet[1748]: I0209 00:47:52.019168 1748 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 00:47:52.022415 kubelet[1748]: I0209 00:47:52.022398 1748 kubelet.go:398] "Attempting to sync node with API server"
Feb 9 00:47:52.022477 kubelet[1748]: I0209 00:47:52.022422 1748 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 00:47:52.022477 kubelet[1748]: I0209 00:47:52.022443 1748 kubelet.go:297] "Adding apiserver pod source"
Feb 9 00:47:52.022477 kubelet[1748]: I0209 00:47:52.022453 1748 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 00:47:52.022985 kubelet[1748]: W0209 00:47:52.022946 1748 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused
Feb 9 00:47:52.022985 kubelet[1748]: I0209 00:47:52.022971 1748 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 00:47:52.023051 kubelet[1748]: E0209 00:47:52.023001 1748 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused
Feb 9 00:47:52.023077 kubelet[1748]: W0209 00:47:52.023053 1748 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused
Feb 9 00:47:52.023104 kubelet[1748]: E0209 00:47:52.023078 1748 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused
Feb 9 00:47:52.023299 kubelet[1748]: W0209 00:47:52.023272 1748 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 00:47:52.023691 kubelet[1748]: I0209 00:47:52.023673 1748 server.go:1186] "Started kubelet"
Feb 9 00:47:52.023963 kubelet[1748]: I0209 00:47:52.023934 1748 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 00:47:52.024817 kubelet[1748]: E0209 00:47:52.024449 1748 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b20b55d66102c2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 0, 47, 52, 23646914, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 0, 47, 52, 23646914, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.69:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.69:6443: connect: connection refused'(may retry after sleeping)
Feb 9 00:47:52.025120 kubelet[1748]: I0209 00:47:52.025094 1748 server.go:451] "Adding debug handlers to kubelet server"
Feb 9 00:47:52.026123 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 9 00:47:52.026210 kubelet[1748]: I0209 00:47:52.026193 1748 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 00:47:52.026921 kubelet[1748]: E0209 00:47:52.026899 1748 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 00:47:52.026921 kubelet[1748]: E0209 00:47:52.026924 1748 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 00:47:52.028355 kubelet[1748]: E0209 00:47:52.028330 1748 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 9 00:47:52.028406 kubelet[1748]: I0209 00:47:52.028359 1748 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 9 00:47:52.028430 kubelet[1748]: I0209 00:47:52.028425 1748 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 00:47:52.028760 kubelet[1748]: W0209 00:47:52.028717 1748 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused
Feb 9 00:47:52.028760 kubelet[1748]: E0209 00:47:52.028758 1748 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused
Feb 9 00:47:52.028987 kubelet[1748]: E0209 00:47:52.028968 1748 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.69:6443: connect: connection refused
Feb 9 00:47:52.057964 kubelet[1748]: I0209 00:47:52.057920 1748 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 00:47:52.059593 kubelet[1748]: I0209 00:47:52.059568 1748 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 00:47:52.059593 kubelet[1748]: I0209 00:47:52.059590 1748 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 00:47:52.059702 kubelet[1748]: I0209 00:47:52.059605 1748 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 00:47:52.062958 kubelet[1748]: I0209 00:47:52.062939 1748 policy_none.go:49] "None policy: Start"
Feb 9 00:47:52.063540 kubelet[1748]: I0209 00:47:52.063516 1748 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 00:47:52.063540 kubelet[1748]: I0209 00:47:52.063536 1748 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 00:47:52.069603 kubelet[1748]: I0209 00:47:52.069575 1748 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 00:47:52.069761 kubelet[1748]: I0209 00:47:52.069746 1748 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 00:47:52.070832 kubelet[1748]: E0209 00:47:52.070806 1748 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 9 00:47:52.078466 kubelet[1748]: I0209 00:47:52.078443 1748 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 00:47:52.078466 kubelet[1748]: I0209 00:47:52.078467 1748 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 9 00:47:52.078553 kubelet[1748]: I0209 00:47:52.078487 1748 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 9 00:47:52.078553 kubelet[1748]: E0209 00:47:52.078535 1748 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 9 00:47:52.078976 kubelet[1748]: W0209 00:47:52.078931 1748 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused
Feb 9 00:47:52.079126 kubelet[1748]: E0209 00:47:52.079109 1748 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused
Feb 9 00:47:52.130109 kubelet[1748]: I0209 00:47:52.130079 1748 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 9 00:47:52.130443 kubelet[1748]: E0209 00:47:52.130430 1748 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost"
Feb 9 00:47:52.179571 kubelet[1748]: I0209 00:47:52.179528 1748 topology_manager.go:210] "Topology Admit Handler"
Feb 9 00:47:52.180481 kubelet[1748]: I0209 00:47:52.180458 1748 topology_manager.go:210] "Topology Admit Handler"
Feb 9 00:47:52.181342 kubelet[1748]: I0209 00:47:52.181321 1748 topology_manager.go:210] "Topology Admit Handler"
Feb 9 00:47:52.181663 kubelet[1748]: I0209 00:47:52.181642 1748 status_manager.go:698] "Failed to get status for pod" podUID=0115fad8c735f8ab03bbbcff4c2dd433 pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.69:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.69:6443: connect: connection refused"
Feb 9 00:47:52.182318 kubelet[1748]: I0209 00:47:52.182279 1748 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.69:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.69:6443: connect: connection refused"
Feb 9 00:47:52.183335 kubelet[1748]: I0209 00:47:52.183319 1748 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.69:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.69:6443: connect: connection refused"
Feb 9 00:47:52.229334 kubelet[1748]: I0209 00:47:52.229316 1748 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost"
Feb 9 00:47:52.229483 kubelet[1748]: I0209 00:47:52.229439 1748 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0115fad8c735f8ab03bbbcff4c2dd433-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0115fad8c735f8ab03bbbcff4c2dd433\") " pod="kube-system/kube-apiserver-localhost"
Feb 9 00:47:52.229483 kubelet[1748]: I0209 00:47:52.229488 1748 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 00:47:52.229625 kubelet[1748]: E0209 00:47:52.229458 1748 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.69:6443: connect: connection refused
Feb 9 00:47:52.229625 kubelet[1748]: I0209 00:47:52.229521 1748 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 00:47:52.229625 kubelet[1748]: I0209 00:47:52.229551 1748 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 00:47:52.229625 kubelet[1748]: I0209 00:47:52.229579 1748 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 00:47:52.229625 kubelet[1748]: I0209 00:47:52.229600 1748 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 00:47:52.229734 kubelet[1748]: I0209 00:47:52.229621 1748 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0115fad8c735f8ab03bbbcff4c2dd433-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0115fad8c735f8ab03bbbcff4c2dd433\") " pod="kube-system/kube-apiserver-localhost"
Feb 9 00:47:52.229734 kubelet[1748]: I0209 00:47:52.229649 1748 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0115fad8c735f8ab03bbbcff4c2dd433-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0115fad8c735f8ab03bbbcff4c2dd433\") " pod="kube-system/kube-apiserver-localhost"
Feb 9 00:47:52.331601 kubelet[1748]: I0209 00:47:52.331579 1748 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 9 00:47:52.331912 kubelet[1748]: E0209 00:47:52.331889 1748 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost"
Feb 9 00:47:52.484503 kubelet[1748]: E0209 00:47:52.484471 1748 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:47:52.485091 env[1221]: time="2024-02-09T00:47:52.485057529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0115fad8c735f8ab03bbbcff4c2dd433,Namespace:kube-system,Attempt:0,}"
Feb 9 00:47:52.486192 kubelet[1748]: E0209 00:47:52.486166 1748 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:47:52.486707 kubelet[1748]: E0209 00:47:52.486457 1748 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:47:52.486758 env[1221]: time="2024-02-09T00:47:52.486577769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}"
Feb 9 00:47:52.486841 env[1221]: time="2024-02-09T00:47:52.486820384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}"
Feb 9 00:47:52.630646 kubelet[1748]: E0209 00:47:52.630525 1748 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.69:6443: connect: connection refused
Feb 9 00:47:52.733989 kubelet[1748]: I0209 00:47:52.733953 1748 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 9 00:47:52.734414 kubelet[1748]: E0209 00:47:52.734394 1748 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost"
Feb 9 00:47:52.954935 kubelet[1748]: W0209 00:47:52.954809 1748 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused
Feb 9 00:47:52.954935 kubelet[1748]: E0209 00:47:52.954872 1748 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused
Feb 9 00:47:53.003308 kubelet[1748]: W0209 00:47:53.003239 1748 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused
Feb 9 00:47:53.003308 kubelet[1748]: E0209 00:47:53.003316 1748 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused
Feb 9 00:47:53.043494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3801463921.mount: Deactivated successfully.
Feb 9 00:47:53.050356 env[1221]: time="2024-02-09T00:47:53.050303271Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:53.053467 env[1221]: time="2024-02-09T00:47:53.053428973Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:53.054665 env[1221]: time="2024-02-09T00:47:53.054630007Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:53.055646 env[1221]: time="2024-02-09T00:47:53.055619082Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:53.059075 env[1221]: time="2024-02-09T00:47:53.059050563Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:53.060255 env[1221]: time="2024-02-09T00:47:53.060220086Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:53.061250 env[1221]: time="2024-02-09T00:47:53.061222697Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:53.062333 env[1221]: time="2024-02-09T00:47:53.062302127Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:53.065806 env[1221]: time="2024-02-09T00:47:53.065771421Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:53.067275 env[1221]: time="2024-02-09T00:47:53.067247935Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:53.068474 env[1221]: time="2024-02-09T00:47:53.068449390Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:53.069639 env[1221]: time="2024-02-09T00:47:53.069610325Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:47:53.093842 env[1221]: time="2024-02-09T00:47:53.093778916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 00:47:53.093842 env[1221]: time="2024-02-09T00:47:53.093817701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 00:47:53.094001 env[1221]: time="2024-02-09T00:47:53.093840715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 00:47:53.094114 env[1221]: time="2024-02-09T00:47:53.094059416Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2e92bcec2e3dcd901beab67d10d85588fbc7fb90cb4f532502e568e0a73470a pid=1827 runtime=io.containerd.runc.v2
Feb 9 00:47:53.112393 env[1221]: time="2024-02-09T00:47:53.111945012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 00:47:53.112393 env[1221]: time="2024-02-09T00:47:53.112012943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 00:47:53.112393 env[1221]: time="2024-02-09T00:47:53.112036709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 00:47:53.112393 env[1221]: time="2024-02-09T00:47:53.112058090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 00:47:53.112393 env[1221]: time="2024-02-09T00:47:53.112112204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 00:47:53.112393 env[1221]: time="2024-02-09T00:47:53.112129107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 00:47:53.112393 env[1221]: time="2024-02-09T00:47:53.112277913Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d44f2dca3984990f52fe1d0f1c397e0774256b791ede68b8ea0fdd32512d764b pid=1861 runtime=io.containerd.runc.v2
Feb 9 00:47:53.118146 env[1221]: time="2024-02-09T00:47:53.112745213Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e83e8990c887ec0136e7a5eb502f06288b083bd0ac058fa248882a8744abef98 pid=1854 runtime=io.containerd.runc.v2
Feb 9 00:47:53.149617 env[1221]: time="2024-02-09T00:47:53.149488568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2e92bcec2e3dcd901beab67d10d85588fbc7fb90cb4f532502e568e0a73470a\""
Feb 9 00:47:53.150234 kubelet[1748]: E0209 00:47:53.150193 1748 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:47:53.154826 env[1221]: time="2024-02-09T00:47:53.154784641Z" level=info msg="CreateContainer within sandbox \"f2e92bcec2e3dcd901beab67d10d85588fbc7fb90cb4f532502e568e0a73470a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 9 00:47:53.161398 env[1221]: time="2024-02-09T00:47:53.161349748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0115fad8c735f8ab03bbbcff4c2dd433,Namespace:kube-system,Attempt:0,} returns sandbox id \"e83e8990c887ec0136e7a5eb502f06288b083bd0ac058fa248882a8744abef98\""
Feb 9 00:47:53.162426 kubelet[1748]: E0209 00:47:53.162401 1748 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9
00:47:53.165104 env[1221]: time="2024-02-09T00:47:53.165073802Z" level=info msg="CreateContainer within sandbox \"e83e8990c887ec0136e7a5eb502f06288b083bd0ac058fa248882a8744abef98\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 00:47:53.169507 env[1221]: time="2024-02-09T00:47:53.169448170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d44f2dca3984990f52fe1d0f1c397e0774256b791ede68b8ea0fdd32512d764b\"" Feb 9 00:47:53.170013 kubelet[1748]: E0209 00:47:53.169987 1748 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:47:53.172122 env[1221]: time="2024-02-09T00:47:53.172089107Z" level=info msg="CreateContainer within sandbox \"d44f2dca3984990f52fe1d0f1c397e0774256b791ede68b8ea0fdd32512d764b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 00:47:53.182247 env[1221]: time="2024-02-09T00:47:53.182202570Z" level=info msg="CreateContainer within sandbox \"f2e92bcec2e3dcd901beab67d10d85588fbc7fb90cb4f532502e568e0a73470a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"828c902e618999ae9b74706eace92637595daf84363e6f179aaad6456bbd37df\"" Feb 9 00:47:53.182890 env[1221]: time="2024-02-09T00:47:53.182857682Z" level=info msg="StartContainer for \"828c902e618999ae9b74706eace92637595daf84363e6f179aaad6456bbd37df\"" Feb 9 00:47:53.199481 env[1221]: time="2024-02-09T00:47:53.199435759Z" level=info msg="CreateContainer within sandbox \"e83e8990c887ec0136e7a5eb502f06288b083bd0ac058fa248882a8744abef98\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d65855edbef78f3755422541a9bc0c70107573894fa5c826dfaea2fddc03229a\"" Feb 9 00:47:53.199955 env[1221]: time="2024-02-09T00:47:53.199909331Z" level=info 
msg="StartContainer for \"d65855edbef78f3755422541a9bc0c70107573894fa5c826dfaea2fddc03229a\"" Feb 9 00:47:53.205021 env[1221]: time="2024-02-09T00:47:53.204928571Z" level=info msg="CreateContainer within sandbox \"d44f2dca3984990f52fe1d0f1c397e0774256b791ede68b8ea0fdd32512d764b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"32f4de9f3bed9f7821727764c0eff5a6b3bdf43c0220f5f97dbf51c315b5d07e\"" Feb 9 00:47:53.207998 env[1221]: time="2024-02-09T00:47:53.207957816Z" level=info msg="StartContainer for \"32f4de9f3bed9f7821727764c0eff5a6b3bdf43c0220f5f97dbf51c315b5d07e\"" Feb 9 00:47:53.216954 kubelet[1748]: E0209 00:47:53.216838 1748 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b20b55d66102c2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 0, 47, 52, 23646914, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 0, 47, 52, 23646914, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.69:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.69:6443: connect: connection refused'(may retry 
after sleeping) Feb 9 00:47:53.248559 env[1221]: time="2024-02-09T00:47:53.248498466Z" level=info msg="StartContainer for \"828c902e618999ae9b74706eace92637595daf84363e6f179aaad6456bbd37df\" returns successfully" Feb 9 00:47:53.281654 env[1221]: time="2024-02-09T00:47:53.281611607Z" level=info msg="StartContainer for \"d65855edbef78f3755422541a9bc0c70107573894fa5c826dfaea2fddc03229a\" returns successfully" Feb 9 00:47:53.291196 env[1221]: time="2024-02-09T00:47:53.291157999Z" level=info msg="StartContainer for \"32f4de9f3bed9f7821727764c0eff5a6b3bdf43c0220f5f97dbf51c315b5d07e\" returns successfully" Feb 9 00:47:53.536168 kubelet[1748]: I0209 00:47:53.536086 1748 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 00:47:54.083891 kubelet[1748]: E0209 00:47:54.083858 1748 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:47:54.085780 kubelet[1748]: E0209 00:47:54.085758 1748 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:47:54.087559 kubelet[1748]: E0209 00:47:54.087535 1748 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:47:54.679521 kubelet[1748]: E0209 00:47:54.679484 1748 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 9 00:47:54.762193 kubelet[1748]: I0209 00:47:54.762079 1748 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 00:47:55.024565 kubelet[1748]: I0209 00:47:55.024452 1748 apiserver.go:52] "Watching apiserver" Feb 9 00:47:55.428649 kubelet[1748]: I0209 00:47:55.428617 1748 
desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 00:47:55.452829 kubelet[1748]: I0209 00:47:55.452807 1748 reconciler.go:41] "Reconciler: start to sync state" Feb 9 00:47:55.626845 kubelet[1748]: E0209 00:47:55.626800 1748 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 9 00:47:55.627214 kubelet[1748]: E0209 00:47:55.627194 1748 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:47:55.826054 kubelet[1748]: E0209 00:47:55.826027 1748 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:47:56.026178 kubelet[1748]: E0209 00:47:56.026140 1748 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:47:56.089202 kubelet[1748]: E0209 00:47:56.089128 1748 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:47:56.089776 kubelet[1748]: E0209 00:47:56.089756 1748 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:47:57.225343 systemd[1]: Reloading. 
Feb 9 00:47:57.290696 /usr/lib/systemd/system-generators/torcx-generator[2083]: time="2024-02-09T00:47:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 00:47:57.290770 /usr/lib/systemd/system-generators/torcx-generator[2083]: time="2024-02-09T00:47:57Z" level=info msg="torcx already run" Feb 9 00:47:57.356205 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 00:47:57.356221 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 00:47:57.372484 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 00:47:57.450492 systemd[1]: Stopping kubelet.service... Feb 9 00:47:57.468884 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 00:47:57.469263 systemd[1]: Stopped kubelet.service. Feb 9 00:47:57.471344 systemd[1]: Started kubelet.service. Feb 9 00:47:57.579663 kubelet[2130]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 00:47:57.580117 kubelet[2130]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 00:47:57.580267 kubelet[2130]: I0209 00:47:57.580234 2130 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 00:47:57.581988 kubelet[2130]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 00:47:57.582057 kubelet[2130]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 00:47:57.586390 kubelet[2130]: I0209 00:47:57.586318 2130 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 00:47:57.586390 kubelet[2130]: I0209 00:47:57.586339 2130 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 00:47:57.586662 kubelet[2130]: I0209 00:47:57.586572 2130 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 00:47:57.587975 kubelet[2130]: I0209 00:47:57.587950 2130 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 00:47:57.591158 kubelet[2130]: I0209 00:47:57.591107 2130 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 00:47:57.599807 kubelet[2130]: I0209 00:47:57.599280 2130 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 00:47:57.600179 kubelet[2130]: I0209 00:47:57.599880 2130 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 00:47:57.600179 kubelet[2130]: I0209 00:47:57.599966 2130 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 00:47:57.600179 kubelet[2130]: I0209 00:47:57.599991 2130 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 00:47:57.600179 kubelet[2130]: I0209 00:47:57.600005 2130 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 00:47:57.600179 kubelet[2130]: I0209 00:47:57.600044 2130 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 00:47:57.603979 kubelet[2130]: I0209 00:47:57.603941 2130 kubelet.go:398] "Attempting to sync node with API server" Feb 9 00:47:57.603979 kubelet[2130]: I0209 00:47:57.603973 2130 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 00:47:57.604180 kubelet[2130]: I0209 00:47:57.604003 2130 kubelet.go:297] "Adding apiserver pod source" Feb 9 00:47:57.604180 kubelet[2130]: I0209 00:47:57.604023 2130 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 00:47:57.609732 kubelet[2130]: I0209 00:47:57.609508 2130 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 00:47:57.612671 kubelet[2130]: I0209 00:47:57.612351 2130 server.go:1186] "Started kubelet" Feb 9 00:47:57.615925 kubelet[2130]: E0209 00:47:57.615907 2130 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 00:47:57.616064 kubelet[2130]: E0209 00:47:57.616047 2130 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 00:47:57.616662 kubelet[2130]: I0209 00:47:57.616627 2130 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 00:47:57.616841 kubelet[2130]: I0209 00:47:57.616827 2130 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 00:47:57.617684 kubelet[2130]: I0209 00:47:57.617670 2130 server.go:451] "Adding debug handlers to kubelet server" Feb 9 00:47:57.619611 kubelet[2130]: I0209 00:47:57.618820 2130 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 00:47:57.619773 kubelet[2130]: I0209 00:47:57.619758 2130 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 00:47:57.652777 kubelet[2130]: I0209 00:47:57.652751 2130 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 00:47:57.680833 sudo[2176]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 00:47:57.681054 sudo[2176]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 00:47:57.692189 kubelet[2130]: I0209 00:47:57.692153 2130 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 00:47:57.692189 kubelet[2130]: I0209 00:47:57.692182 2130 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 00:47:57.692398 kubelet[2130]: I0209 00:47:57.692207 2130 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 00:47:57.692398 kubelet[2130]: E0209 00:47:57.692263 2130 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 00:47:57.699816 kubelet[2130]: I0209 00:47:57.699785 2130 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 00:47:57.699816 kubelet[2130]: I0209 00:47:57.699810 2130 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 00:47:57.699940 kubelet[2130]: I0209 00:47:57.699832 2130 state_mem.go:36] "Initialized new in-memory state store" Feb 9 00:47:57.700173 kubelet[2130]: I0209 00:47:57.700152 2130 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 00:47:57.700250 kubelet[2130]: I0209 00:47:57.700210 2130 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 00:47:57.700250 kubelet[2130]: I0209 00:47:57.700224 2130 policy_none.go:49] "None policy: Start" Feb 9 00:47:57.701670 kubelet[2130]: I0209 00:47:57.701652 2130 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 00:47:57.701734 kubelet[2130]: I0209 00:47:57.701702 2130 state_mem.go:35] "Initializing new in-memory state store" Feb 9 00:47:57.701922 kubelet[2130]: I0209 00:47:57.701903 2130 state_mem.go:75] "Updated machine memory state" Feb 9 00:47:57.705325 kubelet[2130]: I0209 00:47:57.705272 2130 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 00:47:57.705325 kubelet[2130]: I0209 00:47:57.705686 2130 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 00:47:57.725907 kubelet[2130]: I0209 00:47:57.725839 2130 kubelet_node_status.go:70] "Attempting to register node" 
node="localhost" Feb 9 00:47:57.734401 kubelet[2130]: I0209 00:47:57.734383 2130 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 00:47:57.734574 kubelet[2130]: I0209 00:47:57.734563 2130 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 00:47:57.793301 kubelet[2130]: I0209 00:47:57.793234 2130 topology_manager.go:210] "Topology Admit Handler" Feb 9 00:47:57.793466 kubelet[2130]: I0209 00:47:57.793356 2130 topology_manager.go:210] "Topology Admit Handler" Feb 9 00:47:57.793466 kubelet[2130]: I0209 00:47:57.793384 2130 topology_manager.go:210] "Topology Admit Handler" Feb 9 00:47:57.800061 kubelet[2130]: E0209 00:47:57.800026 2130 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 00:47:57.801038 kubelet[2130]: E0209 00:47:57.801022 2130 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 00:47:57.933937 kubelet[2130]: I0209 00:47:57.933829 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0115fad8c735f8ab03bbbcff4c2dd433-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0115fad8c735f8ab03bbbcff4c2dd433\") " pod="kube-system/kube-apiserver-localhost" Feb 9 00:47:57.933937 kubelet[2130]: I0209 00:47:57.933887 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:47:57.933937 kubelet[2130]: I0209 00:47:57.933922 2130 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:47:57.934136 kubelet[2130]: I0209 00:47:57.933954 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:47:57.934136 kubelet[2130]: I0209 00:47:57.933982 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0115fad8c735f8ab03bbbcff4c2dd433-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0115fad8c735f8ab03bbbcff4c2dd433\") " pod="kube-system/kube-apiserver-localhost" Feb 9 00:47:57.934136 kubelet[2130]: I0209 00:47:57.934007 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0115fad8c735f8ab03bbbcff4c2dd433-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0115fad8c735f8ab03bbbcff4c2dd433\") " pod="kube-system/kube-apiserver-localhost" Feb 9 00:47:57.934136 kubelet[2130]: I0209 00:47:57.934030 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:47:57.934136 kubelet[2130]: I0209 00:47:57.934056 2130 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:47:57.934253 kubelet[2130]: I0209 00:47:57.934081 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 00:47:58.103527 kubelet[2130]: E0209 00:47:58.103480 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:47:58.103756 kubelet[2130]: E0209 00:47:58.103720 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:47:58.117262 kubelet[2130]: E0209 00:47:58.116172 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:47:58.239159 sudo[2176]: pam_unix(sudo:session): session closed for user root Feb 9 00:47:58.609854 kubelet[2130]: I0209 00:47:58.609795 2130 apiserver.go:52] "Watching apiserver" Feb 9 00:47:58.620237 kubelet[2130]: I0209 00:47:58.620204 2130 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 00:47:58.639413 kubelet[2130]: I0209 00:47:58.639365 2130 reconciler.go:41] "Reconciler: start to sync state" Feb 9 00:47:58.725346 kubelet[2130]: E0209 00:47:58.725316 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:47:59.009546 kubelet[2130]: E0209 00:47:59.009440 2130 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 00:47:59.010047 kubelet[2130]: E0209 00:47:59.010029 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:47:59.209570 kubelet[2130]: E0209 00:47:59.209537 2130 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 00:47:59.209815 kubelet[2130]: E0209 00:47:59.209799 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:47:59.246579 sudo[1335]: pam_unix(sudo:session): session closed for user root Feb 9 00:47:59.248043 sshd[1329]: pam_unix(sshd:session): session closed for user core Feb 9 00:47:59.250913 systemd[1]: sshd@4-10.0.0.69:22-10.0.0.1:47264.service: Deactivated successfully. Feb 9 00:47:59.252037 systemd-logind[1200]: Session 5 logged out. Waiting for processes to exit. Feb 9 00:47:59.252075 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 00:47:59.253110 systemd-logind[1200]: Removed session 5. 
Feb 9 00:47:59.412773 kubelet[2130]: I0209 00:47:59.412737 2130 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.41267907 pod.CreationTimestamp="2024-02-09 00:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:47:59.412418442 +0000 UTC m=+1.936295185" watchObservedRunningTime="2024-02-09 00:47:59.41267907 +0000 UTC m=+1.936555803" Feb 9 00:47:59.726270 kubelet[2130]: E0209 00:47:59.726162 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:47:59.726270 kubelet[2130]: E0209 00:47:59.726212 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:47:59.726798 kubelet[2130]: E0209 00:47:59.726483 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:47:59.809214 kubelet[2130]: I0209 00:47:59.809182 2130 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.809147631 pod.CreationTimestamp="2024-02-09 00:47:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:47:59.808982195 +0000 UTC m=+2.332858928" watchObservedRunningTime="2024-02-09 00:47:59.809147631 +0000 UTC m=+2.333024364" Feb 9 00:48:00.209349 kubelet[2130]: I0209 00:48:00.209309 2130 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.209243704 
pod.CreationTimestamp="2024-02-09 00:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:48:00.20922607 +0000 UTC m=+2.733102803" watchObservedRunningTime="2024-02-09 00:48:00.209243704 +0000 UTC m=+2.733120427" Feb 9 00:48:04.250493 kubelet[2130]: E0209 00:48:04.250448 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:48:04.742811 kubelet[2130]: E0209 00:48:04.742785 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:48:05.455316 update_engine[1202]: I0209 00:48:05.455258 1202 update_attempter.cc:509] Updating boot flags... Feb 9 00:48:05.744605 kubelet[2130]: E0209 00:48:05.744280 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:48:08.897302 kubelet[2130]: E0209 00:48:08.897254 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:48:09.087719 kubelet[2130]: E0209 00:48:09.087669 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:48:10.426442 kubelet[2130]: I0209 00:48:10.426402 2130 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 00:48:10.426824 env[1221]: time="2024-02-09T00:48:10.426710654Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 00:48:10.427023 kubelet[2130]: I0209 00:48:10.426911 2130 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 9 00:48:11.299355 kubelet[2130]: I0209 00:48:11.299317 2130 topology_manager.go:210] "Topology Admit Handler"
Feb 9 00:48:11.303937 kubelet[2130]: I0209 00:48:11.303911 2130 topology_manager.go:210] "Topology Admit Handler"
Feb 9 00:48:11.324548 kubelet[2130]: I0209 00:48:11.324516 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3f5c611e-f8bf-47ba-b2b2-3a8fd86c0129-kube-proxy\") pod \"kube-proxy-w8zs6\" (UID: \"3f5c611e-f8bf-47ba-b2b2-3a8fd86c0129\") " pod="kube-system/kube-proxy-w8zs6"
Feb 9 00:48:11.324548 kubelet[2130]: I0209 00:48:11.324555 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f5c611e-f8bf-47ba-b2b2-3a8fd86c0129-xtables-lock\") pod \"kube-proxy-w8zs6\" (UID: \"3f5c611e-f8bf-47ba-b2b2-3a8fd86c0129\") " pod="kube-system/kube-proxy-w8zs6"
Feb 9 00:48:11.324720 kubelet[2130]: I0209 00:48:11.324577 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f5c611e-f8bf-47ba-b2b2-3a8fd86c0129-lib-modules\") pod \"kube-proxy-w8zs6\" (UID: \"3f5c611e-f8bf-47ba-b2b2-3a8fd86c0129\") " pod="kube-system/kube-proxy-w8zs6"
Feb 9 00:48:11.324720 kubelet[2130]: I0209 00:48:11.324602 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxcl8\" (UniqueName: \"kubernetes.io/projected/3f5c611e-f8bf-47ba-b2b2-3a8fd86c0129-kube-api-access-pxcl8\") pod \"kube-proxy-w8zs6\" (UID: \"3f5c611e-f8bf-47ba-b2b2-3a8fd86c0129\") " pod="kube-system/kube-proxy-w8zs6"
Feb 9 00:48:11.425023 kubelet[2130]: I0209 00:48:11.424995 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-cilium-run\") pod \"cilium-chqx8\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") " pod="kube-system/cilium-chqx8"
Feb 9 00:48:11.425182 kubelet[2130]: I0209 00:48:11.425050 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-hostproc\") pod \"cilium-chqx8\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") " pod="kube-system/cilium-chqx8"
Feb 9 00:48:11.425210 kubelet[2130]: I0209 00:48:11.425171 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-etc-cni-netd\") pod \"cilium-chqx8\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") " pod="kube-system/cilium-chqx8"
Feb 9 00:48:11.425236 kubelet[2130]: I0209 00:48:11.425213 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-clustermesh-secrets\") pod \"cilium-chqx8\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") " pod="kube-system/cilium-chqx8"
Feb 9 00:48:11.425328 kubelet[2130]: I0209 00:48:11.425264 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-bpf-maps\") pod \"cilium-chqx8\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") " pod="kube-system/cilium-chqx8"
Feb 9 00:48:11.425356 kubelet[2130]: I0209 00:48:11.425328 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-host-proc-sys-kernel\") pod \"cilium-chqx8\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") " pod="kube-system/cilium-chqx8"
Feb 9 00:48:11.425397 kubelet[2130]: I0209 00:48:11.425385 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-xtables-lock\") pod \"cilium-chqx8\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") " pod="kube-system/cilium-chqx8"
Feb 9 00:48:11.425432 kubelet[2130]: I0209 00:48:11.425418 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qbgf\" (UniqueName: \"kubernetes.io/projected/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-kube-api-access-2qbgf\") pod \"cilium-chqx8\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") " pod="kube-system/cilium-chqx8"
Feb 9 00:48:11.425466 kubelet[2130]: I0209 00:48:11.425455 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-lib-modules\") pod \"cilium-chqx8\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") " pod="kube-system/cilium-chqx8"
Feb 9 00:48:11.425494 kubelet[2130]: I0209 00:48:11.425478 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-cilium-config-path\") pod \"cilium-chqx8\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") " pod="kube-system/cilium-chqx8"
Feb 9 00:48:11.425519 kubelet[2130]: I0209 00:48:11.425512 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-cilium-cgroup\") pod \"cilium-chqx8\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") " pod="kube-system/cilium-chqx8"
Feb 9 00:48:11.425543 kubelet[2130]: I0209 00:48:11.425532 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-cni-path\") pod \"cilium-chqx8\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") " pod="kube-system/cilium-chqx8"
Feb 9 00:48:11.425567 kubelet[2130]: I0209 00:48:11.425555 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-host-proc-sys-net\") pod \"cilium-chqx8\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") " pod="kube-system/cilium-chqx8"
Feb 9 00:48:11.425592 kubelet[2130]: I0209 00:48:11.425572 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-hubble-tls\") pod \"cilium-chqx8\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") " pod="kube-system/cilium-chqx8"
Feb 9 00:48:11.462498 kubelet[2130]: I0209 00:48:11.462470 2130 topology_manager.go:210] "Topology Admit Handler"
Feb 9 00:48:11.526545 kubelet[2130]: I0209 00:48:11.526513 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48e2af9a-4f3c-4242-ae26-cd4e1e47d507-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-6vnbf\" (UID: \"48e2af9a-4f3c-4242-ae26-cd4e1e47d507\") " pod="kube-system/cilium-operator-f59cbd8c6-6vnbf"
Feb 9 00:48:11.526706 kubelet[2130]: I0209 00:48:11.526654 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgwtq\" (UniqueName: \"kubernetes.io/projected/48e2af9a-4f3c-4242-ae26-cd4e1e47d507-kube-api-access-kgwtq\") pod \"cilium-operator-f59cbd8c6-6vnbf\" (UID: \"48e2af9a-4f3c-4242-ae26-cd4e1e47d507\") " pod="kube-system/cilium-operator-f59cbd8c6-6vnbf"
Feb 9 00:48:11.608032 kubelet[2130]: E0209 00:48:11.607762 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:11.608621 env[1221]: time="2024-02-09T00:48:11.608569051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w8zs6,Uid:3f5c611e-f8bf-47ba-b2b2-3a8fd86c0129,Namespace:kube-system,Attempt:0,}"
Feb 9 00:48:11.795343 env[1221]: time="2024-02-09T00:48:11.795258049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 00:48:11.795343 env[1221]: time="2024-02-09T00:48:11.795312651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 00:48:11.795514 env[1221]: time="2024-02-09T00:48:11.795322651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 00:48:11.795707 env[1221]: time="2024-02-09T00:48:11.795666260Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b13c971152bffe9cf6de34e92e3b4f46a6f1677a757ab3f1b9940f9dab21d20 pid=2262 runtime=io.containerd.runc.v2
Feb 9 00:48:11.822243 env[1221]: time="2024-02-09T00:48:11.822199137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w8zs6,Uid:3f5c611e-f8bf-47ba-b2b2-3a8fd86c0129,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b13c971152bffe9cf6de34e92e3b4f46a6f1677a757ab3f1b9940f9dab21d20\""
Feb 9 00:48:11.823504 kubelet[2130]: E0209 00:48:11.823046 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:11.824920 env[1221]: time="2024-02-09T00:48:11.824882542Z" level=info msg="CreateContainer within sandbox \"5b13c971152bffe9cf6de34e92e3b4f46a6f1677a757ab3f1b9940f9dab21d20\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 9 00:48:11.913588 kubelet[2130]: E0209 00:48:11.913501 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:11.914391 env[1221]: time="2024-02-09T00:48:11.913848255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-chqx8,Uid:66c0b3c9-fc36-4fa8-a80c-50740d92d05e,Namespace:kube-system,Attempt:0,}"
Feb 9 00:48:12.139652 env[1221]: time="2024-02-09T00:48:12.139596321Z" level=info msg="CreateContainer within sandbox \"5b13c971152bffe9cf6de34e92e3b4f46a6f1677a757ab3f1b9940f9dab21d20\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6c5679592d76c02eea8671f382a182e08728bce82ff81a60ad94861c71238425\""
Feb 9 00:48:12.140240 env[1221]: time="2024-02-09T00:48:12.140214659Z" level=info msg="StartContainer for \"6c5679592d76c02eea8671f382a182e08728bce82ff81a60ad94861c71238425\""
Feb 9 00:48:12.146938 env[1221]: time="2024-02-09T00:48:12.146867173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 00:48:12.146938 env[1221]: time="2024-02-09T00:48:12.146904374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 00:48:12.146938 env[1221]: time="2024-02-09T00:48:12.146918992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 00:48:12.147187 env[1221]: time="2024-02-09T00:48:12.147145850Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493 pid=2312 runtime=io.containerd.runc.v2
Feb 9 00:48:12.181177 env[1221]: time="2024-02-09T00:48:12.181081052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-chqx8,Uid:66c0b3c9-fc36-4fa8-a80c-50740d92d05e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493\""
Feb 9 00:48:12.181704 kubelet[2130]: E0209 00:48:12.181667 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:12.182733 env[1221]: time="2024-02-09T00:48:12.182703227Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 9 00:48:12.194130 env[1221]: time="2024-02-09T00:48:12.194084368Z" level=info msg="StartContainer for \"6c5679592d76c02eea8671f382a182e08728bce82ff81a60ad94861c71238425\" returns successfully"
Feb 9 00:48:12.364894 kubelet[2130]: E0209 00:48:12.364848 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:12.365423 env[1221]: time="2024-02-09T00:48:12.365371899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-6vnbf,Uid:48e2af9a-4f3c-4242-ae26-cd4e1e47d507,Namespace:kube-system,Attempt:0,}"
Feb 9 00:48:12.386339 env[1221]: time="2024-02-09T00:48:12.386250371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 00:48:12.386487 env[1221]: time="2024-02-09T00:48:12.386348507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 00:48:12.386487 env[1221]: time="2024-02-09T00:48:12.386362263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 00:48:12.386721 env[1221]: time="2024-02-09T00:48:12.386671498Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6243f55be2013a67dab910ee74cd2506fbb14ebbca660159f302a3c26f5e86a pid=2451 runtime=io.containerd.runc.v2
Feb 9 00:48:12.431007 env[1221]: time="2024-02-09T00:48:12.430842215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-6vnbf,Uid:48e2af9a-4f3c-4242-ae26-cd4e1e47d507,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6243f55be2013a67dab910ee74cd2506fbb14ebbca660159f302a3c26f5e86a\""
Feb 9 00:48:12.431851 kubelet[2130]: E0209 00:48:12.431407 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:12.755671 kubelet[2130]: E0209 00:48:12.755582 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:12.767110 kubelet[2130]: I0209 00:48:12.766879 2130 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-w8zs6" podStartSLOduration=1.766846315 pod.CreationTimestamp="2024-02-09 00:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:48:12.766575473 +0000 UTC m=+15.290452236" watchObservedRunningTime="2024-02-09 00:48:12.766846315 +0000 UTC m=+15.290723048"
Feb 9 00:48:13.756346 kubelet[2130]: E0209 00:48:13.756311 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:17.249166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1420081538.mount: Deactivated successfully.
Feb 9 00:48:21.301357 env[1221]: time="2024-02-09T00:48:21.301303243Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:48:21.302880 env[1221]: time="2024-02-09T00:48:21.302848053Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:48:21.304457 env[1221]: time="2024-02-09T00:48:21.304421917Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:48:21.304889 env[1221]: time="2024-02-09T00:48:21.304865743Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 9 00:48:21.305752 env[1221]: time="2024-02-09T00:48:21.305718609Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 9 00:48:21.306533 env[1221]: time="2024-02-09T00:48:21.306502866Z" level=info msg="CreateContainer within sandbox \"2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 00:48:21.318479 env[1221]: time="2024-02-09T00:48:21.318427165Z" level=info msg="CreateContainer within sandbox \"2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7df72d0f7e11e511703a79363f96fca2b776eba709f1c26ef47532ca9b6e1fba\""
Feb 9 00:48:21.319163 env[1221]: time="2024-02-09T00:48:21.319124759Z" level=info msg="StartContainer for \"7df72d0f7e11e511703a79363f96fca2b776eba709f1c26ef47532ca9b6e1fba\""
Feb 9 00:48:21.358116 env[1221]: time="2024-02-09T00:48:21.358070581Z" level=info msg="StartContainer for \"7df72d0f7e11e511703a79363f96fca2b776eba709f1c26ef47532ca9b6e1fba\" returns successfully"
Feb 9 00:48:21.729606 env[1221]: time="2024-02-09T00:48:21.729558048Z" level=info msg="shim disconnected" id=7df72d0f7e11e511703a79363f96fca2b776eba709f1c26ef47532ca9b6e1fba
Feb 9 00:48:21.729606 env[1221]: time="2024-02-09T00:48:21.729603563Z" level=warning msg="cleaning up after shim disconnected" id=7df72d0f7e11e511703a79363f96fca2b776eba709f1c26ef47532ca9b6e1fba namespace=k8s.io
Feb 9 00:48:21.729606 env[1221]: time="2024-02-09T00:48:21.729612159Z" level=info msg="cleaning up dead shim"
Feb 9 00:48:21.735670 env[1221]: time="2024-02-09T00:48:21.735646505Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:48:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2572 runtime=io.containerd.runc.v2\n"
Feb 9 00:48:21.772987 kubelet[2130]: E0209 00:48:21.772951 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:21.777999 env[1221]: time="2024-02-09T00:48:21.777937027Z" level=info msg="CreateContainer within sandbox \"2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 00:48:21.795299 env[1221]: time="2024-02-09T00:48:21.795232472Z" level=info msg="CreateContainer within sandbox \"2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f716091975c7ae270aabc1c9169c3e2c90c995c67aa2bd06e094a4f23f8a5ff3\""
Feb 9 00:48:21.795864 env[1221]: time="2024-02-09T00:48:21.795827112Z" level=info msg="StartContainer for \"f716091975c7ae270aabc1c9169c3e2c90c995c67aa2bd06e094a4f23f8a5ff3\""
Feb 9 00:48:21.842644 env[1221]: time="2024-02-09T00:48:21.842583455Z" level=info msg="StartContainer for \"f716091975c7ae270aabc1c9169c3e2c90c995c67aa2bd06e094a4f23f8a5ff3\" returns successfully"
Feb 9 00:48:21.849395 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 00:48:21.849620 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 00:48:21.849768 systemd[1]: Stopping systemd-sysctl.service...
Feb 9 00:48:21.851136 systemd[1]: Starting systemd-sysctl.service...
Feb 9 00:48:21.860951 systemd[1]: Finished systemd-sysctl.service.
Feb 9 00:48:21.872621 env[1221]: time="2024-02-09T00:48:21.872566977Z" level=info msg="shim disconnected" id=f716091975c7ae270aabc1c9169c3e2c90c995c67aa2bd06e094a4f23f8a5ff3
Feb 9 00:48:21.872621 env[1221]: time="2024-02-09T00:48:21.872618846Z" level=warning msg="cleaning up after shim disconnected" id=f716091975c7ae270aabc1c9169c3e2c90c995c67aa2bd06e094a4f23f8a5ff3 namespace=k8s.io
Feb 9 00:48:21.872748 env[1221]: time="2024-02-09T00:48:21.872628774Z" level=info msg="cleaning up dead shim"
Feb 9 00:48:21.878696 env[1221]: time="2024-02-09T00:48:21.878653952Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:48:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2638 runtime=io.containerd.runc.v2\n"
Feb 9 00:48:22.316031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7df72d0f7e11e511703a79363f96fca2b776eba709f1c26ef47532ca9b6e1fba-rootfs.mount: Deactivated successfully.
Feb 9 00:48:22.657197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1936086882.mount: Deactivated successfully.
Feb 9 00:48:22.776255 kubelet[2130]: E0209 00:48:22.776219 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:22.777940 env[1221]: time="2024-02-09T00:48:22.777817961Z" level=info msg="CreateContainer within sandbox \"2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 00:48:22.798363 env[1221]: time="2024-02-09T00:48:22.798308723Z" level=info msg="CreateContainer within sandbox \"2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fdee257093ef53b91f29a10a321201012ce9a74a43807c5e7f1c9742fb00204e\""
Feb 9 00:48:22.798727 env[1221]: time="2024-02-09T00:48:22.798691394Z" level=info msg="StartContainer for \"fdee257093ef53b91f29a10a321201012ce9a74a43807c5e7f1c9742fb00204e\""
Feb 9 00:48:22.838496 env[1221]: time="2024-02-09T00:48:22.838453413Z" level=info msg="StartContainer for \"fdee257093ef53b91f29a10a321201012ce9a74a43807c5e7f1c9742fb00204e\" returns successfully"
Feb 9 00:48:22.903601 env[1221]: time="2024-02-09T00:48:22.903544524Z" level=info msg="shim disconnected" id=fdee257093ef53b91f29a10a321201012ce9a74a43807c5e7f1c9742fb00204e
Feb 9 00:48:22.903601 env[1221]: time="2024-02-09T00:48:22.903598766Z" level=warning msg="cleaning up after shim disconnected" id=fdee257093ef53b91f29a10a321201012ce9a74a43807c5e7f1c9742fb00204e namespace=k8s.io
Feb 9 00:48:22.903784 env[1221]: time="2024-02-09T00:48:22.903612011Z" level=info msg="cleaning up dead shim"
Feb 9 00:48:22.912084 env[1221]: time="2024-02-09T00:48:22.912007689Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:48:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2695 runtime=io.containerd.runc.v2\n"
Feb 9 00:48:23.252997 env[1221]: time="2024-02-09T00:48:23.252887123Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:48:23.254791 env[1221]: time="2024-02-09T00:48:23.254737336Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:48:23.256278 env[1221]: time="2024-02-09T00:48:23.256249462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:48:23.256758 env[1221]: time="2024-02-09T00:48:23.256722163Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 9 00:48:23.259018 env[1221]: time="2024-02-09T00:48:23.258995080Z" level=info msg="CreateContainer within sandbox \"a6243f55be2013a67dab910ee74cd2506fbb14ebbca660159f302a3c26f5e86a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 9 00:48:23.269670 env[1221]: time="2024-02-09T00:48:23.269636473Z" level=info msg="CreateContainer within sandbox \"a6243f55be2013a67dab910ee74cd2506fbb14ebbca660159f302a3c26f5e86a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05\""
Feb 9 00:48:23.269923 env[1221]: time="2024-02-09T00:48:23.269902704Z" level=info msg="StartContainer for \"72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05\""
Feb 9 00:48:23.309171 env[1221]: time="2024-02-09T00:48:23.309128903Z" level=info msg="StartContainer for \"72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05\" returns successfully"
Feb 9 00:48:23.778534 kubelet[2130]: E0209 00:48:23.778497 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:23.780210 kubelet[2130]: E0209 00:48:23.780187 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:23.781930 env[1221]: time="2024-02-09T00:48:23.781891372Z" level=info msg="CreateContainer within sandbox \"2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 00:48:23.799883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount718795992.mount: Deactivated successfully.
Feb 9 00:48:23.803114 env[1221]: time="2024-02-09T00:48:23.803079317Z" level=info msg="CreateContainer within sandbox \"2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83\""
Feb 9 00:48:23.803674 env[1221]: time="2024-02-09T00:48:23.803658037Z" level=info msg="StartContainer for \"ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83\""
Feb 9 00:48:23.815879 kubelet[2130]: I0209 00:48:23.815369 2130 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-6vnbf" podStartSLOduration=-9.223372024039444e+09 pod.CreationTimestamp="2024-02-09 00:48:11 +0000 UTC" firstStartedPulling="2024-02-09 00:48:12.432550544 +0000 UTC m=+14.956427277" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:48:23.791444325 +0000 UTC m=+26.315321048" watchObservedRunningTime="2024-02-09 00:48:23.81533104 +0000 UTC m=+26.339207773"
Feb 9 00:48:23.855462 env[1221]: time="2024-02-09T00:48:23.855426517Z" level=info msg="StartContainer for \"ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83\" returns successfully"
Feb 9 00:48:24.030550 env[1221]: time="2024-02-09T00:48:24.030430324Z" level=info msg="shim disconnected" id=ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83
Feb 9 00:48:24.030550 env[1221]: time="2024-02-09T00:48:24.030486200Z" level=warning msg="cleaning up after shim disconnected" id=ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83 namespace=k8s.io
Feb 9 00:48:24.030550 env[1221]: time="2024-02-09T00:48:24.030497732Z" level=info msg="cleaning up dead shim"
Feb 9 00:48:24.037336 env[1221]: time="2024-02-09T00:48:24.037298821Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:48:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2786 runtime=io.containerd.runc.v2\n"
Feb 9 00:48:24.315825 systemd[1]: run-containerd-runc-k8s.io-ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83-runc.ZBw8GY.mount: Deactivated successfully.
Feb 9 00:48:24.315969 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83-rootfs.mount: Deactivated successfully.
Feb 9 00:48:24.785095 kubelet[2130]: E0209 00:48:24.785003 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:24.785802 kubelet[2130]: E0209 00:48:24.785783 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:24.788750 env[1221]: time="2024-02-09T00:48:24.788696972Z" level=info msg="CreateContainer within sandbox \"2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 00:48:24.803924 env[1221]: time="2024-02-09T00:48:24.803876770Z" level=info msg="CreateContainer within sandbox \"2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e\""
Feb 9 00:48:24.804567 env[1221]: time="2024-02-09T00:48:24.804534157Z" level=info msg="StartContainer for \"f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e\""
Feb 9 00:48:24.848162 env[1221]: time="2024-02-09T00:48:24.848108662Z" level=info msg="StartContainer for \"f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e\" returns successfully"
Feb 9 00:48:24.914332 kubelet[2130]: I0209 00:48:24.914267 2130 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 00:48:24.932320 kubelet[2130]: I0209 00:48:24.930479 2130 topology_manager.go:210] "Topology Admit Handler"
Feb 9 00:48:24.935081 kubelet[2130]: I0209 00:48:24.935065 2130 topology_manager.go:210] "Topology Admit Handler"
Feb 9 00:48:25.036761 kubelet[2130]: I0209 00:48:25.036649 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lgql\" (UniqueName: \"kubernetes.io/projected/d83809ba-cd83-4353-8438-e0e3129980ab-kube-api-access-6lgql\") pod \"coredns-787d4945fb-wwmcr\" (UID: \"d83809ba-cd83-4353-8438-e0e3129980ab\") " pod="kube-system/coredns-787d4945fb-wwmcr"
Feb 9 00:48:25.036761 kubelet[2130]: I0209 00:48:25.036686 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d83809ba-cd83-4353-8438-e0e3129980ab-config-volume\") pod \"coredns-787d4945fb-wwmcr\" (UID: \"d83809ba-cd83-4353-8438-e0e3129980ab\") " pod="kube-system/coredns-787d4945fb-wwmcr"
Feb 9 00:48:25.036761 kubelet[2130]: I0209 00:48:25.036726 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n882f\" (UniqueName: \"kubernetes.io/projected/a5f89a5e-27d4-41a0-bf84-b21fdd9e9864-kube-api-access-n882f\") pod \"coredns-787d4945fb-nm6bz\" (UID: \"a5f89a5e-27d4-41a0-bf84-b21fdd9e9864\") " pod="kube-system/coredns-787d4945fb-nm6bz"
Feb 9 00:48:25.036761 kubelet[2130]: I0209 00:48:25.036747 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5f89a5e-27d4-41a0-bf84-b21fdd9e9864-config-volume\") pod \"coredns-787d4945fb-nm6bz\" (UID: \"a5f89a5e-27d4-41a0-bf84-b21fdd9e9864\") " pod="kube-system/coredns-787d4945fb-nm6bz"
Feb 9 00:48:25.238998 kubelet[2130]: E0209 00:48:25.238948 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:25.239464 env[1221]: time="2024-02-09T00:48:25.239427094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-wwmcr,Uid:d83809ba-cd83-4353-8438-e0e3129980ab,Namespace:kube-system,Attempt:0,}"
Feb 9 00:48:25.245748 kubelet[2130]: E0209 00:48:25.245728 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:25.246182 env[1221]: time="2024-02-09T00:48:25.246119857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-nm6bz,Uid:a5f89a5e-27d4-41a0-bf84-b21fdd9e9864,Namespace:kube-system,Attempt:0,}"
Feb 9 00:48:25.788269 kubelet[2130]: E0209 00:48:25.788240 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:25.798237 kubelet[2130]: I0209 00:48:25.798213 2130 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-chqx8" podStartSLOduration=-9.223372022056597e+09 pod.CreationTimestamp="2024-02-09 00:48:11 +0000 UTC" firstStartedPulling="2024-02-09 00:48:12.18232877 +0000 UTC m=+14.706205503" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:48:25.797949548 +0000 UTC m=+28.321826281" watchObservedRunningTime="2024-02-09 00:48:25.798179029 +0000 UTC m=+28.322055762"
Feb 9 00:48:26.721194 systemd-networkd[1097]: cilium_host: Link UP
Feb 9 00:48:26.721335 systemd-networkd[1097]: cilium_net: Link UP
Feb 9 00:48:26.721337 systemd-networkd[1097]: cilium_net: Gained carrier
Feb 9 00:48:26.721469 systemd-networkd[1097]: cilium_host: Gained carrier
Feb 9 00:48:26.722529 systemd-networkd[1097]: cilium_host: Gained IPv6LL
Feb 9 00:48:26.723324 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 00:48:26.789799 kubelet[2130]: E0209 00:48:26.789744 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:26.801101 systemd-networkd[1097]: cilium_vxlan: Link UP
Feb 9 00:48:26.801107 systemd-networkd[1097]: cilium_vxlan: Gained carrier
Feb 9 00:48:26.984311 kernel: NET: Registered PF_ALG protocol family
Feb 9 00:48:27.484790 systemd-networkd[1097]: lxc_health: Link UP
Feb 9 00:48:27.492641 systemd-networkd[1097]: lxc_health: Gained carrier
Feb 9 00:48:27.493306 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 00:48:27.560411 systemd-networkd[1097]: cilium_net: Gained IPv6LL
Feb 9 00:48:27.773884 systemd-networkd[1097]: lxc51a77d4393e6: Link UP
Feb 9 00:48:27.781309 kernel: eth0: renamed from tmp1df1e
Feb 9 00:48:27.791222 kubelet[2130]: E0209 00:48:27.791196 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:27.795014 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 00:48:27.795112 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc51a77d4393e6: link becomes ready
Feb 9 00:48:27.795234 systemd-networkd[1097]: lxc51a77d4393e6: Gained carrier
Feb 9 00:48:27.798090 systemd-networkd[1097]: lxc5cd4f8332ca2: Link UP
Feb 9 00:48:27.807320 kernel: eth0: renamed from tmp5a489
Feb 9 00:48:27.819516 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5cd4f8332ca2: link becomes ready
Feb 9 00:48:27.817211 systemd-networkd[1097]: lxc5cd4f8332ca2: Gained carrier
Feb 9 00:48:28.520407 systemd-networkd[1097]: cilium_vxlan: Gained IPv6LL
Feb 9 00:48:28.793301 kubelet[2130]: E0209 00:48:28.793198 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:48:28.968410 systemd-networkd[1097]: lxc51a77d4393e6: Gained IPv6LL
Feb 9 00:48:29.160401 systemd-networkd[1097]: lxc5cd4f8332ca2: Gained IPv6LL
Feb 9 00:48:29.224410 systemd-networkd[1097]: lxc_health: Gained IPv6LL
Feb 9 00:48:29.795139 kubelet[2130]: E0209 00:48:29.795110 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:48:31.023277 env[1221]: time="2024-02-09T00:48:31.023191503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:48:31.023691 env[1221]: time="2024-02-09T00:48:31.023277033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:48:31.023691 env[1221]: time="2024-02-09T00:48:31.023361433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:48:31.024245 env[1221]: time="2024-02-09T00:48:31.023558884Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1df1e34978a5372f6bcbcffae1e7a0a7f5f77aec5970ce52f4a16ce250d70414 pid=3356 runtime=io.containerd.runc.v2 Feb 9 00:48:31.024831 env[1221]: time="2024-02-09T00:48:31.024770911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:48:31.024946 env[1221]: time="2024-02-09T00:48:31.024810365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:48:31.025063 env[1221]: time="2024-02-09T00:48:31.024933367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:48:31.026023 env[1221]: time="2024-02-09T00:48:31.025374877Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a489ead7cb3fa631419aba816b0bd3c0680d6839ce644c8912c3bc5ebde4606 pid=3352 runtime=io.containerd.runc.v2 Feb 9 00:48:31.054104 systemd-resolved[1150]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 00:48:31.056342 systemd-resolved[1150]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 00:48:31.079877 env[1221]: time="2024-02-09T00:48:31.079840931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-wwmcr,Uid:d83809ba-cd83-4353-8438-e0e3129980ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"1df1e34978a5372f6bcbcffae1e7a0a7f5f77aec5970ce52f4a16ce250d70414\"" Feb 9 00:48:31.080506 kubelet[2130]: E0209 00:48:31.080485 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:48:31.083013 env[1221]: time="2024-02-09T00:48:31.082320451Z" level=info msg="CreateContainer within sandbox \"1df1e34978a5372f6bcbcffae1e7a0a7f5f77aec5970ce52f4a16ce250d70414\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 00:48:31.088179 env[1221]: time="2024-02-09T00:48:31.088152121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-nm6bz,Uid:a5f89a5e-27d4-41a0-bf84-b21fdd9e9864,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a489ead7cb3fa631419aba816b0bd3c0680d6839ce644c8912c3bc5ebde4606\"" Feb 9 00:48:31.088630 kubelet[2130]: E0209 00:48:31.088613 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 
00:48:31.093555 env[1221]: time="2024-02-09T00:48:31.090480188Z" level=info msg="CreateContainer within sandbox \"5a489ead7cb3fa631419aba816b0bd3c0680d6839ce644c8912c3bc5ebde4606\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 00:48:31.096644 env[1221]: time="2024-02-09T00:48:31.096607293Z" level=info msg="CreateContainer within sandbox \"1df1e34978a5372f6bcbcffae1e7a0a7f5f77aec5970ce52f4a16ce250d70414\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ea34cd524bc81b5cfcc391d4bc36bdc125271fa1b9f63dc05a6ba9232dee17e\"" Feb 9 00:48:31.098065 env[1221]: time="2024-02-09T00:48:31.098017693Z" level=info msg="StartContainer for \"3ea34cd524bc81b5cfcc391d4bc36bdc125271fa1b9f63dc05a6ba9232dee17e\"" Feb 9 00:48:31.106632 env[1221]: time="2024-02-09T00:48:31.106586658Z" level=info msg="CreateContainer within sandbox \"5a489ead7cb3fa631419aba816b0bd3c0680d6839ce644c8912c3bc5ebde4606\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"82e5102742baccd08432335db5f9bd1a27662bac5a7c19036723be0b93fcc9e5\"" Feb 9 00:48:31.107037 env[1221]: time="2024-02-09T00:48:31.107012749Z" level=info msg="StartContainer for \"82e5102742baccd08432335db5f9bd1a27662bac5a7c19036723be0b93fcc9e5\"" Feb 9 00:48:31.146486 env[1221]: time="2024-02-09T00:48:31.146439018Z" level=info msg="StartContainer for \"3ea34cd524bc81b5cfcc391d4bc36bdc125271fa1b9f63dc05a6ba9232dee17e\" returns successfully" Feb 9 00:48:31.162235 env[1221]: time="2024-02-09T00:48:31.162191293Z" level=info msg="StartContainer for \"82e5102742baccd08432335db5f9bd1a27662bac5a7c19036723be0b93fcc9e5\" returns successfully" Feb 9 00:48:31.799224 kubelet[2130]: E0209 00:48:31.799186 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:48:31.801009 kubelet[2130]: E0209 00:48:31.800987 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:48:32.084813 kubelet[2130]: I0209 00:48:32.084781 2130 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-nm6bz" podStartSLOduration=21.08474136 pod.CreationTimestamp="2024-02-09 00:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:48:32.012349044 +0000 UTC m=+34.536225777" watchObservedRunningTime="2024-02-09 00:48:32.08474136 +0000 UTC m=+34.608618083" Feb 9 00:48:32.231691 kubelet[2130]: I0209 00:48:32.231649 2130 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-wwmcr" podStartSLOduration=21.23160031 pod.CreationTimestamp="2024-02-09 00:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:48:32.223306424 +0000 UTC m=+34.747183157" watchObservedRunningTime="2024-02-09 00:48:32.23160031 +0000 UTC m=+34.755477053" Feb 9 00:48:32.802853 kubelet[2130]: E0209 00:48:32.802819 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:48:32.803070 kubelet[2130]: E0209 00:48:32.802918 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:48:33.804360 kubelet[2130]: E0209 00:48:33.804338 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:48:33.804706 kubelet[2130]: E0209 00:48:33.804487 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:48:35.052330 systemd[1]: Started sshd@5-10.0.0.69:22-10.0.0.1:59720.service. Feb 9 00:48:35.086558 sshd[3580]: Accepted publickey for core from 10.0.0.1 port 59720 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:48:35.087822 sshd[3580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:48:35.091432 systemd-logind[1200]: New session 6 of user core. Feb 9 00:48:35.092212 systemd[1]: Started session-6.scope. Feb 9 00:48:35.208803 sshd[3580]: pam_unix(sshd:session): session closed for user core Feb 9 00:48:35.210614 systemd[1]: sshd@5-10.0.0.69:22-10.0.0.1:59720.service: Deactivated successfully. Feb 9 00:48:35.211526 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 00:48:35.211537 systemd-logind[1200]: Session 6 logged out. Waiting for processes to exit. Feb 9 00:48:35.212194 systemd-logind[1200]: Removed session 6. Feb 9 00:48:40.212310 systemd[1]: Started sshd@6-10.0.0.69:22-10.0.0.1:41196.service. Feb 9 00:48:40.241784 sshd[3595]: Accepted publickey for core from 10.0.0.1 port 41196 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:48:40.242700 sshd[3595]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:48:40.245847 systemd-logind[1200]: New session 7 of user core. Feb 9 00:48:40.246580 systemd[1]: Started session-7.scope. Feb 9 00:48:40.369531 sshd[3595]: pam_unix(sshd:session): session closed for user core Feb 9 00:48:40.371552 systemd[1]: sshd@6-10.0.0.69:22-10.0.0.1:41196.service: Deactivated successfully. Feb 9 00:48:40.372271 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 00:48:40.373011 systemd-logind[1200]: Session 7 logged out. Waiting for processes to exit. Feb 9 00:48:40.373665 systemd-logind[1200]: Removed session 7. 
Feb 9 00:48:45.372328 systemd[1]: Started sshd@7-10.0.0.69:22-10.0.0.1:41206.service. Feb 9 00:48:45.406217 sshd[3613]: Accepted publickey for core from 10.0.0.1 port 41206 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:48:45.407309 sshd[3613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:48:45.410793 systemd-logind[1200]: New session 8 of user core. Feb 9 00:48:45.411668 systemd[1]: Started session-8.scope. Feb 9 00:48:45.517486 sshd[3613]: pam_unix(sshd:session): session closed for user core Feb 9 00:48:45.519490 systemd[1]: sshd@7-10.0.0.69:22-10.0.0.1:41206.service: Deactivated successfully. Feb 9 00:48:45.520392 systemd-logind[1200]: Session 8 logged out. Waiting for processes to exit. Feb 9 00:48:45.520395 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 00:48:45.521089 systemd-logind[1200]: Removed session 8. Feb 9 00:48:50.520745 systemd[1]: Started sshd@8-10.0.0.69:22-10.0.0.1:42896.service. Feb 9 00:48:50.553008 sshd[3628]: Accepted publickey for core from 10.0.0.1 port 42896 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:48:50.554081 sshd[3628]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:48:50.557558 systemd-logind[1200]: New session 9 of user core. Feb 9 00:48:50.558408 systemd[1]: Started session-9.scope. Feb 9 00:48:50.658061 sshd[3628]: pam_unix(sshd:session): session closed for user core Feb 9 00:48:50.660230 systemd[1]: sshd@8-10.0.0.69:22-10.0.0.1:42896.service: Deactivated successfully. Feb 9 00:48:50.661328 systemd-logind[1200]: Session 9 logged out. Waiting for processes to exit. Feb 9 00:48:50.661379 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 00:48:50.662185 systemd-logind[1200]: Removed session 9. Feb 9 00:48:55.661739 systemd[1]: Started sshd@9-10.0.0.69:22-10.0.0.1:42912.service. 
Feb 9 00:48:55.694620 sshd[3644]: Accepted publickey for core from 10.0.0.1 port 42912 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:48:55.695818 sshd[3644]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:48:55.699843 systemd-logind[1200]: New session 10 of user core. Feb 9 00:48:55.700282 systemd[1]: Started session-10.scope. Feb 9 00:48:55.832355 sshd[3644]: pam_unix(sshd:session): session closed for user core Feb 9 00:48:55.834563 systemd[1]: sshd@9-10.0.0.69:22-10.0.0.1:42912.service: Deactivated successfully. Feb 9 00:48:55.835637 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 00:48:55.836757 systemd-logind[1200]: Session 10 logged out. Waiting for processes to exit. Feb 9 00:48:55.837563 systemd-logind[1200]: Removed session 10. Feb 9 00:49:00.835482 systemd[1]: Started sshd@10-10.0.0.69:22-10.0.0.1:59900.service. Feb 9 00:49:00.865940 sshd[3661]: Accepted publickey for core from 10.0.0.1 port 59900 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:49:00.866995 sshd[3661]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:49:00.870047 systemd-logind[1200]: New session 11 of user core. Feb 9 00:49:00.870744 systemd[1]: Started session-11.scope. Feb 9 00:49:00.967986 sshd[3661]: pam_unix(sshd:session): session closed for user core Feb 9 00:49:00.970389 systemd[1]: Started sshd@11-10.0.0.69:22-10.0.0.1:59904.service. Feb 9 00:49:00.970834 systemd[1]: sshd@10-10.0.0.69:22-10.0.0.1:59900.service: Deactivated successfully. Feb 9 00:49:00.972250 systemd-logind[1200]: Session 11 logged out. Waiting for processes to exit. Feb 9 00:49:00.972320 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 00:49:00.973200 systemd-logind[1200]: Removed session 11. 
Feb 9 00:49:01.001459 sshd[3675]: Accepted publickey for core from 10.0.0.1 port 59904 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:49:01.002345 sshd[3675]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:49:01.005992 systemd-logind[1200]: New session 12 of user core. Feb 9 00:49:01.006720 systemd[1]: Started session-12.scope. Feb 9 00:49:01.686866 systemd[1]: Started sshd@12-10.0.0.69:22-10.0.0.1:59914.service. Feb 9 00:49:01.687754 sshd[3675]: pam_unix(sshd:session): session closed for user core Feb 9 00:49:01.690230 systemd-logind[1200]: Session 12 logged out. Waiting for processes to exit. Feb 9 00:49:01.690308 systemd[1]: sshd@11-10.0.0.69:22-10.0.0.1:59904.service: Deactivated successfully. Feb 9 00:49:01.690990 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 00:49:01.691472 systemd-logind[1200]: Removed session 12. Feb 9 00:49:01.721645 sshd[3686]: Accepted publickey for core from 10.0.0.1 port 59914 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:49:01.722716 sshd[3686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:49:01.725871 systemd-logind[1200]: New session 13 of user core. Feb 9 00:49:01.726648 systemd[1]: Started session-13.scope. Feb 9 00:49:01.827440 sshd[3686]: pam_unix(sshd:session): session closed for user core Feb 9 00:49:01.829752 systemd[1]: sshd@12-10.0.0.69:22-10.0.0.1:59914.service: Deactivated successfully. Feb 9 00:49:01.830683 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 00:49:01.830684 systemd-logind[1200]: Session 13 logged out. Waiting for processes to exit. Feb 9 00:49:01.831525 systemd-logind[1200]: Removed session 13. Feb 9 00:49:06.831139 systemd[1]: Started sshd@13-10.0.0.69:22-10.0.0.1:33206.service. 
Feb 9 00:49:06.861104 sshd[3703]: Accepted publickey for core from 10.0.0.1 port 33206 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:49:06.862175 sshd[3703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:49:06.865756 systemd-logind[1200]: New session 14 of user core. Feb 9 00:49:06.866373 systemd[1]: Started session-14.scope. Feb 9 00:49:06.968685 sshd[3703]: pam_unix(sshd:session): session closed for user core Feb 9 00:49:06.970513 systemd[1]: sshd@13-10.0.0.69:22-10.0.0.1:33206.service: Deactivated successfully. Feb 9 00:49:06.971385 systemd-logind[1200]: Session 14 logged out. Waiting for processes to exit. Feb 9 00:49:06.971426 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 00:49:06.972047 systemd-logind[1200]: Removed session 14. Feb 9 00:49:11.971949 systemd[1]: Started sshd@14-10.0.0.69:22-10.0.0.1:33212.service. Feb 9 00:49:12.003583 sshd[3717]: Accepted publickey for core from 10.0.0.1 port 33212 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:49:12.004681 sshd[3717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:49:12.008075 systemd-logind[1200]: New session 15 of user core. Feb 9 00:49:12.008954 systemd[1]: Started session-15.scope. Feb 9 00:49:12.111952 sshd[3717]: pam_unix(sshd:session): session closed for user core Feb 9 00:49:12.114308 systemd[1]: Started sshd@15-10.0.0.69:22-10.0.0.1:33218.service. Feb 9 00:49:12.114712 systemd[1]: sshd@14-10.0.0.69:22-10.0.0.1:33212.service: Deactivated successfully. Feb 9 00:49:12.115665 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 00:49:12.115759 systemd-logind[1200]: Session 15 logged out. Waiting for processes to exit. Feb 9 00:49:12.116640 systemd-logind[1200]: Removed session 15. 
Feb 9 00:49:12.144889 sshd[3729]: Accepted publickey for core from 10.0.0.1 port 33218 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:49:12.145943 sshd[3729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:49:12.149604 systemd-logind[1200]: New session 16 of user core. Feb 9 00:49:12.150623 systemd[1]: Started session-16.scope. Feb 9 00:49:12.312782 sshd[3729]: pam_unix(sshd:session): session closed for user core Feb 9 00:49:12.314848 systemd[1]: Started sshd@16-10.0.0.69:22-10.0.0.1:33222.service. Feb 9 00:49:12.315307 systemd[1]: sshd@15-10.0.0.69:22-10.0.0.1:33218.service: Deactivated successfully. Feb 9 00:49:12.316024 systemd-logind[1200]: Session 16 logged out. Waiting for processes to exit. Feb 9 00:49:12.316068 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 00:49:12.316771 systemd-logind[1200]: Removed session 16. Feb 9 00:49:12.347096 sshd[3743]: Accepted publickey for core from 10.0.0.1 port 33222 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:49:12.348007 sshd[3743]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:49:12.350849 systemd-logind[1200]: New session 17 of user core. Feb 9 00:49:12.351523 systemd[1]: Started session-17.scope. Feb 9 00:49:13.170479 sshd[3743]: pam_unix(sshd:session): session closed for user core Feb 9 00:49:13.172739 systemd[1]: Started sshd@17-10.0.0.69:22-10.0.0.1:33230.service. Feb 9 00:49:13.175106 systemd-logind[1200]: Session 17 logged out. Waiting for processes to exit. Feb 9 00:49:13.175390 systemd[1]: sshd@16-10.0.0.69:22-10.0.0.1:33222.service: Deactivated successfully. Feb 9 00:49:13.176149 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 00:49:13.177257 systemd-logind[1200]: Removed session 17. 
Feb 9 00:49:13.205758 sshd[3765]: Accepted publickey for core from 10.0.0.1 port 33230 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:49:13.206864 sshd[3765]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:49:13.210675 systemd-logind[1200]: New session 18 of user core. Feb 9 00:49:13.211624 systemd[1]: Started session-18.scope. Feb 9 00:49:13.412834 systemd[1]: Started sshd@18-10.0.0.69:22-10.0.0.1:33242.service. Feb 9 00:49:13.413436 sshd[3765]: pam_unix(sshd:session): session closed for user core Feb 9 00:49:13.416222 systemd[1]: sshd@17-10.0.0.69:22-10.0.0.1:33230.service: Deactivated successfully. Feb 9 00:49:13.416988 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 00:49:13.417694 systemd-logind[1200]: Session 18 logged out. Waiting for processes to exit. Feb 9 00:49:13.418642 systemd-logind[1200]: Removed session 18. Feb 9 00:49:13.443706 sshd[3824]: Accepted publickey for core from 10.0.0.1 port 33242 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:49:13.444884 sshd[3824]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:49:13.448349 systemd-logind[1200]: New session 19 of user core. Feb 9 00:49:13.449038 systemd[1]: Started session-19.scope. Feb 9 00:49:13.554038 sshd[3824]: pam_unix(sshd:session): session closed for user core Feb 9 00:49:13.556096 systemd[1]: sshd@18-10.0.0.69:22-10.0.0.1:33242.service: Deactivated successfully. Feb 9 00:49:13.557136 systemd-logind[1200]: Session 19 logged out. Waiting for processes to exit. Feb 9 00:49:13.557168 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 00:49:13.558014 systemd-logind[1200]: Removed session 19. Feb 9 00:49:18.557389 systemd[1]: Started sshd@19-10.0.0.69:22-10.0.0.1:50968.service. 
Feb 9 00:49:18.585906 sshd[3840]: Accepted publickey for core from 10.0.0.1 port 50968 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:49:18.586833 sshd[3840]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:49:18.589711 systemd-logind[1200]: New session 20 of user core. Feb 9 00:49:18.590570 systemd[1]: Started session-20.scope. Feb 9 00:49:18.683704 sshd[3840]: pam_unix(sshd:session): session closed for user core Feb 9 00:49:18.685437 systemd[1]: sshd@19-10.0.0.69:22-10.0.0.1:50968.service: Deactivated successfully. Feb 9 00:49:18.686410 systemd-logind[1200]: Session 20 logged out. Waiting for processes to exit. Feb 9 00:49:18.686481 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 00:49:18.687166 systemd-logind[1200]: Removed session 20. Feb 9 00:49:21.693625 kubelet[2130]: E0209 00:49:21.693592 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:49:23.687077 systemd[1]: Started sshd@20-10.0.0.69:22-10.0.0.1:50970.service. Feb 9 00:49:23.717038 sshd[3881]: Accepted publickey for core from 10.0.0.1 port 50970 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:49:23.718011 sshd[3881]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:49:23.721164 systemd-logind[1200]: New session 21 of user core. Feb 9 00:49:23.722171 systemd[1]: Started session-21.scope. Feb 9 00:49:23.824248 sshd[3881]: pam_unix(sshd:session): session closed for user core Feb 9 00:49:23.826196 systemd[1]: sshd@20-10.0.0.69:22-10.0.0.1:50970.service: Deactivated successfully. Feb 9 00:49:23.827259 systemd-logind[1200]: Session 21 logged out. Waiting for processes to exit. Feb 9 00:49:23.827337 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 00:49:23.828272 systemd-logind[1200]: Removed session 21. 
Feb 9 00:49:26.693534 kubelet[2130]: E0209 00:49:26.693498 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:49:28.827328 systemd[1]: Started sshd@21-10.0.0.69:22-10.0.0.1:52574.service. Feb 9 00:49:28.858054 sshd[3895]: Accepted publickey for core from 10.0.0.1 port 52574 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:49:28.859083 sshd[3895]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:49:28.862207 systemd-logind[1200]: New session 22 of user core. Feb 9 00:49:28.862848 systemd[1]: Started session-22.scope. Feb 9 00:49:28.971331 sshd[3895]: pam_unix(sshd:session): session closed for user core Feb 9 00:49:28.973564 systemd[1]: sshd@21-10.0.0.69:22-10.0.0.1:52574.service: Deactivated successfully. Feb 9 00:49:28.974708 systemd-logind[1200]: Session 22 logged out. Waiting for processes to exit. Feb 9 00:49:28.974754 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 00:49:28.975794 systemd-logind[1200]: Removed session 22. Feb 9 00:49:31.692920 kubelet[2130]: E0209 00:49:31.692893 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:49:33.693593 kubelet[2130]: E0209 00:49:33.693563 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:49:33.974064 systemd[1]: Started sshd@22-10.0.0.69:22-10.0.0.1:52586.service. 
Feb 9 00:49:34.003298 sshd[3909]: Accepted publickey for core from 10.0.0.1 port 52586 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:49:34.004203 sshd[3909]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:49:34.007350 systemd-logind[1200]: New session 23 of user core. Feb 9 00:49:34.008125 systemd[1]: Started session-23.scope. Feb 9 00:49:34.107238 sshd[3909]: pam_unix(sshd:session): session closed for user core Feb 9 00:49:34.109522 systemd[1]: Started sshd@23-10.0.0.69:22-10.0.0.1:52598.service. Feb 9 00:49:34.111641 systemd[1]: sshd@22-10.0.0.69:22-10.0.0.1:52586.service: Deactivated successfully. Feb 9 00:49:34.112888 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 00:49:34.113485 systemd-logind[1200]: Session 23 logged out. Waiting for processes to exit. Feb 9 00:49:34.114226 systemd-logind[1200]: Removed session 23. Feb 9 00:49:34.141093 sshd[3921]: Accepted publickey for core from 10.0.0.1 port 52598 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:49:34.142059 sshd[3921]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:49:34.145234 systemd-logind[1200]: New session 24 of user core. Feb 9 00:49:34.145993 systemd[1]: Started session-24.scope. Feb 9 00:49:35.493109 systemd[1]: run-containerd-runc-k8s.io-f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e-runc.FM3xrt.mount: Deactivated successfully. 
Feb 9 00:49:35.505116 env[1221]: time="2024-02-09T00:49:35.505062922Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 00:49:35.509795 env[1221]: time="2024-02-09T00:49:35.509757001Z" level=info msg="StopContainer for \"f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e\" with timeout 1 (s)" Feb 9 00:49:35.509982 env[1221]: time="2024-02-09T00:49:35.509957652Z" level=info msg="Stop container \"f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e\" with signal terminated" Feb 9 00:49:35.514995 systemd-networkd[1097]: lxc_health: Link DOWN Feb 9 00:49:35.515003 systemd-networkd[1097]: lxc_health: Lost carrier Feb 9 00:49:35.539136 env[1221]: time="2024-02-09T00:49:35.539089736Z" level=info msg="StopContainer for \"72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05\" with timeout 30 (s)" Feb 9 00:49:35.539776 env[1221]: time="2024-02-09T00:49:35.539748676Z" level=info msg="Stop container \"72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05\" with signal terminated" Feb 9 00:49:35.560129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e-rootfs.mount: Deactivated successfully. Feb 9 00:49:35.566893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05-rootfs.mount: Deactivated successfully. 
Feb 9 00:49:35.631502 env[1221]: time="2024-02-09T00:49:35.631455941Z" level=info msg="shim disconnected" id=f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e
Feb 9 00:49:35.631603 env[1221]: time="2024-02-09T00:49:35.631503792Z" level=warning msg="cleaning up after shim disconnected" id=f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e namespace=k8s.io
Feb 9 00:49:35.631603 env[1221]: time="2024-02-09T00:49:35.631513009Z" level=info msg="cleaning up dead shim"
Feb 9 00:49:35.631721 env[1221]: time="2024-02-09T00:49:35.631696958Z" level=info msg="shim disconnected" id=72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05
Feb 9 00:49:35.631769 env[1221]: time="2024-02-09T00:49:35.631724230Z" level=warning msg="cleaning up after shim disconnected" id=72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05 namespace=k8s.io
Feb 9 00:49:35.631769 env[1221]: time="2024-02-09T00:49:35.631732586Z" level=info msg="cleaning up dead shim"
Feb 9 00:49:35.638178 env[1221]: time="2024-02-09T00:49:35.638147101Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:49:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3993 runtime=io.containerd.runc.v2\n"
Feb 9 00:49:35.638891 env[1221]: time="2024-02-09T00:49:35.638844555Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:49:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3994 runtime=io.containerd.runc.v2\n"
Feb 9 00:49:35.641641 env[1221]: time="2024-02-09T00:49:35.641598602Z" level=info msg="StopContainer for \"f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e\" returns successfully"
Feb 9 00:49:35.642281 env[1221]: time="2024-02-09T00:49:35.642237385Z" level=info msg="StopPodSandbox for \"2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493\""
Feb 9 00:49:35.642415 env[1221]: time="2024-02-09T00:49:35.642370468Z" level=info msg="Container to stop \"ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 00:49:35.642415 env[1221]: time="2024-02-09T00:49:35.642388151Z" level=info msg="Container to stop \"f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 00:49:35.642415 env[1221]: time="2024-02-09T00:49:35.642399012Z" level=info msg="Container to stop \"7df72d0f7e11e511703a79363f96fca2b776eba709f1c26ef47532ca9b6e1fba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 00:49:35.642415 env[1221]: time="2024-02-09T00:49:35.642409572Z" level=info msg="Container to stop \"fdee257093ef53b91f29a10a321201012ce9a74a43807c5e7f1c9742fb00204e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 00:49:35.642540 env[1221]: time="2024-02-09T00:49:35.642419881Z" level=info msg="Container to stop \"f716091975c7ae270aabc1c9169c3e2c90c995c67aa2bd06e094a4f23f8a5ff3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 00:49:35.644489 env[1221]: time="2024-02-09T00:49:35.644427922Z" level=info msg="StopContainer for \"72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05\" returns successfully"
Feb 9 00:49:35.644885 env[1221]: time="2024-02-09T00:49:35.644848952Z" level=info msg="StopPodSandbox for \"a6243f55be2013a67dab910ee74cd2506fbb14ebbca660159f302a3c26f5e86a\""
Feb 9 00:49:35.644941 env[1221]: time="2024-02-09T00:49:35.644905388Z" level=info msg="Container to stop \"72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 00:49:35.667569 env[1221]: time="2024-02-09T00:49:35.667528950Z" level=info msg="shim disconnected" id=a6243f55be2013a67dab910ee74cd2506fbb14ebbca660159f302a3c26f5e86a
Feb 9 00:49:35.667784 env[1221]: time="2024-02-09T00:49:35.667752744Z" level=warning msg="cleaning up after shim disconnected" id=a6243f55be2013a67dab910ee74cd2506fbb14ebbca660159f302a3c26f5e86a namespace=k8s.io
Feb 9 00:49:35.667784 env[1221]: time="2024-02-09T00:49:35.667770347Z" level=info msg="cleaning up dead shim"
Feb 9 00:49:35.667973 env[1221]: time="2024-02-09T00:49:35.667616937Z" level=info msg="shim disconnected" id=2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493
Feb 9 00:49:35.667973 env[1221]: time="2024-02-09T00:49:35.667961811Z" level=warning msg="cleaning up after shim disconnected" id=2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493 namespace=k8s.io
Feb 9 00:49:35.667973 env[1221]: time="2024-02-09T00:49:35.667969145Z" level=info msg="cleaning up dead shim"
Feb 9 00:49:35.674458 env[1221]: time="2024-02-09T00:49:35.674428485Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:49:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4060 runtime=io.containerd.runc.v2\n"
Feb 9 00:49:35.674730 env[1221]: time="2024-02-09T00:49:35.674707504Z" level=info msg="TearDown network for sandbox \"2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493\" successfully"
Feb 9 00:49:35.674730 env[1221]: time="2024-02-09T00:49:35.674728484Z" level=info msg="StopPodSandbox for \"2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493\" returns successfully"
Feb 9 00:49:35.676643 env[1221]: time="2024-02-09T00:49:35.676604204Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:49:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4059 runtime=io.containerd.runc.v2\n"
Feb 9 00:49:35.676903 env[1221]: time="2024-02-09T00:49:35.676878476Z" level=info msg="TearDown network for sandbox \"a6243f55be2013a67dab910ee74cd2506fbb14ebbca660159f302a3c26f5e86a\" successfully"
Feb 9 00:49:35.676903 env[1221]: time="2024-02-09T00:49:35.676898924Z" level=info msg="StopPodSandbox for \"a6243f55be2013a67dab910ee74cd2506fbb14ebbca660159f302a3c26f5e86a\" returns successfully"
Feb 9 00:49:35.692861 kubelet[2130]: E0209 00:49:35.692840 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:49:35.802686 kubelet[2130]: I0209 00:49:35.801748 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-host-proc-sys-kernel\") pod \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") "
Feb 9 00:49:35.802686 kubelet[2130]: I0209 00:49:35.801810 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-cni-path\") pod \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") "
Feb 9 00:49:35.802686 kubelet[2130]: I0209 00:49:35.801847 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgwtq\" (UniqueName: \"kubernetes.io/projected/48e2af9a-4f3c-4242-ae26-cd4e1e47d507-kube-api-access-kgwtq\") pod \"48e2af9a-4f3c-4242-ae26-cd4e1e47d507\" (UID: \"48e2af9a-4f3c-4242-ae26-cd4e1e47d507\") "
Feb 9 00:49:35.802686 kubelet[2130]: I0209 00:49:35.801844 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "66c0b3c9-fc36-4fa8-a80c-50740d92d05e" (UID: "66c0b3c9-fc36-4fa8-a80c-50740d92d05e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:49:35.802686 kubelet[2130]: I0209 00:49:35.801872 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-xtables-lock\") pod \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") "
Feb 9 00:49:35.802686 kubelet[2130]: I0209 00:49:35.801905 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qbgf\" (UniqueName: \"kubernetes.io/projected/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-kube-api-access-2qbgf\") pod \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") "
Feb 9 00:49:35.802936 kubelet[2130]: I0209 00:49:35.801934 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48e2af9a-4f3c-4242-ae26-cd4e1e47d507-cilium-config-path\") pod \"48e2af9a-4f3c-4242-ae26-cd4e1e47d507\" (UID: \"48e2af9a-4f3c-4242-ae26-cd4e1e47d507\") "
Feb 9 00:49:35.802936 kubelet[2130]: I0209 00:49:35.801952 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-etc-cni-netd\") pod \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") "
Feb 9 00:49:35.802936 kubelet[2130]: I0209 00:49:35.801969 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-hubble-tls\") pod \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") "
Feb 9 00:49:35.802936 kubelet[2130]: I0209 00:49:35.801951 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "66c0b3c9-fc36-4fa8-a80c-50740d92d05e" (UID: "66c0b3c9-fc36-4fa8-a80c-50740d92d05e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:49:35.802936 kubelet[2130]: I0209 00:49:35.801986 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-cilium-run\") pod \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") "
Feb 9 00:49:35.802936 kubelet[2130]: I0209 00:49:35.802013 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "66c0b3c9-fc36-4fa8-a80c-50740d92d05e" (UID: "66c0b3c9-fc36-4fa8-a80c-50740d92d05e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:49:35.803074 kubelet[2130]: I0209 00:49:35.802049 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-hostproc\") pod \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") "
Feb 9 00:49:35.803074 kubelet[2130]: I0209 00:49:35.802077 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-lib-modules\") pod \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") "
Feb 9 00:49:35.803074 kubelet[2130]: I0209 00:49:35.802103 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-cilium-cgroup\") pod \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") "
Feb 9 00:49:35.803074 kubelet[2130]: I0209 00:49:35.802138 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-clustermesh-secrets\") pod \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") "
Feb 9 00:49:35.803074 kubelet[2130]: I0209 00:49:35.802162 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-bpf-maps\") pod \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") "
Feb 9 00:49:35.803074 kubelet[2130]: I0209 00:49:35.802191 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-cilium-config-path\") pod \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") "
Feb 9 00:49:35.803254 kubelet[2130]: I0209 00:49:35.802239 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-host-proc-sys-net\") pod \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\" (UID: \"66c0b3c9-fc36-4fa8-a80c-50740d92d05e\") "
Feb 9 00:49:35.803254 kubelet[2130]: I0209 00:49:35.802310 2130 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:35.803254 kubelet[2130]: I0209 00:49:35.802356 2130 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:35.803254 kubelet[2130]: I0209 00:49:35.802371 2130 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:35.803254 kubelet[2130]: I0209 00:49:35.802391 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "66c0b3c9-fc36-4fa8-a80c-50740d92d05e" (UID: "66c0b3c9-fc36-4fa8-a80c-50740d92d05e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:49:35.803254 kubelet[2130]: I0209 00:49:35.802415 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-hostproc" (OuterVolumeSpecName: "hostproc") pod "66c0b3c9-fc36-4fa8-a80c-50740d92d05e" (UID: "66c0b3c9-fc36-4fa8-a80c-50740d92d05e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:49:35.803408 kubelet[2130]: I0209 00:49:35.802433 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "66c0b3c9-fc36-4fa8-a80c-50740d92d05e" (UID: "66c0b3c9-fc36-4fa8-a80c-50740d92d05e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:49:35.803408 kubelet[2130]: I0209 00:49:35.802451 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "66c0b3c9-fc36-4fa8-a80c-50740d92d05e" (UID: "66c0b3c9-fc36-4fa8-a80c-50740d92d05e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:49:35.803408 kubelet[2130]: I0209 00:49:35.802971 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-cni-path" (OuterVolumeSpecName: "cni-path") pod "66c0b3c9-fc36-4fa8-a80c-50740d92d05e" (UID: "66c0b3c9-fc36-4fa8-a80c-50740d92d05e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:49:35.803408 kubelet[2130]: I0209 00:49:35.803005 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "66c0b3c9-fc36-4fa8-a80c-50740d92d05e" (UID: "66c0b3c9-fc36-4fa8-a80c-50740d92d05e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:49:35.803408 kubelet[2130]: I0209 00:49:35.803024 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "66c0b3c9-fc36-4fa8-a80c-50740d92d05e" (UID: "66c0b3c9-fc36-4fa8-a80c-50740d92d05e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:49:35.803535 kubelet[2130]: W0209 00:49:35.803157 2130 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/48e2af9a-4f3c-4242-ae26-cd4e1e47d507/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 00:49:35.805508 kubelet[2130]: W0209 00:49:35.804230 2130 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/66c0b3c9-fc36-4fa8-a80c-50740d92d05e/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 00:49:35.805508 kubelet[2130]: I0209 00:49:35.804519 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48e2af9a-4f3c-4242-ae26-cd4e1e47d507-kube-api-access-kgwtq" (OuterVolumeSpecName: "kube-api-access-kgwtq") pod "48e2af9a-4f3c-4242-ae26-cd4e1e47d507" (UID: "48e2af9a-4f3c-4242-ae26-cd4e1e47d507"). InnerVolumeSpecName "kube-api-access-kgwtq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 00:49:35.805971 kubelet[2130]: I0209 00:49:35.805942 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "66c0b3c9-fc36-4fa8-a80c-50740d92d05e" (UID: "66c0b3c9-fc36-4fa8-a80c-50740d92d05e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 00:49:35.806159 kubelet[2130]: I0209 00:49:35.806119 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48e2af9a-4f3c-4242-ae26-cd4e1e47d507-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "48e2af9a-4f3c-4242-ae26-cd4e1e47d507" (UID: "48e2af9a-4f3c-4242-ae26-cd4e1e47d507"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 00:49:35.806543 kubelet[2130]: I0209 00:49:35.806495 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "66c0b3c9-fc36-4fa8-a80c-50740d92d05e" (UID: "66c0b3c9-fc36-4fa8-a80c-50740d92d05e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 00:49:35.806543 kubelet[2130]: I0209 00:49:35.806515 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "66c0b3c9-fc36-4fa8-a80c-50740d92d05e" (UID: "66c0b3c9-fc36-4fa8-a80c-50740d92d05e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 00:49:35.806759 kubelet[2130]: I0209 00:49:35.806740 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-kube-api-access-2qbgf" (OuterVolumeSpecName: "kube-api-access-2qbgf") pod "66c0b3c9-fc36-4fa8-a80c-50740d92d05e" (UID: "66c0b3c9-fc36-4fa8-a80c-50740d92d05e"). InnerVolumeSpecName "kube-api-access-2qbgf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 00:49:35.903220 kubelet[2130]: I0209 00:49:35.903185 2130 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:35.903220 kubelet[2130]: I0209 00:49:35.903212 2130 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-kgwtq\" (UniqueName: \"kubernetes.io/projected/48e2af9a-4f3c-4242-ae26-cd4e1e47d507-kube-api-access-kgwtq\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:35.903220 kubelet[2130]: I0209 00:49:35.903222 2130 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-2qbgf\" (UniqueName: \"kubernetes.io/projected/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-kube-api-access-2qbgf\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:35.903220 kubelet[2130]: I0209 00:49:35.903232 2130 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48e2af9a-4f3c-4242-ae26-cd4e1e47d507-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:35.903429 kubelet[2130]: I0209 00:49:35.903240 2130 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:35.903429 kubelet[2130]: I0209 00:49:35.903248 2130 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:35.903429 kubelet[2130]: I0209 00:49:35.903257 2130 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:35.903429 kubelet[2130]: I0209 00:49:35.903264 2130 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:35.903429 kubelet[2130]: I0209 00:49:35.903273 2130 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:35.903429 kubelet[2130]: I0209 00:49:35.903280 2130 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:35.903429 kubelet[2130]: I0209 00:49:35.903308 2130 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:35.903429 kubelet[2130]: I0209 00:49:35.903322 2130 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:35.903602 kubelet[2130]: I0209 00:49:35.903330 2130 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66c0b3c9-fc36-4fa8-a80c-50740d92d05e-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:35.907563 kubelet[2130]: I0209 00:49:35.907543 2130 scope.go:115] "RemoveContainer" containerID="72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05"
Feb 9 00:49:35.909099 env[1221]: time="2024-02-09T00:49:35.909052332Z" level=info msg="RemoveContainer for \"72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05\""
Feb 9 00:49:35.914082 env[1221]: time="2024-02-09T00:49:35.914054727Z" level=info msg="RemoveContainer for \"72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05\" returns successfully"
Feb 9 00:49:35.914558 kubelet[2130]: I0209 00:49:35.914537 2130 scope.go:115] "RemoveContainer" containerID="72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05"
Feb 9 00:49:35.915134 env[1221]: time="2024-02-09T00:49:35.915075334Z" level=error msg="ContainerStatus for \"72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05\": not found"
Feb 9 00:49:35.918298 kubelet[2130]: E0209 00:49:35.918263 2130 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05\": not found" containerID="72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05"
Feb 9 00:49:35.918352 kubelet[2130]: I0209 00:49:35.918322 2130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05} err="failed to get container status \"72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05\": rpc error: code = NotFound desc = an error occurred when try to find container \"72e67fad58269642562865ec7df0263fb5553284b23421722ed7ba4fe575ed05\": not found"
Feb 9 00:49:35.918352 kubelet[2130]: I0209 00:49:35.918337 2130 scope.go:115] "RemoveContainer" containerID="f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e"
Feb 9 00:49:35.922838 env[1221]: time="2024-02-09T00:49:35.922798443Z" level=info msg="RemoveContainer for \"f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e\""
Feb 9 00:49:35.927072 env[1221]: time="2024-02-09T00:49:35.926370453Z" level=info msg="RemoveContainer for \"f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e\" returns successfully"
Feb 9 00:49:35.927164 kubelet[2130]: I0209 00:49:35.926540 2130 scope.go:115] "RemoveContainer" containerID="ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83"
Feb 9 00:49:35.928494 env[1221]: time="2024-02-09T00:49:35.928449309Z" level=info msg="RemoveContainer for \"ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83\""
Feb 9 00:49:35.931506 env[1221]: time="2024-02-09T00:49:35.931471364Z" level=info msg="RemoveContainer for \"ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83\" returns successfully"
Feb 9 00:49:35.931672 kubelet[2130]: I0209 00:49:35.931638 2130 scope.go:115] "RemoveContainer" containerID="fdee257093ef53b91f29a10a321201012ce9a74a43807c5e7f1c9742fb00204e"
Feb 9 00:49:35.932696 env[1221]: time="2024-02-09T00:49:35.932669438Z" level=info msg="RemoveContainer for \"fdee257093ef53b91f29a10a321201012ce9a74a43807c5e7f1c9742fb00204e\""
Feb 9 00:49:35.935040 env[1221]: time="2024-02-09T00:49:35.935011774Z" level=info msg="RemoveContainer for \"fdee257093ef53b91f29a10a321201012ce9a74a43807c5e7f1c9742fb00204e\" returns successfully"
Feb 9 00:49:35.935188 kubelet[2130]: I0209 00:49:35.935160 2130 scope.go:115] "RemoveContainer" containerID="f716091975c7ae270aabc1c9169c3e2c90c995c67aa2bd06e094a4f23f8a5ff3"
Feb 9 00:49:35.935989 env[1221]: time="2024-02-09T00:49:35.935960525Z" level=info msg="RemoveContainer for \"f716091975c7ae270aabc1c9169c3e2c90c995c67aa2bd06e094a4f23f8a5ff3\""
Feb 9 00:49:35.938334 env[1221]: time="2024-02-09T00:49:35.938310605Z" level=info msg="RemoveContainer for \"f716091975c7ae270aabc1c9169c3e2c90c995c67aa2bd06e094a4f23f8a5ff3\" returns successfully"
Feb 9 00:49:35.938430 kubelet[2130]: I0209 00:49:35.938414 2130 scope.go:115] "RemoveContainer" containerID="7df72d0f7e11e511703a79363f96fca2b776eba709f1c26ef47532ca9b6e1fba"
Feb 9 00:49:35.940171 env[1221]: time="2024-02-09T00:49:35.940141411Z" level=info msg="RemoveContainer for \"7df72d0f7e11e511703a79363f96fca2b776eba709f1c26ef47532ca9b6e1fba\""
Feb 9 00:49:35.942922 env[1221]: time="2024-02-09T00:49:35.942883055Z" level=info msg="RemoveContainer for \"7df72d0f7e11e511703a79363f96fca2b776eba709f1c26ef47532ca9b6e1fba\" returns successfully"
Feb 9 00:49:35.943095 kubelet[2130]: I0209 00:49:35.943054 2130 scope.go:115] "RemoveContainer" containerID="f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e"
Feb 9 00:49:35.943846 env[1221]: time="2024-02-09T00:49:35.943786930Z" level=error msg="ContainerStatus for \"f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e\": not found"
Feb 9 00:49:35.944106 kubelet[2130]: E0209 00:49:35.944049 2130 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e\": not found" containerID="f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e"
Feb 9 00:49:35.944106 kubelet[2130]: I0209 00:49:35.944083 2130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e} err="failed to get container status \"f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1733d7f9076625aa79789fe8b2a67472781ae5abecf1ca7a12fd74dc7102f9e\": not found"
Feb 9 00:49:35.944106 kubelet[2130]: I0209 00:49:35.944105 2130 scope.go:115] "RemoveContainer" containerID="ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83"
Feb 9 00:49:35.944414 env[1221]: time="2024-02-09T00:49:35.944243536Z" level=error msg="ContainerStatus for \"ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83\": not found"
Feb 9 00:49:35.944483 kubelet[2130]: E0209 00:49:35.944421 2130 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83\": not found" containerID="ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83"
Feb 9 00:49:35.944483 kubelet[2130]: I0209 00:49:35.944442 2130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83} err="failed to get container status \"ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac44d800cb6bd6f325528cd02b2a53667eb9578528a401ee17cb7d99cb960b83\": not found"
Feb 9 00:49:35.944483 kubelet[2130]: I0209 00:49:35.944450 2130 scope.go:115] "RemoveContainer" containerID="fdee257093ef53b91f29a10a321201012ce9a74a43807c5e7f1c9742fb00204e"
Feb 9 00:49:35.944726 env[1221]: time="2024-02-09T00:49:35.944642514Z" level=error msg="ContainerStatus for \"fdee257093ef53b91f29a10a321201012ce9a74a43807c5e7f1c9742fb00204e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fdee257093ef53b91f29a10a321201012ce9a74a43807c5e7f1c9742fb00204e\": not found"
Feb 9 00:49:35.944775 kubelet[2130]: E0209 00:49:35.944764 2130 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fdee257093ef53b91f29a10a321201012ce9a74a43807c5e7f1c9742fb00204e\": not found" containerID="fdee257093ef53b91f29a10a321201012ce9a74a43807c5e7f1c9742fb00204e"
Feb 9 00:49:35.944806 kubelet[2130]: I0209 00:49:35.944780 2130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:fdee257093ef53b91f29a10a321201012ce9a74a43807c5e7f1c9742fb00204e} err="failed to get container status \"fdee257093ef53b91f29a10a321201012ce9a74a43807c5e7f1c9742fb00204e\": rpc error: code = NotFound desc = an error occurred when try to find container \"fdee257093ef53b91f29a10a321201012ce9a74a43807c5e7f1c9742fb00204e\": not found"
Feb 9 00:49:35.944806 kubelet[2130]: I0209 00:49:35.944788 2130 scope.go:115] "RemoveContainer" containerID="f716091975c7ae270aabc1c9169c3e2c90c995c67aa2bd06e094a4f23f8a5ff3"
Feb 9 00:49:35.944938 env[1221]: time="2024-02-09T00:49:35.944892729Z" level=error msg="ContainerStatus for \"f716091975c7ae270aabc1c9169c3e2c90c995c67aa2bd06e094a4f23f8a5ff3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f716091975c7ae270aabc1c9169c3e2c90c995c67aa2bd06e094a4f23f8a5ff3\": not found"
Feb 9 00:49:35.945052 kubelet[2130]: E0209 00:49:35.945039 2130 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f716091975c7ae270aabc1c9169c3e2c90c995c67aa2bd06e094a4f23f8a5ff3\": not found" containerID="f716091975c7ae270aabc1c9169c3e2c90c995c67aa2bd06e094a4f23f8a5ff3"
Feb 9 00:49:35.945094 kubelet[2130]: I0209 00:49:35.945063 2130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f716091975c7ae270aabc1c9169c3e2c90c995c67aa2bd06e094a4f23f8a5ff3} err="failed to get container status \"f716091975c7ae270aabc1c9169c3e2c90c995c67aa2bd06e094a4f23f8a5ff3\": rpc error: code = NotFound desc = an error occurred when try to find container \"f716091975c7ae270aabc1c9169c3e2c90c995c67aa2bd06e094a4f23f8a5ff3\": not found"
Feb 9 00:49:35.945094 kubelet[2130]: I0209 00:49:35.945071 2130 scope.go:115] "RemoveContainer" containerID="7df72d0f7e11e511703a79363f96fca2b776eba709f1c26ef47532ca9b6e1fba"
Feb 9 00:49:35.945235 env[1221]: time="2024-02-09T00:49:35.945201655Z" level=error msg="ContainerStatus for \"7df72d0f7e11e511703a79363f96fca2b776eba709f1c26ef47532ca9b6e1fba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7df72d0f7e11e511703a79363f96fca2b776eba709f1c26ef47532ca9b6e1fba\": not found"
Feb 9 00:49:35.945977 kubelet[2130]: E0209 00:49:35.945382 2130 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7df72d0f7e11e511703a79363f96fca2b776eba709f1c26ef47532ca9b6e1fba\": not found" containerID="7df72d0f7e11e511703a79363f96fca2b776eba709f1c26ef47532ca9b6e1fba"
Feb 9 00:49:35.945977 kubelet[2130]: I0209 00:49:35.945412 2130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7df72d0f7e11e511703a79363f96fca2b776eba709f1c26ef47532ca9b6e1fba} err="failed to get container status \"7df72d0f7e11e511703a79363f96fca2b776eba709f1c26ef47532ca9b6e1fba\": rpc error: code = NotFound desc = an error occurred when try to find container \"7df72d0f7e11e511703a79363f96fca2b776eba709f1c26ef47532ca9b6e1fba\": not found"
Feb 9 00:49:36.489544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6243f55be2013a67dab910ee74cd2506fbb14ebbca660159f302a3c26f5e86a-rootfs.mount: Deactivated successfully.
Feb 9 00:49:36.489693 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6243f55be2013a67dab910ee74cd2506fbb14ebbca660159f302a3c26f5e86a-shm.mount: Deactivated successfully. Feb 9 00:49:36.489805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493-rootfs.mount: Deactivated successfully. Feb 9 00:49:36.489900 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d9a876b0d7498203fd3df7ac07165f26158303bd14de24919d264d1948cf493-shm.mount: Deactivated successfully. Feb 9 00:49:36.489995 systemd[1]: var-lib-kubelet-pods-48e2af9a\x2d4f3c\x2d4242\x2dae26\x2dcd4e1e47d507-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkgwtq.mount: Deactivated successfully. Feb 9 00:49:36.490074 systemd[1]: var-lib-kubelet-pods-66c0b3c9\x2dfc36\x2d4fa8\x2da80c\x2d50740d92d05e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2qbgf.mount: Deactivated successfully. Feb 9 00:49:36.490156 systemd[1]: var-lib-kubelet-pods-66c0b3c9\x2dfc36\x2d4fa8\x2da80c\x2d50740d92d05e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 00:49:36.490235 systemd[1]: var-lib-kubelet-pods-66c0b3c9\x2dfc36\x2d4fa8\x2da80c\x2d50740d92d05e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 00:49:37.429755 sshd[3921]: pam_unix(sshd:session): session closed for user core Feb 9 00:49:37.433096 systemd[1]: Started sshd@24-10.0.0.69:22-10.0.0.1:49688.service. Feb 9 00:49:37.433575 systemd[1]: sshd@23-10.0.0.69:22-10.0.0.1:52598.service: Deactivated successfully. Feb 9 00:49:37.435162 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 00:49:37.436383 systemd-logind[1200]: Session 24 logged out. Waiting for processes to exit. Feb 9 00:49:37.437272 systemd-logind[1200]: Removed session 24. 
Feb 9 00:49:37.462941 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 49688 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:49:37.463871 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:49:37.466847 systemd-logind[1200]: New session 25 of user core. Feb 9 00:49:37.467558 systemd[1]: Started session-25.scope. Feb 9 00:49:37.693118 kubelet[2130]: E0209 00:49:37.693018 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:49:37.695544 kubelet[2130]: I0209 00:49:37.695516 2130 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=48e2af9a-4f3c-4242-ae26-cd4e1e47d507 path="/var/lib/kubelet/pods/48e2af9a-4f3c-4242-ae26-cd4e1e47d507/volumes" Feb 9 00:49:37.695882 kubelet[2130]: I0209 00:49:37.695861 2130 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=66c0b3c9-fc36-4fa8-a80c-50740d92d05e path="/var/lib/kubelet/pods/66c0b3c9-fc36-4fa8-a80c-50740d92d05e/volumes" Feb 9 00:49:37.732222 kubelet[2130]: E0209 00:49:37.732203 2130 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 00:49:38.165312 sshd[4089]: pam_unix(sshd:session): session closed for user core Feb 9 00:49:38.167018 systemd[1]: Started sshd@25-10.0.0.69:22-10.0.0.1:49692.service. 
Feb 9 00:49:38.170543 kubelet[2130]: I0209 00:49:38.170508 2130 topology_manager.go:210] "Topology Admit Handler" Feb 9 00:49:38.170649 kubelet[2130]: E0209 00:49:38.170560 2130 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="66c0b3c9-fc36-4fa8-a80c-50740d92d05e" containerName="apply-sysctl-overwrites" Feb 9 00:49:38.170649 kubelet[2130]: E0209 00:49:38.170571 2130 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="66c0b3c9-fc36-4fa8-a80c-50740d92d05e" containerName="mount-bpf-fs" Feb 9 00:49:38.170649 kubelet[2130]: E0209 00:49:38.170577 2130 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="48e2af9a-4f3c-4242-ae26-cd4e1e47d507" containerName="cilium-operator" Feb 9 00:49:38.170649 kubelet[2130]: E0209 00:49:38.170582 2130 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="66c0b3c9-fc36-4fa8-a80c-50740d92d05e" containerName="clean-cilium-state" Feb 9 00:49:38.170649 kubelet[2130]: E0209 00:49:38.170589 2130 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="66c0b3c9-fc36-4fa8-a80c-50740d92d05e" containerName="mount-cgroup" Feb 9 00:49:38.170649 kubelet[2130]: E0209 00:49:38.170594 2130 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="66c0b3c9-fc36-4fa8-a80c-50740d92d05e" containerName="cilium-agent" Feb 9 00:49:38.170649 kubelet[2130]: I0209 00:49:38.170616 2130 memory_manager.go:346] "RemoveStaleState removing state" podUID="66c0b3c9-fc36-4fa8-a80c-50740d92d05e" containerName="cilium-agent" Feb 9 00:49:38.170649 kubelet[2130]: I0209 00:49:38.170621 2130 memory_manager.go:346] "RemoveStaleState removing state" podUID="48e2af9a-4f3c-4242-ae26-cd4e1e47d507" containerName="cilium-operator" Feb 9 00:49:38.176624 systemd[1]: sshd@24-10.0.0.69:22-10.0.0.1:49688.service: Deactivated successfully. Feb 9 00:49:38.178395 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 00:49:38.179184 systemd-logind[1200]: Session 25 logged out. Waiting for processes to exit. 
Feb 9 00:49:38.179981 systemd-logind[1200]: Removed session 25. Feb 9 00:49:38.216081 kubelet[2130]: I0209 00:49:38.216042 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-lib-modules\") pod \"cilium-bxd9w\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " pod="kube-system/cilium-bxd9w" Feb 9 00:49:38.216081 kubelet[2130]: I0209 00:49:38.216084 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-host-proc-sys-net\") pod \"cilium-bxd9w\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " pod="kube-system/cilium-bxd9w" Feb 9 00:49:38.216245 kubelet[2130]: I0209 00:49:38.216104 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-etc-cni-netd\") pod \"cilium-bxd9w\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " pod="kube-system/cilium-bxd9w" Feb 9 00:49:38.216245 kubelet[2130]: I0209 00:49:38.216124 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71f437d5-36ed-4191-a638-a999d58abe7a-cilium-config-path\") pod \"cilium-bxd9w\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " pod="kube-system/cilium-bxd9w" Feb 9 00:49:38.216245 kubelet[2130]: I0209 00:49:38.216162 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/71f437d5-36ed-4191-a638-a999d58abe7a-hubble-tls\") pod \"cilium-bxd9w\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " pod="kube-system/cilium-bxd9w" Feb 9 00:49:38.216383 kubelet[2130]: I0209 00:49:38.216260 2130 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-bpf-maps\") pod \"cilium-bxd9w\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " pod="kube-system/cilium-bxd9w" Feb 9 00:49:38.216383 kubelet[2130]: I0209 00:49:38.216320 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/71f437d5-36ed-4191-a638-a999d58abe7a-clustermesh-secrets\") pod \"cilium-bxd9w\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " pod="kube-system/cilium-bxd9w" Feb 9 00:49:38.216438 kubelet[2130]: I0209 00:49:38.216412 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ddq2\" (UniqueName: \"kubernetes.io/projected/71f437d5-36ed-4191-a638-a999d58abe7a-kube-api-access-5ddq2\") pod \"cilium-bxd9w\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " pod="kube-system/cilium-bxd9w" Feb 9 00:49:38.216474 kubelet[2130]: I0209 00:49:38.216461 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/71f437d5-36ed-4191-a638-a999d58abe7a-cilium-ipsec-secrets\") pod \"cilium-bxd9w\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " pod="kube-system/cilium-bxd9w" Feb 9 00:49:38.216536 kubelet[2130]: I0209 00:49:38.216503 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-host-proc-sys-kernel\") pod \"cilium-bxd9w\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " pod="kube-system/cilium-bxd9w" Feb 9 00:49:38.216654 kubelet[2130]: I0209 00:49:38.216548 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-xtables-lock\") pod \"cilium-bxd9w\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " pod="kube-system/cilium-bxd9w" Feb 9 00:49:38.216654 kubelet[2130]: I0209 00:49:38.216595 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-cilium-cgroup\") pod \"cilium-bxd9w\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " pod="kube-system/cilium-bxd9w" Feb 9 00:49:38.216654 kubelet[2130]: I0209 00:49:38.216649 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-cni-path\") pod \"cilium-bxd9w\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " pod="kube-system/cilium-bxd9w" Feb 9 00:49:38.216736 kubelet[2130]: I0209 00:49:38.216675 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-cilium-run\") pod \"cilium-bxd9w\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " pod="kube-system/cilium-bxd9w" Feb 9 00:49:38.216736 kubelet[2130]: I0209 00:49:38.216702 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-hostproc\") pod \"cilium-bxd9w\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " pod="kube-system/cilium-bxd9w" Feb 9 00:49:38.216951 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 49692 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:49:38.218025 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:49:38.221265 systemd-logind[1200]: New session 26 of user 
core. Feb 9 00:49:38.221964 systemd[1]: Started session-26.scope. Feb 9 00:49:38.335631 sshd[4101]: pam_unix(sshd:session): session closed for user core Feb 9 00:49:38.337755 systemd[1]: Started sshd@26-10.0.0.69:22-10.0.0.1:49706.service. Feb 9 00:49:38.340922 systemd[1]: sshd@25-10.0.0.69:22-10.0.0.1:49692.service: Deactivated successfully. Feb 9 00:49:38.341814 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 00:49:38.343565 systemd-logind[1200]: Session 26 logged out. Waiting for processes to exit. Feb 9 00:49:38.344609 systemd-logind[1200]: Removed session 26. Feb 9 00:49:38.370981 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 49706 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:49:38.372093 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:49:38.375469 systemd-logind[1200]: New session 27 of user core. Feb 9 00:49:38.376231 systemd[1]: Started session-27.scope. Feb 9 00:49:38.474535 kubelet[2130]: E0209 00:49:38.474423 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:49:38.475779 env[1221]: time="2024-02-09T00:49:38.475345833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxd9w,Uid:71f437d5-36ed-4191-a638-a999d58abe7a,Namespace:kube-system,Attempt:0,}" Feb 9 00:49:38.491946 env[1221]: time="2024-02-09T00:49:38.491864442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:49:38.492080 env[1221]: time="2024-02-09T00:49:38.491920618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:49:38.492080 env[1221]: time="2024-02-09T00:49:38.491956296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:49:38.492484 env[1221]: time="2024-02-09T00:49:38.492136818Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/925cce726ef9420670f7cad0ee1ca395226798c85ba54266e58b440635fd031f pid=4141 runtime=io.containerd.runc.v2 Feb 9 00:49:38.525342 env[1221]: time="2024-02-09T00:49:38.525302980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxd9w,Uid:71f437d5-36ed-4191-a638-a999d58abe7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"925cce726ef9420670f7cad0ee1ca395226798c85ba54266e58b440635fd031f\"" Feb 9 00:49:38.525771 kubelet[2130]: E0209 00:49:38.525747 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:49:38.527339 env[1221]: time="2024-02-09T00:49:38.527271714Z" level=info msg="CreateContainer within sandbox \"925cce726ef9420670f7cad0ee1ca395226798c85ba54266e58b440635fd031f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 00:49:38.539768 env[1221]: time="2024-02-09T00:49:38.539717649Z" level=info msg="CreateContainer within sandbox \"925cce726ef9420670f7cad0ee1ca395226798c85ba54266e58b440635fd031f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7453ef8afdc14a76f1ecbc233894731d82008c566655519a0078adb61b8ae874\"" Feb 9 00:49:38.540063 env[1221]: time="2024-02-09T00:49:38.540034278Z" level=info msg="StartContainer for \"7453ef8afdc14a76f1ecbc233894731d82008c566655519a0078adb61b8ae874\"" Feb 9 00:49:38.581318 env[1221]: time="2024-02-09T00:49:38.580197901Z" level=info msg="StartContainer for \"7453ef8afdc14a76f1ecbc233894731d82008c566655519a0078adb61b8ae874\" returns successfully" Feb 9 00:49:38.706364 env[1221]: time="2024-02-09T00:49:38.706304622Z" level=info msg="shim disconnected" 
id=7453ef8afdc14a76f1ecbc233894731d82008c566655519a0078adb61b8ae874 Feb 9 00:49:38.706364 env[1221]: time="2024-02-09T00:49:38.706361270Z" level=warning msg="cleaning up after shim disconnected" id=7453ef8afdc14a76f1ecbc233894731d82008c566655519a0078adb61b8ae874 namespace=k8s.io Feb 9 00:49:38.706364 env[1221]: time="2024-02-09T00:49:38.706373372Z" level=info msg="cleaning up dead shim" Feb 9 00:49:38.713068 env[1221]: time="2024-02-09T00:49:38.713034696Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:49:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4223 runtime=io.containerd.runc.v2\n" Feb 9 00:49:38.919179 env[1221]: time="2024-02-09T00:49:38.919139188Z" level=info msg="StopPodSandbox for \"925cce726ef9420670f7cad0ee1ca395226798c85ba54266e58b440635fd031f\"" Feb 9 00:49:38.919378 env[1221]: time="2024-02-09T00:49:38.919200274Z" level=info msg="Container to stop \"7453ef8afdc14a76f1ecbc233894731d82008c566655519a0078adb61b8ae874\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 00:49:38.944379 env[1221]: time="2024-02-09T00:49:38.944268136Z" level=info msg="shim disconnected" id=925cce726ef9420670f7cad0ee1ca395226798c85ba54266e58b440635fd031f Feb 9 00:49:38.944557 env[1221]: time="2024-02-09T00:49:38.944383706Z" level=warning msg="cleaning up after shim disconnected" id=925cce726ef9420670f7cad0ee1ca395226798c85ba54266e58b440635fd031f namespace=k8s.io Feb 9 00:49:38.944557 env[1221]: time="2024-02-09T00:49:38.944393945Z" level=info msg="cleaning up dead shim" Feb 9 00:49:38.951894 env[1221]: time="2024-02-09T00:49:38.951842422Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:49:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4255 runtime=io.containerd.runc.v2\n" Feb 9 00:49:38.952249 env[1221]: time="2024-02-09T00:49:38.952215469Z" level=info msg="TearDown network for sandbox \"925cce726ef9420670f7cad0ee1ca395226798c85ba54266e58b440635fd031f\" successfully" Feb 9 
00:49:38.952249 env[1221]: time="2024-02-09T00:49:38.952238113Z" level=info msg="StopPodSandbox for \"925cce726ef9420670f7cad0ee1ca395226798c85ba54266e58b440635fd031f\" returns successfully" Feb 9 00:49:39.021691 kubelet[2130]: I0209 00:49:39.021641 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71f437d5-36ed-4191-a638-a999d58abe7a-cilium-config-path\") pod \"71f437d5-36ed-4191-a638-a999d58abe7a\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " Feb 9 00:49:39.021691 kubelet[2130]: I0209 00:49:39.021681 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-cni-path\") pod \"71f437d5-36ed-4191-a638-a999d58abe7a\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " Feb 9 00:49:39.021691 kubelet[2130]: I0209 00:49:39.021704 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/71f437d5-36ed-4191-a638-a999d58abe7a-cilium-ipsec-secrets\") pod \"71f437d5-36ed-4191-a638-a999d58abe7a\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " Feb 9 00:49:39.022223 kubelet[2130]: I0209 00:49:39.021726 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-host-proc-sys-kernel\") pod \"71f437d5-36ed-4191-a638-a999d58abe7a\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " Feb 9 00:49:39.022223 kubelet[2130]: I0209 00:49:39.021751 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ddq2\" (UniqueName: \"kubernetes.io/projected/71f437d5-36ed-4191-a638-a999d58abe7a-kube-api-access-5ddq2\") pod \"71f437d5-36ed-4191-a638-a999d58abe7a\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " Feb 9 00:49:39.022223 
kubelet[2130]: I0209 00:49:39.021773 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-cni-path" (OuterVolumeSpecName: "cni-path") pod "71f437d5-36ed-4191-a638-a999d58abe7a" (UID: "71f437d5-36ed-4191-a638-a999d58abe7a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:49:39.022223 kubelet[2130]: I0209 00:49:39.021796 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-cilium-run\") pod \"71f437d5-36ed-4191-a638-a999d58abe7a\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " Feb 9 00:49:39.022223 kubelet[2130]: I0209 00:49:39.021824 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "71f437d5-36ed-4191-a638-a999d58abe7a" (UID: "71f437d5-36ed-4191-a638-a999d58abe7a"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:49:39.022223 kubelet[2130]: I0209 00:49:39.021856 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-hostproc\") pod \"71f437d5-36ed-4191-a638-a999d58abe7a\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " Feb 9 00:49:39.023213 kubelet[2130]: I0209 00:49:39.021877 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-lib-modules\") pod \"71f437d5-36ed-4191-a638-a999d58abe7a\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " Feb 9 00:49:39.023213 kubelet[2130]: W0209 00:49:39.021858 2130 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/71f437d5-36ed-4191-a638-a999d58abe7a/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 00:49:39.023213 kubelet[2130]: I0209 00:49:39.021891 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-etc-cni-netd\") pod \"71f437d5-36ed-4191-a638-a999d58abe7a\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " Feb 9 00:49:39.023213 kubelet[2130]: I0209 00:49:39.021924 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/71f437d5-36ed-4191-a638-a999d58abe7a-clustermesh-secrets\") pod \"71f437d5-36ed-4191-a638-a999d58abe7a\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " Feb 9 00:49:39.023213 kubelet[2130]: I0209 00:49:39.021940 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-cilium-cgroup\") pod 
\"71f437d5-36ed-4191-a638-a999d58abe7a\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " Feb 9 00:49:39.023213 kubelet[2130]: I0209 00:49:39.021956 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-host-proc-sys-net\") pod \"71f437d5-36ed-4191-a638-a999d58abe7a\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " Feb 9 00:49:39.023429 kubelet[2130]: I0209 00:49:39.021977 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/71f437d5-36ed-4191-a638-a999d58abe7a-hubble-tls\") pod \"71f437d5-36ed-4191-a638-a999d58abe7a\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " Feb 9 00:49:39.023429 kubelet[2130]: I0209 00:49:39.021992 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-bpf-maps\") pod \"71f437d5-36ed-4191-a638-a999d58abe7a\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " Feb 9 00:49:39.023429 kubelet[2130]: I0209 00:49:39.022006 2130 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-xtables-lock\") pod \"71f437d5-36ed-4191-a638-a999d58abe7a\" (UID: \"71f437d5-36ed-4191-a638-a999d58abe7a\") " Feb 9 00:49:39.023429 kubelet[2130]: I0209 00:49:39.022037 2130 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 00:49:39.023429 kubelet[2130]: I0209 00:49:39.022047 2130 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 
00:49:39.023429 kubelet[2130]: I0209 00:49:39.022060 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "71f437d5-36ed-4191-a638-a999d58abe7a" (UID: "71f437d5-36ed-4191-a638-a999d58abe7a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:49:39.023583 kubelet[2130]: I0209 00:49:39.022075 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-hostproc" (OuterVolumeSpecName: "hostproc") pod "71f437d5-36ed-4191-a638-a999d58abe7a" (UID: "71f437d5-36ed-4191-a638-a999d58abe7a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:49:39.023583 kubelet[2130]: I0209 00:49:39.022086 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "71f437d5-36ed-4191-a638-a999d58abe7a" (UID: "71f437d5-36ed-4191-a638-a999d58abe7a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:49:39.023583 kubelet[2130]: I0209 00:49:39.022099 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "71f437d5-36ed-4191-a638-a999d58abe7a" (UID: "71f437d5-36ed-4191-a638-a999d58abe7a"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:49:39.023583 kubelet[2130]: I0209 00:49:39.022340 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "71f437d5-36ed-4191-a638-a999d58abe7a" (UID: "71f437d5-36ed-4191-a638-a999d58abe7a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:49:39.023583 kubelet[2130]: I0209 00:49:39.022363 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "71f437d5-36ed-4191-a638-a999d58abe7a" (UID: "71f437d5-36ed-4191-a638-a999d58abe7a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:49:39.023696 kubelet[2130]: I0209 00:49:39.022378 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "71f437d5-36ed-4191-a638-a999d58abe7a" (UID: "71f437d5-36ed-4191-a638-a999d58abe7a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:49:39.023696 kubelet[2130]: I0209 00:49:39.022624 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "71f437d5-36ed-4191-a638-a999d58abe7a" (UID: "71f437d5-36ed-4191-a638-a999d58abe7a"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:49:39.023696 kubelet[2130]: I0209 00:49:39.023553 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71f437d5-36ed-4191-a638-a999d58abe7a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "71f437d5-36ed-4191-a638-a999d58abe7a" (UID: "71f437d5-36ed-4191-a638-a999d58abe7a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 00:49:39.024943 kubelet[2130]: I0209 00:49:39.024888 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71f437d5-36ed-4191-a638-a999d58abe7a-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "71f437d5-36ed-4191-a638-a999d58abe7a" (UID: "71f437d5-36ed-4191-a638-a999d58abe7a"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 00:49:39.025094 kubelet[2130]: I0209 00:49:39.025052 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71f437d5-36ed-4191-a638-a999d58abe7a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "71f437d5-36ed-4191-a638-a999d58abe7a" (UID: "71f437d5-36ed-4191-a638-a999d58abe7a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 00:49:39.025665 kubelet[2130]: I0209 00:49:39.025637 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71f437d5-36ed-4191-a638-a999d58abe7a-kube-api-access-5ddq2" (OuterVolumeSpecName: "kube-api-access-5ddq2") pod "71f437d5-36ed-4191-a638-a999d58abe7a" (UID: "71f437d5-36ed-4191-a638-a999d58abe7a"). InnerVolumeSpecName "kube-api-access-5ddq2". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 00:49:39.026250 kubelet[2130]: I0209 00:49:39.026222 2130 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71f437d5-36ed-4191-a638-a999d58abe7a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "71f437d5-36ed-4191-a638-a999d58abe7a" (UID: "71f437d5-36ed-4191-a638-a999d58abe7a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 00:49:39.122773 kubelet[2130]: I0209 00:49:39.122734 2130 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-5ddq2\" (UniqueName: \"kubernetes.io/projected/71f437d5-36ed-4191-a638-a999d58abe7a-kube-api-access-5ddq2\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:39.122773 kubelet[2130]: I0209 00:49:39.122762 2130 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:39.122773 kubelet[2130]: I0209 00:49:39.122774 2130 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:39.122773 kubelet[2130]: I0209 00:49:39.122786 2130 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:39.123014 kubelet[2130]: I0209 00:49:39.122794 2130 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:39.123014 kubelet[2130]: I0209 00:49:39.122805 2130 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:39.123014 kubelet[2130]: I0209 00:49:39.122814 2130 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/71f437d5-36ed-4191-a638-a999d58abe7a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:39.123014 kubelet[2130]: I0209 00:49:39.122822 2130 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:39.123014 kubelet[2130]: I0209 00:49:39.122829 2130 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/71f437d5-36ed-4191-a638-a999d58abe7a-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:39.123014 kubelet[2130]: I0209 00:49:39.122837 2130 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:39.123014 kubelet[2130]: I0209 00:49:39.122847 2130 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71f437d5-36ed-4191-a638-a999d58abe7a-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:39.123014 kubelet[2130]: I0209 00:49:39.122855 2130 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71f437d5-36ed-4191-a638-a999d58abe7a-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:39.123177 kubelet[2130]: I0209 00:49:39.122864 2130 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/71f437d5-36ed-4191-a638-a999d58abe7a-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Feb 9 00:49:39.224351 kubelet[2130]: I0209 00:49:39.224250 2130 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 00:49:39.224185095 +0000 UTC m=+101.748061828 LastTransitionTime:2024-02-09 00:49:39.224185095 +0000 UTC m=+101.748061828 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 00:49:39.321353 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-925cce726ef9420670f7cad0ee1ca395226798c85ba54266e58b440635fd031f-shm.mount: Deactivated successfully.
Feb 9 00:49:39.321491 systemd[1]: var-lib-kubelet-pods-71f437d5\x2d36ed\x2d4191\x2da638\x2da999d58abe7a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5ddq2.mount: Deactivated successfully.
Feb 9 00:49:39.321580 systemd[1]: var-lib-kubelet-pods-71f437d5\x2d36ed\x2d4191\x2da638\x2da999d58abe7a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 00:49:39.321657 systemd[1]: var-lib-kubelet-pods-71f437d5\x2d36ed\x2d4191\x2da638\x2da999d58abe7a-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 9 00:49:39.321756 systemd[1]: var-lib-kubelet-pods-71f437d5\x2d36ed\x2d4191\x2da638\x2da999d58abe7a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 00:49:39.921649 kubelet[2130]: I0209 00:49:39.921624 2130 scope.go:115] "RemoveContainer" containerID="7453ef8afdc14a76f1ecbc233894731d82008c566655519a0078adb61b8ae874"
Feb 9 00:49:39.923919 env[1221]: time="2024-02-09T00:49:39.923667450Z" level=info msg="RemoveContainer for \"7453ef8afdc14a76f1ecbc233894731d82008c566655519a0078adb61b8ae874\""
Feb 9 00:49:39.998478 env[1221]: time="2024-02-09T00:49:39.998385123Z" level=info msg="RemoveContainer for \"7453ef8afdc14a76f1ecbc233894731d82008c566655519a0078adb61b8ae874\" returns successfully"
Feb 9 00:49:40.271176 kubelet[2130]: I0209 00:49:40.271081 2130 topology_manager.go:210] "Topology Admit Handler"
Feb 9 00:49:40.271560 kubelet[2130]: E0209 00:49:40.271544 2130 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="71f437d5-36ed-4191-a638-a999d58abe7a" containerName="mount-cgroup"
Feb 9 00:49:40.271662 kubelet[2130]: I0209 00:49:40.271648 2130 memory_manager.go:346] "RemoveStaleState removing state" podUID="71f437d5-36ed-4191-a638-a999d58abe7a" containerName="mount-cgroup"
Feb 9 00:49:40.331421 kubelet[2130]: I0209 00:49:40.331385 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81538baf-ba52-43bf-9621-1b40ad4b7c48-cni-path\") pod \"cilium-2jb85\" (UID: \"81538baf-ba52-43bf-9621-1b40ad4b7c48\") " pod="kube-system/cilium-2jb85"
Feb 9 00:49:40.331421 kubelet[2130]: I0209 00:49:40.331419 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81538baf-ba52-43bf-9621-1b40ad4b7c48-bpf-maps\") pod \"cilium-2jb85\" (UID: \"81538baf-ba52-43bf-9621-1b40ad4b7c48\") " pod="kube-system/cilium-2jb85"
Feb 9 00:49:40.331421 kubelet[2130]: I0209 00:49:40.331437 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81538baf-ba52-43bf-9621-1b40ad4b7c48-lib-modules\") pod \"cilium-2jb85\" (UID: \"81538baf-ba52-43bf-9621-1b40ad4b7c48\") " pod="kube-system/cilium-2jb85"
Feb 9 00:49:40.331637 kubelet[2130]: I0209 00:49:40.331455 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81538baf-ba52-43bf-9621-1b40ad4b7c48-cilium-run\") pod \"cilium-2jb85\" (UID: \"81538baf-ba52-43bf-9621-1b40ad4b7c48\") " pod="kube-system/cilium-2jb85"
Feb 9 00:49:40.331637 kubelet[2130]: I0209 00:49:40.331475 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbghc\" (UniqueName: \"kubernetes.io/projected/81538baf-ba52-43bf-9621-1b40ad4b7c48-kube-api-access-kbghc\") pod \"cilium-2jb85\" (UID: \"81538baf-ba52-43bf-9621-1b40ad4b7c48\") " pod="kube-system/cilium-2jb85"
Feb 9 00:49:40.331637 kubelet[2130]: I0209 00:49:40.331590 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81538baf-ba52-43bf-9621-1b40ad4b7c48-host-proc-sys-kernel\") pod \"cilium-2jb85\" (UID: \"81538baf-ba52-43bf-9621-1b40ad4b7c48\") " pod="kube-system/cilium-2jb85"
Feb 9 00:49:40.331710 kubelet[2130]: I0209 00:49:40.331655 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81538baf-ba52-43bf-9621-1b40ad4b7c48-xtables-lock\") pod \"cilium-2jb85\" (UID: \"81538baf-ba52-43bf-9621-1b40ad4b7c48\") " pod="kube-system/cilium-2jb85"
Feb 9 00:49:40.331710 kubelet[2130]: I0209 00:49:40.331680 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81538baf-ba52-43bf-9621-1b40ad4b7c48-hubble-tls\") pod \"cilium-2jb85\" (UID: \"81538baf-ba52-43bf-9621-1b40ad4b7c48\") " pod="kube-system/cilium-2jb85"
Feb 9 00:49:40.331710 kubelet[2130]: I0209 00:49:40.331699 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81538baf-ba52-43bf-9621-1b40ad4b7c48-hostproc\") pod \"cilium-2jb85\" (UID: \"81538baf-ba52-43bf-9621-1b40ad4b7c48\") " pod="kube-system/cilium-2jb85"
Feb 9 00:49:40.331780 kubelet[2130]: I0209 00:49:40.331730 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81538baf-ba52-43bf-9621-1b40ad4b7c48-etc-cni-netd\") pod \"cilium-2jb85\" (UID: \"81538baf-ba52-43bf-9621-1b40ad4b7c48\") " pod="kube-system/cilium-2jb85"
Feb 9 00:49:40.331780 kubelet[2130]: I0209 00:49:40.331765 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81538baf-ba52-43bf-9621-1b40ad4b7c48-cilium-config-path\") pod \"cilium-2jb85\" (UID: \"81538baf-ba52-43bf-9621-1b40ad4b7c48\") " pod="kube-system/cilium-2jb85"
Feb 9 00:49:40.331826 kubelet[2130]: I0209 00:49:40.331799 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/81538baf-ba52-43bf-9621-1b40ad4b7c48-cilium-ipsec-secrets\") pod \"cilium-2jb85\" (UID: \"81538baf-ba52-43bf-9621-1b40ad4b7c48\") " pod="kube-system/cilium-2jb85"
Feb 9 00:49:40.331851 kubelet[2130]: I0209 00:49:40.331827 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/81538baf-ba52-43bf-9621-1b40ad4b7c48-clustermesh-secrets\") pod \"cilium-2jb85\" (UID: \"81538baf-ba52-43bf-9621-1b40ad4b7c48\") " pod="kube-system/cilium-2jb85"
Feb 9 00:49:40.331877 kubelet[2130]: I0209 00:49:40.331853 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81538baf-ba52-43bf-9621-1b40ad4b7c48-host-proc-sys-net\") pod \"cilium-2jb85\" (UID: \"81538baf-ba52-43bf-9621-1b40ad4b7c48\") " pod="kube-system/cilium-2jb85"
Feb 9 00:49:40.331913 kubelet[2130]: I0209 00:49:40.331881 2130 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/81538baf-ba52-43bf-9621-1b40ad4b7c48-cilium-cgroup\") pod \"cilium-2jb85\" (UID: \"81538baf-ba52-43bf-9621-1b40ad4b7c48\") " pod="kube-system/cilium-2jb85"
Feb 9 00:49:40.576889 kubelet[2130]: E0209 00:49:40.576861 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:49:40.577372 env[1221]: time="2024-02-09T00:49:40.577330958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2jb85,Uid:81538baf-ba52-43bf-9621-1b40ad4b7c48,Namespace:kube-system,Attempt:0,}"
Feb 9 00:49:40.588577 env[1221]: time="2024-02-09T00:49:40.588506202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 00:49:40.588577 env[1221]: time="2024-02-09T00:49:40.588542051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 00:49:40.588577 env[1221]: time="2024-02-09T00:49:40.588565635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 00:49:40.588755 env[1221]: time="2024-02-09T00:49:40.588697435Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b06f151c3f46670d66d0ad32038830c46c22f22c1cb059aa243d153599bfc27c pid=4281 runtime=io.containerd.runc.v2
Feb 9 00:49:40.616087 env[1221]: time="2024-02-09T00:49:40.616048133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2jb85,Uid:81538baf-ba52-43bf-9621-1b40ad4b7c48,Namespace:kube-system,Attempt:0,} returns sandbox id \"b06f151c3f46670d66d0ad32038830c46c22f22c1cb059aa243d153599bfc27c\""
Feb 9 00:49:40.616689 kubelet[2130]: E0209 00:49:40.616666 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:49:40.618782 env[1221]: time="2024-02-09T00:49:40.618735127Z" level=info msg="CreateContainer within sandbox \"b06f151c3f46670d66d0ad32038830c46c22f22c1cb059aa243d153599bfc27c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 00:49:40.628813 env[1221]: time="2024-02-09T00:49:40.628772554Z" level=info msg="CreateContainer within sandbox \"b06f151c3f46670d66d0ad32038830c46c22f22c1cb059aa243d153599bfc27c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6ae8f39d53b18f61d26caf75ff80009f061813dc016a62e5a170d440ba50f778\""
Feb 9 00:49:40.629265 env[1221]: time="2024-02-09T00:49:40.629222868Z" level=info msg="StartContainer for \"6ae8f39d53b18f61d26caf75ff80009f061813dc016a62e5a170d440ba50f778\""
Feb 9 00:49:40.667410 env[1221]: time="2024-02-09T00:49:40.667365614Z" level=info msg="StartContainer for \"6ae8f39d53b18f61d26caf75ff80009f061813dc016a62e5a170d440ba50f778\" returns successfully"
Feb 9 00:49:40.690718 env[1221]: time="2024-02-09T00:49:40.690667175Z" level=info msg="shim disconnected" id=6ae8f39d53b18f61d26caf75ff80009f061813dc016a62e5a170d440ba50f778
Feb 9 00:49:40.690718 env[1221]: time="2024-02-09T00:49:40.690718573Z" level=warning msg="cleaning up after shim disconnected" id=6ae8f39d53b18f61d26caf75ff80009f061813dc016a62e5a170d440ba50f778 namespace=k8s.io
Feb 9 00:49:40.690917 env[1221]: time="2024-02-09T00:49:40.690729323Z" level=info msg="cleaning up dead shim"
Feb 9 00:49:40.696315 env[1221]: time="2024-02-09T00:49:40.696280005Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:49:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4364 runtime=io.containerd.runc.v2\n"
Feb 9 00:49:40.924805 kubelet[2130]: E0209 00:49:40.924644 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:49:40.928204 env[1221]: time="2024-02-09T00:49:40.928117971Z" level=info msg="CreateContainer within sandbox \"b06f151c3f46670d66d0ad32038830c46c22f22c1cb059aa243d153599bfc27c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 00:49:41.038719 env[1221]: time="2024-02-09T00:49:41.038664074Z" level=info msg="CreateContainer within sandbox \"b06f151c3f46670d66d0ad32038830c46c22f22c1cb059aa243d153599bfc27c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b3996eef9b3c7aea297822a201340b1e41d35dc9cc548f0e50212bb31ab9a6f4\""
Feb 9 00:49:41.039163 env[1221]: time="2024-02-09T00:49:41.039138042Z" level=info msg="StartContainer for \"b3996eef9b3c7aea297822a201340b1e41d35dc9cc548f0e50212bb31ab9a6f4\""
Feb 9 00:49:41.077336 env[1221]: time="2024-02-09T00:49:41.074575525Z" level=info msg="StartContainer for \"b3996eef9b3c7aea297822a201340b1e41d35dc9cc548f0e50212bb31ab9a6f4\" returns successfully"
Feb 9 00:49:41.126780 env[1221]: time="2024-02-09T00:49:41.126728096Z" level=info msg="shim disconnected" id=b3996eef9b3c7aea297822a201340b1e41d35dc9cc548f0e50212bb31ab9a6f4
Feb 9 00:49:41.126780 env[1221]: time="2024-02-09T00:49:41.126781807Z" level=warning msg="cleaning up after shim disconnected" id=b3996eef9b3c7aea297822a201340b1e41d35dc9cc548f0e50212bb31ab9a6f4 namespace=k8s.io
Feb 9 00:49:41.127045 env[1221]: time="2024-02-09T00:49:41.126796244Z" level=info msg="cleaning up dead shim"
Feb 9 00:49:41.133608 env[1221]: time="2024-02-09T00:49:41.133578127Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:49:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4427 runtime=io.containerd.runc.v2\n"
Feb 9 00:49:41.694915 kubelet[2130]: I0209 00:49:41.694875 2130 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=71f437d5-36ed-4191-a638-a999d58abe7a path="/var/lib/kubelet/pods/71f437d5-36ed-4191-a638-a999d58abe7a/volumes"
Feb 9 00:49:41.927946 kubelet[2130]: E0209 00:49:41.927916 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:49:41.929608 env[1221]: time="2024-02-09T00:49:41.929566476Z" level=info msg="CreateContainer within sandbox \"b06f151c3f46670d66d0ad32038830c46c22f22c1cb059aa243d153599bfc27c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 00:49:41.941518 env[1221]: time="2024-02-09T00:49:41.941472842Z" level=info msg="CreateContainer within sandbox \"b06f151c3f46670d66d0ad32038830c46c22f22c1cb059aa243d153599bfc27c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bec89026ec903e78d3ea086d50df75890579e4b80d035e9b12ae225d3a4027c4\""
Feb 9 00:49:41.941951 env[1221]: time="2024-02-09T00:49:41.941931791Z" level=info msg="StartContainer for \"bec89026ec903e78d3ea086d50df75890579e4b80d035e9b12ae225d3a4027c4\""
Feb 9 00:49:41.981619 env[1221]: time="2024-02-09T00:49:41.981517545Z" level=info msg="StartContainer for \"bec89026ec903e78d3ea086d50df75890579e4b80d035e9b12ae225d3a4027c4\" returns successfully"
Feb 9 00:49:41.999865 env[1221]: time="2024-02-09T00:49:41.999815835Z" level=info msg="shim disconnected" id=bec89026ec903e78d3ea086d50df75890579e4b80d035e9b12ae225d3a4027c4
Feb 9 00:49:41.999865 env[1221]: time="2024-02-09T00:49:41.999864868Z" level=warning msg="cleaning up after shim disconnected" id=bec89026ec903e78d3ea086d50df75890579e4b80d035e9b12ae225d3a4027c4 namespace=k8s.io
Feb 9 00:49:42.000038 env[1221]: time="2024-02-09T00:49:41.999873574Z" level=info msg="cleaning up dead shim"
Feb 9 00:49:42.005534 env[1221]: time="2024-02-09T00:49:42.005509073Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:49:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4483 runtime=io.containerd.runc.v2\n"
Feb 9 00:49:42.436702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bec89026ec903e78d3ea086d50df75890579e4b80d035e9b12ae225d3a4027c4-rootfs.mount: Deactivated successfully.
Feb 9 00:49:42.733233 kubelet[2130]: E0209 00:49:42.733144 2130 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 00:49:42.931559 kubelet[2130]: E0209 00:49:42.931534 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:49:42.934739 env[1221]: time="2024-02-09T00:49:42.934689591Z" level=info msg="CreateContainer within sandbox \"b06f151c3f46670d66d0ad32038830c46c22f22c1cb059aa243d153599bfc27c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 00:49:42.948571 env[1221]: time="2024-02-09T00:49:42.948514516Z" level=info msg="CreateContainer within sandbox \"b06f151c3f46670d66d0ad32038830c46c22f22c1cb059aa243d153599bfc27c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2f407229a02c7a8bbfb4e5cde635154dbb9f22dc260288fdcb2af8ca4a828341\""
Feb 9 00:49:42.950319 env[1221]: time="2024-02-09T00:49:42.948961052Z" level=info msg="StartContainer for \"2f407229a02c7a8bbfb4e5cde635154dbb9f22dc260288fdcb2af8ca4a828341\""
Feb 9 00:49:42.989674 env[1221]: time="2024-02-09T00:49:42.988913715Z" level=info msg="StartContainer for \"2f407229a02c7a8bbfb4e5cde635154dbb9f22dc260288fdcb2af8ca4a828341\" returns successfully"
Feb 9 00:49:43.009197 env[1221]: time="2024-02-09T00:49:43.009134927Z" level=info msg="shim disconnected" id=2f407229a02c7a8bbfb4e5cde635154dbb9f22dc260288fdcb2af8ca4a828341
Feb 9 00:49:43.009463 env[1221]: time="2024-02-09T00:49:43.009200542Z" level=warning msg="cleaning up after shim disconnected" id=2f407229a02c7a8bbfb4e5cde635154dbb9f22dc260288fdcb2af8ca4a828341 namespace=k8s.io
Feb 9 00:49:43.009463 env[1221]: time="2024-02-09T00:49:43.009214939Z" level=info msg="cleaning up dead shim"
Feb 9 00:49:43.015116 env[1221]: time="2024-02-09T00:49:43.015083226Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:49:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4540 runtime=io.containerd.runc.v2\n"
Feb 9 00:49:43.437146 systemd[1]: run-containerd-runc-k8s.io-2f407229a02c7a8bbfb4e5cde635154dbb9f22dc260288fdcb2af8ca4a828341-runc.FsWyXz.mount: Deactivated successfully.
Feb 9 00:49:43.437325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f407229a02c7a8bbfb4e5cde635154dbb9f22dc260288fdcb2af8ca4a828341-rootfs.mount: Deactivated successfully.
Feb 9 00:49:43.935231 kubelet[2130]: E0209 00:49:43.935196 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:49:43.937261 env[1221]: time="2024-02-09T00:49:43.937221045Z" level=info msg="CreateContainer within sandbox \"b06f151c3f46670d66d0ad32038830c46c22f22c1cb059aa243d153599bfc27c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 00:49:43.957112 env[1221]: time="2024-02-09T00:49:43.957058474Z" level=info msg="CreateContainer within sandbox \"b06f151c3f46670d66d0ad32038830c46c22f22c1cb059aa243d153599bfc27c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bc486ab9343db3256af7173be2f2886532d58a8559f16fea018b771760cba479\""
Feb 9 00:49:43.957653 env[1221]: time="2024-02-09T00:49:43.957609328Z" level=info msg="StartContainer for \"bc486ab9343db3256af7173be2f2886532d58a8559f16fea018b771760cba479\""
Feb 9 00:49:44.004404 env[1221]: time="2024-02-09T00:49:44.004355041Z" level=info msg="StartContainer for \"bc486ab9343db3256af7173be2f2886532d58a8559f16fea018b771760cba479\" returns successfully"
Feb 9 00:49:44.232384 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 9 00:49:44.939489 kubelet[2130]: E0209 00:49:44.939466 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:49:44.949824 kubelet[2130]: I0209 00:49:44.949794 2130 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2jb85" podStartSLOduration=4.949760846 pod.CreationTimestamp="2024-02-09 00:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:49:44.949441171 +0000 UTC m=+107.473317904" watchObservedRunningTime="2024-02-09 00:49:44.949760846 +0000 UTC m=+107.473637579"
Feb 9 00:49:45.941504 kubelet[2130]: E0209 00:49:45.941479 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:49:46.762337 systemd-networkd[1097]: lxc_health: Link UP
Feb 9 00:49:46.769311 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 00:49:46.769377 systemd-networkd[1097]: lxc_health: Gained carrier
Feb 9 00:49:46.943542 kubelet[2130]: E0209 00:49:46.943513 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:49:48.578727 kubelet[2130]: E0209 00:49:48.578687 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:49:48.776473 systemd-networkd[1097]: lxc_health: Gained IPv6LL
Feb 9 00:49:48.946484 kubelet[2130]: E0209 00:49:48.946391 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:49:49.948486 kubelet[2130]: E0209 00:49:49.948442 2130 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:49:53.051843 sshd[4119]: pam_unix(sshd:session): session closed for user core
Feb 9 00:49:53.054269 systemd[1]: sshd@26-10.0.0.69:22-10.0.0.1:49706.service: Deactivated successfully.
Feb 9 00:49:53.055195 systemd[1]: session-27.scope: Deactivated successfully.
Feb 9 00:49:53.056205 systemd-logind[1200]: Session 27 logged out. Waiting for processes to exit.
Feb 9 00:49:53.057013 systemd-logind[1200]: Removed session 27.