Feb 9 00:52:30.784694 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 9 00:52:30.784728 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 00:52:30.784741 kernel: BIOS-provided physical RAM map:
Feb 9 00:52:30.784748 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 00:52:30.784755 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 9 00:52:30.784775 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 9 00:52:30.784784 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 9 00:52:30.784791 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 9 00:52:30.784799 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 9 00:52:30.784808 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 9 00:52:30.784816 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Feb 9 00:52:30.784834 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb 9 00:52:30.784842 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 9 00:52:30.784849 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 9 00:52:30.784859 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 9 00:52:30.784869 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 9 00:52:30.784888 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 9 00:52:30.784896 kernel: NX (Execute Disable) protection: active
Feb 9 00:52:30.784904 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable
Feb 9 00:52:30.784912 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable
Feb 9 00:52:30.784920 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable
Feb 9 00:52:30.784927 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable
Feb 9 00:52:30.784946 kernel: extended physical RAM map:
Feb 9 00:52:30.784954 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 00:52:30.784962 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 9 00:52:30.784972 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 9 00:52:30.784980 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 9 00:52:30.784988 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 9 00:52:30.784996 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 9 00:52:30.785003 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 9 00:52:30.785011 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b1aa017] usable
Feb 9 00:52:30.785019 kernel: reserve setup_data: [mem 0x000000009b1aa018-0x000000009b1e6e57] usable
Feb 9 00:52:30.785027 kernel: reserve setup_data: [mem 0x000000009b1e6e58-0x000000009b3f7017] usable
Feb 9 00:52:30.785033 kernel: reserve setup_data: [mem 0x000000009b3f7018-0x000000009b400c57] usable
Feb 9 00:52:30.785038 kernel: reserve setup_data: [mem 0x000000009b400c58-0x000000009c8eefff] usable
Feb 9 00:52:30.785053 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb 9 00:52:30.785067 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 9 00:52:30.785075 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 9 00:52:30.785083 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 9 00:52:30.785091 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 9 00:52:30.785103 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 9 00:52:30.785111 kernel: efi: EFI v2.70 by EDK II
Feb 9 00:52:30.785120 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018
Feb 9 00:52:30.785129 kernel: random: crng init done
Feb 9 00:52:30.785138 kernel: SMBIOS 2.8 present.
Feb 9 00:52:30.785146 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Feb 9 00:52:30.785166 kernel: Hypervisor detected: KVM
Feb 9 00:52:30.785175 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 9 00:52:30.785183 kernel: kvm-clock: cpu 0, msr 6afaa001, primary cpu clock
Feb 9 00:52:30.785192 kernel: kvm-clock: using sched offset of 3901639970 cycles
Feb 9 00:52:30.785201 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 9 00:52:30.785210 kernel: tsc: Detected 2794.750 MHz processor
Feb 9 00:52:30.785227 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 00:52:30.785236 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 00:52:30.785245 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Feb 9 00:52:30.785264 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 00:52:30.785273 kernel: Using GB pages for direct mapping
Feb 9 00:52:30.785282 kernel: Secure boot disabled
Feb 9 00:52:30.785291 kernel: ACPI: Early table checksum verification disabled
Feb 9 00:52:30.785299 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 9 00:52:30.785308 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Feb 9 00:52:30.785331 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:52:30.785341 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:52:30.785349 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 9 00:52:30.785358 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:52:30.785367 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:52:30.785376 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:52:30.785385 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 9 00:52:30.785394 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Feb 9 00:52:30.785403 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Feb 9 00:52:30.785413 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 9 00:52:30.785422 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Feb 9 00:52:30.785442 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Feb 9 00:52:30.785451 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Feb 9 00:52:30.785460 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Feb 9 00:52:30.785469 kernel: No NUMA configuration found
Feb 9 00:52:30.785477 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Feb 9 00:52:30.785486 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Feb 9 00:52:30.785495 kernel: Zone ranges:
Feb 9 00:52:30.785512 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 00:52:30.785526 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Feb 9 00:52:30.785534 kernel: Normal empty
Feb 9 00:52:30.785543 kernel: Movable zone start for each node
Feb 9 00:52:30.785552 kernel: Early memory node ranges
Feb 9 00:52:30.785560 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 9 00:52:30.785569 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 9 00:52:30.785578 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 9 00:52:30.785594 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Feb 9 00:52:30.785609 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Feb 9 00:52:30.785617 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Feb 9 00:52:30.785626 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Feb 9 00:52:30.785635 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 00:52:30.785643 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 9 00:52:30.785652 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 9 00:52:30.785672 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 00:52:30.785680 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Feb 9 00:52:30.785689 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Feb 9 00:52:30.785700 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Feb 9 00:52:30.785720 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 9 00:52:30.785729 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 9 00:52:30.785738 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 9 00:52:30.785747 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 00:52:30.785756 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 9 00:52:30.785775 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 00:52:30.785784 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 9 00:52:30.785793 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 9 00:52:30.785804 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 00:52:30.785813 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 00:52:30.785833 kernel: TSC deadline timer available
Feb 9 00:52:30.785841 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 9 00:52:30.785850 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 9 00:52:30.785859 kernel: kvm-guest: setup PV sched yield
Feb 9 00:52:30.785878 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Feb 9 00:52:30.785887 kernel: Booting paravirtualized kernel on KVM
Feb 9 00:52:30.785896 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 00:52:30.785905 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb 9 00:52:30.785923 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb 9 00:52:30.785936 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb 9 00:52:30.785951 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 9 00:52:30.785961 kernel: kvm-guest: setup async PF for cpu 0
Feb 9 00:52:30.785981 kernel: kvm-guest: stealtime: cpu 0, msr 9ae1c0c0
Feb 9 00:52:30.785991 kernel: kvm-guest: PV spinlocks enabled
Feb 9 00:52:30.786000 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 00:52:30.786009 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Feb 9 00:52:30.786029 kernel: Policy zone: DMA32
Feb 9 00:52:30.786040 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 00:52:30.786050 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 00:52:30.786061 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 00:52:30.786070 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 00:52:30.786090 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 00:52:30.786101 kernel: Memory: 2400436K/2567000K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 166304K reserved, 0K cma-reserved)
Feb 9 00:52:30.786110 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 00:52:30.786121 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 00:52:30.786130 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 00:52:30.786150 kernel: rcu: Hierarchical RCU implementation.
Feb 9 00:52:30.786161 kernel: rcu: RCU event tracing is enabled.
Feb 9 00:52:30.786170 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 00:52:30.786179 kernel: Rude variant of Tasks RCU enabled.
Feb 9 00:52:30.786189 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 00:52:30.786209 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 00:52:30.786224 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 00:52:30.786235 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 9 00:52:30.786244 kernel: Console: colour dummy device 80x25
Feb 9 00:52:30.786301 kernel: printk: console [ttyS0] enabled
Feb 9 00:52:30.786311 kernel: ACPI: Core revision 20210730
Feb 9 00:52:30.786320 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 9 00:52:30.786329 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 00:52:30.786338 kernel: x2apic enabled
Feb 9 00:52:30.786358 kernel: Switched APIC routing to physical x2apic.
Feb 9 00:52:30.786368 kernel: kvm-guest: setup PV IPIs
Feb 9 00:52:30.786379 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 00:52:30.786389 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 9 00:52:30.786409 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 9 00:52:30.786419 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 9 00:52:30.786428 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 9 00:52:30.786437 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 9 00:52:30.786458 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 00:52:30.786467 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 00:52:30.786477 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 00:52:30.786488 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 00:52:30.786508 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 9 00:52:30.786520 kernel: RETBleed: Mitigation: untrained return thunk
Feb 9 00:52:30.786530 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 00:52:30.786542 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 00:52:30.786551 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 00:52:30.786560 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 00:52:30.786570 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 00:52:30.786579 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 00:52:30.786590 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 9 00:52:30.786599 kernel: Freeing SMP alternatives memory: 32K
Feb 9 00:52:30.786608 kernel: pid_max: default: 32768 minimum: 301
Feb 9 00:52:30.786618 kernel: LSM: Security Framework initializing
Feb 9 00:52:30.786627 kernel: SELinux: Initializing.
Feb 9 00:52:30.786636 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 00:52:30.786645 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 00:52:30.786655 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 9 00:52:30.786665 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 9 00:52:30.786673 kernel: ... version: 0
Feb 9 00:52:30.786680 kernel: ... bit width: 48
Feb 9 00:52:30.786687 kernel: ... generic registers: 6
Feb 9 00:52:30.786694 kernel: ... value mask: 0000ffffffffffff
Feb 9 00:52:30.786700 kernel: ... max period: 00007fffffffffff
Feb 9 00:52:30.786707 kernel: ... fixed-purpose events: 0
Feb 9 00:52:30.786714 kernel: ... event mask: 000000000000003f
Feb 9 00:52:30.786720 kernel: signal: max sigframe size: 1776
Feb 9 00:52:30.786727 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 00:52:30.786735 kernel: smp: Bringing up secondary CPUs ...
Feb 9 00:52:30.786742 kernel: x86: Booting SMP configuration:
Feb 9 00:52:30.786748 kernel: .... node #0, CPUs: #1
Feb 9 00:52:30.786755 kernel: kvm-clock: cpu 1, msr 6afaa041, secondary cpu clock
Feb 9 00:52:30.786762 kernel: kvm-guest: setup async PF for cpu 1
Feb 9 00:52:30.786769 kernel: kvm-guest: stealtime: cpu 1, msr 9ae9c0c0
Feb 9 00:52:30.786775 kernel: #2
Feb 9 00:52:30.786782 kernel: kvm-clock: cpu 2, msr 6afaa081, secondary cpu clock
Feb 9 00:52:30.786789 kernel: kvm-guest: setup async PF for cpu 2
Feb 9 00:52:30.786797 kernel: kvm-guest: stealtime: cpu 2, msr 9af1c0c0
Feb 9 00:52:30.786804 kernel: #3
Feb 9 00:52:30.786811 kernel: kvm-clock: cpu 3, msr 6afaa0c1, secondary cpu clock
Feb 9 00:52:30.786817 kernel: kvm-guest: setup async PF for cpu 3
Feb 9 00:52:30.786824 kernel: kvm-guest: stealtime: cpu 3, msr 9af9c0c0
Feb 9 00:52:30.786831 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 00:52:30.786838 kernel: smpboot: Max logical packages: 1
Feb 9 00:52:30.786845 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 9 00:52:30.786852 kernel: devtmpfs: initialized
Feb 9 00:52:30.786860 kernel: x86/mm: Memory block size: 128MB
Feb 9 00:52:30.786867 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 9 00:52:30.786874 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 9 00:52:30.786880 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Feb 9 00:52:30.786887 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 9 00:52:30.786894 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 9 00:52:30.786901 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 00:52:30.786907 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 00:52:30.786914 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 00:52:30.786922 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 00:52:30.786929 kernel: audit: initializing netlink subsys (disabled)
Feb 9 00:52:30.786936 kernel: audit: type=2000 audit(1707439949.925:1): state=initialized audit_enabled=0 res=1
Feb 9 00:52:30.786942 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 00:52:30.786949 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 00:52:30.786955 kernel: cpuidle: using governor menu
Feb 9 00:52:30.786962 kernel: ACPI: bus type PCI registered
Feb 9 00:52:30.786969 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 00:52:30.786976 kernel: dca service started, version 1.12.1
Feb 9 00:52:30.786983 kernel: PCI: Using configuration type 1 for base access
Feb 9 00:52:30.786990 kernel: PCI: Using configuration type 1 for extended access
Feb 9 00:52:30.786997 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 00:52:30.787004 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 00:52:30.787011 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 00:52:30.787017 kernel: ACPI: Added _OSI(Module Device)
Feb 9 00:52:30.787024 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 00:52:30.787031 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 00:52:30.787037 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 00:52:30.787045 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 00:52:30.787052 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 00:52:30.787059 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 00:52:30.787066 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 00:52:30.787072 kernel: ACPI: Interpreter enabled
Feb 9 00:52:30.787079 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 9 00:52:30.787086 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 00:52:30.787093 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 00:52:30.787099 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 9 00:52:30.787107 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 00:52:30.787226 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 00:52:30.787238 kernel: acpiphp: Slot [3] registered
Feb 9 00:52:30.787245 kernel: acpiphp: Slot [4] registered
Feb 9 00:52:30.787264 kernel: acpiphp: Slot [5] registered
Feb 9 00:52:30.787274 kernel: acpiphp: Slot [6] registered
Feb 9 00:52:30.787281 kernel: acpiphp: Slot [7] registered
Feb 9 00:52:30.787288 kernel: acpiphp: Slot [8] registered
Feb 9 00:52:30.787295 kernel: acpiphp: Slot [9] registered
Feb 9 00:52:30.787304 kernel: acpiphp: Slot [10] registered
Feb 9 00:52:30.787310 kernel: acpiphp: Slot [11] registered
Feb 9 00:52:30.787317 kernel: acpiphp: Slot [12] registered
Feb 9 00:52:30.787324 kernel: acpiphp: Slot [13] registered
Feb 9 00:52:30.787330 kernel: acpiphp: Slot [14] registered
Feb 9 00:52:30.787337 kernel: acpiphp: Slot [15] registered
Feb 9 00:52:30.787344 kernel: acpiphp: Slot [16] registered
Feb 9 00:52:30.787350 kernel: acpiphp: Slot [17] registered
Feb 9 00:52:30.787357 kernel: acpiphp: Slot [18] registered
Feb 9 00:52:30.787364 kernel: acpiphp: Slot [19] registered
Feb 9 00:52:30.787371 kernel: acpiphp: Slot [20] registered
Feb 9 00:52:30.787378 kernel: acpiphp: Slot [21] registered
Feb 9 00:52:30.787384 kernel: acpiphp: Slot [22] registered
Feb 9 00:52:30.787391 kernel: acpiphp: Slot [23] registered
Feb 9 00:52:30.787397 kernel: acpiphp: Slot [24] registered
Feb 9 00:52:30.787404 kernel: acpiphp: Slot [25] registered
Feb 9 00:52:30.787410 kernel: acpiphp: Slot [26] registered
Feb 9 00:52:30.787417 kernel: acpiphp: Slot [27] registered
Feb 9 00:52:30.787424 kernel: acpiphp: Slot [28] registered
Feb 9 00:52:30.787433 kernel: acpiphp: Slot [29] registered
Feb 9 00:52:30.787439 kernel: acpiphp: Slot [30] registered
Feb 9 00:52:30.787446 kernel: acpiphp: Slot [31] registered
Feb 9 00:52:30.787452 kernel: PCI host bridge to bus 0000:00
Feb 9 00:52:30.787532 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 00:52:30.787594 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 00:52:30.787655 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 00:52:30.787718 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb 9 00:52:30.787778 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Feb 9 00:52:30.787838 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 00:52:30.787920 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 9 00:52:30.787996 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 9 00:52:30.788071 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 9 00:52:30.788138 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Feb 9 00:52:30.788207 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 9 00:52:30.788312 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 9 00:52:30.788411 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 9 00:52:30.788504 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 9 00:52:30.788606 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 9 00:52:30.788704 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 9 00:52:30.788808 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 9 00:52:30.788917 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb 9 00:52:30.789011 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 9 00:52:30.789104 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Feb 9 00:52:30.789199 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 9 00:52:30.789320 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Feb 9 00:52:30.789418 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 00:52:30.789539 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 00:52:30.789642 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf]
Feb 9 00:52:30.789749 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 9 00:52:30.789850 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Feb 9 00:52:30.789978 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 9 00:52:30.790080 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 9 00:52:30.790179 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 9 00:52:30.790307 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Feb 9 00:52:30.790415 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb 9 00:52:30.790513 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 9 00:52:30.790610 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Feb 9 00:52:30.790706 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Feb 9 00:52:30.790806 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 9 00:52:30.790821 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 9 00:52:30.790834 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 9 00:52:30.790843 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 9 00:52:30.790852 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 9 00:52:30.790861 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 9 00:52:30.790871 kernel: iommu: Default domain type: Translated
Feb 9 00:52:30.790880 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 00:52:30.790968 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 9 00:52:30.791039 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 00:52:30.791107 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 9 00:52:30.791118 kernel: vgaarb: loaded
Feb 9 00:52:30.791125 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 00:52:30.791132 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 00:52:30.791139 kernel: PTP clock support registered
Feb 9 00:52:30.791146 kernel: Registered efivars operations
Feb 9 00:52:30.791152 kernel: PCI: Using ACPI for IRQ routing
Feb 9 00:52:30.791159 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 00:52:30.791166 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 9 00:52:30.791173 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Feb 9 00:52:30.791181 kernel: e820: reserve RAM buffer [mem 0x9b1aa018-0x9bffffff]
Feb 9 00:52:30.791188 kernel: e820: reserve RAM buffer [mem 0x9b3f7018-0x9bffffff]
Feb 9 00:52:30.791194 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Feb 9 00:52:30.791201 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Feb 9 00:52:30.791208 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 9 00:52:30.791215 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 9 00:52:30.791233 kernel: clocksource: Switched to clocksource kvm-clock
Feb 9 00:52:30.791242 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 00:52:30.791262 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 00:52:30.791274 kernel: pnp: PnP ACPI init
Feb 9 00:52:30.791377 kernel: pnp 00:02: [dma 2]
Feb 9 00:52:30.791391 kernel: pnp: PnP ACPI: found 6 devices
Feb 9 00:52:30.791401 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 00:52:30.791411 kernel: NET: Registered PF_INET protocol family
Feb 9 00:52:30.791420 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 00:52:30.791430 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 00:52:30.791440 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 00:52:30.791452 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 00:52:30.791462 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 00:52:30.791471 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 00:52:30.791481 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 00:52:30.791490 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 00:52:30.791499 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 00:52:30.791509 kernel: NET: Registered PF_XDP protocol family
Feb 9 00:52:30.791616 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 9 00:52:30.791731 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 9 00:52:30.791815 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 00:52:30.791905 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 00:52:30.791995 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 00:52:30.792085 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb 9 00:52:30.792173 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Feb 9 00:52:30.792273 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 9 00:52:30.792351 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 9 00:52:30.792424 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 9 00:52:30.792434 kernel: PCI: CLS 0 bytes, default 64
Feb 9 00:52:30.792442 kernel: Initialise system trusted keyrings
Feb 9 00:52:30.792449 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 00:52:30.792457 kernel: Key type asymmetric registered
Feb 9 00:52:30.792466 kernel: Asymmetric key parser 'x509' registered
Feb 9 00:52:30.792475 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 00:52:30.792485 kernel: io scheduler mq-deadline registered
Feb 9 00:52:30.792495 kernel: io scheduler kyber registered
Feb 9 00:52:30.792507 kernel: io scheduler bfq registered
Feb 9 00:52:30.792515 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 00:52:30.792523 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 9 00:52:30.792530 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 9 00:52:30.792537 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 9 00:52:30.792544 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 00:52:30.792552 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 00:52:30.792559 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 9 00:52:30.792566 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 9 00:52:30.792574 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 9 00:52:30.792673 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb 9 00:52:30.792694 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 9 00:52:30.792785 kernel: rtc_cmos 00:05: registered as rtc0
Feb 9 00:52:30.792881 kernel: rtc_cmos 00:05: setting system clock to 2024-02-09T00:52:30 UTC (1707439950)
Feb 9 00:52:30.792970 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 9 00:52:30.792984 kernel: efifb: probing for efifb
Feb 9 00:52:30.792994 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 9 00:52:30.793004 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 9 00:52:30.793014 kernel: efifb: scrolling: redraw
Feb 9 00:52:30.793024 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 00:52:30.793034 kernel: Console: switching to colour frame buffer device 160x50
Feb 9 00:52:30.793044 kernel: fb0: EFI VGA frame buffer device
Feb 9 00:52:30.793056 kernel: pstore: Registered efi as persistent store backend
Feb 9 00:52:30.793066 kernel: NET: Registered PF_INET6 protocol family
Feb 9 00:52:30.793076 kernel: Segment Routing with IPv6
Feb 9 00:52:30.793086 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 00:52:30.793096 kernel: NET: Registered PF_PACKET protocol family
Feb 9 00:52:30.793106 kernel: Key type dns_resolver registered
Feb 9 00:52:30.793116 kernel: IPI shorthand broadcast: enabled
Feb 9 00:52:30.793127 kernel: sched_clock: Marking stable (364368280, 93497708)->(480016891, -22150903)
Feb 9 00:52:30.793137 kernel: registered taskstats version 1
Feb 9 00:52:30.793147 kernel: Loading compiled-in X.509 certificates
Feb 9 00:52:30.793159 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 9 00:52:30.793169 kernel: Key type .fscrypt registered
Feb 9 00:52:30.793179 kernel: Key type fscrypt-provisioning registered
Feb 9 00:52:30.793189 kernel: pstore: Using crash dump compression: deflate
Feb 9 00:52:30.793199 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 00:52:30.793210 kernel: ima: Allocated hash algorithm: sha1
Feb 9 00:52:30.793229 kernel: ima: No architecture policies found
Feb 9 00:52:30.793240 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 00:52:30.793264 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 00:52:30.793277 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 00:52:30.793288 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 00:52:30.793298 kernel: Run /init as init process
Feb 9 00:52:30.793308 kernel: with arguments:
Feb 9 00:52:30.793318 kernel: /init
Feb 9 00:52:30.793327 kernel: with environment:
Feb 9 00:52:30.793336 kernel: HOME=/
Feb 9 00:52:30.793346 kernel: TERM=linux
Feb 9 00:52:30.793355 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 00:52:30.793370 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 00:52:30.793383 systemd[1]: Detected virtualization kvm.
Feb 9 00:52:30.793395 systemd[1]: Detected architecture x86-64.
Feb 9 00:52:30.793405 systemd[1]: Running in initrd.
Feb 9 00:52:30.793416 systemd[1]: No hostname configured, using default hostname.
Feb 9 00:52:30.793426 systemd[1]: Hostname set to .
Feb 9 00:52:30.793437 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 00:52:30.793449 systemd[1]: Queued start job for default target initrd.target.
Feb 9 00:52:30.793459 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 00:52:30.793469 systemd[1]: Reached target cryptsetup.target.
Feb 9 00:52:30.793479 systemd[1]: Reached target paths.target.
Feb 9 00:52:30.793489 systemd[1]: Reached target slices.target.
Feb 9 00:52:30.793498 systemd[1]: Reached target swap.target.
Feb 9 00:52:30.793508 systemd[1]: Reached target timers.target.
Feb 9 00:52:30.793522 systemd[1]: Listening on iscsid.socket.
Feb 9 00:52:30.793532 systemd[1]: Listening on iscsiuio.socket.
Feb 9 00:52:30.793544 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 00:52:30.793555 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 00:52:30.793566 systemd[1]: Listening on systemd-journald.socket.
Feb 9 00:52:30.793577 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 00:52:30.793588 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 00:52:30.793599 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 00:52:30.793611 systemd[1]: Reached target sockets.target.
Feb 9 00:52:30.793624 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 00:52:30.793635 systemd[1]: Finished network-cleanup.service.
Feb 9 00:52:30.793645 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 00:52:30.793655 systemd[1]: Starting systemd-journald.service...
Feb 9 00:52:30.793666 systemd[1]: Starting systemd-modules-load.service...
Feb 9 00:52:30.793677 systemd[1]: Starting systemd-resolved.service...
Feb 9 00:52:30.793687 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 00:52:30.793697 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 00:52:30.793716 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 00:52:30.793756 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 00:52:30.793770 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 00:52:30.793791 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 00:52:30.793815 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 00:52:30.793845 systemd-journald[197]: Journal started
Feb 9 00:52:30.793904 systemd-journald[197]: Runtime Journal (/run/log/journal/00f76943063a46bb89dc2f800122fc15) is 6.0M, max 48.4M, 42.4M free.
Feb 9 00:52:30.774829 systemd-modules-load[198]: Inserted module 'overlay'
Feb 9 00:52:30.800033 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 00:52:30.800051 systemd[1]: Started systemd-journald.service.
Feb 9 00:52:30.800066 kernel: audit: type=1130 audit(1707439950.796:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:30.800084 kernel: Bridge firewalling registered
Feb 9 00:52:30.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:30.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:30.794428 systemd-resolved[199]: Positive Trust Anchors:
Feb 9 00:52:30.807038 kernel: audit: type=1130 audit(1707439950.799:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:30.807057 kernel: audit: type=1130 audit(1707439950.802:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:30.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:30.794436 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 00:52:30.794463 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 00:52:30.796839 systemd-resolved[199]: Defaulting to hostname 'linux'.
Feb 9 00:52:30.799365 systemd[1]: Started systemd-resolved.service.
Feb 9 00:52:30.800201 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 00:52:30.815296 dracut-cmdline[215]: dracut-dracut-053
Feb 9 00:52:30.802460 systemd-modules-load[198]: Inserted module 'br_netfilter'
Feb 9 00:52:30.816891 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 00:52:30.802892 systemd[1]: Reached target nss-lookup.target.
Feb 9 00:52:30.806308 systemd[1]: Starting dracut-cmdline.service...
Feb 9 00:52:30.823288 kernel: SCSI subsystem initialized
Feb 9 00:52:30.835835 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 00:52:30.835870 kernel: device-mapper: uevent: version 1.0.3
Feb 9 00:52:30.835890 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 00:52:30.838714 systemd-modules-load[198]: Inserted module 'dm_multipath'
Feb 9 00:52:30.839576 systemd[1]: Finished systemd-modules-load.service.
Feb 9 00:52:30.843525 kernel: audit: type=1130 audit(1707439950.840:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:30.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:30.840933 systemd[1]: Starting systemd-sysctl.service...
Feb 9 00:52:30.850172 systemd[1]: Finished systemd-sysctl.service.
Feb 9 00:52:30.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:30.853290 kernel: audit: type=1130 audit(1707439950.850:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:30.871266 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 00:52:30.881272 kernel: iscsi: registered transport (tcp)
Feb 9 00:52:30.899296 kernel: iscsi: registered transport (qla4xxx)
Feb 9 00:52:30.899310 kernel: QLogic iSCSI HBA Driver
Feb 9 00:52:30.927356 systemd[1]: Finished dracut-cmdline.service.
Feb 9 00:52:30.930763 kernel: audit: type=1130 audit(1707439950.927:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:30.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:30.928600 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 00:52:30.974272 kernel: raid6: avx2x4 gen() 31028 MB/s
Feb 9 00:52:30.991262 kernel: raid6: avx2x4 xor() 7978 MB/s
Feb 9 00:52:31.008258 kernel: raid6: avx2x2 gen() 32791 MB/s
Feb 9 00:52:31.025260 kernel: raid6: avx2x2 xor() 19279 MB/s
Feb 9 00:52:31.042265 kernel: raid6: avx2x1 gen() 26645 MB/s
Feb 9 00:52:31.059261 kernel: raid6: avx2x1 xor() 15350 MB/s
Feb 9 00:52:31.076264 kernel: raid6: sse2x4 gen() 14863 MB/s
Feb 9 00:52:31.093262 kernel: raid6: sse2x4 xor() 7652 MB/s
Feb 9 00:52:31.110263 kernel: raid6: sse2x2 gen() 16220 MB/s
Feb 9 00:52:31.127258 kernel: raid6: sse2x2 xor() 9736 MB/s
Feb 9 00:52:31.144261 kernel: raid6: sse2x1 gen() 12388 MB/s
Feb 9 00:52:31.161707 kernel: raid6: sse2x1 xor() 7835 MB/s
Feb 9 00:52:31.161736 kernel: raid6: using algorithm avx2x2 gen() 32791 MB/s
Feb 9 00:52:31.161746 kernel: raid6: .... xor() 19279 MB/s, rmw enabled
Feb 9 00:52:31.161755 kernel: raid6: using avx2x2 recovery algorithm
Feb 9 00:52:31.173270 kernel: xor: automatically using best checksumming function avx
Feb 9 00:52:31.260276 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 9 00:52:31.267016 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 00:52:31.270729 kernel: audit: type=1130 audit(1707439951.267:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:31.270748 kernel: audit: type=1334 audit(1707439951.269:9): prog-id=7 op=LOAD
Feb 9 00:52:31.270758 kernel: audit: type=1334 audit(1707439951.270:10): prog-id=8 op=LOAD
Feb 9 00:52:31.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:31.269000 audit: BPF prog-id=7 op=LOAD
Feb 9 00:52:31.270000 audit: BPF prog-id=8 op=LOAD
Feb 9 00:52:31.271085 systemd[1]: Starting systemd-udevd.service...
Feb 9 00:52:31.282624 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Feb 9 00:52:31.286932 systemd[1]: Started systemd-udevd.service.
Feb 9 00:52:31.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:31.288069 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 00:52:31.297036 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Feb 9 00:52:31.318965 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 00:52:31.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:31.320122 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 00:52:31.351061 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 00:52:31.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:31.376272 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 9 00:52:31.382642 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 00:52:31.382691 kernel: GPT:9289727 != 19775487
Feb 9 00:52:31.382706 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 00:52:31.382715 kernel: GPT:9289727 != 19775487
Feb 9 00:52:31.382724 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 00:52:31.382732 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 00:52:31.388269 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 00:52:31.389295 kernel: libata version 3.00 loaded.
Feb 9 00:52:31.392441 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 9 00:52:31.392737 kernel: scsi host0: ata_piix
Feb 9 00:52:31.392837 kernel: scsi host1: ata_piix
Feb 9 00:52:31.393698 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Feb 9 00:52:31.393718 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Feb 9 00:52:31.402487 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 9 00:52:31.402509 kernel: AES CTR mode by8 optimization enabled
Feb 9 00:52:31.412029 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 00:52:31.419002 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (469)
Feb 9 00:52:31.416497 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 00:52:31.417302 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 00:52:31.431040 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 00:52:31.435268 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 00:52:31.436491 systemd[1]: Starting disk-uuid.service...
Feb 9 00:52:31.443470 disk-uuid[510]: Primary Header is updated.
Feb 9 00:52:31.443470 disk-uuid[510]: Secondary Entries is updated.
Feb 9 00:52:31.443470 disk-uuid[510]: Secondary Header is updated.
Feb 9 00:52:31.446265 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 00:52:31.449260 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 00:52:31.551285 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 9 00:52:31.551347 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 9 00:52:31.581279 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 9 00:52:31.581463 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 9 00:52:31.598278 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Feb 9 00:52:32.449964 disk-uuid[511]: The operation has completed successfully.
Feb 9 00:52:32.450993 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 00:52:32.474398 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 00:52:32.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:32.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:32.474475 systemd[1]: Finished disk-uuid.service.
Feb 9 00:52:32.478629 systemd[1]: Starting verity-setup.service...
Feb 9 00:52:32.490282 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 9 00:52:32.504795 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 00:52:32.506524 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 00:52:32.508431 systemd[1]: Finished verity-setup.service.
Feb 9 00:52:32.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:32.562027 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 00:52:32.562965 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 00:52:32.562546 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 00:52:32.563992 systemd[1]: Starting ignition-setup.service...
Feb 9 00:52:32.565419 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 00:52:32.573650 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 00:52:32.573672 kernel: BTRFS info (device vda6): using free space tree
Feb 9 00:52:32.573681 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 00:52:32.580396 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 00:52:32.587307 systemd[1]: Finished ignition-setup.service.
Feb 9 00:52:32.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:32.588207 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 00:52:32.620213 ignition[626]: Ignition 2.14.0
Feb 9 00:52:32.620224 ignition[626]: Stage: fetch-offline
Feb 9 00:52:32.620323 ignition[626]: no configs at "/usr/lib/ignition/base.d"
Feb 9 00:52:32.620333 ignition[626]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 00:52:32.620441 ignition[626]: parsed url from cmdline: ""
Feb 9 00:52:32.620445 ignition[626]: no config URL provided
Feb 9 00:52:32.620452 ignition[626]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 00:52:32.620459 ignition[626]: no config at "/usr/lib/ignition/user.ign"
Feb 9 00:52:32.620476 ignition[626]: op(1): [started] loading QEMU firmware config module
Feb 9 00:52:32.620481 ignition[626]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 9 00:52:32.624182 ignition[626]: op(1): [finished] loading QEMU firmware config module
Feb 9 00:52:32.627222 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 00:52:32.628994 systemd[1]: Starting systemd-networkd.service...
Feb 9 00:52:32.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:32.628000 audit: BPF prog-id=9 op=LOAD
Feb 9 00:52:32.681786 ignition[626]: parsing config with SHA512: 6d4fcdb79058d3c0b4a3a59b5d7eabb46e22dc6dd8d3b0d9ee8441260088f0519348304522eb3416bccf9beff4dc907a6392c54e9147f41e5a609472ca349126
Feb 9 00:52:32.698413 systemd-networkd[705]: lo: Link UP
Feb 9 00:52:32.698424 systemd-networkd[705]: lo: Gained carrier
Feb 9 00:52:32.699784 systemd-networkd[705]: Enumeration completed
Feb 9 00:52:32.700568 systemd[1]: Started systemd-networkd.service.
Feb 9 00:52:32.700990 systemd[1]: Reached target network.target.
Feb 9 00:52:32.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:32.701791 systemd[1]: Starting iscsiuio.service...
Feb 9 00:52:32.704138 systemd-networkd[705]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 00:52:32.706185 systemd-networkd[705]: eth0: Link UP
Feb 9 00:52:32.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:32.706190 systemd-networkd[705]: eth0: Gained carrier
Feb 9 00:52:32.706444 systemd[1]: Started iscsiuio.service.
Feb 9 00:52:32.708334 systemd[1]: Starting iscsid.service...
Feb 9 00:52:32.710894 iscsid[710]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 00:52:32.710894 iscsid[710]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 9 00:52:32.710894 iscsid[710]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 00:52:32.710894 iscsid[710]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 00:52:32.710894 iscsid[710]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 00:52:32.710894 iscsid[710]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 00:52:32.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:32.712118 systemd[1]: Started iscsid.service.
Feb 9 00:52:32.713282 systemd[1]: Starting dracut-initqueue.service...
Feb 9 00:52:32.722314 systemd-networkd[705]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 00:52:32.723724 unknown[626]: fetched base config from "system"
Feb 9 00:52:32.723730 unknown[626]: fetched user config from "qemu"
Feb 9 00:52:32.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:32.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:32.724790 ignition[626]: fetch-offline: fetch-offline passed
Feb 9 00:52:32.726748 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 00:52:32.725715 ignition[626]: Ignition finished successfully
Feb 9 00:52:32.727549 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 9 00:52:32.728129 systemd[1]: Starting ignition-kargs.service...
Feb 9 00:52:32.735828 ignition[719]: Ignition 2.14.0
Feb 9 00:52:32.728824 systemd[1]: Finished dracut-initqueue.service.
Feb 9 00:52:32.735835 ignition[719]: Stage: kargs
Feb 9 00:52:32.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:32.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:32.729507 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 00:52:32.735931 ignition[719]: no configs at "/usr/lib/ignition/base.d"
Feb 9 00:52:32.730098 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 00:52:32.735942 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 00:52:32.730728 systemd[1]: Reached target remote-fs.target.
Feb 9 00:52:32.737459 ignition[719]: kargs: kargs passed
Feb 9 00:52:32.731771 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 00:52:32.737500 ignition[719]: Ignition finished successfully
Feb 9 00:52:32.738771 systemd[1]: Finished ignition-kargs.service.
Feb 9 00:52:32.739556 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 00:52:32.744860 systemd[1]: Starting ignition-disks.service...
Feb 9 00:52:32.750990 ignition[730]: Ignition 2.14.0
Feb 9 00:52:32.751001 ignition[730]: Stage: disks
Feb 9 00:52:32.751092 ignition[730]: no configs at "/usr/lib/ignition/base.d"
Feb 9 00:52:32.751104 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 00:52:32.752222 ignition[730]: disks: disks passed
Feb 9 00:52:32.752267 ignition[730]: Ignition finished successfully
Feb 9 00:52:32.754810 systemd[1]: Finished ignition-disks.service.
Feb 9 00:52:32.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:32.755231 systemd[1]: Reached target initrd-root-device.target.
Feb 9 00:52:32.756147 systemd[1]: Reached target local-fs-pre.target.
Feb 9 00:52:32.756499 systemd[1]: Reached target local-fs.target.
Feb 9 00:52:32.758412 systemd[1]: Reached target sysinit.target.
Feb 9 00:52:32.759400 systemd[1]: Reached target basic.target.
Feb 9 00:52:32.761158 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 00:52:32.771331 systemd-fsck[738]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb 9 00:52:32.775572 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 00:52:32.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:32.777793 systemd[1]: Mounting sysroot.mount...
Feb 9 00:52:32.783267 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 00:52:32.783630 systemd[1]: Mounted sysroot.mount.
Feb 9 00:52:32.784591 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 00:52:32.786286 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 00:52:32.787553 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 00:52:32.787583 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 00:52:32.787601 systemd[1]: Reached target ignition-diskful.target.
Feb 9 00:52:32.791421 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 00:52:32.792851 systemd[1]: Starting initrd-setup-root.service...
Feb 9 00:52:32.796375 initrd-setup-root[748]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 00:52:32.799700 initrd-setup-root[756]: cut: /sysroot/etc/group: No such file or directory
Feb 9 00:52:32.802140 initrd-setup-root[764]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 00:52:32.804887 initrd-setup-root[772]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 00:52:32.826877 systemd[1]: Finished initrd-setup-root.service.
Feb 9 00:52:32.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:32.828601 systemd[1]: Starting ignition-mount.service...
Feb 9 00:52:32.830045 systemd[1]: Starting sysroot-boot.service...
Feb 9 00:52:32.833449 bash[789]: umount: /sysroot/usr/share/oem: not mounted.
Feb 9 00:52:32.840203 ignition[790]: INFO : Ignition 2.14.0
Feb 9 00:52:32.840948 ignition[790]: INFO : Stage: mount
Feb 9 00:52:32.840948 ignition[790]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 00:52:32.840948 ignition[790]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 00:52:32.843801 ignition[790]: INFO : mount: mount passed
Feb 9 00:52:32.844345 ignition[790]: INFO : Ignition finished successfully
Feb 9 00:52:32.845128 systemd[1]: Finished ignition-mount.service.
Feb 9 00:52:32.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:32.848545 systemd[1]: Finished sysroot-boot.service.
Feb 9 00:52:32.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:33.514390 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 00:52:33.519262 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800)
Feb 9 00:52:33.519286 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 00:52:33.520569 kernel: BTRFS info (device vda6): using free space tree
Feb 9 00:52:33.520589 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 00:52:33.523779 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 00:52:33.524863 systemd[1]: Starting ignition-files.service...
Feb 9 00:52:33.537306 ignition[820]: INFO : Ignition 2.14.0
Feb 9 00:52:33.537306 ignition[820]: INFO : Stage: files
Feb 9 00:52:33.538410 ignition[820]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 00:52:33.538410 ignition[820]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 00:52:33.540068 ignition[820]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 00:52:33.540068 ignition[820]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 00:52:33.540068 ignition[820]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 00:52:33.542966 ignition[820]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 00:52:33.542966 ignition[820]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 00:52:33.542966 ignition[820]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 00:52:33.542966 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 00:52:33.542966 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 9 00:52:33.541299 unknown[820]: wrote ssh authorized keys file for user: core
Feb 9 00:52:33.666312 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 00:52:33.729489 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 00:52:33.731029 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 00:52:33.731029 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 9 00:52:34.108207 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 00:52:34.181415 ignition[820]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 9 00:52:34.183531 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 00:52:34.183531 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 00:52:34.183531 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 9 00:52:34.475340 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 00:52:34.565360 systemd-networkd[705]: eth0: Gained IPv6LL
Feb 9 00:52:34.646690 ignition[820]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 9 00:52:34.648886 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 00:52:34.648886 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 00:52:34.648886 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 00:52:34.648886 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 00:52:34.648886 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl: attempt #1
Feb 9 00:52:34.715231 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 00:52:34.924510 ignition[820]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3
Feb 9 00:52:34.924510 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 00:52:34.927671 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 00:52:34.927671 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1
Feb 9 00:52:34.973700 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 00:52:35.366737 ignition[820]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75
Feb 9 00:52:35.368931 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 00:52:35.368931 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 00:52:35.368931 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Feb 9 00:52:35.416218 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 9 00:52:35.597443 ignition[820]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Feb 9 00:52:35.599528 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 00:52:35.599528 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 00:52:35.599528 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 9 00:52:36.030183 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 9 00:52:36.091494 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 00:52:36.092815 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(b):
[started] writing file "/sysroot/home/core/install.sh" Feb 9 00:52:36.093947 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 00:52:36.095060 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 00:52:36.096212 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 00:52:36.097346 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 00:52:36.098489 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 00:52:36.099638 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 00:52:36.101144 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 00:52:36.102346 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 00:52:36.103533 ignition[820]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 00:52:36.104700 ignition[820]: INFO : files: op(10): [started] processing unit "prepare-cni-plugins.service" Feb 9 00:52:36.105625 ignition[820]: INFO : files: op(10): op(11): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 00:52:36.107036 ignition[820]: INFO : files: op(10): op(11): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 00:52:36.107036 ignition[820]: INFO : files: op(10): [finished] processing unit 
"prepare-cni-plugins.service" Feb 9 00:52:36.109265 ignition[820]: INFO : files: op(12): [started] processing unit "prepare-critools.service" Feb 9 00:52:36.109265 ignition[820]: INFO : files: op(12): op(13): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 00:52:36.111404 ignition[820]: INFO : files: op(12): op(13): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 00:52:36.111404 ignition[820]: INFO : files: op(12): [finished] processing unit "prepare-critools.service" Feb 9 00:52:36.111404 ignition[820]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Feb 9 00:52:36.111404 ignition[820]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 00:52:36.115579 ignition[820]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 00:52:36.115579 ignition[820]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Feb 9 00:52:36.115579 ignition[820]: INFO : files: op(16): [started] processing unit "coreos-metadata.service" Feb 9 00:52:36.115579 ignition[820]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 00:52:36.119738 ignition[820]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 00:52:36.119738 ignition[820]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service" Feb 9 00:52:36.119738 ignition[820]: INFO : files: op(18): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 00:52:36.119738 ignition[820]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 
00:52:36.119738 ignition[820]: INFO : files: op(19): [started] setting preset to enabled for "prepare-critools.service" Feb 9 00:52:36.125658 ignition[820]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 00:52:36.125658 ignition[820]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Feb 9 00:52:36.125658 ignition[820]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 00:52:36.125658 ignition[820]: INFO : files: op(1b): [started] setting preset to disabled for "coreos-metadata.service" Feb 9 00:52:36.125658 ignition[820]: INFO : files: op(1b): op(1c): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 00:52:36.141393 ignition[820]: INFO : files: op(1b): op(1c): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 00:52:36.142652 ignition[820]: INFO : files: op(1b): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 00:52:36.142652 ignition[820]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 00:52:36.142652 ignition[820]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 00:52:36.142652 ignition[820]: INFO : files: files passed Feb 9 00:52:36.142652 ignition[820]: INFO : Ignition finished successfully Feb 9 00:52:36.155447 kernel: kauditd_printk_skb: 21 callbacks suppressed Feb 9 00:52:36.155473 kernel: audit: type=1130 audit(1707439956.144:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:52:36.155484 kernel: audit: type=1130 audit(1707439956.151:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:52:36.155495 kernel: audit: type=1130 audit(1707439956.155:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:52:36.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:52:36.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:52:36.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:52:36.142683 systemd[1]: Finished ignition-files.service. Feb 9 00:52:36.160929 kernel: audit: type=1131 audit(1707439956.155:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:52:36.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:52:36.145135 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Feb 9 00:52:36.161737 initrd-setup-root-after-ignition[844]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 9 00:52:36.149405 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 00:52:36.164066 initrd-setup-root-after-ignition[846]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 00:52:36.149971 systemd[1]: Starting ignition-quench.service...
Feb 9 00:52:36.151181 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 00:52:36.152377 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 00:52:36.152436 systemd[1]: Finished ignition-quench.service.
Feb 9 00:52:36.155520 systemd[1]: Reached target ignition-complete.target.
Feb 9 00:52:36.160380 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 00:52:36.170690 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 00:52:36.170764 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 00:52:36.177625 kernel: audit: type=1130 audit(1707439956.172:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.178277 kernel: audit: type=1131 audit(1707439956.172:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.172422 systemd[1]: Reached target initrd-fs.target.
Feb 9 00:52:36.177610 systemd[1]: Reached target initrd.target.
Feb 9 00:52:36.178293 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 00:52:36.178806 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 00:52:36.186873 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 00:52:36.190913 kernel: audit: type=1130 audit(1707439956.187:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.188105 systemd[1]: Starting initrd-cleanup.service...
Feb 9 00:52:36.195169 systemd[1]: Stopped target nss-lookup.target.
Feb 9 00:52:36.195925 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 00:52:36.197360 systemd[1]: Stopped target timers.target.
Feb 9 00:52:36.198759 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 00:52:36.203427 kernel: audit: type=1131 audit(1707439956.199:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.198866 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 00:52:36.200184 systemd[1]: Stopped target initrd.target.
Feb 9 00:52:36.203505 systemd[1]: Stopped target basic.target.
Feb 9 00:52:36.204831 systemd[1]: Stopped target ignition-complete.target.
Feb 9 00:52:36.206214 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 00:52:36.207601 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 00:52:36.209124 systemd[1]: Stopped target remote-fs.target.
Feb 9 00:52:36.210528 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 00:52:36.211992 systemd[1]: Stopped target sysinit.target.
Feb 9 00:52:36.213353 systemd[1]: Stopped target local-fs.target.
Feb 9 00:52:36.214711 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 00:52:36.216104 systemd[1]: Stopped target swap.target.
Feb 9 00:52:36.221877 kernel: audit: type=1131 audit(1707439956.218:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.217352 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 00:52:36.217435 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 00:52:36.226770 kernel: audit: type=1131 audit(1707439956.223:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.218777 systemd[1]: Stopped target cryptsetup.target.
Feb 9 00:52:36.221896 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 00:52:36.221978 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 00:52:36.223523 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 00:52:36.223603 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 00:52:36.226858 systemd[1]: Stopped target paths.target.
Feb 9 00:52:36.227065 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 00:52:36.231275 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 00:52:36.232300 systemd[1]: Stopped target slices.target.
Feb 9 00:52:36.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.233545 systemd[1]: Stopped target sockets.target.
Feb 9 00:52:36.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.235170 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 00:52:36.240949 iscsid[710]: iscsid shutting down.
Feb 9 00:52:36.235273 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 00:52:36.236771 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 00:52:36.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.246421 ignition[861]: INFO : Ignition 2.14.0
Feb 9 00:52:36.246421 ignition[861]: INFO : Stage: umount
Feb 9 00:52:36.246421 ignition[861]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 00:52:36.246421 ignition[861]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 00:52:36.246421 ignition[861]: INFO : umount: umount passed
Feb 9 00:52:36.246421 ignition[861]: INFO : Ignition finished successfully
Feb 9 00:52:36.236848 systemd[1]: Stopped ignition-files.service.
Feb 9 00:52:36.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.238664 systemd[1]: Stopping ignition-mount.service...
Feb 9 00:52:36.239419 systemd[1]: Stopping iscsid.service...
Feb 9 00:52:36.240877 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 00:52:36.241011 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 00:52:36.242507 systemd[1]: Stopping sysroot-boot.service...
Feb 9 00:52:36.243793 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 00:52:36.243938 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 00:52:36.245845 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 00:52:36.245982 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 00:52:36.260938 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 00:52:36.262373 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 00:52:36.263155 systemd[1]: Stopped iscsid.service.
Feb 9 00:52:36.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.264664 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 00:52:36.265341 systemd[1]: Stopped ignition-mount.service.
Feb 9 00:52:36.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.266601 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 00:52:36.267262 systemd[1]: Stopped sysroot-boot.service.
Feb 9 00:52:36.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.268550 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 00:52:36.269157 systemd[1]: Closed iscsid.socket.
Feb 9 00:52:36.270087 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 00:52:36.270119 systemd[1]: Stopped ignition-disks.service.
Feb 9 00:52:36.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.271698 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 00:52:36.271729 systemd[1]: Stopped ignition-kargs.service.
Feb 9 00:52:36.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.273343 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 00:52:36.273371 systemd[1]: Stopped ignition-setup.service.
Feb 9 00:52:36.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.274979 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 00:52:36.275624 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 00:52:36.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.276766 systemd[1]: Stopping iscsiuio.service...
Feb 9 00:52:36.277823 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 00:52:36.278504 systemd[1]: Finished initrd-cleanup.service.
Feb 9 00:52:36.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.279706 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 00:52:36.280359 systemd[1]: Stopped iscsiuio.service.
Feb 9 00:52:36.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.281979 systemd[1]: Stopped target network.target.
Feb 9 00:52:36.283026 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 00:52:36.283051 systemd[1]: Closed iscsiuio.socket.
Feb 9 00:52:36.284499 systemd[1]: Stopping systemd-networkd.service...
Feb 9 00:52:36.285687 systemd[1]: Stopping systemd-resolved.service...
Feb 9 00:52:36.291278 systemd-networkd[705]: eth0: DHCPv6 lease lost
Feb 9 00:52:36.292155 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 00:52:36.292241 systemd[1]: Stopped systemd-networkd.service.
Feb 9 00:52:36.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.293466 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 00:52:36.293487 systemd[1]: Closed systemd-networkd.socket.
Feb 9 00:52:36.294321 systemd[1]: Stopping network-cleanup.service...
Feb 9 00:52:36.296585 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 00:52:36.296632 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 00:52:36.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.298000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 00:52:36.298543 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 00:52:36.298578 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 00:52:36.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.300224 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 00:52:36.300273 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 00:52:36.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.302100 systemd[1]: Stopping systemd-udevd.service...
Feb 9 00:52:36.303472 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 00:52:36.303828 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 00:52:36.303895 systemd[1]: Stopped systemd-resolved.service.
Feb 9 00:52:36.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.308910 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 00:52:36.308994 systemd[1]: Stopped network-cleanup.service.
Feb 9 00:52:36.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.310000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 00:52:36.310625 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 00:52:36.310721 systemd[1]: Stopped systemd-udevd.service.
Feb 9 00:52:36.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.312388 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 00:52:36.312429 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 00:52:36.314196 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 00:52:36.314225 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 00:52:36.315877 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 00:52:36.315915 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 00:52:36.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.317599 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 00:52:36.317629 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 00:52:36.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.319177 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 00:52:36.319207 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 00:52:36.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.321512 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 00:52:36.322671 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 00:52:36.322713 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 00:52:36.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.325765 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 00:52:36.326545 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 00:52:36.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:36.327820 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 00:52:36.329556 systemd[1]: Starting initrd-switch-root.service...
Feb 9 00:52:36.344923 systemd[1]: Switching root.
Feb 9 00:52:36.365402 systemd-journald[197]: Journal stopped
Feb 9 00:52:39.274546 systemd-journald[197]: Received SIGTERM from PID 1 (systemd).
Feb 9 00:52:39.274596 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 00:52:39.274608 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 00:52:39.274621 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 00:52:39.274630 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 00:52:39.274641 kernel: SELinux: policy capability open_perms=1
Feb 9 00:52:39.274727 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 00:52:39.274737 kernel: SELinux: policy capability always_check_network=0
Feb 9 00:52:39.274746 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 00:52:39.274756 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 00:52:39.274765 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 00:52:39.274777 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 00:52:39.274787 systemd[1]: Successfully loaded SELinux policy in 33.994ms.
Feb 9 00:52:39.274807 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.233ms.
Feb 9 00:52:39.274819 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 00:52:39.274830 systemd[1]: Detected virtualization kvm.
Feb 9 00:52:39.274841 systemd[1]: Detected architecture x86-64.
Feb 9 00:52:39.274851 systemd[1]: Detected first boot.
Feb 9 00:52:39.274861 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 00:52:39.274871 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 00:52:39.274881 systemd[1]: Populated /etc with preset unit settings.
Feb 9 00:52:39.274892 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 00:52:39.274908 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 00:52:39.274919 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 00:52:39.274930 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 00:52:39.274939 systemd[1]: Stopped initrd-switch-root.service.
Feb 9 00:52:39.274949 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 00:52:39.274959 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 00:52:39.274969 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 00:52:39.274980 systemd[1]: Created slice system-getty.slice.
Feb 9 00:52:39.274990 systemd[1]: Created slice system-modprobe.slice.
Feb 9 00:52:39.275007 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 00:52:39.275019 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 00:52:39.275029 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 00:52:39.275039 systemd[1]: Created slice user.slice.
Feb 9 00:52:39.275049 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 00:52:39.275059 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 00:52:39.275070 systemd[1]: Set up automount boot.automount.
Feb 9 00:52:39.275081 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 00:52:39.275092 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 00:52:39.275102 systemd[1]: Stopped target initrd-fs.target.
Feb 9 00:52:39.275111 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 00:52:39.275121 systemd[1]: Reached target integritysetup.target.
Feb 9 00:52:39.275131 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 00:52:39.275141 systemd[1]: Reached target remote-fs.target.
Feb 9 00:52:39.275150 systemd[1]: Reached target slices.target.
Feb 9 00:52:39.275162 systemd[1]: Reached target swap.target.
Feb 9 00:52:39.275172 systemd[1]: Reached target torcx.target.
Feb 9 00:52:39.275182 systemd[1]: Reached target veritysetup.target.
Feb 9 00:52:39.275191 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 00:52:39.275201 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 00:52:39.275211 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 00:52:39.275221 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 00:52:39.275230 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 00:52:39.275240 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 00:52:39.275259 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 00:52:39.275271 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 00:52:39.275281 systemd[1]: Mounting media.mount...
Feb 9 00:52:39.275292 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 00:52:39.275302 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 00:52:39.275312 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 00:52:39.275322 systemd[1]: Mounting tmp.mount...
Feb 9 00:52:39.275332 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 00:52:39.275342 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 00:52:39.275351 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 00:52:39.275363 systemd[1]: Starting modprobe@configfs.service...
Feb 9 00:52:39.275372 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 00:52:39.275383 systemd[1]: Starting modprobe@drm.service...
Feb 9 00:52:39.275393 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 00:52:39.275403 systemd[1]: Starting modprobe@fuse.service...
Feb 9 00:52:39.275412 systemd[1]: Starting modprobe@loop.service...
Feb 9 00:52:39.275422 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 00:52:39.275432 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 00:52:39.275443 systemd[1]: Stopped systemd-fsck-root.service.
Feb 9 00:52:39.275454 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 00:52:39.275468 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 00:52:39.275480 systemd[1]: Stopped systemd-journald.service.
Feb 9 00:52:39.275490 kernel: loop: module loaded
Feb 9 00:52:39.275501 kernel: fuse: init (API version 7.34)
Feb 9 00:52:39.275513 systemd[1]: Starting systemd-journald.service...
Feb 9 00:52:39.275522 systemd[1]: Starting systemd-modules-load.service...
Feb 9 00:52:39.275533 systemd[1]: Starting systemd-network-generator.service...
Feb 9 00:52:39.275544 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 00:52:39.275554 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 00:52:39.275563 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 00:52:39.275573 systemd[1]: Stopped verity-setup.service.
Feb 9 00:52:39.275583 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 00:52:39.275594 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 00:52:39.275606 systemd-journald[968]: Journal started
Feb 9 00:52:39.275642 systemd-journald[968]: Runtime Journal (/run/log/journal/00f76943063a46bb89dc2f800122fc15) is 6.0M, max 48.4M, 42.4M free.
Feb 9 00:52:36.417000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 00:52:37.148000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 00:52:37.148000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 00:52:37.148000 audit: BPF prog-id=10 op=LOAD
Feb 9 00:52:37.148000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 00:52:37.148000 audit: BPF prog-id=11 op=LOAD
Feb 9 00:52:37.148000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 00:52:37.179000 audit[894]: AVC avc: denied { associate } for pid=894 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 00:52:37.179000 audit[894]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001858e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=877 pid=894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 00:52:37.179000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 00:52:37.180000 audit[894]: AVC avc: denied { associate } for pid=894 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 00:52:37.180000 audit[894]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001859b9 a2=1ed a3=0 items=2 ppid=877 pid=894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 00:52:37.180000 audit: CWD cwd="/"
Feb 9 00:52:37.180000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:37.180000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:37.180000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 00:52:39.182000 audit: BPF prog-id=12 op=LOAD
Feb 9 00:52:39.182000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 00:52:39.182000 audit: BPF prog-id=13 op=LOAD
Feb 9 00:52:39.182000 audit: BPF prog-id=14 op=LOAD
Feb 9 00:52:39.182000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 00:52:39.182000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 00:52:39.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.194000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 00:52:39.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.258000 audit: BPF prog-id=15 op=LOAD
Feb 9 00:52:39.259000 audit: BPF prog-id=16 op=LOAD
Feb 9 00:52:39.259000 audit: BPF prog-id=17 op=LOAD
Feb 9 00:52:39.259000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 00:52:39.259000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 00:52:39.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.272000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 00:52:39.272000 audit[968]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff9c9769c0 a2=4000 a3=7fff9c976a5c items=0 ppid=1 pid=968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 00:52:39.272000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 00:52:39.181372 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 00:52:37.178243 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 00:52:39.181381 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 9 00:52:37.178456 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 00:52:39.183678 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 00:52:37.178472 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 00:52:37.178498 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:37Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 00:52:37.178506 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:37Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 00:52:37.178533 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:37Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 00:52:37.178544 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:37Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 00:52:37.178738 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:37Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 00:52:39.277299 systemd[1]: Started systemd-journald.service.
Feb 9 00:52:37.178775 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 00:52:37.178786 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 00:52:37.179083 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 00:52:37.179116 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 00:52:37.179131 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 00:52:37.179143 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 00:52:39.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:37.179161 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 00:52:37.179179 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 00:52:38.942635 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:38Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 00:52:38.942869 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:38Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 00:52:38.942951 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:38Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 00:52:39.278015 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 00:52:38.943104 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:38Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 00:52:38.943165 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:38Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 00:52:38.943219 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-02-09T00:52:38Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 00:52:39.278603 systemd[1]: Mounted media.mount.
Feb 9 00:52:39.279132 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 00:52:39.279734 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 00:52:39.280375 systemd[1]: Mounted tmp.mount.
Feb 9 00:52:39.281181 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 00:52:39.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.282094 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 00:52:39.282193 systemd[1]: Finished modprobe@configfs.service.
Feb 9 00:52:39.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.283117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 00:52:39.283216 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 00:52:39.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.284124 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 00:52:39.284221 systemd[1]: Finished modprobe@drm.service.
Feb 9 00:52:39.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.284950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 00:52:39.285106 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 00:52:39.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.285881 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 00:52:39.286043 systemd[1]: Finished modprobe@fuse.service.
Feb 9 00:52:39.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.286785 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 00:52:39.286926 systemd[1]: Finished modprobe@loop.service.
Feb 9 00:52:39.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.287762 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 00:52:39.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.288615 systemd[1]: Finished systemd-modules-load.service.
Feb 9 00:52:39.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.289469 systemd[1]: Finished systemd-network-generator.service.
Feb 9 00:52:39.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.290432 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 00:52:39.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.291412 systemd[1]: Reached target network-pre.target.
Feb 9 00:52:39.293066 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 00:52:39.294497 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 00:52:39.295035 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 00:52:39.296081 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 00:52:39.297384 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 00:52:39.298214 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 00:52:39.299078 systemd[1]: Starting systemd-random-seed.service...
Feb 9 00:52:39.299668 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 00:52:39.301135 systemd-journald[968]: Time spent on flushing to /var/log/journal/00f76943063a46bb89dc2f800122fc15 is 23.998ms for 1183 entries.
Feb 9 00:52:39.301135 systemd-journald[968]: System Journal (/var/log/journal/00f76943063a46bb89dc2f800122fc15) is 8.0M, max 195.6M, 187.6M free.
Feb 9 00:52:39.335731 systemd-journald[968]: Received client request to flush runtime journal.
Feb 9 00:52:39.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.300534 systemd[1]: Starting systemd-sysctl.service...
Feb 9 00:52:39.303021 systemd[1]: Starting systemd-sysusers.service...
Feb 9 00:52:39.305579 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 00:52:39.306278 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 00:52:39.309644 systemd[1]: Finished systemd-random-seed.service.
Feb 9 00:52:39.310384 systemd[1]: Reached target first-boot-complete.target.
Feb 9 00:52:39.311790 systemd[1]: Finished systemd-sysctl.service.
Feb 9 00:52:39.321288 systemd[1]: Finished systemd-sysusers.service.
Feb 9 00:52:39.336435 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 00:52:39.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.339108 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 00:52:39.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.340742 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 00:52:39.346799 udevadm[1000]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 9 00:52:39.705096 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 00:52:39.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.706000 audit: BPF prog-id=18 op=LOAD
Feb 9 00:52:39.706000 audit: BPF prog-id=19 op=LOAD
Feb 9 00:52:39.706000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 00:52:39.706000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 00:52:39.707020 systemd[1]: Starting systemd-udevd.service...
Feb 9 00:52:39.721666 systemd-udevd[1001]: Using default interface naming scheme 'v252'.
Feb 9 00:52:39.735014 systemd[1]: Started systemd-udevd.service.
Feb 9 00:52:39.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.736000 audit: BPF prog-id=20 op=LOAD
Feb 9 00:52:39.736881 systemd[1]: Starting systemd-networkd.service...
Feb 9 00:52:39.740000 audit: BPF prog-id=21 op=LOAD
Feb 9 00:52:39.741000 audit: BPF prog-id=22 op=LOAD
Feb 9 00:52:39.741000 audit: BPF prog-id=23 op=LOAD
Feb 9 00:52:39.741869 systemd[1]: Starting systemd-userdbd.service...
Feb 9 00:52:39.754918 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 9 00:52:39.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:52:39.774256 systemd[1]: Started systemd-userdbd.service.
Feb 9 00:52:39.787275 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 9 00:52:39.790419 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 00:52:39.799309 kernel: ACPI: button: Power Button [PWRF]
Feb 9 00:52:39.805000 audit[1021]: AVC avc: denied { confidentiality } for pid=1021 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 00:52:39.805000 audit[1021]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f6451ab2e0 a1=32194 a2=7fd63adccbc5 a3=5 items=108 ppid=1001 pid=1021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 00:52:39.805000 audit: CWD cwd="/"
Feb 9 00:52:39.805000 audit: PATH item=0 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=1 name=(null) inode=13275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=2 name=(null) inode=13275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=3 name=(null) inode=13276 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=4 name=(null) inode=13275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=5 name=(null) inode=13277 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=6 name=(null) inode=13275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=7 name=(null) inode=13278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=8 name=(null) inode=13278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=9 name=(null) inode=13279 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=10 name=(null) inode=13278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=11 name=(null) inode=13280 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=12 name=(null) inode=13278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=13 name=(null) inode=13281 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=14 name=(null) inode=13278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=15 name=(null) inode=13282 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=16 name=(null) inode=13278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=17 name=(null) inode=13283 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=18 name=(null) inode=13275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=19 name=(null) inode=13284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=20 name=(null) inode=13284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=21 name=(null) inode=13285 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=22 name=(null) inode=13284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=23 name=(null) inode=13286 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=24 name=(null) inode=13284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=25 name=(null) inode=13287 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=26 name=(null) inode=13284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=27 name=(null) inode=13288 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=28 name=(null) inode=13284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=29 name=(null) inode=13289 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=30 name=(null) inode=13275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 00:52:39.805000 audit: PATH item=31 name=(null) inode=13290 dev=00:0b mode=040750 ouid=0 ogid=0
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=32 name=(null) inode=13290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=33 name=(null) inode=13291 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=34 name=(null) inode=13290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=35 name=(null) inode=13292 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=36 name=(null) inode=13290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=37 name=(null) inode=13293 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=38 name=(null) inode=13290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=39 name=(null) inode=13294 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=40 name=(null) inode=13290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=41 name=(null) inode=13295 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=42 name=(null) inode=13275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=43 name=(null) inode=13296 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=44 name=(null) inode=13296 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=45 name=(null) inode=13297 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=46 name=(null) inode=13296 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=47 name=(null) inode=13298 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=48 name=(null) inode=13296 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=49 name=(null) inode=13299 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=50 name=(null) inode=13296 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=51 name=(null) inode=13300 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=52 name=(null) inode=13296 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=53 name=(null) inode=13301 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=54 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=55 name=(null) inode=13302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=56 name=(null) inode=13302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=57 name=(null) inode=13303 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=58 name=(null) inode=13302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: 
PATH item=59 name=(null) inode=13304 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=60 name=(null) inode=13302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=61 name=(null) inode=13305 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=62 name=(null) inode=13305 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=63 name=(null) inode=13306 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=64 name=(null) inode=13305 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=65 name=(null) inode=13307 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=66 name=(null) inode=13305 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=67 name=(null) inode=13308 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=68 name=(null) inode=13305 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=69 name=(null) inode=13309 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=70 name=(null) inode=13305 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=71 name=(null) inode=13310 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=72 name=(null) inode=13302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=73 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=74 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=75 name=(null) inode=13312 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=76 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=77 name=(null) inode=16385 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=78 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=79 name=(null) inode=16386 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=80 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=81 name=(null) inode=16387 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=82 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=83 name=(null) inode=16388 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=84 name=(null) inode=13302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=85 name=(null) inode=16389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=86 name=(null) inode=16389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=87 name=(null) inode=16390 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=88 name=(null) inode=16389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=89 name=(null) inode=16391 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=90 name=(null) inode=16389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=91 name=(null) inode=16392 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=92 name=(null) inode=16389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=93 name=(null) inode=16393 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=94 name=(null) inode=16389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=95 name=(null) inode=16394 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=96 name=(null) inode=13302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=97 name=(null) inode=16395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=98 name=(null) inode=16395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=99 name=(null) inode=16396 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=100 name=(null) inode=16395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=101 name=(null) inode=16397 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=102 name=(null) inode=16395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=103 name=(null) inode=16398 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=104 name=(null) inode=16395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH 
item=105 name=(null) inode=16399 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=106 name=(null) inode=16395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PATH item=107 name=(null) inode=16400 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:52:39.805000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 00:52:39.822060 systemd-networkd[1007]: lo: Link UP Feb 9 00:52:39.822071 systemd-networkd[1007]: lo: Gained carrier Feb 9 00:52:39.822436 systemd-networkd[1007]: Enumeration completed Feb 9 00:52:39.822518 systemd[1]: Started systemd-networkd.service. Feb 9 00:52:39.822541 systemd-networkd[1007]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 00:52:39.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:52:39.824081 systemd-networkd[1007]: eth0: Link UP Feb 9 00:52:39.824093 systemd-networkd[1007]: eth0: Gained carrier Feb 9 00:52:39.825275 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Feb 9 00:52:39.828288 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 9 00:52:39.832288 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 00:52:39.834358 systemd-networkd[1007]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 00:52:39.887319 kernel: kvm: Nested Virtualization enabled Feb 9 00:52:39.887423 kernel: SVM: kvm: Nested Paging enabled Feb 9 00:52:39.887459 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 9 00:52:39.887473 kernel: SVM: Virtual GIF supported Feb 9 00:52:39.900270 kernel: EDAC MC: Ver: 3.0.0 Feb 9 00:52:39.917689 systemd[1]: Finished systemd-udev-settle.service. Feb 9 00:52:39.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:52:39.919537 systemd[1]: Starting lvm2-activation-early.service... Feb 9 00:52:39.926563 lvm[1037]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 00:52:39.955814 systemd[1]: Finished lvm2-activation-early.service. Feb 9 00:52:39.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:52:39.956532 systemd[1]: Reached target cryptsetup.target. Feb 9 00:52:39.957996 systemd[1]: Starting lvm2-activation.service... Feb 9 00:52:39.960915 lvm[1038]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 00:52:39.987384 systemd[1]: Finished lvm2-activation.service. 
Feb 9 00:52:39.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:52:39.988083 systemd[1]: Reached target local-fs-pre.target. Feb 9 00:52:39.988681 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 00:52:39.988706 systemd[1]: Reached target local-fs.target. Feb 9 00:52:39.989273 systemd[1]: Reached target machines.target. Feb 9 00:52:39.990869 systemd[1]: Starting ldconfig.service... Feb 9 00:52:39.991585 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 00:52:39.991628 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 00:52:39.992425 systemd[1]: Starting systemd-boot-update.service... Feb 9 00:52:39.993893 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 00:52:39.995348 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 00:52:39.996505 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 00:52:39.996535 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 00:52:39.997368 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 00:52:39.998157 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1040 (bootctl) Feb 9 00:52:39.999527 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 00:52:40.011625 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Feb 9 00:52:40.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:52:40.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:52:40.013592 systemd-tmpfiles[1043]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 00:52:40.048517 systemd-fsck[1050]: fsck.fat 4.2 (2021-01-31) Feb 9 00:52:40.048517 systemd-fsck[1050]: /dev/vda1: 790 files, 115355/258078 clusters Feb 9 00:52:40.015242 systemd-tmpfiles[1043]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 00:52:40.017917 systemd-tmpfiles[1043]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 00:52:40.036399 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 00:52:40.038876 systemd[1]: Mounting boot.mount... Feb 9 00:52:40.046607 systemd[1]: Mounted boot.mount. Feb 9 00:52:40.269710 systemd[1]: Finished systemd-boot-update.service. Feb 9 00:52:40.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:52:40.406183 ldconfig[1039]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 00:52:40.522267 systemd[1]: Finished ldconfig.service. Feb 9 00:52:40.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 00:52:40.534347 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 00:52:40.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:52:40.536569 systemd[1]: Starting audit-rules.service... Feb 9 00:52:40.538268 systemd[1]: Starting clean-ca-certificates.service... Feb 9 00:52:40.541000 audit: BPF prog-id=24 op=LOAD Feb 9 00:52:40.540246 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 00:52:40.542860 systemd[1]: Starting systemd-resolved.service... Feb 9 00:52:40.545000 audit: BPF prog-id=25 op=LOAD Feb 9 00:52:40.547052 systemd[1]: Starting systemd-timesyncd.service... Feb 9 00:52:40.548888 systemd[1]: Starting systemd-update-utmp.service... Feb 9 00:52:40.550781 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 00:52:40.551662 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 00:52:40.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:52:40.552000 audit[1067]: SYSTEM_BOOT pid=1067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 00:52:40.552920 systemd[1]: Finished clean-ca-certificates.service. Feb 9 00:52:40.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:52:40.555686 systemd[1]: Finished systemd-journal-catalog-update.service. 
Feb 9 00:52:40.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:52:40.557000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 00:52:40.557000 audit[1073]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffefb6d5fe0 a2=420 a3=0 items=0 ppid=1053 pid=1073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 00:52:40.557000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 00:52:40.558648 augenrules[1073]: No rules Feb 9 00:52:40.557982 systemd[1]: Starting systemd-update-done.service... Feb 9 00:52:40.559174 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 00:52:40.559583 systemd[1]: Finished audit-rules.service. Feb 9 00:52:40.560452 systemd[1]: Finished systemd-update-utmp.service. Feb 9 00:52:40.564167 systemd[1]: Finished systemd-update-done.service. Feb 9 00:52:40.597377 systemd[1]: Started systemd-timesyncd.service. Feb 9 00:52:40.598240 systemd[1]: Reached target time-set.target. Feb 9 00:52:40.598379 systemd-timesyncd[1066]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 00:52:40.598428 systemd-timesyncd[1066]: Initial clock synchronization to Fri 2024-02-09 00:52:40.411551 UTC. Feb 9 00:52:40.599033 systemd-resolved[1062]: Positive Trust Anchors: Feb 9 00:52:40.599043 systemd-resolved[1062]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 00:52:40.599068 systemd-resolved[1062]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 00:52:40.604593 systemd-resolved[1062]: Defaulting to hostname 'linux'. Feb 9 00:52:40.605800 systemd[1]: Started systemd-resolved.service. Feb 9 00:52:40.606439 systemd[1]: Reached target network.target. Feb 9 00:52:40.606988 systemd[1]: Reached target nss-lookup.target. Feb 9 00:52:40.607586 systemd[1]: Reached target sysinit.target. Feb 9 00:52:40.608199 systemd[1]: Started motdgen.path. Feb 9 00:52:40.608717 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 00:52:40.609598 systemd[1]: Started logrotate.timer. Feb 9 00:52:40.610164 systemd[1]: Started mdadm.timer. Feb 9 00:52:40.610645 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 00:52:40.611236 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 00:52:40.611275 systemd[1]: Reached target paths.target. Feb 9 00:52:40.611797 systemd[1]: Reached target timers.target. Feb 9 00:52:40.612533 systemd[1]: Listening on dbus.socket. Feb 9 00:52:40.613890 systemd[1]: Starting docker.socket... Feb 9 00:52:40.615922 systemd[1]: Listening on sshd.socket. Feb 9 00:52:40.616548 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 9 00:52:40.616836 systemd[1]: Listening on docker.socket.
Feb 9 00:52:40.617434 systemd[1]: Reached target sockets.target.
Feb 9 00:52:40.617977 systemd[1]: Reached target basic.target.
Feb 9 00:52:40.618541 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 00:52:40.618560 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 00:52:40.619157 systemd[1]: Starting containerd.service...
Feb 9 00:52:40.620296 systemd[1]: Starting dbus.service...
Feb 9 00:52:40.621402 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 00:52:40.623065 systemd[1]: Starting extend-filesystems.service...
Feb 9 00:52:40.624066 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 00:52:40.625015 jq[1084]: false
Feb 9 00:52:40.625204 systemd[1]: Starting motdgen.service...
Feb 9 00:52:40.627313 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 00:52:40.629184 systemd[1]: Starting prepare-critools.service...
Feb 9 00:52:40.631726 extend-filesystems[1085]: Found sr0
Feb 9 00:52:40.631726 extend-filesystems[1085]: Found vda
Feb 9 00:52:40.631726 extend-filesystems[1085]: Found vda1
Feb 9 00:52:40.631726 extend-filesystems[1085]: Found vda2
Feb 9 00:52:40.631726 extend-filesystems[1085]: Found vda3
Feb 9 00:52:40.631726 extend-filesystems[1085]: Found usr
Feb 9 00:52:40.631726 extend-filesystems[1085]: Found vda4
Feb 9 00:52:40.631726 extend-filesystems[1085]: Found vda6
Feb 9 00:52:40.631726 extend-filesystems[1085]: Found vda7
Feb 9 00:52:40.631726 extend-filesystems[1085]: Found vda9
Feb 9 00:52:40.631726 extend-filesystems[1085]: Checking size of /dev/vda9
Feb 9 00:52:40.644479 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 9 00:52:40.630983 systemd[1]: Starting prepare-helm.service...
Feb 9 00:52:40.645346 extend-filesystems[1085]: Resized partition /dev/vda9
Feb 9 00:52:40.633610 dbus-daemon[1083]: [system] SELinux support is enabled
Feb 9 00:52:40.634498 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 00:52:40.674830 extend-filesystems[1104]: resize2fs 1.46.5 (30-Dec-2021)
Feb 9 00:52:40.702723 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 9 00:52:40.637305 systemd[1]: Starting sshd-keygen.service...
Feb 9 00:52:40.641531 systemd[1]: Starting systemd-logind.service...
Feb 9 00:52:40.643471 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 00:52:40.643536 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 00:52:40.703302 update_engine[1109]: I0209 00:52:40.685047 1109 main.cc:92] Flatcar Update Engine starting
Feb 9 00:52:40.703302 update_engine[1109]: I0209 00:52:40.686820 1109 update_check_scheduler.cc:74] Next update check in 4m5s
Feb 9 00:52:40.643949 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 00:52:40.703583 jq[1110]: true
Feb 9 00:52:40.644549 systemd[1]: Starting update-engine.service...
Feb 9 00:52:40.646149 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 00:52:40.703896 tar[1112]: ./
Feb 9 00:52:40.703896 tar[1112]: ./loopback
Feb 9 00:52:40.647827 systemd[1]: Started dbus.service.
Feb 9 00:52:40.704177 tar[1113]: crictl
Feb 9 00:52:40.651343 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 00:52:40.704463 tar[1114]: linux-amd64/helm
Feb 9 00:52:40.651520 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 00:52:40.704713 extend-filesystems[1104]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 9 00:52:40.704713 extend-filesystems[1104]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 9 00:52:40.704713 extend-filesystems[1104]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 9 00:52:40.717123 jq[1118]: true
Feb 9 00:52:40.651798 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 00:52:40.723982 extend-filesystems[1085]: Resized filesystem in /dev/vda9
Feb 9 00:52:40.725615 env[1119]: time="2024-02-09T00:52:40.703192594Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 00:52:40.651952 systemd[1]: Finished motdgen.service.
Feb 9 00:52:40.654662 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 00:52:40.732171 bash[1141]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 00:52:40.654827 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 00:52:40.657657 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 00:52:40.657682 systemd[1]: Reached target system-config.target.
Feb 9 00:52:40.658479 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 00:52:40.658494 systemd[1]: Reached target user-config.target.
Feb 9 00:52:40.686842 systemd[1]: Started update-engine.service.
Feb 9 00:52:40.690150 systemd[1]: Started locksmithd.service.
Feb 9 00:52:40.702709 systemd-logind[1107]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 9 00:52:40.702726 systemd-logind[1107]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 9 00:52:40.704712 systemd-logind[1107]: New seat seat0.
Feb 9 00:52:40.706622 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 00:52:40.706751 systemd[1]: Finished extend-filesystems.service.
Feb 9 00:52:40.714096 systemd[1]: Started systemd-logind.service.
Feb 9 00:52:40.721049 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 00:52:40.745853 env[1119]: time="2024-02-09T00:52:40.745817415Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 00:52:40.745988 env[1119]: time="2024-02-09T00:52:40.745966414Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 00:52:40.747103 env[1119]: time="2024-02-09T00:52:40.747076095Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 00:52:40.747103 env[1119]: time="2024-02-09T00:52:40.747100992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 00:52:40.747286 env[1119]: time="2024-02-09T00:52:40.747265891Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 00:52:40.747286 env[1119]: time="2024-02-09T00:52:40.747283544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 00:52:40.747367 env[1119]: time="2024-02-09T00:52:40.747294064Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 00:52:40.747367 env[1119]: time="2024-02-09T00:52:40.747302139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 00:52:40.747367 env[1119]: time="2024-02-09T00:52:40.747361600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 00:52:40.747571 env[1119]: time="2024-02-09T00:52:40.747534094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 00:52:40.747667 env[1119]: time="2024-02-09T00:52:40.747646334Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 00:52:40.747667 env[1119]: time="2024-02-09T00:52:40.747662705Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 00:52:40.747737 env[1119]: time="2024-02-09T00:52:40.747702490Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 00:52:40.747737 env[1119]: time="2024-02-09T00:52:40.747711997Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 00:52:40.758471 env[1119]: time="2024-02-09T00:52:40.758450272Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 00:52:40.758523 env[1119]: time="2024-02-09T00:52:40.758474387Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 00:52:40.758523 env[1119]: time="2024-02-09T00:52:40.758485387Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 00:52:40.758523 env[1119]: time="2024-02-09T00:52:40.758510074Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 00:52:40.758607 env[1119]: time="2024-02-09T00:52:40.758522627Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 00:52:40.758607 env[1119]: time="2024-02-09T00:52:40.758534349Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 00:52:40.758607 env[1119]: time="2024-02-09T00:52:40.758545220Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 00:52:40.758607 env[1119]: time="2024-02-09T00:52:40.758556831Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 00:52:40.758607 env[1119]: time="2024-02-09T00:52:40.758567752Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 00:52:40.758607 env[1119]: time="2024-02-09T00:52:40.758578352Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 00:52:40.758607 env[1119]: time="2024-02-09T00:52:40.758588741Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 00:52:40.758607 env[1119]: time="2024-02-09T00:52:40.758599521Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 00:52:40.758831 env[1119]: time="2024-02-09T00:52:40.758667459Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 00:52:40.758831 env[1119]: time="2024-02-09T00:52:40.758724716Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 00:52:40.758963 env[1119]: time="2024-02-09T00:52:40.758934830Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 00:52:40.759013 env[1119]: time="2024-02-09T00:52:40.758968263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 00:52:40.759013 env[1119]: time="2024-02-09T00:52:40.758980065Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 00:52:40.759073 env[1119]: time="2024-02-09T00:52:40.759013748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 00:52:40.759073 env[1119]: time="2024-02-09T00:52:40.759024138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 00:52:40.759073 env[1119]: time="2024-02-09T00:52:40.759034207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 00:52:40.759073 env[1119]: time="2024-02-09T00:52:40.759044015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 00:52:40.759073 env[1119]: time="2024-02-09T00:52:40.759053843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 00:52:40.759073 env[1119]: time="2024-02-09T00:52:40.759063782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 00:52:40.759224 env[1119]: time="2024-02-09T00:52:40.759073330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 00:52:40.759224 env[1119]: time="2024-02-09T00:52:40.759082848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 00:52:40.759224 env[1119]: time="2024-02-09T00:52:40.759094319Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 00:52:40.759224 env[1119]: time="2024-02-09T00:52:40.759179389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 00:52:40.759224 env[1119]: time="2024-02-09T00:52:40.759191381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 00:52:40.759224 env[1119]: time="2024-02-09T00:52:40.759202051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 00:52:40.759224 env[1119]: time="2024-02-09T00:52:40.759211759Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 00:52:40.759441 env[1119]: time="2024-02-09T00:52:40.759223371Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 00:52:40.759441 env[1119]: time="2024-02-09T00:52:40.759233250Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 00:52:40.759441 env[1119]: time="2024-02-09T00:52:40.759262064Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 00:52:40.759441 env[1119]: time="2024-02-09T00:52:40.759292591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 00:52:40.759554 env[1119]: time="2024-02-09T00:52:40.759455206Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 00:52:40.759554 env[1119]: time="2024-02-09T00:52:40.759499128Z" level=info msg="Connect containerd service"
Feb 9 00:52:40.759554 env[1119]: time="2024-02-09T00:52:40.759523985Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 00:52:40.760200 env[1119]: time="2024-02-09T00:52:40.760022620Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 00:52:40.760200 env[1119]: time="2024-02-09T00:52:40.760121565Z" level=info msg="Start subscribing containerd event"
Feb 9 00:52:40.760200 env[1119]: time="2024-02-09T00:52:40.760150810Z" level=info msg="Start recovering state"
Feb 9 00:52:40.760200 env[1119]: time="2024-02-09T00:52:40.760189322Z" level=info msg="Start event monitor"
Feb 9 00:52:40.760331 env[1119]: time="2024-02-09T00:52:40.760202417Z" level=info msg="Start snapshots syncer"
Feb 9 00:52:40.760331 env[1119]: time="2024-02-09T00:52:40.760210001Z" level=info msg="Start cni network conf syncer for default"
Feb 9 00:52:40.760331 env[1119]: time="2024-02-09T00:52:40.760215812Z" level=info msg="Start streaming server"
Feb 9 00:52:40.760452 env[1119]: time="2024-02-09T00:52:40.760431877Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 00:52:40.760499 env[1119]: time="2024-02-09T00:52:40.760487692Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 00:52:40.760584 systemd[1]: Started containerd.service.
Feb 9 00:52:40.768344 env[1119]: time="2024-02-09T00:52:40.768295952Z" level=info msg="containerd successfully booted in 0.075249s"
Feb 9 00:52:40.771824 tar[1112]: ./bandwidth
Feb 9 00:52:40.802466 tar[1112]: ./ptp
Feb 9 00:52:40.836490 tar[1112]: ./vlan
Feb 9 00:52:40.868760 tar[1112]: ./host-device
Feb 9 00:52:40.900350 tar[1112]: ./tuning
Feb 9 00:52:40.928609 tar[1112]: ./vrf
Feb 9 00:52:40.958105 tar[1112]: ./sbr
Feb 9 00:52:40.987043 tar[1112]: ./tap
Feb 9 00:52:41.019998 tar[1112]: ./dhcp
Feb 9 00:52:41.089968 systemd[1]: Finished prepare-critools.service.
Feb 9 00:52:41.100314 locksmithd[1142]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 00:52:41.101852 tar[1112]: ./static
Feb 9 00:52:41.124815 tar[1112]: ./firewall
Feb 9 00:52:41.153710 tar[1114]: linux-amd64/LICENSE
Feb 9 00:52:41.153802 tar[1114]: linux-amd64/README.md
Feb 9 00:52:41.157416 systemd[1]: Finished prepare-helm.service.
Feb 9 00:52:41.160007 tar[1112]: ./macvlan
Feb 9 00:52:41.188619 tar[1112]: ./dummy
Feb 9 00:52:41.216797 tar[1112]: ./bridge
Feb 9 00:52:41.247707 tar[1112]: ./ipvlan
Feb 9 00:52:41.276156 tar[1112]: ./portmap
Feb 9 00:52:41.303134 tar[1112]: ./host-local
Feb 9 00:52:41.335703 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 00:52:41.349330 systemd-networkd[1007]: eth0: Gained IPv6LL
Feb 9 00:52:41.880161 sshd_keygen[1108]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 00:52:41.896581 systemd[1]: Finished sshd-keygen.service.
Feb 9 00:52:41.898365 systemd[1]: Starting issuegen.service...
Feb 9 00:52:41.902328 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 00:52:41.902440 systemd[1]: Finished issuegen.service.
Feb 9 00:52:41.903929 systemd[1]: Starting systemd-user-sessions.service...
Feb 9 00:52:41.908135 systemd[1]: Finished systemd-user-sessions.service.
Feb 9 00:52:41.909636 systemd[1]: Started getty@tty1.service.
Feb 9 00:52:41.910989 systemd[1]: Started serial-getty@ttyS0.service.
Feb 9 00:52:41.911764 systemd[1]: Reached target getty.target.
Feb 9 00:52:41.912371 systemd[1]: Reached target multi-user.target.
Feb 9 00:52:41.913730 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 00:52:41.919809 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 00:52:41.919912 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 00:52:41.920673 systemd[1]: Startup finished in 499ms (kernel) + 5.731s (initrd) + 5.538s (userspace) = 11.770s.
Feb 9 00:52:46.011738 systemd[1]: Created slice system-sshd.slice.
Feb 9 00:52:46.012671 systemd[1]: Started sshd@0-10.0.0.122:22-10.0.0.1:60444.service.
Feb 9 00:52:46.045929 sshd[1171]: Accepted publickey for core from 10.0.0.1 port 60444 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:52:46.047309 sshd[1171]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:52:46.055876 systemd-logind[1107]: New session 1 of user core.
Feb 9 00:52:46.056736 systemd[1]: Created slice user-500.slice.
Feb 9 00:52:46.057660 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 00:52:46.064658 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 00:52:46.065731 systemd[1]: Starting user@500.service...
Feb 9 00:52:46.068097 (systemd)[1174]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:52:46.148453 systemd[1174]: Queued start job for default target default.target.
Feb 9 00:52:46.148954 systemd[1174]: Reached target paths.target.
Feb 9 00:52:46.148979 systemd[1174]: Reached target sockets.target.
Feb 9 00:52:46.148995 systemd[1174]: Reached target timers.target.
Feb 9 00:52:46.149008 systemd[1174]: Reached target basic.target.
Feb 9 00:52:46.149047 systemd[1174]: Reached target default.target.
Feb 9 00:52:46.149074 systemd[1174]: Startup finished in 76ms.
Feb 9 00:52:46.149194 systemd[1]: Started user@500.service.
Feb 9 00:52:46.150112 systemd[1]: Started session-1.scope.
Feb 9 00:52:46.199430 systemd[1]: Started sshd@1-10.0.0.122:22-10.0.0.1:35622.service.
Feb 9 00:52:46.230738 sshd[1183]: Accepted publickey for core from 10.0.0.1 port 35622 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:52:46.231768 sshd[1183]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:52:46.235621 systemd-logind[1107]: New session 2 of user core.
Feb 9 00:52:46.236561 systemd[1]: Started session-2.scope.
Feb 9 00:52:46.288872 sshd[1183]: pam_unix(sshd:session): session closed for user core
Feb 9 00:52:46.291576 systemd[1]: sshd@1-10.0.0.122:22-10.0.0.1:35622.service: Deactivated successfully.
Feb 9 00:52:46.292086 systemd[1]: session-2.scope: Deactivated successfully.
Feb 9 00:52:46.292553 systemd-logind[1107]: Session 2 logged out. Waiting for processes to exit.
Feb 9 00:52:46.293634 systemd[1]: Started sshd@2-10.0.0.122:22-10.0.0.1:35634.service.
Feb 9 00:52:46.294185 systemd-logind[1107]: Removed session 2.
Feb 9 00:52:46.326707 sshd[1189]: Accepted publickey for core from 10.0.0.1 port 35634 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:52:46.327751 sshd[1189]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:52:46.330531 systemd-logind[1107]: New session 3 of user core.
Feb 9 00:52:46.331193 systemd[1]: Started session-3.scope.
Feb 9 00:52:46.380646 sshd[1189]: pam_unix(sshd:session): session closed for user core
Feb 9 00:52:46.383167 systemd[1]: sshd@2-10.0.0.122:22-10.0.0.1:35634.service: Deactivated successfully.
Feb 9 00:52:46.383662 systemd[1]: session-3.scope: Deactivated successfully.
Feb 9 00:52:46.384218 systemd-logind[1107]: Session 3 logged out. Waiting for processes to exit.
Feb 9 00:52:46.385141 systemd[1]: Started sshd@3-10.0.0.122:22-10.0.0.1:35642.service.
Feb 9 00:52:46.385713 systemd-logind[1107]: Removed session 3.
Feb 9 00:52:46.418334 sshd[1195]: Accepted publickey for core from 10.0.0.1 port 35642 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:52:46.419400 sshd[1195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:52:46.422274 systemd-logind[1107]: New session 4 of user core.
Feb 9 00:52:46.422992 systemd[1]: Started session-4.scope.
Feb 9 00:52:46.473595 sshd[1195]: pam_unix(sshd:session): session closed for user core
Feb 9 00:52:46.476107 systemd[1]: sshd@3-10.0.0.122:22-10.0.0.1:35642.service: Deactivated successfully.
Feb 9 00:52:46.476599 systemd[1]: session-4.scope: Deactivated successfully.
Feb 9 00:52:46.477054 systemd-logind[1107]: Session 4 logged out. Waiting for processes to exit.
Feb 9 00:52:46.477914 systemd[1]: Started sshd@4-10.0.0.122:22-10.0.0.1:35658.service.
Feb 9 00:52:46.478478 systemd-logind[1107]: Removed session 4.
Feb 9 00:52:46.510991 sshd[1201]: Accepted publickey for core from 10.0.0.1 port 35658 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:52:46.512014 sshd[1201]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:52:46.514954 systemd-logind[1107]: New session 5 of user core.
Feb 9 00:52:46.515660 systemd[1]: Started session-5.scope.
Feb 9 00:52:46.568130 sudo[1204]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 00:52:46.568299 sudo[1204]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 00:52:47.101934 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 00:52:47.106155 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 00:52:47.106464 systemd[1]: Reached target network-online.target.
Feb 9 00:52:47.107785 systemd[1]: Starting docker.service...
Feb 9 00:52:47.136813 env[1221]: time="2024-02-09T00:52:47.136763545Z" level=info msg="Starting up"
Feb 9 00:52:47.137804 env[1221]: time="2024-02-09T00:52:47.137770512Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 00:52:47.137804 env[1221]: time="2024-02-09T00:52:47.137793532Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 00:52:47.137870 env[1221]: time="2024-02-09T00:52:47.137810505Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 00:52:47.137870 env[1221]: time="2024-02-09T00:52:47.137818535Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 00:52:47.139181 env[1221]: time="2024-02-09T00:52:47.139155246Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 00:52:47.139181 env[1221]: time="2024-02-09T00:52:47.139172882Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 00:52:47.139269 env[1221]: time="2024-02-09T00:52:47.139189755Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 00:52:47.139269 env[1221]: time="2024-02-09T00:52:47.139199302Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 00:52:47.870546 env[1221]: time="2024-02-09T00:52:47.870496793Z" level=info msg="Loading containers: start."
Feb 9 00:52:47.956271 kernel: Initializing XFRM netlink socket
Feb 9 00:52:47.983080 env[1221]: time="2024-02-09T00:52:47.983030560Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 9 00:52:48.025794 systemd-networkd[1007]: docker0: Link UP
Feb 9 00:52:48.035137 env[1221]: time="2024-02-09T00:52:48.035052452Z" level=info msg="Loading containers: done."
Feb 9 00:52:48.116915 env[1221]: time="2024-02-09T00:52:48.116834468Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 9 00:52:48.117087 env[1221]: time="2024-02-09T00:52:48.117070015Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 9 00:52:48.117216 env[1221]: time="2024-02-09T00:52:48.117195512Z" level=info msg="Daemon has completed initialization"
Feb 9 00:52:48.134585 systemd[1]: Started docker.service.
Feb 9 00:52:48.138050 env[1221]: time="2024-02-09T00:52:48.138016041Z" level=info msg="API listen on /run/docker.sock"
Feb 9 00:52:48.150837 systemd[1]: Reloading.
Feb 9 00:52:48.219225 /usr/lib/systemd/system-generators/torcx-generator[1364]: time="2024-02-09T00:52:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 00:52:48.219271 /usr/lib/systemd/system-generators/torcx-generator[1364]: time="2024-02-09T00:52:48Z" level=info msg="torcx already run"
Feb 9 00:52:48.267750 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 00:52:48.267764 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 00:52:48.283753 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 00:52:48.353573 systemd[1]: Started kubelet.service.
Feb 9 00:52:48.391970 kubelet[1405]: E0209 00:52:48.391878 1405 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 9 00:52:48.393574 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 00:52:48.393704 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 00:52:48.694374 env[1119]: time="2024-02-09T00:52:48.694273901Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\""
Feb 9 00:52:49.630119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2832321849.mount: Deactivated successfully.
Feb 9 00:52:51.381897 env[1119]: time="2024-02-09T00:52:51.381840129Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:52:51.383808 env[1119]: time="2024-02-09T00:52:51.383787441Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:52:51.385306 env[1119]: time="2024-02-09T00:52:51.385284584Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:52:51.387574 env[1119]: time="2024-02-09T00:52:51.387537813Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:52:51.388774 env[1119]: time="2024-02-09T00:52:51.388741137Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47\""
Feb 9 00:52:51.397829 env[1119]: time="2024-02-09T00:52:51.397791780Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\""
Feb 9 00:52:54.892192 env[1119]: time="2024-02-09T00:52:54.892131113Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:52:54.997024 env[1119]: time="2024-02-09T00:52:54.996976268Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:52:55.081789 env[1119]: time="2024-02-09T00:52:55.081729763Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:52:55.083636 env[1119]: time="2024-02-09T00:52:55.083583614Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:52:55.084366 env[1119]: time="2024-02-09T00:52:55.084314150Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c\""
Feb 9 00:52:55.093290 env[1119]: time="2024-02-09T00:52:55.093242046Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\""
Feb 9 00:52:56.792374 env[1119]: time="2024-02-09T00:52:56.792323657Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:52:56.794371 env[1119]: time="2024-02-09T00:52:56.794324947Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:52:56.795700 env[1119]: time="2024-02-09T00:52:56.795675111Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:52:56.797686 env[1119]: time="2024-02-09T00:52:56.797664018Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 00:52:56.798281 env[1119]: time="2024-02-09T00:52:56.798236895Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\""
Feb 9 00:52:56.806977 env[1119]: time="2024-02-09T00:52:56.806914828Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\""
Feb 9 00:52:58.644535 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 9 00:52:58.644716 systemd[1]: Stopped kubelet.service.
Feb 9 00:52:58.646037 systemd[1]: Started kubelet.service.
Feb 9 00:52:58.687355 kubelet[1448]: E0209 00:52:58.687297 1448 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 00:52:58.690332 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 00:52:58.690465 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 00:53:00.880084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3940603317.mount: Deactivated successfully. Feb 9 00:53:01.813237 env[1119]: time="2024-02-09T00:53:01.813175523Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:01.815336 env[1119]: time="2024-02-09T00:53:01.815299215Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:01.816749 env[1119]: time="2024-02-09T00:53:01.816678553Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:01.817875 env[1119]: time="2024-02-09T00:53:01.817848407Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:01.818226 env[1119]: time="2024-02-09T00:53:01.818190557Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\"" 
Feb 9 00:53:01.827214 env[1119]: time="2024-02-09T00:53:01.827177021Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 00:53:02.381522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount372339512.mount: Deactivated successfully. Feb 9 00:53:02.386432 env[1119]: time="2024-02-09T00:53:02.386388783Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:02.388067 env[1119]: time="2024-02-09T00:53:02.388041216Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:02.389472 env[1119]: time="2024-02-09T00:53:02.389436561Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:02.390651 env[1119]: time="2024-02-09T00:53:02.390617477Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:02.390993 env[1119]: time="2024-02-09T00:53:02.390966097Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 00:53:02.398626 env[1119]: time="2024-02-09T00:53:02.398591451Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\"" Feb 9 00:53:02.943557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2444170420.mount: Deactivated successfully. Feb 9 00:53:08.941430 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 00:53:08.941689 systemd[1]: Stopped kubelet.service. Feb 9 00:53:08.943448 systemd[1]: Started kubelet.service. 
Feb 9 00:53:08.984189 kubelet[1469]: E0209 00:53:08.984143 1469 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 00:53:08.986041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 00:53:08.986184 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 00:53:09.068289 env[1119]: time="2024-02-09T00:53:09.068216128Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:09.070371 env[1119]: time="2024-02-09T00:53:09.070344890Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:09.072357 env[1119]: time="2024-02-09T00:53:09.072332384Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:09.074328 env[1119]: time="2024-02-09T00:53:09.074297649Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:09.075019 env[1119]: time="2024-02-09T00:53:09.074979110Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\"" Feb 9 00:53:09.083464 env[1119]: time="2024-02-09T00:53:09.083433124Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 
00:53:09.792837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1996792376.mount: Deactivated successfully. Feb 9 00:53:10.442329 env[1119]: time="2024-02-09T00:53:10.442272802Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:10.443978 env[1119]: time="2024-02-09T00:53:10.443947380Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:10.445347 env[1119]: time="2024-02-09T00:53:10.445300611Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:10.446640 env[1119]: time="2024-02-09T00:53:10.446590895Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:10.447008 env[1119]: time="2024-02-09T00:53:10.446976199Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 9 00:53:11.998540 systemd[1]: Stopped kubelet.service. Feb 9 00:53:12.011231 systemd[1]: Reloading. 
Feb 9 00:53:12.068291 /usr/lib/systemd/system-generators/torcx-generator[1581]: time="2024-02-09T00:53:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 00:53:12.068319 /usr/lib/systemd/system-generators/torcx-generator[1581]: time="2024-02-09T00:53:12Z" level=info msg="torcx already run" Feb 9 00:53:12.132263 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 00:53:12.132279 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 00:53:12.148237 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 00:53:12.220752 systemd[1]: Started kubelet.service. Feb 9 00:53:12.260369 kubelet[1622]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 00:53:12.260369 kubelet[1622]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 00:53:12.260369 kubelet[1622]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 00:53:12.260369 kubelet[1622]: I0209 00:53:12.260095 1622 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 00:53:12.936149 kubelet[1622]: I0209 00:53:12.936100 1622 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 00:53:12.936149 kubelet[1622]: I0209 00:53:12.936134 1622 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 00:53:12.936426 kubelet[1622]: I0209 00:53:12.936396 1622 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 00:53:12.939301 kubelet[1622]: I0209 00:53:12.939280 1622 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 00:53:12.940168 kubelet[1622]: E0209 00:53:12.940138 1622 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.122:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.122:6443: connect: connection refused Feb 9 00:53:12.944709 kubelet[1622]: I0209 00:53:12.944679 1622 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 00:53:12.944857 kubelet[1622]: I0209 00:53:12.944835 1622 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 00:53:12.944992 kubelet[1622]: I0209 00:53:12.944968 1622 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 00:53:12.944992 kubelet[1622]: I0209 00:53:12.944986 1622 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 00:53:12.944992 kubelet[1622]: I0209 00:53:12.944996 1622 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 00:53:12.945178 kubelet[1622]: I0209 
00:53:12.945089 1622 state_mem.go:36] "Initialized new in-memory state store" Feb 9 00:53:12.945178 kubelet[1622]: I0209 00:53:12.945159 1622 kubelet.go:393] "Attempting to sync node with API server" Feb 9 00:53:12.945178 kubelet[1622]: I0209 00:53:12.945177 1622 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 00:53:12.945313 kubelet[1622]: I0209 00:53:12.945198 1622 kubelet.go:309] "Adding apiserver pod source" Feb 9 00:53:12.945313 kubelet[1622]: I0209 00:53:12.945210 1622 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 00:53:12.948738 kubelet[1622]: I0209 00:53:12.948707 1622 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 00:53:12.948957 kubelet[1622]: W0209 00:53:12.948908 1622 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Feb 9 00:53:12.948957 kubelet[1622]: E0209 00:53:12.948961 1622 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Feb 9 00:53:12.949052 kubelet[1622]: W0209 00:53:12.949012 1622 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 9 00:53:12.949205 kubelet[1622]: W0209 00:53:12.949164 1622 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Feb 9 00:53:12.949205 kubelet[1622]: E0209 00:53:12.949201 1622 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Feb 9 00:53:12.949571 kubelet[1622]: I0209 00:53:12.949544 1622 server.go:1232] "Started kubelet" Feb 9 00:53:12.949720 kubelet[1622]: I0209 00:53:12.949693 1622 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 00:53:12.949778 kubelet[1622]: I0209 00:53:12.949769 1622 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 00:53:12.950186 kubelet[1622]: I0209 00:53:12.950154 1622 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 00:53:12.950514 kubelet[1622]: I0209 00:53:12.950489 1622 server.go:462] "Adding debug handlers to kubelet server" Feb 9 00:53:12.950724 kubelet[1622]: E0209 00:53:12.950592 1622 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b20ba08f0d272b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 0, 53, 12, 949516075, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 0, 53, 12, 949516075, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.122:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.122:6443: connect: connection refused'(may retry after sleeping) Feb 9 00:53:12.950911 kubelet[1622]: E0209 00:53:12.950864 1622 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 00:53:12.950966 kubelet[1622]: E0209 00:53:12.950915 1622 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 00:53:12.953274 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 00:53:12.953418 kubelet[1622]: I0209 00:53:12.953385 1622 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 00:53:12.953548 kubelet[1622]: I0209 00:53:12.953527 1622 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 00:53:12.953619 kubelet[1622]: I0209 00:53:12.953597 1622 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 00:53:12.953664 kubelet[1622]: I0209 00:53:12.953661 1622 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 00:53:12.954539 kubelet[1622]: E0209 00:53:12.954418 1622 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="200ms" Feb 9 00:53:12.954539 kubelet[1622]: W0209 00:53:12.954461 1622 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Feb 9 00:53:12.954539 kubelet[1622]: E0209 00:53:12.954486 1622 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Feb 9 00:53:12.968587 kubelet[1622]: I0209 00:53:12.968559 1622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 00:53:12.969689 kubelet[1622]: I0209 00:53:12.969649 1622 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 9 00:53:12.969749 kubelet[1622]: I0209 00:53:12.969693 1622 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 00:53:12.969749 kubelet[1622]: I0209 00:53:12.969726 1622 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 00:53:12.969806 kubelet[1622]: E0209 00:53:12.969786 1622 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 00:53:12.970347 kubelet[1622]: W0209 00:53:12.970310 1622 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Feb 9 00:53:12.970347 kubelet[1622]: E0209 00:53:12.970348 1622 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Feb 9 00:53:12.971395 kubelet[1622]: I0209 00:53:12.971371 1622 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 00:53:12.971395 kubelet[1622]: I0209 00:53:12.971389 1622 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 00:53:12.971458 kubelet[1622]: I0209 00:53:12.971401 1622 state_mem.go:36] "Initialized new in-memory state store" Feb 9 00:53:13.007612 kubelet[1622]: I0209 00:53:13.007585 1622 policy_none.go:49] "None policy: Start" Feb 9 00:53:13.008202 kubelet[1622]: I0209 00:53:13.008179 1622 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 00:53:13.008267 kubelet[1622]: I0209 00:53:13.008218 1622 state_mem.go:35] "Initializing new in-memory state store" Feb 9 00:53:13.012547 systemd[1]: Created slice kubepods.slice. 
Feb 9 00:53:13.016155 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 00:53:13.018439 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 00:53:13.023956 kubelet[1622]: I0209 00:53:13.023925 1622 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 00:53:13.024212 kubelet[1622]: I0209 00:53:13.024187 1622 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 00:53:13.024676 kubelet[1622]: E0209 00:53:13.024661 1622 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 00:53:13.055191 kubelet[1622]: I0209 00:53:13.055176 1622 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 00:53:13.055571 kubelet[1622]: E0209 00:53:13.055539 1622 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Feb 9 00:53:13.070616 kubelet[1622]: I0209 00:53:13.070577 1622 topology_manager.go:215] "Topology Admit Handler" podUID="0b63e2b30e007bded58a1df158511251" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 9 00:53:13.071391 kubelet[1622]: I0209 00:53:13.071361 1622 topology_manager.go:215] "Topology Admit Handler" podUID="212dcc5e2f08bec92c239ac5786b7e2b" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 9 00:53:13.071890 kubelet[1622]: I0209 00:53:13.071868 1622 topology_manager.go:215] "Topology Admit Handler" podUID="d0325d16aab19669b5fea4b6623890e6" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 9 00:53:13.076692 systemd[1]: Created slice kubepods-burstable-pod212dcc5e2f08bec92c239ac5786b7e2b.slice. Feb 9 00:53:13.093104 systemd[1]: Created slice kubepods-burstable-pod0b63e2b30e007bded58a1df158511251.slice. 
Feb 9 00:53:13.102953 systemd[1]: Created slice kubepods-burstable-podd0325d16aab19669b5fea4b6623890e6.slice. Feb 9 00:53:13.155447 kubelet[1622]: I0209 00:53:13.155399 1622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:53:13.155648 kubelet[1622]: I0209 00:53:13.155465 1622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:53:13.155648 kubelet[1622]: I0209 00:53:13.155489 1622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b63e2b30e007bded58a1df158511251-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b63e2b30e007bded58a1df158511251\") " pod="kube-system/kube-apiserver-localhost" Feb 9 00:53:13.155648 kubelet[1622]: I0209 00:53:13.155507 1622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b63e2b30e007bded58a1df158511251-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0b63e2b30e007bded58a1df158511251\") " pod="kube-system/kube-apiserver-localhost" Feb 9 00:53:13.155648 kubelet[1622]: I0209 00:53:13.155525 1622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-flexvolume-dir\") pod 
\"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:53:13.155648 kubelet[1622]: I0209 00:53:13.155544 1622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:53:13.155818 kubelet[1622]: I0209 00:53:13.155621 1622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0325d16aab19669b5fea4b6623890e6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d0325d16aab19669b5fea4b6623890e6\") " pod="kube-system/kube-scheduler-localhost" Feb 9 00:53:13.155818 kubelet[1622]: I0209 00:53:13.155650 1622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b63e2b30e007bded58a1df158511251-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b63e2b30e007bded58a1df158511251\") " pod="kube-system/kube-apiserver-localhost" Feb 9 00:53:13.155818 kubelet[1622]: I0209 00:53:13.155670 1622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:53:13.156135 kubelet[1622]: E0209 00:53:13.156107 1622 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" 
interval="400ms" Feb 9 00:53:13.257277 kubelet[1622]: I0209 00:53:13.257165 1622 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 00:53:13.257440 kubelet[1622]: E0209 00:53:13.257426 1622 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Feb 9 00:53:13.392263 kubelet[1622]: E0209 00:53:13.392217 1622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:13.392970 env[1119]: time="2024-02-09T00:53:13.392921624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:212dcc5e2f08bec92c239ac5786b7e2b,Namespace:kube-system,Attempt:0,}" Feb 9 00:53:13.402187 kubelet[1622]: E0209 00:53:13.402152 1622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:13.402755 env[1119]: time="2024-02-09T00:53:13.402702728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0b63e2b30e007bded58a1df158511251,Namespace:kube-system,Attempt:0,}" Feb 9 00:53:13.404823 kubelet[1622]: E0209 00:53:13.404787 1622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:13.405209 env[1119]: time="2024-02-09T00:53:13.405160448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d0325d16aab19669b5fea4b6623890e6,Namespace:kube-system,Attempt:0,}" Feb 9 00:53:13.557454 kubelet[1622]: E0209 00:53:13.557420 1622 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="800ms" Feb 9 00:53:13.659262 kubelet[1622]: I0209 00:53:13.659209 1622 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 00:53:13.659593 kubelet[1622]: E0209 00:53:13.659550 1622 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Feb 9 00:53:13.883281 kubelet[1622]: W0209 00:53:13.883098 1622 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Feb 9 00:53:13.883281 kubelet[1622]: E0209 00:53:13.883177 1622 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Feb 9 00:53:13.894567 kubelet[1622]: W0209 00:53:13.894503 1622 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Feb 9 00:53:13.894567 kubelet[1622]: E0209 00:53:13.894567 1622 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Feb 9 00:53:13.931829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1829809241.mount: Deactivated successfully. 
Feb 9 00:53:13.935369 env[1119]: time="2024-02-09T00:53:13.935335453Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:13.938361 env[1119]: time="2024-02-09T00:53:13.938305379Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:13.939291 env[1119]: time="2024-02-09T00:53:13.939271188Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:13.940825 env[1119]: time="2024-02-09T00:53:13.940798448Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:13.942264 env[1119]: time="2024-02-09T00:53:13.942220325Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:13.943328 env[1119]: time="2024-02-09T00:53:13.943297518Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:13.944360 env[1119]: time="2024-02-09T00:53:13.944340054Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:13.945430 env[1119]: time="2024-02-09T00:53:13.945407558Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 
00:53:13.947427 env[1119]: time="2024-02-09T00:53:13.947402048Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:13.948632 env[1119]: time="2024-02-09T00:53:13.948605254Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:13.950034 env[1119]: time="2024-02-09T00:53:13.950005779Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:13.951241 env[1119]: time="2024-02-09T00:53:13.951214855Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:13.976845 env[1119]: time="2024-02-09T00:53:13.976444675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:53:13.976845 env[1119]: time="2024-02-09T00:53:13.976485674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:53:13.976845 env[1119]: time="2024-02-09T00:53:13.976498318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:53:13.976845 env[1119]: time="2024-02-09T00:53:13.976093128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:53:13.976845 env[1119]: time="2024-02-09T00:53:13.976136321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:53:13.976845 env[1119]: time="2024-02-09T00:53:13.976149507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:53:13.976845 env[1119]: time="2024-02-09T00:53:13.976404999Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75b8fbd9b2d431a3ddb55d626d94ef2559e4db9f0cd9d5a3ad6f7a79a3da1156 pid=1668 runtime=io.containerd.runc.v2 Feb 9 00:53:13.977204 env[1119]: time="2024-02-09T00:53:13.976952011Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd646640a1fbdf00d570f2ec7d323da4a04e89a311ce6aa6c955d281b9e83a75 pid=1683 runtime=io.containerd.runc.v2 Feb 9 00:53:13.977364 env[1119]: time="2024-02-09T00:53:13.977313888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:53:13.977593 env[1119]: time="2024-02-09T00:53:13.977549632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:53:13.977661 env[1119]: time="2024-02-09T00:53:13.977593235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:53:13.977779 env[1119]: time="2024-02-09T00:53:13.977713507Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ca7dc56a314c412e029a643c0661f2ada5c15add528155a2680e2b279190648 pid=1684 runtime=io.containerd.runc.v2 Feb 9 00:53:13.988851 systemd[1]: Started cri-containerd-fd646640a1fbdf00d570f2ec7d323da4a04e89a311ce6aa6c955d281b9e83a75.scope. Feb 9 00:53:13.989807 kubelet[1622]: W0209 00:53:13.989718 1622 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Feb 9 00:53:13.989807 kubelet[1622]: E0209 00:53:13.989785 1622 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Feb 9 00:53:13.996114 systemd[1]: Started cri-containerd-8ca7dc56a314c412e029a643c0661f2ada5c15add528155a2680e2b279190648.scope. Feb 9 00:53:14.003724 systemd[1]: Started cri-containerd-75b8fbd9b2d431a3ddb55d626d94ef2559e4db9f0cd9d5a3ad6f7a79a3da1156.scope. 
Feb 9 00:53:14.034963 env[1119]: time="2024-02-09T00:53:14.034923604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:212dcc5e2f08bec92c239ac5786b7e2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd646640a1fbdf00d570f2ec7d323da4a04e89a311ce6aa6c955d281b9e83a75\"" Feb 9 00:53:14.035981 kubelet[1622]: W0209 00:53:14.035731 1622 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Feb 9 00:53:14.035981 kubelet[1622]: E0209 00:53:14.035790 1622 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Feb 9 00:53:14.037993 env[1119]: time="2024-02-09T00:53:14.036465887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0b63e2b30e007bded58a1df158511251,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ca7dc56a314c412e029a643c0661f2ada5c15add528155a2680e2b279190648\"" Feb 9 00:53:14.038063 kubelet[1622]: E0209 00:53:14.036712 1622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:14.038063 kubelet[1622]: E0209 00:53:14.037512 1622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:14.039758 env[1119]: time="2024-02-09T00:53:14.039689153Z" level=info msg="CreateContainer within sandbox \"fd646640a1fbdf00d570f2ec7d323da4a04e89a311ce6aa6c955d281b9e83a75\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 00:53:14.040045 env[1119]: time="2024-02-09T00:53:14.040014728Z" level=info msg="CreateContainer within sandbox \"8ca7dc56a314c412e029a643c0661f2ada5c15add528155a2680e2b279190648\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 00:53:14.045191 env[1119]: time="2024-02-09T00:53:14.045155939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d0325d16aab19669b5fea4b6623890e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"75b8fbd9b2d431a3ddb55d626d94ef2559e4db9f0cd9d5a3ad6f7a79a3da1156\"" Feb 9 00:53:14.045749 kubelet[1622]: E0209 00:53:14.045722 1622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:14.047675 env[1119]: time="2024-02-09T00:53:14.047641957Z" level=info msg="CreateContainer within sandbox \"75b8fbd9b2d431a3ddb55d626d94ef2559e4db9f0cd9d5a3ad6f7a79a3da1156\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 00:53:14.062809 env[1119]: time="2024-02-09T00:53:14.062765823Z" level=info msg="CreateContainer within sandbox \"fd646640a1fbdf00d570f2ec7d323da4a04e89a311ce6aa6c955d281b9e83a75\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b63407754e89ea4bd339bef81c212dea91a07f25f0f522f55bba295c3319b008\"" Feb 9 00:53:14.063311 env[1119]: time="2024-02-09T00:53:14.063286555Z" level=info msg="StartContainer for \"b63407754e89ea4bd339bef81c212dea91a07f25f0f522f55bba295c3319b008\"" Feb 9 00:53:14.069812 env[1119]: time="2024-02-09T00:53:14.069772771Z" level=info msg="CreateContainer within sandbox \"8ca7dc56a314c412e029a643c0661f2ada5c15add528155a2680e2b279190648\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"32884cb1be5ed056c01a7bfb8f84a91d26c61ad364135d9950db1321e40223da\"" Feb 9 00:53:14.070333 
env[1119]: time="2024-02-09T00:53:14.070294573Z" level=info msg="StartContainer for \"32884cb1be5ed056c01a7bfb8f84a91d26c61ad364135d9950db1321e40223da\"" Feb 9 00:53:14.072416 env[1119]: time="2024-02-09T00:53:14.072376585Z" level=info msg="CreateContainer within sandbox \"75b8fbd9b2d431a3ddb55d626d94ef2559e4db9f0cd9d5a3ad6f7a79a3da1156\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9f5b36d956d2f2aed1cdff89a79559b5f4afe345a45d317f30cf1da85e50cf8f\"" Feb 9 00:53:14.072883 env[1119]: time="2024-02-09T00:53:14.072866466Z" level=info msg="StartContainer for \"9f5b36d956d2f2aed1cdff89a79559b5f4afe345a45d317f30cf1da85e50cf8f\"" Feb 9 00:53:14.077713 systemd[1]: Started cri-containerd-b63407754e89ea4bd339bef81c212dea91a07f25f0f522f55bba295c3319b008.scope. Feb 9 00:53:14.088028 systemd[1]: Started cri-containerd-32884cb1be5ed056c01a7bfb8f84a91d26c61ad364135d9950db1321e40223da.scope. Feb 9 00:53:14.094548 systemd[1]: Started cri-containerd-9f5b36d956d2f2aed1cdff89a79559b5f4afe345a45d317f30cf1da85e50cf8f.scope. 
Feb 9 00:53:14.128173 env[1119]: time="2024-02-09T00:53:14.128127271Z" level=info msg="StartContainer for \"b63407754e89ea4bd339bef81c212dea91a07f25f0f522f55bba295c3319b008\" returns successfully" Feb 9 00:53:14.139156 env[1119]: time="2024-02-09T00:53:14.137622531Z" level=info msg="StartContainer for \"32884cb1be5ed056c01a7bfb8f84a91d26c61ad364135d9950db1321e40223da\" returns successfully" Feb 9 00:53:14.144980 env[1119]: time="2024-02-09T00:53:14.144938060Z" level=info msg="StartContainer for \"9f5b36d956d2f2aed1cdff89a79559b5f4afe345a45d317f30cf1da85e50cf8f\" returns successfully" Feb 9 00:53:14.461019 kubelet[1622]: I0209 00:53:14.460605 1622 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 00:53:14.977608 kubelet[1622]: E0209 00:53:14.977578 1622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:14.979209 kubelet[1622]: E0209 00:53:14.979186 1622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:14.980341 kubelet[1622]: E0209 00:53:14.980328 1622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:15.309703 kubelet[1622]: E0209 00:53:15.309658 1622 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 9 00:53:15.384607 kubelet[1622]: I0209 00:53:15.384565 1622 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 00:53:15.392076 kubelet[1622]: E0209 00:53:15.392046 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 00:53:15.492764 kubelet[1622]: E0209 
00:53:15.492704 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 00:53:15.593690 kubelet[1622]: E0209 00:53:15.593541 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 00:53:15.694083 kubelet[1622]: E0209 00:53:15.694040 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 00:53:15.794659 kubelet[1622]: E0209 00:53:15.794602 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 00:53:15.895315 kubelet[1622]: E0209 00:53:15.895165 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 00:53:15.982052 kubelet[1622]: E0209 00:53:15.982016 1622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:15.995858 kubelet[1622]: E0209 00:53:15.995826 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 00:53:16.096605 kubelet[1622]: E0209 00:53:16.096552 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 00:53:16.197513 kubelet[1622]: E0209 00:53:16.197386 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 00:53:16.297981 kubelet[1622]: E0209 00:53:16.297936 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 00:53:16.950683 kubelet[1622]: I0209 00:53:16.950636 1622 apiserver.go:52] "Watching apiserver" Feb 9 00:53:16.954066 kubelet[1622]: I0209 00:53:16.954043 1622 desired_state_of_world_populator.go:159] "Finished populating initial desired state of 
world" Feb 9 00:53:18.065880 systemd[1]: Reloading. Feb 9 00:53:18.109925 kubelet[1622]: E0209 00:53:18.109418 1622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:18.134610 /usr/lib/systemd/system-generators/torcx-generator[1918]: time="2024-02-09T00:53:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 00:53:18.134645 /usr/lib/systemd/system-generators/torcx-generator[1918]: time="2024-02-09T00:53:18Z" level=info msg="torcx already run" Feb 9 00:53:18.190785 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 00:53:18.190802 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 00:53:18.207579 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 00:53:18.292997 systemd[1]: Stopping kubelet.service... Feb 9 00:53:18.314679 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 00:53:18.314848 systemd[1]: Stopped kubelet.service. Feb 9 00:53:18.314905 systemd[1]: kubelet.service: Consumed 1.007s CPU time. Feb 9 00:53:18.316438 systemd[1]: Started kubelet.service. Feb 9 00:53:18.365780 kubelet[1959]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 00:53:18.365780 kubelet[1959]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 00:53:18.365780 kubelet[1959]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 00:53:18.366165 kubelet[1959]: I0209 00:53:18.365817 1959 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 00:53:18.366917 sudo[1971]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 00:53:18.367083 sudo[1971]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 00:53:18.369446 kubelet[1959]: I0209 00:53:18.369274 1959 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 00:53:18.369446 kubelet[1959]: I0209 00:53:18.369295 1959 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 00:53:18.369554 kubelet[1959]: I0209 00:53:18.369509 1959 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 00:53:18.370730 kubelet[1959]: I0209 00:53:18.370714 1959 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 00:53:18.371783 kubelet[1959]: I0209 00:53:18.371595 1959 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 00:53:18.382407 kubelet[1959]: I0209 00:53:18.382386 1959 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 00:53:18.382558 kubelet[1959]: I0209 00:53:18.382542 1959 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 00:53:18.382694 kubelet[1959]: I0209 00:53:18.382672 1959 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 00:53:18.382694 kubelet[1959]: I0209 00:53:18.382693 1959 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 00:53:18.382823 kubelet[1959]: I0209 00:53:18.382700 1959 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 00:53:18.382823 kubelet[1959]: I0209 
00:53:18.382727 1959 state_mem.go:36] "Initialized new in-memory state store" Feb 9 00:53:18.382823 kubelet[1959]: I0209 00:53:18.382794 1959 kubelet.go:393] "Attempting to sync node with API server" Feb 9 00:53:18.382823 kubelet[1959]: I0209 00:53:18.382815 1959 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 00:53:18.382906 kubelet[1959]: I0209 00:53:18.382834 1959 kubelet.go:309] "Adding apiserver pod source" Feb 9 00:53:18.382906 kubelet[1959]: I0209 00:53:18.382851 1959 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 00:53:18.383513 kubelet[1959]: I0209 00:53:18.383492 1959 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 00:53:18.383918 kubelet[1959]: I0209 00:53:18.383903 1959 server.go:1232] "Started kubelet" Feb 9 00:53:18.387883 kubelet[1959]: I0209 00:53:18.387857 1959 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 00:53:18.392516 kubelet[1959]: I0209 00:53:18.392502 1959 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 00:53:18.393114 kubelet[1959]: I0209 00:53:18.393087 1959 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 00:53:18.393310 kubelet[1959]: I0209 00:53:18.393294 1959 server.go:462] "Adding debug handlers to kubelet server" Feb 9 00:53:18.394184 kubelet[1959]: I0209 00:53:18.394171 1959 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 00:53:18.395946 kubelet[1959]: I0209 00:53:18.395935 1959 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 00:53:18.396042 kubelet[1959]: E0209 00:53:18.395560 1959 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 00:53:18.396117 kubelet[1959]: 
E0209 00:53:18.396104 1959 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 00:53:18.396546 kubelet[1959]: I0209 00:53:18.396535 1959 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 00:53:18.396884 kubelet[1959]: I0209 00:53:18.396872 1959 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 00:53:18.407441 kubelet[1959]: I0209 00:53:18.407413 1959 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 00:53:18.408179 kubelet[1959]: I0209 00:53:18.408164 1959 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 9 00:53:18.408179 kubelet[1959]: I0209 00:53:18.408185 1959 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 00:53:18.408179 kubelet[1959]: I0209 00:53:18.408205 1959 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 00:53:18.408302 kubelet[1959]: E0209 00:53:18.408244 1959 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 00:53:18.444515 kubelet[1959]: I0209 00:53:18.444477 1959 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 00:53:18.444515 kubelet[1959]: I0209 00:53:18.444507 1959 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 00:53:18.444708 kubelet[1959]: I0209 00:53:18.444533 1959 state_mem.go:36] "Initialized new in-memory state store" Feb 9 00:53:18.444708 kubelet[1959]: I0209 00:53:18.444696 1959 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 00:53:18.444776 kubelet[1959]: I0209 00:53:18.444719 1959 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 9 00:53:18.444776 kubelet[1959]: I0209 00:53:18.444726 1959 policy_none.go:49] "None policy: Start" Feb 9 00:53:18.445500 kubelet[1959]: I0209 00:53:18.445460 1959 memory_manager.go:169] 
"Starting memorymanager" policy="None" Feb 9 00:53:18.445500 kubelet[1959]: I0209 00:53:18.445501 1959 state_mem.go:35] "Initializing new in-memory state store" Feb 9 00:53:18.445660 kubelet[1959]: I0209 00:53:18.445640 1959 state_mem.go:75] "Updated machine memory state" Feb 9 00:53:18.448882 kubelet[1959]: I0209 00:53:18.448860 1959 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 00:53:18.449079 kubelet[1959]: I0209 00:53:18.449057 1959 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 00:53:18.497035 kubelet[1959]: I0209 00:53:18.497004 1959 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 00:53:18.503891 kubelet[1959]: I0209 00:53:18.503864 1959 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 00:53:18.503964 kubelet[1959]: I0209 00:53:18.503916 1959 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 00:53:18.508996 kubelet[1959]: I0209 00:53:18.508956 1959 topology_manager.go:215] "Topology Admit Handler" podUID="212dcc5e2f08bec92c239ac5786b7e2b" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 9 00:53:18.509172 kubelet[1959]: I0209 00:53:18.509043 1959 topology_manager.go:215] "Topology Admit Handler" podUID="d0325d16aab19669b5fea4b6623890e6" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 9 00:53:18.509172 kubelet[1959]: I0209 00:53:18.509079 1959 topology_manager.go:215] "Topology Admit Handler" podUID="0b63e2b30e007bded58a1df158511251" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 9 00:53:18.512978 kubelet[1959]: E0209 00:53:18.512947 1959 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 00:53:18.598572 kubelet[1959]: I0209 00:53:18.598454 1959 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:53:18.598572 kubelet[1959]: I0209 00:53:18.598513 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:53:18.598572 kubelet[1959]: I0209 00:53:18.598541 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:53:18.598774 kubelet[1959]: I0209 00:53:18.598595 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:53:18.598774 kubelet[1959]: I0209 00:53:18.598650 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b63e2b30e007bded58a1df158511251-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b63e2b30e007bded58a1df158511251\") " pod="kube-system/kube-apiserver-localhost" Feb 9 00:53:18.598774 kubelet[1959]: I0209 00:53:18.598673 1959 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b63e2b30e007bded58a1df158511251-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b63e2b30e007bded58a1df158511251\") " pod="kube-system/kube-apiserver-localhost" Feb 9 00:53:18.598774 kubelet[1959]: I0209 00:53:18.598700 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:53:18.598774 kubelet[1959]: I0209 00:53:18.598721 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0325d16aab19669b5fea4b6623890e6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d0325d16aab19669b5fea4b6623890e6\") " pod="kube-system/kube-scheduler-localhost" Feb 9 00:53:18.598934 kubelet[1959]: I0209 00:53:18.598842 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b63e2b30e007bded58a1df158511251-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0b63e2b30e007bded58a1df158511251\") " pod="kube-system/kube-apiserver-localhost" Feb 9 00:53:18.814764 kubelet[1959]: E0209 00:53:18.814700 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:18.816293 kubelet[1959]: E0209 00:53:18.816243 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 
00:53:18.816754 kubelet[1959]: E0209 00:53:18.816738 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:18.835043 sudo[1971]: pam_unix(sudo:session): session closed for user root Feb 9 00:53:19.383863 kubelet[1959]: I0209 00:53:19.383823 1959 apiserver.go:52] "Watching apiserver" Feb 9 00:53:19.397384 kubelet[1959]: I0209 00:53:19.397339 1959 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 00:53:19.427332 kubelet[1959]: E0209 00:53:19.427308 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:19.427417 kubelet[1959]: E0209 00:53:19.427364 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:19.428792 kubelet[1959]: E0209 00:53:19.428765 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:19.452305 kubelet[1959]: I0209 00:53:19.452263 1959 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.452212726 podCreationTimestamp="2024-02-09 00:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:53:19.447193376 +0000 UTC m=+1.126587541" watchObservedRunningTime="2024-02-09 00:53:19.452212726 +0000 UTC m=+1.131606881" Feb 9 00:53:19.452493 kubelet[1959]: I0209 00:53:19.452352 1959 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=1.452338286 podCreationTimestamp="2024-02-09 00:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:53:19.451791812 +0000 UTC m=+1.131185977" watchObservedRunningTime="2024-02-09 00:53:19.452338286 +0000 UTC m=+1.131732451" Feb 9 00:53:19.757991 sudo[1204]: pam_unix(sudo:session): session closed for user root Feb 9 00:53:19.759231 sshd[1201]: pam_unix(sshd:session): session closed for user core Feb 9 00:53:19.761515 systemd[1]: sshd@4-10.0.0.122:22-10.0.0.1:35658.service: Deactivated successfully. Feb 9 00:53:19.762158 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 00:53:19.762298 systemd[1]: session-5.scope: Consumed 2.885s CPU time. Feb 9 00:53:19.762634 systemd-logind[1107]: Session 5 logged out. Waiting for processes to exit. Feb 9 00:53:19.763204 systemd-logind[1107]: Removed session 5. Feb 9 00:53:20.429263 kubelet[1959]: E0209 00:53:20.429211 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:24.545122 kubelet[1959]: E0209 00:53:24.545093 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:24.555202 kubelet[1959]: I0209 00:53:24.555174 1959 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.555134279 podCreationTimestamp="2024-02-09 00:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:53:19.458215624 +0000 UTC m=+1.137609789" watchObservedRunningTime="2024-02-09 00:53:24.555134279 +0000 UTC m=+6.234528444" Feb 9 00:53:25.199113 kubelet[1959]: E0209 00:53:25.199063 1959 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:25.433892 kubelet[1959]: E0209 00:53:25.433858 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:25.434050 kubelet[1959]: E0209 00:53:25.433909 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:26.151821 update_engine[1109]: I0209 00:53:26.151784 1109 update_attempter.cc:509] Updating boot flags... Feb 9 00:53:26.434831 kubelet[1959]: E0209 00:53:26.434739 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:28.317790 kubelet[1959]: E0209 00:53:28.317754 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:28.437368 kubelet[1959]: E0209 00:53:28.437335 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:32.884078 kubelet[1959]: I0209 00:53:32.884048 1959 topology_manager.go:215] "Topology Admit Handler" podUID="cbb6d553-f16b-476f-a0b2-949da044bfb2" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-85kd2" Feb 9 00:53:32.888100 systemd[1]: Created slice kubepods-besteffort-podcbb6d553_f16b_476f_a0b2_949da044bfb2.slice. 
Feb 9 00:53:32.906504 kubelet[1959]: I0209 00:53:32.906457 1959 topology_manager.go:215] "Topology Admit Handler" podUID="c60e5e55-7382-4ea3-ae3d-d9edf820d93e" podNamespace="kube-system" podName="kube-proxy-xxslz" Feb 9 00:53:32.910178 systemd[1]: Created slice kubepods-besteffort-podc60e5e55_7382_4ea3_ae3d_d9edf820d93e.slice. Feb 9 00:53:32.911832 kubelet[1959]: I0209 00:53:32.911815 1959 topology_manager.go:215] "Topology Admit Handler" podUID="7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" podNamespace="kube-system" podName="cilium-ndngt" Feb 9 00:53:32.931554 systemd[1]: Created slice kubepods-burstable-pod7fe16cc4_f4b1_4cde_8a15_503fd6a1db00.slice. Feb 9 00:53:32.960560 kubelet[1959]: I0209 00:53:32.960536 1959 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 00:53:32.960983 env[1119]: time="2024-02-09T00:53:32.960921405Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 00:53:32.961434 kubelet[1959]: I0209 00:53:32.961134 1959 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 00:53:32.994390 kubelet[1959]: I0209 00:53:32.994364 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c60e5e55-7382-4ea3-ae3d-d9edf820d93e-lib-modules\") pod \"kube-proxy-xxslz\" (UID: \"c60e5e55-7382-4ea3-ae3d-d9edf820d93e\") " pod="kube-system/kube-proxy-xxslz" Feb 9 00:53:32.994501 kubelet[1959]: I0209 00:53:32.994398 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-hostproc\") pod \"cilium-ndngt\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") " pod="kube-system/cilium-ndngt" Feb 9 00:53:32.994501 kubelet[1959]: I0209 00:53:32.994418 1959 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-host-proc-sys-kernel\") pod \"cilium-ndngt\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") " pod="kube-system/cilium-ndngt" Feb 9 00:53:32.994501 kubelet[1959]: I0209 00:53:32.994436 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-lib-modules\") pod \"cilium-ndngt\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") " pod="kube-system/cilium-ndngt" Feb 9 00:53:32.994501 kubelet[1959]: I0209 00:53:32.994454 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-hubble-tls\") pod \"cilium-ndngt\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") " pod="kube-system/cilium-ndngt" Feb 9 00:53:32.994501 kubelet[1959]: I0209 00:53:32.994482 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9n9b\" (UniqueName: \"kubernetes.io/projected/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-kube-api-access-n9n9b\") pod \"cilium-ndngt\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") " pod="kube-system/cilium-ndngt" Feb 9 00:53:32.994618 kubelet[1959]: I0209 00:53:32.994503 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-cilium-run\") pod \"cilium-ndngt\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") " pod="kube-system/cilium-ndngt" Feb 9 00:53:32.994618 kubelet[1959]: I0209 00:53:32.994551 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-bpf-maps\") pod \"cilium-ndngt\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") " pod="kube-system/cilium-ndngt" Feb 9 00:53:32.994618 kubelet[1959]: I0209 00:53:32.994598 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-xtables-lock\") pod \"cilium-ndngt\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") " pod="kube-system/cilium-ndngt" Feb 9 00:53:32.994618 kubelet[1959]: I0209 00:53:32.994620 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-clustermesh-secrets\") pod \"cilium-ndngt\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") " pod="kube-system/cilium-ndngt" Feb 9 00:53:32.994713 kubelet[1959]: I0209 00:53:32.994637 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-cilium-config-path\") pod \"cilium-ndngt\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") " pod="kube-system/cilium-ndngt" Feb 9 00:53:32.994713 kubelet[1959]: I0209 00:53:32.994658 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c60e5e55-7382-4ea3-ae3d-d9edf820d93e-kube-proxy\") pod \"kube-proxy-xxslz\" (UID: \"c60e5e55-7382-4ea3-ae3d-d9edf820d93e\") " pod="kube-system/kube-proxy-xxslz" Feb 9 00:53:32.994713 kubelet[1959]: I0209 00:53:32.994705 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cbb6d553-f16b-476f-a0b2-949da044bfb2-cilium-config-path\") pod 
\"cilium-operator-6bc8ccdb58-85kd2\" (UID: \"cbb6d553-f16b-476f-a0b2-949da044bfb2\") " pod="kube-system/cilium-operator-6bc8ccdb58-85kd2" Feb 9 00:53:32.994784 kubelet[1959]: I0209 00:53:32.994732 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt55s\" (UniqueName: \"kubernetes.io/projected/cbb6d553-f16b-476f-a0b2-949da044bfb2-kube-api-access-rt55s\") pod \"cilium-operator-6bc8ccdb58-85kd2\" (UID: \"cbb6d553-f16b-476f-a0b2-949da044bfb2\") " pod="kube-system/cilium-operator-6bc8ccdb58-85kd2" Feb 9 00:53:32.994784 kubelet[1959]: I0209 00:53:32.994760 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkth2\" (UniqueName: \"kubernetes.io/projected/c60e5e55-7382-4ea3-ae3d-d9edf820d93e-kube-api-access-fkth2\") pod \"kube-proxy-xxslz\" (UID: \"c60e5e55-7382-4ea3-ae3d-d9edf820d93e\") " pod="kube-system/kube-proxy-xxslz" Feb 9 00:53:32.994833 kubelet[1959]: I0209 00:53:32.994788 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-cilium-cgroup\") pod \"cilium-ndngt\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") " pod="kube-system/cilium-ndngt" Feb 9 00:53:32.994833 kubelet[1959]: I0209 00:53:32.994820 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-cni-path\") pod \"cilium-ndngt\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") " pod="kube-system/cilium-ndngt" Feb 9 00:53:32.994882 kubelet[1959]: I0209 00:53:32.994839 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-etc-cni-netd\") pod \"cilium-ndngt\" (UID: 
\"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") " pod="kube-system/cilium-ndngt" Feb 9 00:53:32.994882 kubelet[1959]: I0209 00:53:32.994875 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-host-proc-sys-net\") pod \"cilium-ndngt\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") " pod="kube-system/cilium-ndngt" Feb 9 00:53:32.994927 kubelet[1959]: I0209 00:53:32.994899 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c60e5e55-7382-4ea3-ae3d-d9edf820d93e-xtables-lock\") pod \"kube-proxy-xxslz\" (UID: \"c60e5e55-7382-4ea3-ae3d-d9edf820d93e\") " pod="kube-system/kube-proxy-xxslz" Feb 9 00:53:33.195712 kubelet[1959]: E0209 00:53:33.195619 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:33.196903 env[1119]: time="2024-02-09T00:53:33.196856473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-85kd2,Uid:cbb6d553-f16b-476f-a0b2-949da044bfb2,Namespace:kube-system,Attempt:0,}" Feb 9 00:53:33.215435 kubelet[1959]: E0209 00:53:33.215408 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:33.215662 env[1119]: time="2024-02-09T00:53:33.215588667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:53:33.215764 env[1119]: time="2024-02-09T00:53:33.215645514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:53:33.215764 env[1119]: time="2024-02-09T00:53:33.215659221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:53:33.216038 env[1119]: time="2024-02-09T00:53:33.216003683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xxslz,Uid:c60e5e55-7382-4ea3-ae3d-d9edf820d93e,Namespace:kube-system,Attempt:0,}" Feb 9 00:53:33.216153 env[1119]: time="2024-02-09T00:53:33.216101016Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10bb309a83fd3a587b8817bbff33c73faeacb522bc65b045dfb15b214bbe5ace pid=2066 runtime=io.containerd.runc.v2 Feb 9 00:53:33.229886 systemd[1]: Started cri-containerd-10bb309a83fd3a587b8817bbff33c73faeacb522bc65b045dfb15b214bbe5ace.scope. Feb 9 00:53:33.233390 env[1119]: time="2024-02-09T00:53:33.233325036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:53:33.234959 kubelet[1959]: E0209 00:53:33.234234 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:33.239642 env[1119]: time="2024-02-09T00:53:33.239610943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ndngt,Uid:7fe16cc4-f4b1-4cde-8a15-503fd6a1db00,Namespace:kube-system,Attempt:0,}" Feb 9 00:53:33.240199 env[1119]: time="2024-02-09T00:53:33.240151566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:53:33.240270 env[1119]: time="2024-02-09T00:53:33.240178948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:53:33.240744 env[1119]: time="2024-02-09T00:53:33.240702619Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75ba407aa26ef41716e3262e53665edc47c236c8795c4f2820ebdf2380195f95 pid=2093 runtime=io.containerd.runc.v2 Feb 9 00:53:33.253123 systemd[1]: Started cri-containerd-75ba407aa26ef41716e3262e53665edc47c236c8795c4f2820ebdf2380195f95.scope. Feb 9 00:53:33.277358 env[1119]: time="2024-02-09T00:53:33.277169408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:53:33.277358 env[1119]: time="2024-02-09T00:53:33.277209314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:53:33.277358 env[1119]: time="2024-02-09T00:53:33.277222349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:53:33.277555 env[1119]: time="2024-02-09T00:53:33.277381129Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3 pid=2140 runtime=io.containerd.runc.v2 Feb 9 00:53:33.277652 env[1119]: time="2024-02-09T00:53:33.277619200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xxslz,Uid:c60e5e55-7382-4ea3-ae3d-d9edf820d93e,Namespace:kube-system,Attempt:0,} returns sandbox id \"75ba407aa26ef41716e3262e53665edc47c236c8795c4f2820ebdf2380195f95\"" Feb 9 00:53:33.278646 kubelet[1959]: E0209 00:53:33.278625 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:33.280547 env[1119]: time="2024-02-09T00:53:33.280514289Z" level=info msg="CreateContainer within sandbox \"75ba407aa26ef41716e3262e53665edc47c236c8795c4f2820ebdf2380195f95\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 00:53:33.288339 env[1119]: time="2024-02-09T00:53:33.288288183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-85kd2,Uid:cbb6d553-f16b-476f-a0b2-949da044bfb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"10bb309a83fd3a587b8817bbff33c73faeacb522bc65b045dfb15b214bbe5ace\"" Feb 9 00:53:33.288782 kubelet[1959]: E0209 00:53:33.288764 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:33.289772 env[1119]: time="2024-02-09T00:53:33.289734750Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 00:53:33.293518 systemd[1]: Started 
cri-containerd-da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3.scope. Feb 9 00:53:33.307200 env[1119]: time="2024-02-09T00:53:33.307153237Z" level=info msg="CreateContainer within sandbox \"75ba407aa26ef41716e3262e53665edc47c236c8795c4f2820ebdf2380195f95\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c7110263398e56c20c9d31e09f0488b08ccd942126a2182e5b57c4d93ce8c478\"" Feb 9 00:53:33.309512 env[1119]: time="2024-02-09T00:53:33.309479511Z" level=info msg="StartContainer for \"c7110263398e56c20c9d31e09f0488b08ccd942126a2182e5b57c4d93ce8c478\"" Feb 9 00:53:33.317601 env[1119]: time="2024-02-09T00:53:33.317564603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ndngt,Uid:7fe16cc4-f4b1-4cde-8a15-503fd6a1db00,Namespace:kube-system,Attempt:0,} returns sandbox id \"da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3\"" Feb 9 00:53:33.318448 kubelet[1959]: E0209 00:53:33.318424 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:33.325699 systemd[1]: Started cri-containerd-c7110263398e56c20c9d31e09f0488b08ccd942126a2182e5b57c4d93ce8c478.scope. 
Feb 9 00:53:33.374871 env[1119]: time="2024-02-09T00:53:33.374836118Z" level=info msg="StartContainer for \"c7110263398e56c20c9d31e09f0488b08ccd942126a2182e5b57c4d93ce8c478\" returns successfully" Feb 9 00:53:33.446989 kubelet[1959]: E0209 00:53:33.446881 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:33.453395 kubelet[1959]: I0209 00:53:33.453372 1959 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xxslz" podStartSLOduration=1.453343185 podCreationTimestamp="2024-02-09 00:53:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:53:33.453074487 +0000 UTC m=+15.132468662" watchObservedRunningTime="2024-02-09 00:53:33.453343185 +0000 UTC m=+15.132737350" Feb 9 00:53:34.619412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3041744985.mount: Deactivated successfully. 
Feb 9 00:53:36.316271 env[1119]: time="2024-02-09T00:53:36.316206698Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:36.317822 env[1119]: time="2024-02-09T00:53:36.317789862Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:36.319337 env[1119]: time="2024-02-09T00:53:36.319292793Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:36.319785 env[1119]: time="2024-02-09T00:53:36.319752802Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 00:53:36.320371 env[1119]: time="2024-02-09T00:53:36.320325745Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 00:53:36.322020 env[1119]: time="2024-02-09T00:53:36.321928265Z" level=info msg="CreateContainer within sandbox \"10bb309a83fd3a587b8817bbff33c73faeacb522bc65b045dfb15b214bbe5ace\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 00:53:36.340887 env[1119]: time="2024-02-09T00:53:36.340668104Z" level=info msg="CreateContainer within sandbox \"10bb309a83fd3a587b8817bbff33c73faeacb522bc65b045dfb15b214bbe5ace\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7\"" Feb 9 00:53:36.341612 env[1119]: time="2024-02-09T00:53:36.341568366Z" level=info msg="StartContainer for \"d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7\"" Feb 9 00:53:36.360983 systemd[1]: run-containerd-runc-k8s.io-d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7-runc.lVay8t.mount: Deactivated successfully. Feb 9 00:53:36.362191 systemd[1]: Started cri-containerd-d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7.scope. Feb 9 00:53:36.384710 env[1119]: time="2024-02-09T00:53:36.384658188Z" level=info msg="StartContainer for \"d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7\" returns successfully" Feb 9 00:53:36.452680 kubelet[1959]: E0209 00:53:36.452350 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:37.453440 kubelet[1959]: E0209 00:53:37.453396 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:38.797954 systemd[1]: Started sshd@5-10.0.0.122:22-10.0.0.1:41556.service. Feb 9 00:53:38.830586 sshd[2378]: Accepted publickey for core from 10.0.0.1 port 41556 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:53:38.831757 sshd[2378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:53:38.835420 systemd-logind[1107]: New session 6 of user core. Feb 9 00:53:38.836176 systemd[1]: Started session-6.scope. Feb 9 00:53:38.958172 sshd[2378]: pam_unix(sshd:session): session closed for user core Feb 9 00:53:38.960600 systemd[1]: sshd@5-10.0.0.122:22-10.0.0.1:41556.service: Deactivated successfully. Feb 9 00:53:38.961287 systemd[1]: session-6.scope: Deactivated successfully. 
Feb 9 00:53:38.962026 systemd-logind[1107]: Session 6 logged out. Waiting for processes to exit. Feb 9 00:53:38.962862 systemd-logind[1107]: Removed session 6. Feb 9 00:53:42.803889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1385459512.mount: Deactivated successfully. Feb 9 00:53:43.962697 systemd[1]: Started sshd@6-10.0.0.122:22-10.0.0.1:41564.service. Feb 9 00:53:44.241320 sshd[2399]: Accepted publickey for core from 10.0.0.1 port 41564 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:53:44.242226 sshd[2399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:53:44.245266 systemd-logind[1107]: New session 7 of user core. Feb 9 00:53:44.245961 systemd[1]: Started session-7.scope. Feb 9 00:53:44.351884 sshd[2399]: pam_unix(sshd:session): session closed for user core Feb 9 00:53:44.354059 systemd[1]: sshd@6-10.0.0.122:22-10.0.0.1:41564.service: Deactivated successfully. Feb 9 00:53:44.354859 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 00:53:44.355493 systemd-logind[1107]: Session 7 logged out. Waiting for processes to exit. Feb 9 00:53:44.356407 systemd-logind[1107]: Removed session 7. 
Feb 9 00:53:46.695950 env[1119]: time="2024-02-09T00:53:46.695893956Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:46.697527 env[1119]: time="2024-02-09T00:53:46.697474857Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:46.698928 env[1119]: time="2024-02-09T00:53:46.698889935Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:53:46.699387 env[1119]: time="2024-02-09T00:53:46.699361494Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 00:53:46.700866 env[1119]: time="2024-02-09T00:53:46.700837528Z" level=info msg="CreateContainer within sandbox \"da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 00:53:46.710844 env[1119]: time="2024-02-09T00:53:46.710800867Z" level=info msg="CreateContainer within sandbox \"da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f\"" Feb 9 00:53:46.711287 env[1119]: time="2024-02-09T00:53:46.711243562Z" level=info msg="StartContainer for \"c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f\"" Feb 9 00:53:46.726283 systemd[1]: Started 
cri-containerd-c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f.scope. Feb 9 00:53:46.747656 env[1119]: time="2024-02-09T00:53:46.747619037Z" level=info msg="StartContainer for \"c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f\" returns successfully" Feb 9 00:53:46.755543 systemd[1]: cri-containerd-c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f.scope: Deactivated successfully. Feb 9 00:53:47.373419 env[1119]: time="2024-02-09T00:53:47.373366007Z" level=info msg="shim disconnected" id=c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f Feb 9 00:53:47.373722 env[1119]: time="2024-02-09T00:53:47.373424477Z" level=warning msg="cleaning up after shim disconnected" id=c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f namespace=k8s.io Feb 9 00:53:47.373722 env[1119]: time="2024-02-09T00:53:47.373439335Z" level=info msg="cleaning up dead shim" Feb 9 00:53:47.380711 env[1119]: time="2024-02-09T00:53:47.380674336Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:53:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2460 runtime=io.containerd.runc.v2\n" Feb 9 00:53:47.465981 kubelet[1959]: E0209 00:53:47.465957 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:47.468899 env[1119]: time="2024-02-09T00:53:47.468854314Z" level=info msg="CreateContainer within sandbox \"da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 00:53:47.478192 kubelet[1959]: I0209 00:53:47.478165 1959 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-85kd2" podStartSLOduration=12.447429127 podCreationTimestamp="2024-02-09 00:53:32 +0000 UTC" firstStartedPulling="2024-02-09 00:53:33.289401549 +0000 UTC 
m=+14.968795714" lastFinishedPulling="2024-02-09 00:53:36.320102444 +0000 UTC m=+17.999496619" observedRunningTime="2024-02-09 00:53:36.46419804 +0000 UTC m=+18.143592205" watchObservedRunningTime="2024-02-09 00:53:47.478130032 +0000 UTC m=+29.157524198" Feb 9 00:53:47.480945 env[1119]: time="2024-02-09T00:53:47.480896919Z" level=info msg="CreateContainer within sandbox \"da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa\"" Feb 9 00:53:47.481361 env[1119]: time="2024-02-09T00:53:47.481332862Z" level=info msg="StartContainer for \"a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa\"" Feb 9 00:53:47.496935 systemd[1]: Started cri-containerd-a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa.scope. Feb 9 00:53:47.522970 env[1119]: time="2024-02-09T00:53:47.522914401Z" level=info msg="StartContainer for \"a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa\" returns successfully" Feb 9 00:53:47.531856 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 00:53:47.532099 systemd[1]: Stopped systemd-sysctl.service. Feb 9 00:53:47.532294 systemd[1]: Stopping systemd-sysctl.service... Feb 9 00:53:47.533845 systemd[1]: Starting systemd-sysctl.service... Feb 9 00:53:47.536798 systemd[1]: cri-containerd-a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa.scope: Deactivated successfully. Feb 9 00:53:47.544538 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 00:53:47.559036 env[1119]: time="2024-02-09T00:53:47.558973999Z" level=info msg="shim disconnected" id=a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa Feb 9 00:53:47.559036 env[1119]: time="2024-02-09T00:53:47.559029573Z" level=warning msg="cleaning up after shim disconnected" id=a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa namespace=k8s.io Feb 9 00:53:47.559036 env[1119]: time="2024-02-09T00:53:47.559040534Z" level=info msg="cleaning up dead shim" Feb 9 00:53:47.565513 env[1119]: time="2024-02-09T00:53:47.565470137Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:53:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2525 runtime=io.containerd.runc.v2\n" Feb 9 00:53:47.708288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f-rootfs.mount: Deactivated successfully. Feb 9 00:53:48.468079 kubelet[1959]: E0209 00:53:48.468049 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:48.469666 env[1119]: time="2024-02-09T00:53:48.469626141Z" level=info msg="CreateContainer within sandbox \"da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 00:53:48.677369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2342489236.mount: Deactivated successfully. 
Feb 9 00:53:48.686537 env[1119]: time="2024-02-09T00:53:48.686501960Z" level=info msg="CreateContainer within sandbox \"da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63\"" Feb 9 00:53:48.686939 env[1119]: time="2024-02-09T00:53:48.686900131Z" level=info msg="StartContainer for \"e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63\"" Feb 9 00:53:48.700650 systemd[1]: Started cri-containerd-e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63.scope. Feb 9 00:53:48.722429 env[1119]: time="2024-02-09T00:53:48.722342555Z" level=info msg="StartContainer for \"e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63\" returns successfully" Feb 9 00:53:48.722457 systemd[1]: cri-containerd-e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63.scope: Deactivated successfully. Feb 9 00:53:48.736760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63-rootfs.mount: Deactivated successfully. Feb 9 00:53:48.741599 env[1119]: time="2024-02-09T00:53:48.741552107Z" level=info msg="shim disconnected" id=e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63 Feb 9 00:53:48.741599 env[1119]: time="2024-02-09T00:53:48.741599677Z" level=warning msg="cleaning up after shim disconnected" id=e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63 namespace=k8s.io Feb 9 00:53:48.741599 env[1119]: time="2024-02-09T00:53:48.741607923Z" level=info msg="cleaning up dead shim" Feb 9 00:53:48.746982 env[1119]: time="2024-02-09T00:53:48.746946637Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:53:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2581 runtime=io.containerd.runc.v2\n" Feb 9 00:53:49.355628 systemd[1]: Started sshd@7-10.0.0.122:22-10.0.0.1:53224.service. 
Feb 9 00:53:49.387234 sshd[2594]: Accepted publickey for core from 10.0.0.1 port 53224 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:53:49.388193 sshd[2594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:53:49.391291 systemd-logind[1107]: New session 8 of user core. Feb 9 00:53:49.392125 systemd[1]: Started session-8.scope. Feb 9 00:53:49.471570 kubelet[1959]: E0209 00:53:49.471543 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:49.476297 env[1119]: time="2024-02-09T00:53:49.476239752Z" level=info msg="CreateContainer within sandbox \"da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 00:53:49.490116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2780444248.mount: Deactivated successfully. Feb 9 00:53:49.491630 env[1119]: time="2024-02-09T00:53:49.491587710Z" level=info msg="CreateContainer within sandbox \"da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2\"" Feb 9 00:53:49.492282 env[1119]: time="2024-02-09T00:53:49.492212708Z" level=info msg="StartContainer for \"fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2\"" Feb 9 00:53:49.502078 sshd[2594]: pam_unix(sshd:session): session closed for user core Feb 9 00:53:49.506440 systemd-logind[1107]: Session 8 logged out. Waiting for processes to exit. Feb 9 00:53:49.508971 systemd[1]: Started cri-containerd-fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2.scope. Feb 9 00:53:49.509242 systemd[1]: sshd@7-10.0.0.122:22-10.0.0.1:53224.service: Deactivated successfully. 
Feb 9 00:53:49.509738 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 00:53:49.511432 systemd-logind[1107]: Removed session 8. Feb 9 00:53:49.530359 systemd[1]: cri-containerd-fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2.scope: Deactivated successfully. Feb 9 00:53:49.533866 env[1119]: time="2024-02-09T00:53:49.533836462Z" level=info msg="StartContainer for \"fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2\" returns successfully" Feb 9 00:53:49.555641 env[1119]: time="2024-02-09T00:53:49.555595514Z" level=info msg="shim disconnected" id=fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2 Feb 9 00:53:49.555641 env[1119]: time="2024-02-09T00:53:49.555640308Z" level=warning msg="cleaning up after shim disconnected" id=fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2 namespace=k8s.io Feb 9 00:53:49.555809 env[1119]: time="2024-02-09T00:53:49.555648645Z" level=info msg="cleaning up dead shim" Feb 9 00:53:49.561408 env[1119]: time="2024-02-09T00:53:49.561369117Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:53:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2649 runtime=io.containerd.runc.v2\n" Feb 9 00:53:49.707737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2-rootfs.mount: Deactivated successfully. 
Feb 9 00:53:50.474801 kubelet[1959]: E0209 00:53:50.474761 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:50.476864 env[1119]: time="2024-02-09T00:53:50.476832271Z" level=info msg="CreateContainer within sandbox \"da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 00:53:50.492336 env[1119]: time="2024-02-09T00:53:50.492283050Z" level=info msg="CreateContainer within sandbox \"da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d\"" Feb 9 00:53:50.492835 env[1119]: time="2024-02-09T00:53:50.492792399Z" level=info msg="StartContainer for \"acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d\"" Feb 9 00:53:50.509016 systemd[1]: Started cri-containerd-acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d.scope. Feb 9 00:53:50.534826 env[1119]: time="2024-02-09T00:53:50.534782252Z" level=info msg="StartContainer for \"acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d\" returns successfully" Feb 9 00:53:50.602452 kubelet[1959]: I0209 00:53:50.602416 1959 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 00:53:50.616238 kubelet[1959]: I0209 00:53:50.616194 1959 topology_manager.go:215] "Topology Admit Handler" podUID="73a36caf-ba9a-4960-8ffb-4d326f3220b2" podNamespace="kube-system" podName="coredns-5dd5756b68-g5d9j" Feb 9 00:53:50.623203 systemd[1]: Created slice kubepods-burstable-pod73a36caf_ba9a_4960_8ffb_4d326f3220b2.slice. 
Feb 9 00:53:50.632705 kubelet[1959]: I0209 00:53:50.632672 1959 topology_manager.go:215] "Topology Admit Handler" podUID="778226eb-5b94-4665-a5dc-d8ca2533d769" podNamespace="kube-system" podName="coredns-5dd5756b68-rcwnt" Feb 9 00:53:50.638730 systemd[1]: Created slice kubepods-burstable-pod778226eb_5b94_4665_a5dc_d8ca2533d769.slice. Feb 9 00:53:50.709040 systemd[1]: run-containerd-runc-k8s.io-acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d-runc.xST0wA.mount: Deactivated successfully. Feb 9 00:53:50.710824 kubelet[1959]: I0209 00:53:50.710782 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/778226eb-5b94-4665-a5dc-d8ca2533d769-config-volume\") pod \"coredns-5dd5756b68-rcwnt\" (UID: \"778226eb-5b94-4665-a5dc-d8ca2533d769\") " pod="kube-system/coredns-5dd5756b68-rcwnt" Feb 9 00:53:50.710943 kubelet[1959]: I0209 00:53:50.710890 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4xhj\" (UniqueName: \"kubernetes.io/projected/778226eb-5b94-4665-a5dc-d8ca2533d769-kube-api-access-n4xhj\") pod \"coredns-5dd5756b68-rcwnt\" (UID: \"778226eb-5b94-4665-a5dc-d8ca2533d769\") " pod="kube-system/coredns-5dd5756b68-rcwnt" Feb 9 00:53:50.711230 kubelet[1959]: I0209 00:53:50.711186 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73a36caf-ba9a-4960-8ffb-4d326f3220b2-config-volume\") pod \"coredns-5dd5756b68-g5d9j\" (UID: \"73a36caf-ba9a-4960-8ffb-4d326f3220b2\") " pod="kube-system/coredns-5dd5756b68-g5d9j" Feb 9 00:53:50.711346 kubelet[1959]: I0209 00:53:50.711324 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cglf\" (UniqueName: \"kubernetes.io/projected/73a36caf-ba9a-4960-8ffb-4d326f3220b2-kube-api-access-7cglf\") 
pod \"coredns-5dd5756b68-g5d9j\" (UID: \"73a36caf-ba9a-4960-8ffb-4d326f3220b2\") " pod="kube-system/coredns-5dd5756b68-g5d9j" Feb 9 00:53:50.928371 kubelet[1959]: E0209 00:53:50.928336 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:50.928826 env[1119]: time="2024-02-09T00:53:50.928795077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-g5d9j,Uid:73a36caf-ba9a-4960-8ffb-4d326f3220b2,Namespace:kube-system,Attempt:0,}" Feb 9 00:53:50.942970 kubelet[1959]: E0209 00:53:50.942939 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:50.943330 env[1119]: time="2024-02-09T00:53:50.943300734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rcwnt,Uid:778226eb-5b94-4665-a5dc-d8ca2533d769,Namespace:kube-system,Attempt:0,}" Feb 9 00:53:51.480552 kubelet[1959]: E0209 00:53:51.480524 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:51.491406 kubelet[1959]: I0209 00:53:51.491380 1959 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-ndngt" podStartSLOduration=6.110826266 podCreationTimestamp="2024-02-09 00:53:32 +0000 UTC" firstStartedPulling="2024-02-09 00:53:33.319072336 +0000 UTC m=+14.998466501" lastFinishedPulling="2024-02-09 00:53:46.699583664 +0000 UTC m=+28.378977829" observedRunningTime="2024-02-09 00:53:51.490065948 +0000 UTC m=+33.169460113" watchObservedRunningTime="2024-02-09 00:53:51.491337594 +0000 UTC m=+33.170731749" Feb 9 00:53:52.425299 systemd-networkd[1007]: cilium_host: Link UP Feb 9 00:53:52.425400 systemd-networkd[1007]: cilium_net: Link UP Feb 9 
00:53:52.425403 systemd-networkd[1007]: cilium_net: Gained carrier Feb 9 00:53:52.425511 systemd-networkd[1007]: cilium_host: Gained carrier Feb 9 00:53:52.427627 systemd-networkd[1007]: cilium_host: Gained IPv6LL Feb 9 00:53:52.428300 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 00:53:52.482451 kubelet[1959]: E0209 00:53:52.482420 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:52.486651 systemd-networkd[1007]: cilium_vxlan: Link UP Feb 9 00:53:52.486661 systemd-networkd[1007]: cilium_vxlan: Gained carrier Feb 9 00:53:52.659275 kernel: NET: Registered PF_ALG protocol family Feb 9 00:53:53.115897 systemd-networkd[1007]: lxc_health: Link UP Feb 9 00:53:53.126137 systemd-networkd[1007]: lxc_health: Gained carrier Feb 9 00:53:53.126314 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 00:53:53.413375 systemd-networkd[1007]: cilium_net: Gained IPv6LL Feb 9 00:53:53.484416 kubelet[1959]: E0209 00:53:53.484384 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:53.503953 systemd-networkd[1007]: lxcbe35d3194980: Link UP Feb 9 00:53:53.512553 kernel: eth0: renamed from tmp3f88b Feb 9 00:53:53.549037 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 00:53:53.549398 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbe35d3194980: link becomes ready Feb 9 00:53:53.549751 systemd-networkd[1007]: lxcbe35d3194980: Gained carrier Feb 9 00:53:53.549931 systemd-networkd[1007]: lxc276d98a63426: Link UP Feb 9 00:53:53.554356 kernel: eth0: renamed from tmpac351 Feb 9 00:53:53.564081 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc276d98a63426: link becomes ready Feb 9 00:53:53.563359 systemd-networkd[1007]: lxc276d98a63426: Gained carrier Feb 9 
00:53:53.862446 systemd-networkd[1007]: cilium_vxlan: Gained IPv6LL Feb 9 00:53:54.485885 kubelet[1959]: I0209 00:53:54.485859 1959 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 9 00:53:54.486547 kubelet[1959]: E0209 00:53:54.486534 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:54.505857 systemd[1]: Started sshd@8-10.0.0.122:22-10.0.0.1:53228.service. Feb 9 00:53:54.539953 sshd[3213]: Accepted publickey for core from 10.0.0.1 port 53228 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:53:54.541676 sshd[3213]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:53:54.546498 systemd[1]: Started session-9.scope. Feb 9 00:53:54.547057 systemd-logind[1107]: New session 9 of user core. Feb 9 00:53:54.629377 systemd-networkd[1007]: lxc276d98a63426: Gained IPv6LL Feb 9 00:53:54.665190 sshd[3213]: pam_unix(sshd:session): session closed for user core Feb 9 00:53:54.667395 systemd[1]: sshd@8-10.0.0.122:22-10.0.0.1:53228.service: Deactivated successfully. Feb 9 00:53:54.668286 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 00:53:54.669287 systemd-logind[1107]: Session 9 logged out. Waiting for processes to exit. Feb 9 00:53:54.669939 systemd-logind[1107]: Removed session 9. 
Feb 9 00:53:54.822386 systemd-networkd[1007]: lxc_health: Gained IPv6LL Feb 9 00:53:55.141451 systemd-networkd[1007]: lxcbe35d3194980: Gained IPv6LL Feb 9 00:53:55.694291 kubelet[1959]: I0209 00:53:55.693450 1959 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 9 00:53:55.694291 kubelet[1959]: E0209 00:53:55.694112 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:56.488826 kubelet[1959]: E0209 00:53:56.488785 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:56.770630 env[1119]: time="2024-02-09T00:53:56.770374097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:53:56.770630 env[1119]: time="2024-02-09T00:53:56.770406938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:53:56.771048 env[1119]: time="2024-02-09T00:53:56.770416727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:53:56.771048 env[1119]: time="2024-02-09T00:53:56.770522947Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f88b896417b1d3b1c6b942348faee585030c95b157d093d8e4e516b4c70b63e pid=3248 runtime=io.containerd.runc.v2 Feb 9 00:53:56.775590 env[1119]: time="2024-02-09T00:53:56.771216533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:53:56.775590 env[1119]: time="2024-02-09T00:53:56.771238074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:53:56.775590 env[1119]: time="2024-02-09T00:53:56.771260476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:53:56.775590 env[1119]: time="2024-02-09T00:53:56.771396873Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac35181c485d6d03f0316c28cbae12aed577b21b7f0339f900ae16d54cccfb41 pid=3256 runtime=io.containerd.runc.v2 Feb 9 00:53:56.785195 systemd[1]: Started cri-containerd-3f88b896417b1d3b1c6b942348faee585030c95b157d093d8e4e516b4c70b63e.scope. Feb 9 00:53:56.791964 systemd[1]: Started cri-containerd-ac35181c485d6d03f0316c28cbae12aed577b21b7f0339f900ae16d54cccfb41.scope. Feb 9 00:53:56.793281 systemd[1]: run-containerd-runc-k8s.io-ac35181c485d6d03f0316c28cbae12aed577b21b7f0339f900ae16d54cccfb41-runc.qZ4GfU.mount: Deactivated successfully. Feb 9 00:53:56.796068 systemd-resolved[1062]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 00:53:56.806334 systemd-resolved[1062]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 00:53:56.818506 env[1119]: time="2024-02-09T00:53:56.818475535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rcwnt,Uid:778226eb-5b94-4665-a5dc-d8ca2533d769,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f88b896417b1d3b1c6b942348faee585030c95b157d093d8e4e516b4c70b63e\"" Feb 9 00:53:56.819112 kubelet[1959]: E0209 00:53:56.819089 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:56.821995 env[1119]: time="2024-02-09T00:53:56.821973994Z" level=info msg="CreateContainer within sandbox 
\"3f88b896417b1d3b1c6b942348faee585030c95b157d093d8e4e516b4c70b63e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 00:53:56.832111 env[1119]: time="2024-02-09T00:53:56.832069295Z" level=info msg="CreateContainer within sandbox \"3f88b896417b1d3b1c6b942348faee585030c95b157d093d8e4e516b4c70b63e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a11f1cd1e942ca7105ef707c06502714333df0fa89ac616f24fd3b879d9ee0f6\"" Feb 9 00:53:56.833364 env[1119]: time="2024-02-09T00:53:56.832625263Z" level=info msg="StartContainer for \"a11f1cd1e942ca7105ef707c06502714333df0fa89ac616f24fd3b879d9ee0f6\"" Feb 9 00:53:56.834499 env[1119]: time="2024-02-09T00:53:56.834472090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-g5d9j,Uid:73a36caf-ba9a-4960-8ffb-4d326f3220b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac35181c485d6d03f0316c28cbae12aed577b21b7f0339f900ae16d54cccfb41\"" Feb 9 00:53:56.835110 kubelet[1959]: E0209 00:53:56.835085 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:56.836667 env[1119]: time="2024-02-09T00:53:56.836637358Z" level=info msg="CreateContainer within sandbox \"ac35181c485d6d03f0316c28cbae12aed577b21b7f0339f900ae16d54cccfb41\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 00:53:56.847653 env[1119]: time="2024-02-09T00:53:56.847615303Z" level=info msg="CreateContainer within sandbox \"ac35181c485d6d03f0316c28cbae12aed577b21b7f0339f900ae16d54cccfb41\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a0645775d1cd915369199375422ab56feec9746b25c0046a5e8bcf3a41c667dc\"" Feb 9 00:53:56.847847 systemd[1]: Started cri-containerd-a11f1cd1e942ca7105ef707c06502714333df0fa89ac616f24fd3b879d9ee0f6.scope. 
Feb 9 00:53:56.848404 env[1119]: time="2024-02-09T00:53:56.848383479Z" level=info msg="StartContainer for \"a0645775d1cd915369199375422ab56feec9746b25c0046a5e8bcf3a41c667dc\"" Feb 9 00:53:56.867393 systemd[1]: Started cri-containerd-a0645775d1cd915369199375422ab56feec9746b25c0046a5e8bcf3a41c667dc.scope. Feb 9 00:53:56.872986 env[1119]: time="2024-02-09T00:53:56.872898809Z" level=info msg="StartContainer for \"a11f1cd1e942ca7105ef707c06502714333df0fa89ac616f24fd3b879d9ee0f6\" returns successfully" Feb 9 00:53:56.895227 env[1119]: time="2024-02-09T00:53:56.895190670Z" level=info msg="StartContainer for \"a0645775d1cd915369199375422ab56feec9746b25c0046a5e8bcf3a41c667dc\" returns successfully" Feb 9 00:53:57.490581 kubelet[1959]: E0209 00:53:57.490559 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:57.492168 kubelet[1959]: E0209 00:53:57.492139 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:57.498370 kubelet[1959]: I0209 00:53:57.498326 1959 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-g5d9j" podStartSLOduration=25.498296755 podCreationTimestamp="2024-02-09 00:53:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:53:57.497116072 +0000 UTC m=+39.176510238" watchObservedRunningTime="2024-02-09 00:53:57.498296755 +0000 UTC m=+39.177690910" Feb 9 00:53:57.512696 kubelet[1959]: I0209 00:53:57.512655 1959 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rcwnt" podStartSLOduration=25.512607996 podCreationTimestamp="2024-02-09 00:53:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:53:57.511332814 +0000 UTC m=+39.190726979" watchObservedRunningTime="2024-02-09 00:53:57.512607996 +0000 UTC m=+39.192002151" Feb 9 00:53:58.493305 kubelet[1959]: E0209 00:53:58.493278 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:58.493663 kubelet[1959]: E0209 00:53:58.493505 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:59.495269 kubelet[1959]: E0209 00:53:59.495217 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:59.495627 kubelet[1959]: E0209 00:53:59.495386 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:53:59.669247 systemd[1]: Started sshd@9-10.0.0.122:22-10.0.0.1:40706.service. Feb 9 00:53:59.700116 sshd[3406]: Accepted publickey for core from 10.0.0.1 port 40706 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:53:59.701107 sshd[3406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:53:59.704431 systemd-logind[1107]: New session 10 of user core. Feb 9 00:53:59.705213 systemd[1]: Started session-10.scope. Feb 9 00:53:59.811191 sshd[3406]: pam_unix(sshd:session): session closed for user core Feb 9 00:53:59.813677 systemd[1]: sshd@9-10.0.0.122:22-10.0.0.1:40706.service: Deactivated successfully. Feb 9 00:53:59.814164 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 00:53:59.814692 systemd-logind[1107]: Session 10 logged out. 
Waiting for processes to exit. Feb 9 00:53:59.815589 systemd[1]: Started sshd@10-10.0.0.122:22-10.0.0.1:40708.service. Feb 9 00:53:59.816069 systemd-logind[1107]: Removed session 10. Feb 9 00:53:59.846026 sshd[3420]: Accepted publickey for core from 10.0.0.1 port 40708 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:53:59.847117 sshd[3420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:53:59.851031 systemd-logind[1107]: New session 11 of user core. Feb 9 00:53:59.852277 systemd[1]: Started session-11.scope. Feb 9 00:54:00.502096 sshd[3420]: pam_unix(sshd:session): session closed for user core Feb 9 00:54:00.506891 systemd[1]: Started sshd@11-10.0.0.122:22-10.0.0.1:40724.service. Feb 9 00:54:00.507855 systemd[1]: sshd@10-10.0.0.122:22-10.0.0.1:40708.service: Deactivated successfully. Feb 9 00:54:00.508414 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 00:54:00.510758 systemd-logind[1107]: Session 11 logged out. Waiting for processes to exit. Feb 9 00:54:00.514161 systemd-logind[1107]: Removed session 11. Feb 9 00:54:00.547572 sshd[3430]: Accepted publickey for core from 10.0.0.1 port 40724 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:54:00.548808 sshd[3430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:54:00.552533 systemd-logind[1107]: New session 12 of user core. Feb 9 00:54:00.553651 systemd[1]: Started session-12.scope. Feb 9 00:54:00.658650 sshd[3430]: pam_unix(sshd:session): session closed for user core Feb 9 00:54:00.661197 systemd[1]: sshd@11-10.0.0.122:22-10.0.0.1:40724.service: Deactivated successfully. Feb 9 00:54:00.661884 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 00:54:00.662663 systemd-logind[1107]: Session 12 logged out. Waiting for processes to exit. Feb 9 00:54:00.663368 systemd-logind[1107]: Removed session 12. 
Feb 9 00:54:05.663644 systemd[1]: Started sshd@12-10.0.0.122:22-10.0.0.1:40738.service. Feb 9 00:54:05.694267 sshd[3446]: Accepted publickey for core from 10.0.0.1 port 40738 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:54:05.695610 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:54:05.699241 systemd-logind[1107]: New session 13 of user core. Feb 9 00:54:05.700003 systemd[1]: Started session-13.scope. Feb 9 00:54:05.809889 sshd[3446]: pam_unix(sshd:session): session closed for user core Feb 9 00:54:05.812004 systemd[1]: sshd@12-10.0.0.122:22-10.0.0.1:40738.service: Deactivated successfully. Feb 9 00:54:05.812786 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 00:54:05.813314 systemd-logind[1107]: Session 13 logged out. Waiting for processes to exit. Feb 9 00:54:05.813972 systemd-logind[1107]: Removed session 13. Feb 9 00:54:10.815056 systemd[1]: Started sshd@13-10.0.0.122:22-10.0.0.1:41686.service. Feb 9 00:54:10.846721 sshd[3460]: Accepted publickey for core from 10.0.0.1 port 41686 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:54:10.848005 sshd[3460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:54:10.851330 systemd-logind[1107]: New session 14 of user core. Feb 9 00:54:10.852267 systemd[1]: Started session-14.scope. Feb 9 00:54:10.962231 sshd[3460]: pam_unix(sshd:session): session closed for user core Feb 9 00:54:10.964988 systemd[1]: sshd@13-10.0.0.122:22-10.0.0.1:41686.service: Deactivated successfully. Feb 9 00:54:10.965553 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 00:54:10.968044 systemd[1]: Started sshd@14-10.0.0.122:22-10.0.0.1:41688.service. Feb 9 00:54:10.968987 systemd-logind[1107]: Session 14 logged out. Waiting for processes to exit. Feb 9 00:54:10.969848 systemd-logind[1107]: Removed session 14. 
Feb 9 00:54:10.998225 sshd[3473]: Accepted publickey for core from 10.0.0.1 port 41688 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:54:10.999116 sshd[3473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:54:11.002184 systemd-logind[1107]: New session 15 of user core. Feb 9 00:54:11.003069 systemd[1]: Started session-15.scope. Feb 9 00:54:11.153217 sshd[3473]: pam_unix(sshd:session): session closed for user core Feb 9 00:54:11.155754 systemd[1]: sshd@14-10.0.0.122:22-10.0.0.1:41688.service: Deactivated successfully. Feb 9 00:54:11.156238 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 00:54:11.156742 systemd-logind[1107]: Session 15 logged out. Waiting for processes to exit. Feb 9 00:54:11.157523 systemd[1]: Started sshd@15-10.0.0.122:22-10.0.0.1:41698.service. Feb 9 00:54:11.158230 systemd-logind[1107]: Removed session 15. Feb 9 00:54:11.188943 sshd[3485]: Accepted publickey for core from 10.0.0.1 port 41698 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:54:11.190146 sshd[3485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:54:11.193508 systemd-logind[1107]: New session 16 of user core. Feb 9 00:54:11.194213 systemd[1]: Started session-16.scope. Feb 9 00:54:11.980535 sshd[3485]: pam_unix(sshd:session): session closed for user core Feb 9 00:54:11.984087 systemd[1]: Started sshd@16-10.0.0.122:22-10.0.0.1:41700.service. Feb 9 00:54:11.984505 systemd[1]: sshd@15-10.0.0.122:22-10.0.0.1:41698.service: Deactivated successfully. Feb 9 00:54:11.987213 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 00:54:11.988490 systemd-logind[1107]: Session 16 logged out. Waiting for processes to exit. Feb 9 00:54:11.989457 systemd-logind[1107]: Removed session 16. 
Feb 9 00:54:12.016878 sshd[3502]: Accepted publickey for core from 10.0.0.1 port 41700 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:54:12.018027 sshd[3502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:54:12.021309 systemd-logind[1107]: New session 17 of user core. Feb 9 00:54:12.022073 systemd[1]: Started session-17.scope. Feb 9 00:54:12.364694 sshd[3502]: pam_unix(sshd:session): session closed for user core Feb 9 00:54:12.367293 systemd[1]: sshd@16-10.0.0.122:22-10.0.0.1:41700.service: Deactivated successfully. Feb 9 00:54:12.367774 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 00:54:12.368334 systemd-logind[1107]: Session 17 logged out. Waiting for processes to exit. Feb 9 00:54:12.369150 systemd[1]: Started sshd@17-10.0.0.122:22-10.0.0.1:41706.service. Feb 9 00:54:12.369696 systemd-logind[1107]: Removed session 17. Feb 9 00:54:12.401826 sshd[3515]: Accepted publickey for core from 10.0.0.1 port 41706 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:54:12.402716 sshd[3515]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:54:12.405737 systemd-logind[1107]: New session 18 of user core. Feb 9 00:54:12.406433 systemd[1]: Started session-18.scope. Feb 9 00:54:12.543920 sshd[3515]: pam_unix(sshd:session): session closed for user core Feb 9 00:54:12.546128 systemd[1]: sshd@17-10.0.0.122:22-10.0.0.1:41706.service: Deactivated successfully. Feb 9 00:54:12.546794 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 00:54:12.547432 systemd-logind[1107]: Session 18 logged out. Waiting for processes to exit. Feb 9 00:54:12.548084 systemd-logind[1107]: Removed session 18. Feb 9 00:54:17.547406 systemd[1]: Started sshd@18-10.0.0.122:22-10.0.0.1:36692.service. 
Feb 9 00:54:17.577265 sshd[3529]: Accepted publickey for core from 10.0.0.1 port 36692 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:54:17.578153 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:54:17.581096 systemd-logind[1107]: New session 19 of user core.
Feb 9 00:54:17.582060 systemd[1]: Started session-19.scope.
Feb 9 00:54:17.679529 sshd[3529]: pam_unix(sshd:session): session closed for user core
Feb 9 00:54:17.681433 systemd[1]: sshd@18-10.0.0.122:22-10.0.0.1:36692.service: Deactivated successfully.
Feb 9 00:54:17.682139 systemd[1]: session-19.scope: Deactivated successfully.
Feb 9 00:54:17.682654 systemd-logind[1107]: Session 19 logged out. Waiting for processes to exit.
Feb 9 00:54:17.683330 systemd-logind[1107]: Removed session 19.
Feb 9 00:54:22.684190 systemd[1]: Started sshd@19-10.0.0.122:22-10.0.0.1:36696.service.
Feb 9 00:54:22.714689 sshd[3549]: Accepted publickey for core from 10.0.0.1 port 36696 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:54:22.716031 sshd[3549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:54:22.719596 systemd-logind[1107]: New session 20 of user core.
Feb 9 00:54:22.720417 systemd[1]: Started session-20.scope.
Feb 9 00:54:22.822595 sshd[3549]: pam_unix(sshd:session): session closed for user core
Feb 9 00:54:22.824884 systemd[1]: sshd@19-10.0.0.122:22-10.0.0.1:36696.service: Deactivated successfully.
Feb 9 00:54:22.825539 systemd[1]: session-20.scope: Deactivated successfully.
Feb 9 00:54:22.826004 systemd-logind[1107]: Session 20 logged out. Waiting for processes to exit.
Feb 9 00:54:22.826645 systemd-logind[1107]: Removed session 20.
Feb 9 00:54:27.827341 systemd[1]: Started sshd@20-10.0.0.122:22-10.0.0.1:56948.service.
Feb 9 00:54:27.857747 sshd[3563]: Accepted publickey for core from 10.0.0.1 port 56948 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:54:27.858849 sshd[3563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:54:27.862207 systemd-logind[1107]: New session 21 of user core.
Feb 9 00:54:27.863013 systemd[1]: Started session-21.scope.
Feb 9 00:54:27.964415 sshd[3563]: pam_unix(sshd:session): session closed for user core
Feb 9 00:54:27.966654 systemd[1]: sshd@20-10.0.0.122:22-10.0.0.1:56948.service: Deactivated successfully.
Feb 9 00:54:27.967493 systemd[1]: session-21.scope: Deactivated successfully.
Feb 9 00:54:27.968145 systemd-logind[1107]: Session 21 logged out. Waiting for processes to exit.
Feb 9 00:54:27.968790 systemd-logind[1107]: Removed session 21.
Feb 9 00:54:31.409015 kubelet[1959]: E0209 00:54:31.408981 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:54:32.968927 systemd[1]: Started sshd@21-10.0.0.122:22-10.0.0.1:56950.service.
Feb 9 00:54:32.998563 sshd[3576]: Accepted publickey for core from 10.0.0.1 port 56950 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:54:32.999495 sshd[3576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:54:33.002581 systemd-logind[1107]: New session 22 of user core.
Feb 9 00:54:33.003540 systemd[1]: Started session-22.scope.
Feb 9 00:54:33.103459 sshd[3576]: pam_unix(sshd:session): session closed for user core
Feb 9 00:54:33.106183 systemd[1]: sshd@21-10.0.0.122:22-10.0.0.1:56950.service: Deactivated successfully.
Feb 9 00:54:33.106804 systemd[1]: session-22.scope: Deactivated successfully.
Feb 9 00:54:33.109356 systemd[1]: Started sshd@22-10.0.0.122:22-10.0.0.1:56958.service.
Feb 9 00:54:33.110029 systemd-logind[1107]: Session 22 logged out. Waiting for processes to exit.
Feb 9 00:54:33.110834 systemd-logind[1107]: Removed session 22.
Feb 9 00:54:33.139329 sshd[3589]: Accepted publickey for core from 10.0.0.1 port 56958 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:54:33.140446 sshd[3589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:54:33.143868 systemd-logind[1107]: New session 23 of user core.
Feb 9 00:54:33.144918 systemd[1]: Started session-23.scope.
Feb 9 00:54:34.545881 env[1119]: time="2024-02-09T00:54:34.545831365Z" level=info msg="StopContainer for \"d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7\" with timeout 30 (s)"
Feb 9 00:54:34.546382 env[1119]: time="2024-02-09T00:54:34.546100770Z" level=info msg="Stop container \"d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7\" with signal terminated"
Feb 9 00:54:34.554871 systemd[1]: run-containerd-runc-k8s.io-acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d-runc.VklRxP.mount: Deactivated successfully.
Feb 9 00:54:34.558826 systemd[1]: cri-containerd-d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7.scope: Deactivated successfully.
Feb 9 00:54:34.569615 env[1119]: time="2024-02-09T00:54:34.569563586Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 00:54:34.573930 env[1119]: time="2024-02-09T00:54:34.573905713Z" level=info msg="StopContainer for \"acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d\" with timeout 2 (s)"
Feb 9 00:54:34.574101 env[1119]: time="2024-02-09T00:54:34.574085166Z" level=info msg="Stop container \"acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d\" with signal terminated"
Feb 9 00:54:34.577703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7-rootfs.mount: Deactivated successfully.
Feb 9 00:54:34.579371 systemd-networkd[1007]: lxc_health: Link DOWN
Feb 9 00:54:34.579376 systemd-networkd[1007]: lxc_health: Lost carrier
Feb 9 00:54:34.589776 env[1119]: time="2024-02-09T00:54:34.589722632Z" level=info msg="shim disconnected" id=d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7
Feb 9 00:54:34.589776 env[1119]: time="2024-02-09T00:54:34.589769662Z" level=warning msg="cleaning up after shim disconnected" id=d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7 namespace=k8s.io
Feb 9 00:54:34.589776 env[1119]: time="2024-02-09T00:54:34.589778979Z" level=info msg="cleaning up dead shim"
Feb 9 00:54:34.598028 env[1119]: time="2024-02-09T00:54:34.597972393Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:54:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3646 runtime=io.containerd.runc.v2\n"
Feb 9 00:54:34.601809 env[1119]: time="2024-02-09T00:54:34.601767905Z" level=info msg="StopContainer for \"d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7\" returns successfully"
Feb 9 00:54:34.602479 env[1119]: time="2024-02-09T00:54:34.602442355Z" level=info msg="StopPodSandbox for \"10bb309a83fd3a587b8817bbff33c73faeacb522bc65b045dfb15b214bbe5ace\""
Feb 9 00:54:34.602528 env[1119]: time="2024-02-09T00:54:34.602507900Z" level=info msg="Container to stop \"d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 00:54:34.603854 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10bb309a83fd3a587b8817bbff33c73faeacb522bc65b045dfb15b214bbe5ace-shm.mount: Deactivated successfully.
Feb 9 00:54:34.604500 systemd[1]: cri-containerd-acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d.scope: Deactivated successfully.
Feb 9 00:54:34.604729 systemd[1]: cri-containerd-acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d.scope: Consumed 5.759s CPU time.
Feb 9 00:54:34.615679 systemd[1]: cri-containerd-10bb309a83fd3a587b8817bbff33c73faeacb522bc65b045dfb15b214bbe5ace.scope: Deactivated successfully.
Feb 9 00:54:34.620622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d-rootfs.mount: Deactivated successfully.
Feb 9 00:54:34.626522 env[1119]: time="2024-02-09T00:54:34.626463339Z" level=info msg="shim disconnected" id=acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d
Feb 9 00:54:34.626522 env[1119]: time="2024-02-09T00:54:34.626520278Z" level=warning msg="cleaning up after shim disconnected" id=acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d namespace=k8s.io
Feb 9 00:54:34.626522 env[1119]: time="2024-02-09T00:54:34.626528814Z" level=info msg="cleaning up dead shim"
Feb 9 00:54:34.634206 env[1119]: time="2024-02-09T00:54:34.634160584Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:54:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3689 runtime=io.containerd.runc.v2\n"
Feb 9 00:54:34.634806 env[1119]: time="2024-02-09T00:54:34.634772766Z" level=info msg="shim disconnected" id=10bb309a83fd3a587b8817bbff33c73faeacb522bc65b045dfb15b214bbe5ace
Feb 9 00:54:34.634856 env[1119]: time="2024-02-09T00:54:34.634811519Z" level=warning msg="cleaning up after shim disconnected" id=10bb309a83fd3a587b8817bbff33c73faeacb522bc65b045dfb15b214bbe5ace namespace=k8s.io
Feb 9 00:54:34.634856 env[1119]: time="2024-02-09T00:54:34.634820466Z" level=info msg="cleaning up dead shim"
Feb 9 00:54:34.636444 env[1119]: time="2024-02-09T00:54:34.636417451Z" level=info msg="StopContainer for \"acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d\" returns successfully"
Feb 9 00:54:34.636853 env[1119]: time="2024-02-09T00:54:34.636817738Z" level=info msg="StopPodSandbox for \"da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3\""
Feb 9 00:54:34.636970 env[1119]: time="2024-02-09T00:54:34.636877231Z" level=info msg="Container to stop \"a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 00:54:34.636970 env[1119]: time="2024-02-09T00:54:34.636891488Z" level=info msg="Container to stop \"fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 00:54:34.636970 env[1119]: time="2024-02-09T00:54:34.636900796Z" level=info msg="Container to stop \"acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 00:54:34.636970 env[1119]: time="2024-02-09T00:54:34.636911066Z" level=info msg="Container to stop \"c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 00:54:34.636970 env[1119]: time="2024-02-09T00:54:34.636920504Z" level=info msg="Container to stop \"e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 00:54:34.641035 env[1119]: time="2024-02-09T00:54:34.641005378Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:54:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3701 runtime=io.containerd.runc.v2\n"
Feb 9 00:54:34.641303 env[1119]: time="2024-02-09T00:54:34.641279874Z" level=info msg="TearDown network for sandbox \"10bb309a83fd3a587b8817bbff33c73faeacb522bc65b045dfb15b214bbe5ace\" successfully"
Feb 9 00:54:34.641338 env[1119]: time="2024-02-09T00:54:34.641303419Z" level=info msg="StopPodSandbox for \"10bb309a83fd3a587b8817bbff33c73faeacb522bc65b045dfb15b214bbe5ace\" returns successfully"
Feb 9 00:54:34.642869 systemd[1]: cri-containerd-da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3.scope: Deactivated successfully.
Feb 9 00:54:34.664479 env[1119]: time="2024-02-09T00:54:34.664419982Z" level=info msg="shim disconnected" id=da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3
Feb 9 00:54:34.664647 env[1119]: time="2024-02-09T00:54:34.664482612Z" level=warning msg="cleaning up after shim disconnected" id=da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3 namespace=k8s.io
Feb 9 00:54:34.664647 env[1119]: time="2024-02-09T00:54:34.664492811Z" level=info msg="cleaning up dead shim"
Feb 9 00:54:34.671993 env[1119]: time="2024-02-09T00:54:34.671958153Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:54:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3731 runtime=io.containerd.runc.v2\n"
Feb 9 00:54:34.672288 env[1119]: time="2024-02-09T00:54:34.672245001Z" level=info msg="TearDown network for sandbox \"da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3\" successfully"
Feb 9 00:54:34.672347 env[1119]: time="2024-02-09T00:54:34.672296840Z" level=info msg="StopPodSandbox for \"da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3\" returns successfully"
Feb 9 00:54:34.739392 kubelet[1959]: I0209 00:54:34.739363 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9n9b\" (UniqueName: \"kubernetes.io/projected/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-kube-api-access-n9n9b\") pod \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") "
Feb 9 00:54:34.739392 kubelet[1959]: I0209 00:54:34.739394 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-cilium-cgroup\") pod \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") "
Feb 9 00:54:34.739809 kubelet[1959]: I0209 00:54:34.739412 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-cni-path\") pod \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") "
Feb 9 00:54:34.739809 kubelet[1959]: I0209 00:54:34.739426 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-etc-cni-netd\") pod \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") "
Feb 9 00:54:34.739809 kubelet[1959]: I0209 00:54:34.739443 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-hostproc\") pod \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") "
Feb 9 00:54:34.739809 kubelet[1959]: I0209 00:54:34.739456 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-bpf-maps\") pod \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") "
Feb 9 00:54:34.739809 kubelet[1959]: I0209 00:54:34.739469 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-xtables-lock\") pod \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") "
Feb 9 00:54:34.739809 kubelet[1959]: I0209 00:54:34.739486 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt55s\" (UniqueName: \"kubernetes.io/projected/cbb6d553-f16b-476f-a0b2-949da044bfb2-kube-api-access-rt55s\") pod \"cbb6d553-f16b-476f-a0b2-949da044bfb2\" (UID: \"cbb6d553-f16b-476f-a0b2-949da044bfb2\") "
Feb 9 00:54:34.739944 kubelet[1959]: I0209 00:54:34.739500 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-host-proc-sys-net\") pod \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") "
Feb 9 00:54:34.739944 kubelet[1959]: I0209 00:54:34.739516 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-hubble-tls\") pod \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") "
Feb 9 00:54:34.739944 kubelet[1959]: I0209 00:54:34.739505 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" (UID: "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:54:34.739944 kubelet[1959]: I0209 00:54:34.739549 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" (UID: "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:54:34.739944 kubelet[1959]: I0209 00:54:34.739531 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-cilium-run\") pod \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") "
Feb 9 00:54:34.740069 kubelet[1959]: I0209 00:54:34.739569 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-cni-path" (OuterVolumeSpecName: "cni-path") pod "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" (UID: "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:54:34.740069 kubelet[1959]: I0209 00:54:34.739582 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" (UID: "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:54:34.740069 kubelet[1959]: I0209 00:54:34.739593 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-hostproc" (OuterVolumeSpecName: "hostproc") pod "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" (UID: "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:54:34.740069 kubelet[1959]: I0209 00:54:34.739604 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" (UID: "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:54:34.740069 kubelet[1959]: I0209 00:54:34.739604 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-cilium-config-path\") pod \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") "
Feb 9 00:54:34.740178 kubelet[1959]: I0209 00:54:34.739637 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cbb6d553-f16b-476f-a0b2-949da044bfb2-cilium-config-path\") pod \"cbb6d553-f16b-476f-a0b2-949da044bfb2\" (UID: \"cbb6d553-f16b-476f-a0b2-949da044bfb2\") "
Feb 9 00:54:34.740178 kubelet[1959]: I0209 00:54:34.739654 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-lib-modules\") pod \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") "
Feb 9 00:54:34.740178 kubelet[1959]: I0209 00:54:34.739676 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-clustermesh-secrets\") pod \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") "
Feb 9 00:54:34.740178 kubelet[1959]: I0209 00:54:34.739690 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-host-proc-sys-kernel\") pod \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\" (UID: \"7fe16cc4-f4b1-4cde-8a15-503fd6a1db00\") "
Feb 9 00:54:34.740178 kubelet[1959]: I0209 00:54:34.739714 1959 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 9 00:54:34.740178 kubelet[1959]: I0209 00:54:34.739724 1959 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 9 00:54:34.740178 kubelet[1959]: I0209 00:54:34.739732 1959 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 9 00:54:34.740353 kubelet[1959]: I0209 00:54:34.739740 1959 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 9 00:54:34.740353 kubelet[1959]: I0209 00:54:34.739748 1959 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 9 00:54:34.740353 kubelet[1959]: I0209 00:54:34.739757 1959 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 9 00:54:34.740353 kubelet[1959]: I0209 00:54:34.739769 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" (UID: "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:54:34.740353 kubelet[1959]: I0209 00:54:34.739784 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" (UID: "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:54:34.740353 kubelet[1959]: I0209 00:54:34.739900 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" (UID: "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:54:34.740493 kubelet[1959]: I0209 00:54:34.739985 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" (UID: "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 00:54:34.742483 kubelet[1959]: I0209 00:54:34.742457 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" (UID: "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 00:54:34.742648 kubelet[1959]: I0209 00:54:34.742613 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-kube-api-access-n9n9b" (OuterVolumeSpecName: "kube-api-access-n9n9b") pod "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" (UID: "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00"). InnerVolumeSpecName "kube-api-access-n9n9b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 00:54:34.742738 kubelet[1959]: I0209 00:54:34.742699 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" (UID: "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 00:54:34.743156 kubelet[1959]: I0209 00:54:34.743132 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbb6d553-f16b-476f-a0b2-949da044bfb2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cbb6d553-f16b-476f-a0b2-949da044bfb2" (UID: "cbb6d553-f16b-476f-a0b2-949da044bfb2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 00:54:34.743691 kubelet[1959]: I0209 00:54:34.743667 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbb6d553-f16b-476f-a0b2-949da044bfb2-kube-api-access-rt55s" (OuterVolumeSpecName: "kube-api-access-rt55s") pod "cbb6d553-f16b-476f-a0b2-949da044bfb2" (UID: "cbb6d553-f16b-476f-a0b2-949da044bfb2"). InnerVolumeSpecName "kube-api-access-rt55s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 00:54:34.744899 kubelet[1959]: I0209 00:54:34.744880 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" (UID: "7fe16cc4-f4b1-4cde-8a15-503fd6a1db00"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 00:54:34.840279 kubelet[1959]: I0209 00:54:34.840203 1959 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 9 00:54:34.840279 kubelet[1959]: I0209 00:54:34.840222 1959 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 9 00:54:34.840279 kubelet[1959]: I0209 00:54:34.840231 1959 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-n9n9b\" (UniqueName: \"kubernetes.io/projected/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-kube-api-access-n9n9b\") on node \"localhost\" DevicePath \"\""
Feb 9 00:54:34.840279 kubelet[1959]: I0209 00:54:34.840240 1959 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 9 00:54:34.840279 kubelet[1959]: I0209 00:54:34.840263 1959 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rt55s\" (UniqueName: \"kubernetes.io/projected/cbb6d553-f16b-476f-a0b2-949da044bfb2-kube-api-access-rt55s\") on node \"localhost\" DevicePath \"\""
Feb 9 00:54:34.840279 kubelet[1959]: I0209 00:54:34.840272 1959 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 9 00:54:34.840470 kubelet[1959]: I0209 00:54:34.840280 1959 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 9 00:54:34.840470 kubelet[1959]: I0209 00:54:34.840288 1959 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 9 00:54:34.840470 kubelet[1959]: I0209 00:54:34.840296 1959 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 9 00:54:34.840470 kubelet[1959]: I0209 00:54:34.840303 1959 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cbb6d553-f16b-476f-a0b2-949da044bfb2-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 9 00:54:35.552770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3-rootfs.mount: Deactivated successfully.
Feb 9 00:54:35.552877 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da13d6c060c16840978724fc88fbc3dc8fe88b70221f71e209442ee17ec4b5c3-shm.mount: Deactivated successfully.
Feb 9 00:54:35.552931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10bb309a83fd3a587b8817bbff33c73faeacb522bc65b045dfb15b214bbe5ace-rootfs.mount: Deactivated successfully.
Feb 9 00:54:35.552981 systemd[1]: var-lib-kubelet-pods-7fe16cc4\x2df4b1\x2d4cde\x2d8a15\x2d503fd6a1db00-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn9n9b.mount: Deactivated successfully.
Feb 9 00:54:35.553044 systemd[1]: var-lib-kubelet-pods-cbb6d553\x2df16b\x2d476f\x2da0b2\x2d949da044bfb2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drt55s.mount: Deactivated successfully.
Feb 9 00:54:35.553102 systemd[1]: var-lib-kubelet-pods-7fe16cc4\x2df4b1\x2d4cde\x2d8a15\x2d503fd6a1db00-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 00:54:35.553148 systemd[1]: var-lib-kubelet-pods-7fe16cc4\x2df4b1\x2d4cde\x2d8a15\x2d503fd6a1db00-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 00:54:35.556159 kubelet[1959]: I0209 00:54:35.556138 1959 scope.go:117] "RemoveContainer" containerID="acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d"
Feb 9 00:54:35.558718 env[1119]: time="2024-02-09T00:54:35.558660710Z" level=info msg="RemoveContainer for \"acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d\""
Feb 9 00:54:35.559087 systemd[1]: Removed slice kubepods-burstable-pod7fe16cc4_f4b1_4cde_8a15_503fd6a1db00.slice.
Feb 9 00:54:35.559161 systemd[1]: kubepods-burstable-pod7fe16cc4_f4b1_4cde_8a15_503fd6a1db00.slice: Consumed 5.843s CPU time.
Feb 9 00:54:35.563744 env[1119]: time="2024-02-09T00:54:35.563706989Z" level=info msg="RemoveContainer for \"acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d\" returns successfully"
Feb 9 00:54:35.564563 kubelet[1959]: I0209 00:54:35.564541 1959 scope.go:117] "RemoveContainer" containerID="fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2"
Feb 9 00:54:35.566449 systemd[1]: Removed slice kubepods-besteffort-podcbb6d553_f16b_476f_a0b2_949da044bfb2.slice.
Feb 9 00:54:35.566817 env[1119]: time="2024-02-09T00:54:35.566783653Z" level=info msg="RemoveContainer for \"fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2\"" Feb 9 00:54:35.569553 env[1119]: time="2024-02-09T00:54:35.569501810Z" level=info msg="RemoveContainer for \"fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2\" returns successfully" Feb 9 00:54:35.569757 kubelet[1959]: I0209 00:54:35.569740 1959 scope.go:117] "RemoveContainer" containerID="e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63" Feb 9 00:54:35.571138 env[1119]: time="2024-02-09T00:54:35.571104455Z" level=info msg="RemoveContainer for \"e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63\"" Feb 9 00:54:35.573888 env[1119]: time="2024-02-09T00:54:35.573860035Z" level=info msg="RemoveContainer for \"e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63\" returns successfully" Feb 9 00:54:35.574030 kubelet[1959]: I0209 00:54:35.574002 1959 scope.go:117] "RemoveContainer" containerID="a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa" Feb 9 00:54:35.574977 env[1119]: time="2024-02-09T00:54:35.574944137Z" level=info msg="RemoveContainer for \"a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa\"" Feb 9 00:54:35.581216 env[1119]: time="2024-02-09T00:54:35.578097898Z" level=info msg="RemoveContainer for \"a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa\" returns successfully" Feb 9 00:54:35.582231 kubelet[1959]: I0209 00:54:35.582205 1959 scope.go:117] "RemoveContainer" containerID="c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f" Feb 9 00:54:35.583205 env[1119]: time="2024-02-09T00:54:35.583155038Z" level=info msg="RemoveContainer for \"c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f\"" Feb 9 00:54:35.587145 env[1119]: time="2024-02-09T00:54:35.587119309Z" level=info msg="RemoveContainer for 
\"c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f\" returns successfully" Feb 9 00:54:35.587283 kubelet[1959]: I0209 00:54:35.587245 1959 scope.go:117] "RemoveContainer" containerID="acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d" Feb 9 00:54:35.587585 env[1119]: time="2024-02-09T00:54:35.587502962Z" level=error msg="ContainerStatus for \"acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d\": not found" Feb 9 00:54:35.587765 kubelet[1959]: E0209 00:54:35.587747 1959 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d\": not found" containerID="acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d" Feb 9 00:54:35.587846 kubelet[1959]: I0209 00:54:35.587837 1959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d"} err="failed to get container status \"acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d\": rpc error: code = NotFound desc = an error occurred when try to find container \"acdd9a55e3ade1b559c31db3f9180da9373c5293efcffdad4b93ab6f5f650f5d\": not found" Feb 9 00:54:35.587846 kubelet[1959]: I0209 00:54:35.587852 1959 scope.go:117] "RemoveContainer" containerID="fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2" Feb 9 00:54:35.588034 env[1119]: time="2024-02-09T00:54:35.587994833Z" level=error msg="ContainerStatus for \"fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2\": not found" Feb 9 00:54:35.588133 kubelet[1959]: E0209 00:54:35.588115 1959 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2\": not found" containerID="fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2" Feb 9 00:54:35.588196 kubelet[1959]: I0209 00:54:35.588150 1959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2"} err="failed to get container status \"fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe4b7562b6c6ff7b7446e4aa5d9fa5c9f4137f03f0c8792c1546459d7040afd2\": not found" Feb 9 00:54:35.588196 kubelet[1959]: I0209 00:54:35.588163 1959 scope.go:117] "RemoveContainer" containerID="e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63" Feb 9 00:54:35.588380 env[1119]: time="2024-02-09T00:54:35.588335394Z" level=error msg="ContainerStatus for \"e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63\": not found" Feb 9 00:54:35.588489 kubelet[1959]: E0209 00:54:35.588472 1959 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63\": not found" containerID="e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63" Feb 9 00:54:35.588536 kubelet[1959]: I0209 00:54:35.588505 1959 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63"} err="failed to get container status \"e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63\": rpc error: code = NotFound desc = an error occurred when try to find container \"e42ec181aad399ec432fddbf0828d2a81193695cc64689696b06c46f28a4dc63\": not found" Feb 9 00:54:35.588536 kubelet[1959]: I0209 00:54:35.588515 1959 scope.go:117] "RemoveContainer" containerID="a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa" Feb 9 00:54:35.588698 env[1119]: time="2024-02-09T00:54:35.588658181Z" level=error msg="ContainerStatus for \"a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa\": not found" Feb 9 00:54:35.588837 kubelet[1959]: E0209 00:54:35.588820 1959 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa\": not found" containerID="a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa" Feb 9 00:54:35.588886 kubelet[1959]: I0209 00:54:35.588841 1959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa"} err="failed to get container status \"a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"a52472c001098a1a309951e243c2c7f05653b19c3da99a0dc18720ac43cf52aa\": not found" Feb 9 00:54:35.588886 kubelet[1959]: I0209 00:54:35.588849 1959 scope.go:117] "RemoveContainer" containerID="c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f" Feb 9 00:54:35.588991 env[1119]: 
time="2024-02-09T00:54:35.588956862Z" level=error msg="ContainerStatus for \"c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f\": not found" Feb 9 00:54:35.589098 kubelet[1959]: E0209 00:54:35.589083 1959 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f\": not found" containerID="c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f" Feb 9 00:54:35.589162 kubelet[1959]: I0209 00:54:35.589102 1959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f"} err="failed to get container status \"c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c037b79a5a9875861c5cfaab9c08db2ff3964ac3a5fe14b8338c2addf037a01f\": not found" Feb 9 00:54:35.589162 kubelet[1959]: I0209 00:54:35.589110 1959 scope.go:117] "RemoveContainer" containerID="d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7" Feb 9 00:54:35.590045 env[1119]: time="2024-02-09T00:54:35.590023833Z" level=info msg="RemoveContainer for \"d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7\"" Feb 9 00:54:35.592458 env[1119]: time="2024-02-09T00:54:35.592432789Z" level=info msg="RemoveContainer for \"d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7\" returns successfully" Feb 9 00:54:35.592605 kubelet[1959]: I0209 00:54:35.592567 1959 scope.go:117] "RemoveContainer" containerID="d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7" Feb 9 00:54:35.592768 env[1119]: time="2024-02-09T00:54:35.592716903Z" 
level=error msg="ContainerStatus for \"d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7\": not found" Feb 9 00:54:35.592893 kubelet[1959]: E0209 00:54:35.592865 1959 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7\": not found" containerID="d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7" Feb 9 00:54:35.592893 kubelet[1959]: I0209 00:54:35.592897 1959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7"} err="failed to get container status \"d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"d6e68ec437fe0ca9ef1e02caa5b8220f2c1009f9de918d5561a8c2383ae166b7\": not found" Feb 9 00:54:36.410950 kubelet[1959]: I0209 00:54:36.410916 1959 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" path="/var/lib/kubelet/pods/7fe16cc4-f4b1-4cde-8a15-503fd6a1db00/volumes" Feb 9 00:54:36.411421 kubelet[1959]: I0209 00:54:36.411395 1959 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cbb6d553-f16b-476f-a0b2-949da044bfb2" path="/var/lib/kubelet/pods/cbb6d553-f16b-476f-a0b2-949da044bfb2/volumes" Feb 9 00:54:36.516835 sshd[3589]: pam_unix(sshd:session): session closed for user core Feb 9 00:54:36.519232 systemd[1]: sshd@22-10.0.0.122:22-10.0.0.1:56958.service: Deactivated successfully. Feb 9 00:54:36.519775 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 00:54:36.520325 systemd-logind[1107]: Session 23 logged out. 
Waiting for processes to exit. Feb 9 00:54:36.521385 systemd[1]: Started sshd@23-10.0.0.122:22-10.0.0.1:56636.service. Feb 9 00:54:36.522139 systemd-logind[1107]: Removed session 23. Feb 9 00:54:36.553856 sshd[3752]: Accepted publickey for core from 10.0.0.1 port 56636 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:54:36.554849 sshd[3752]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:54:36.557976 systemd-logind[1107]: New session 24 of user core. Feb 9 00:54:36.558713 systemd[1]: Started session-24.scope. Feb 9 00:54:37.064277 sshd[3752]: pam_unix(sshd:session): session closed for user core Feb 9 00:54:37.065940 systemd[1]: Started sshd@24-10.0.0.122:22-10.0.0.1:56644.service. Feb 9 00:54:37.069375 systemd[1]: sshd@23-10.0.0.122:22-10.0.0.1:56636.service: Deactivated successfully. Feb 9 00:54:37.069958 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 00:54:37.070929 systemd-logind[1107]: Session 24 logged out. Waiting for processes to exit. Feb 9 00:54:37.071729 systemd-logind[1107]: Removed session 24. 
Feb 9 00:54:37.082783 kubelet[1959]: I0209 00:54:37.078605 1959 topology_manager.go:215] "Topology Admit Handler" podUID="2b1a3919-d851-435c-a2eb-feba1d2f4b8d" podNamespace="kube-system" podName="cilium-qgphw" Feb 9 00:54:37.082783 kubelet[1959]: E0209 00:54:37.078653 1959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" containerName="mount-cgroup" Feb 9 00:54:37.082783 kubelet[1959]: E0209 00:54:37.078661 1959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" containerName="clean-cilium-state" Feb 9 00:54:37.082783 kubelet[1959]: E0209 00:54:37.078667 1959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" containerName="cilium-agent" Feb 9 00:54:37.082783 kubelet[1959]: E0209 00:54:37.078673 1959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" containerName="apply-sysctl-overwrites" Feb 9 00:54:37.082783 kubelet[1959]: E0209 00:54:37.078678 1959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" containerName="mount-bpf-fs" Feb 9 00:54:37.082783 kubelet[1959]: E0209 00:54:37.078684 1959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cbb6d553-f16b-476f-a0b2-949da044bfb2" containerName="cilium-operator" Feb 9 00:54:37.082783 kubelet[1959]: I0209 00:54:37.078703 1959 memory_manager.go:346] "RemoveStaleState removing state" podUID="cbb6d553-f16b-476f-a0b2-949da044bfb2" containerName="cilium-operator" Feb 9 00:54:37.082783 kubelet[1959]: I0209 00:54:37.078708 1959 memory_manager.go:346] "RemoveStaleState removing state" podUID="7fe16cc4-f4b1-4cde-8a15-503fd6a1db00" containerName="cilium-agent" Feb 9 00:54:37.083270 systemd[1]: Created slice kubepods-burstable-pod2b1a3919_d851_435c_a2eb_feba1d2f4b8d.slice. 
Feb 9 00:54:37.103685 sshd[3763]: Accepted publickey for core from 10.0.0.1 port 56644 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:54:37.104666 sshd[3763]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:54:37.107672 systemd-logind[1107]: New session 25 of user core. Feb 9 00:54:37.108412 systemd[1]: Started session-25.scope. Feb 9 00:54:37.151100 kubelet[1959]: I0209 00:54:37.151072 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-host-proc-sys-net\") pod \"cilium-qgphw\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " pod="kube-system/cilium-qgphw" Feb 9 00:54:37.151160 kubelet[1959]: I0209 00:54:37.151108 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-hostproc\") pod \"cilium-qgphw\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " pod="kube-system/cilium-qgphw" Feb 9 00:54:37.151160 kubelet[1959]: I0209 00:54:37.151125 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-host-proc-sys-kernel\") pod \"cilium-qgphw\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " pod="kube-system/cilium-qgphw" Feb 9 00:54:37.151160 kubelet[1959]: I0209 00:54:37.151142 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cni-path\") pod \"cilium-qgphw\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " pod="kube-system/cilium-qgphw" Feb 9 00:54:37.151237 kubelet[1959]: I0209 00:54:37.151221 1959 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-xtables-lock\") pod \"cilium-qgphw\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " pod="kube-system/cilium-qgphw" Feb 9 00:54:37.151292 kubelet[1959]: I0209 00:54:37.151275 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cilium-config-path\") pod \"cilium-qgphw\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " pod="kube-system/cilium-qgphw" Feb 9 00:54:37.151361 kubelet[1959]: I0209 00:54:37.151302 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-hubble-tls\") pod \"cilium-qgphw\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " pod="kube-system/cilium-qgphw" Feb 9 00:54:37.151361 kubelet[1959]: I0209 00:54:37.151328 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cilium-run\") pod \"cilium-qgphw\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " pod="kube-system/cilium-qgphw" Feb 9 00:54:37.151361 kubelet[1959]: I0209 00:54:37.151354 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cilium-ipsec-secrets\") pod \"cilium-qgphw\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " pod="kube-system/cilium-qgphw" Feb 9 00:54:37.151435 kubelet[1959]: I0209 00:54:37.151421 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-etc-cni-netd\") pod \"cilium-qgphw\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " pod="kube-system/cilium-qgphw" Feb 9 00:54:37.151458 kubelet[1959]: I0209 00:54:37.151451 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-clustermesh-secrets\") pod \"cilium-qgphw\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " pod="kube-system/cilium-qgphw" Feb 9 00:54:37.151484 kubelet[1959]: I0209 00:54:37.151470 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqnzs\" (UniqueName: \"kubernetes.io/projected/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-kube-api-access-zqnzs\") pod \"cilium-qgphw\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " pod="kube-system/cilium-qgphw" Feb 9 00:54:37.151511 kubelet[1959]: I0209 00:54:37.151486 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-lib-modules\") pod \"cilium-qgphw\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " pod="kube-system/cilium-qgphw" Feb 9 00:54:37.151511 kubelet[1959]: I0209 00:54:37.151502 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cilium-cgroup\") pod \"cilium-qgphw\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " pod="kube-system/cilium-qgphw" Feb 9 00:54:37.151559 kubelet[1959]: I0209 00:54:37.151521 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-bpf-maps\") pod \"cilium-qgphw\" (UID: 
\"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " pod="kube-system/cilium-qgphw" Feb 9 00:54:37.214422 sshd[3763]: pam_unix(sshd:session): session closed for user core Feb 9 00:54:37.219910 kubelet[1959]: E0209 00:54:37.219036 1959 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-zqnzs lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-qgphw" podUID="2b1a3919-d851-435c-a2eb-feba1d2f4b8d" Feb 9 00:54:37.220942 systemd[1]: Started sshd@25-10.0.0.122:22-10.0.0.1:56660.service. Feb 9 00:54:37.224044 systemd-logind[1107]: Session 25 logged out. Waiting for processes to exit. Feb 9 00:54:37.224128 systemd[1]: sshd@24-10.0.0.122:22-10.0.0.1:56644.service: Deactivated successfully. Feb 9 00:54:37.224909 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 00:54:37.225486 systemd-logind[1107]: Removed session 25. Feb 9 00:54:37.256384 sshd[3777]: Accepted publickey for core from 10.0.0.1 port 56660 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:54:37.258101 sshd[3777]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:54:37.269501 systemd-logind[1107]: New session 26 of user core. Feb 9 00:54:37.270648 systemd[1]: Started session-26.scope. 
Feb 9 00:54:37.409315 kubelet[1959]: E0209 00:54:37.409209 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:54:37.654631 kubelet[1959]: I0209 00:54:37.654605 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-hostproc\") pod \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " Feb 9 00:54:37.654631 kubelet[1959]: I0209 00:54:37.654631 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cilium-run\") pod \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " Feb 9 00:54:37.654936 kubelet[1959]: I0209 00:54:37.654648 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-host-proc-sys-net\") pod \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " Feb 9 00:54:37.654936 kubelet[1959]: I0209 00:54:37.654663 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-lib-modules\") pod \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " Feb 9 00:54:37.654936 kubelet[1959]: I0209 00:54:37.654684 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqnzs\" (UniqueName: \"kubernetes.io/projected/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-kube-api-access-zqnzs\") pod \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " Feb 9 
00:54:37.654936 kubelet[1959]: I0209 00:54:37.654699 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cilium-cgroup\") pod \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " Feb 9 00:54:37.654936 kubelet[1959]: I0209 00:54:37.654714 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-bpf-maps\") pod \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " Feb 9 00:54:37.654936 kubelet[1959]: I0209 00:54:37.654728 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cni-path\") pod \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " Feb 9 00:54:37.655078 kubelet[1959]: I0209 00:54:37.654728 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2b1a3919-d851-435c-a2eb-feba1d2f4b8d" (UID: "2b1a3919-d851-435c-a2eb-feba1d2f4b8d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:54:37.655078 kubelet[1959]: I0209 00:54:37.654740 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2b1a3919-d851-435c-a2eb-feba1d2f4b8d" (UID: "2b1a3919-d851-435c-a2eb-feba1d2f4b8d"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:54:37.655078 kubelet[1959]: I0209 00:54:37.654774 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-hostproc" (OuterVolumeSpecName: "hostproc") pod "2b1a3919-d851-435c-a2eb-feba1d2f4b8d" (UID: "2b1a3919-d851-435c-a2eb-feba1d2f4b8d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:54:37.655078 kubelet[1959]: I0209 00:54:37.654746 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cilium-config-path\") pod \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " Feb 9 00:54:37.655078 kubelet[1959]: I0209 00:54:37.654789 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2b1a3919-d851-435c-a2eb-feba1d2f4b8d" (UID: "2b1a3919-d851-435c-a2eb-feba1d2f4b8d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:54:37.655190 kubelet[1959]: I0209 00:54:37.654802 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2b1a3919-d851-435c-a2eb-feba1d2f4b8d" (UID: "2b1a3919-d851-435c-a2eb-feba1d2f4b8d"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:54:37.655190 kubelet[1959]: I0209 00:54:37.654824 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-xtables-lock\") pod \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " Feb 9 00:54:37.655190 kubelet[1959]: I0209 00:54:37.654850 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-hubble-tls\") pod \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " Feb 9 00:54:37.655190 kubelet[1959]: I0209 00:54:37.654872 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-clustermesh-secrets\") pod \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " Feb 9 00:54:37.655190 kubelet[1959]: I0209 00:54:37.654905 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-host-proc-sys-kernel\") pod \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " Feb 9 00:54:37.655190 kubelet[1959]: I0209 00:54:37.654922 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cilium-ipsec-secrets\") pod \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " Feb 9 00:54:37.655341 kubelet[1959]: I0209 00:54:37.654937 1959 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-etc-cni-netd\") pod \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\" (UID: \"2b1a3919-d851-435c-a2eb-feba1d2f4b8d\") " Feb 9 00:54:37.655341 kubelet[1959]: I0209 00:54:37.654963 1959 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 00:54:37.655341 kubelet[1959]: I0209 00:54:37.654973 1959 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 00:54:37.655341 kubelet[1959]: I0209 00:54:37.654982 1959 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 00:54:37.655341 kubelet[1959]: I0209 00:54:37.654990 1959 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 00:54:37.655341 kubelet[1959]: I0209 00:54:37.654999 1959 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 00:54:37.655341 kubelet[1959]: I0209 00:54:37.655012 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2b1a3919-d851-435c-a2eb-feba1d2f4b8d" (UID: "2b1a3919-d851-435c-a2eb-feba1d2f4b8d"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:54:37.655491 kubelet[1959]: I0209 00:54:37.655026 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2b1a3919-d851-435c-a2eb-feba1d2f4b8d" (UID: "2b1a3919-d851-435c-a2eb-feba1d2f4b8d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:54:37.655491 kubelet[1959]: I0209 00:54:37.655083 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2b1a3919-d851-435c-a2eb-feba1d2f4b8d" (UID: "2b1a3919-d851-435c-a2eb-feba1d2f4b8d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:54:37.655491 kubelet[1959]: I0209 00:54:37.655107 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2b1a3919-d851-435c-a2eb-feba1d2f4b8d" (UID: "2b1a3919-d851-435c-a2eb-feba1d2f4b8d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:54:37.655491 kubelet[1959]: I0209 00:54:37.655320 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cni-path" (OuterVolumeSpecName: "cni-path") pod "2b1a3919-d851-435c-a2eb-feba1d2f4b8d" (UID: "2b1a3919-d851-435c-a2eb-feba1d2f4b8d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:54:37.656236 kubelet[1959]: I0209 00:54:37.656218 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2b1a3919-d851-435c-a2eb-feba1d2f4b8d" (UID: "2b1a3919-d851-435c-a2eb-feba1d2f4b8d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 00:54:37.657708 kubelet[1959]: I0209 00:54:37.657678 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-kube-api-access-zqnzs" (OuterVolumeSpecName: "kube-api-access-zqnzs") pod "2b1a3919-d851-435c-a2eb-feba1d2f4b8d" (UID: "2b1a3919-d851-435c-a2eb-feba1d2f4b8d"). InnerVolumeSpecName "kube-api-access-zqnzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 00:54:37.657961 kubelet[1959]: I0209 00:54:37.657938 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2b1a3919-d851-435c-a2eb-feba1d2f4b8d" (UID: "2b1a3919-d851-435c-a2eb-feba1d2f4b8d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 00:54:37.658438 systemd[1]: var-lib-kubelet-pods-2b1a3919\x2dd851\x2d435c\x2da2eb\x2dfeba1d2f4b8d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzqnzs.mount: Deactivated successfully. Feb 9 00:54:37.658521 systemd[1]: var-lib-kubelet-pods-2b1a3919\x2dd851\x2d435c\x2da2eb\x2dfeba1d2f4b8d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 00:54:37.658568 systemd[1]: var-lib-kubelet-pods-2b1a3919\x2dd851\x2d435c\x2da2eb\x2dfeba1d2f4b8d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 9 00:54:37.659123 kubelet[1959]: I0209 00:54:37.659049 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2b1a3919-d851-435c-a2eb-feba1d2f4b8d" (UID: "2b1a3919-d851-435c-a2eb-feba1d2f4b8d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 00:54:37.659664 kubelet[1959]: I0209 00:54:37.659603 1959 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "2b1a3919-d851-435c-a2eb-feba1d2f4b8d" (UID: "2b1a3919-d851-435c-a2eb-feba1d2f4b8d"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 00:54:37.660324 systemd[1]: var-lib-kubelet-pods-2b1a3919\x2dd851\x2d435c\x2da2eb\x2dfeba1d2f4b8d-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Feb 9 00:54:37.755935 kubelet[1959]: I0209 00:54:37.755890 1959 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 00:54:37.755935 kubelet[1959]: I0209 00:54:37.755921 1959 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 00:54:37.755935 kubelet[1959]: I0209 00:54:37.755932 1959 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 00:54:37.755935 kubelet[1959]: I0209 00:54:37.755940 1959 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 00:54:37.755935 kubelet[1959]: I0209 00:54:37.755950 1959 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 00:54:37.756194 kubelet[1959]: I0209 00:54:37.755959 1959 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zqnzs\" (UniqueName: \"kubernetes.io/projected/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-kube-api-access-zqnzs\") on node \"localhost\" DevicePath \"\"" Feb 9 00:54:37.756194 kubelet[1959]: I0209 00:54:37.755967 1959 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 00:54:37.756194 kubelet[1959]: I0209 00:54:37.755977 1959 reconciler_common.go:300] 
"Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 00:54:37.756194 kubelet[1959]: I0209 00:54:37.755986 1959 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 00:54:37.756194 kubelet[1959]: I0209 00:54:37.755993 1959 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2b1a3919-d851-435c-a2eb-feba1d2f4b8d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 00:54:38.413321 systemd[1]: Removed slice kubepods-burstable-pod2b1a3919_d851_435c_a2eb_feba1d2f4b8d.slice. Feb 9 00:54:38.464600 kubelet[1959]: E0209 00:54:38.464574 1959 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 00:54:38.591837 kubelet[1959]: I0209 00:54:38.591801 1959 topology_manager.go:215] "Topology Admit Handler" podUID="3bcaca66-a3f2-44ab-a968-d2095d9c7fb5" podNamespace="kube-system" podName="cilium-dvlpm" Feb 9 00:54:38.596029 systemd[1]: Created slice kubepods-burstable-pod3bcaca66_a3f2_44ab_a968_d2095d9c7fb5.slice. 
Feb 9 00:54:38.660563 kubelet[1959]: I0209 00:54:38.660495 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bcaca66-a3f2-44ab-a968-d2095d9c7fb5-xtables-lock\") pod \"cilium-dvlpm\" (UID: \"3bcaca66-a3f2-44ab-a968-d2095d9c7fb5\") " pod="kube-system/cilium-dvlpm" Feb 9 00:54:38.660563 kubelet[1959]: I0209 00:54:38.660543 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3bcaca66-a3f2-44ab-a968-d2095d9c7fb5-cilium-ipsec-secrets\") pod \"cilium-dvlpm\" (UID: \"3bcaca66-a3f2-44ab-a968-d2095d9c7fb5\") " pod="kube-system/cilium-dvlpm" Feb 9 00:54:38.660563 kubelet[1959]: I0209 00:54:38.660570 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bcaca66-a3f2-44ab-a968-d2095d9c7fb5-cilium-cgroup\") pod \"cilium-dvlpm\" (UID: \"3bcaca66-a3f2-44ab-a968-d2095d9c7fb5\") " pod="kube-system/cilium-dvlpm" Feb 9 00:54:38.660962 kubelet[1959]: I0209 00:54:38.660639 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bcaca66-a3f2-44ab-a968-d2095d9c7fb5-lib-modules\") pod \"cilium-dvlpm\" (UID: \"3bcaca66-a3f2-44ab-a968-d2095d9c7fb5\") " pod="kube-system/cilium-dvlpm" Feb 9 00:54:38.660962 kubelet[1959]: I0209 00:54:38.660684 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bcaca66-a3f2-44ab-a968-d2095d9c7fb5-host-proc-sys-net\") pod \"cilium-dvlpm\" (UID: \"3bcaca66-a3f2-44ab-a968-d2095d9c7fb5\") " pod="kube-system/cilium-dvlpm" Feb 9 00:54:38.660962 kubelet[1959]: I0209 00:54:38.660703 1959 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bcaca66-a3f2-44ab-a968-d2095d9c7fb5-host-proc-sys-kernel\") pod \"cilium-dvlpm\" (UID: \"3bcaca66-a3f2-44ab-a968-d2095d9c7fb5\") " pod="kube-system/cilium-dvlpm" Feb 9 00:54:38.660962 kubelet[1959]: I0209 00:54:38.660720 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bcaca66-a3f2-44ab-a968-d2095d9c7fb5-hubble-tls\") pod \"cilium-dvlpm\" (UID: \"3bcaca66-a3f2-44ab-a968-d2095d9c7fb5\") " pod="kube-system/cilium-dvlpm" Feb 9 00:54:38.660962 kubelet[1959]: I0209 00:54:38.660748 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bcaca66-a3f2-44ab-a968-d2095d9c7fb5-cni-path\") pod \"cilium-dvlpm\" (UID: \"3bcaca66-a3f2-44ab-a968-d2095d9c7fb5\") " pod="kube-system/cilium-dvlpm" Feb 9 00:54:38.660962 kubelet[1959]: I0209 00:54:38.660795 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bcaca66-a3f2-44ab-a968-d2095d9c7fb5-bpf-maps\") pod \"cilium-dvlpm\" (UID: \"3bcaca66-a3f2-44ab-a968-d2095d9c7fb5\") " pod="kube-system/cilium-dvlpm" Feb 9 00:54:38.661128 kubelet[1959]: I0209 00:54:38.660852 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bcaca66-a3f2-44ab-a968-d2095d9c7fb5-cilium-run\") pod \"cilium-dvlpm\" (UID: \"3bcaca66-a3f2-44ab-a968-d2095d9c7fb5\") " pod="kube-system/cilium-dvlpm" Feb 9 00:54:38.661128 kubelet[1959]: I0209 00:54:38.660889 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/3bcaca66-a3f2-44ab-a968-d2095d9c7fb5-etc-cni-netd\") pod \"cilium-dvlpm\" (UID: \"3bcaca66-a3f2-44ab-a968-d2095d9c7fb5\") " pod="kube-system/cilium-dvlpm" Feb 9 00:54:38.661128 kubelet[1959]: I0209 00:54:38.660923 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bcaca66-a3f2-44ab-a968-d2095d9c7fb5-cilium-config-path\") pod \"cilium-dvlpm\" (UID: \"3bcaca66-a3f2-44ab-a968-d2095d9c7fb5\") " pod="kube-system/cilium-dvlpm" Feb 9 00:54:38.661128 kubelet[1959]: I0209 00:54:38.660957 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flkft\" (UniqueName: \"kubernetes.io/projected/3bcaca66-a3f2-44ab-a968-d2095d9c7fb5-kube-api-access-flkft\") pod \"cilium-dvlpm\" (UID: \"3bcaca66-a3f2-44ab-a968-d2095d9c7fb5\") " pod="kube-system/cilium-dvlpm" Feb 9 00:54:38.661128 kubelet[1959]: I0209 00:54:38.660986 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bcaca66-a3f2-44ab-a968-d2095d9c7fb5-hostproc\") pod \"cilium-dvlpm\" (UID: \"3bcaca66-a3f2-44ab-a968-d2095d9c7fb5\") " pod="kube-system/cilium-dvlpm" Feb 9 00:54:38.661128 kubelet[1959]: I0209 00:54:38.661012 1959 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bcaca66-a3f2-44ab-a968-d2095d9c7fb5-clustermesh-secrets\") pod \"cilium-dvlpm\" (UID: \"3bcaca66-a3f2-44ab-a968-d2095d9c7fb5\") " pod="kube-system/cilium-dvlpm" Feb 9 00:54:38.898460 kubelet[1959]: E0209 00:54:38.898426 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:54:38.898982 env[1119]: 
time="2024-02-09T00:54:38.898941996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dvlpm,Uid:3bcaca66-a3f2-44ab-a968-d2095d9c7fb5,Namespace:kube-system,Attempt:0,}" Feb 9 00:54:38.909626 env[1119]: time="2024-02-09T00:54:38.909560765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:54:38.909626 env[1119]: time="2024-02-09T00:54:38.909597386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:54:38.909626 env[1119]: time="2024-02-09T00:54:38.909607745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:54:38.909787 env[1119]: time="2024-02-09T00:54:38.909733675Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/929d0eb2a13f8102de24c35813fcadbb68919df429745608be9ad0aa82870ee6 pid=3806 runtime=io.containerd.runc.v2 Feb 9 00:54:38.918938 systemd[1]: Started cri-containerd-929d0eb2a13f8102de24c35813fcadbb68919df429745608be9ad0aa82870ee6.scope. 
Feb 9 00:54:38.938091 env[1119]: time="2024-02-09T00:54:38.938040658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dvlpm,Uid:3bcaca66-a3f2-44ab-a968-d2095d9c7fb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"929d0eb2a13f8102de24c35813fcadbb68919df429745608be9ad0aa82870ee6\"" Feb 9 00:54:38.939130 kubelet[1959]: E0209 00:54:38.938754 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:54:38.941395 env[1119]: time="2024-02-09T00:54:38.941351681Z" level=info msg="CreateContainer within sandbox \"929d0eb2a13f8102de24c35813fcadbb68919df429745608be9ad0aa82870ee6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 00:54:38.955847 env[1119]: time="2024-02-09T00:54:38.955794696Z" level=info msg="CreateContainer within sandbox \"929d0eb2a13f8102de24c35813fcadbb68919df429745608be9ad0aa82870ee6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4ac5b5d7033d98f55de678d9ebc62dfda50ba09263d2fc4a4a7309f20d87e762\"" Feb 9 00:54:38.956274 env[1119]: time="2024-02-09T00:54:38.956226761Z" level=info msg="StartContainer for \"4ac5b5d7033d98f55de678d9ebc62dfda50ba09263d2fc4a4a7309f20d87e762\"" Feb 9 00:54:38.969987 systemd[1]: Started cri-containerd-4ac5b5d7033d98f55de678d9ebc62dfda50ba09263d2fc4a4a7309f20d87e762.scope. Feb 9 00:54:38.996728 env[1119]: time="2024-02-09T00:54:38.996665872Z" level=info msg="StartContainer for \"4ac5b5d7033d98f55de678d9ebc62dfda50ba09263d2fc4a4a7309f20d87e762\" returns successfully" Feb 9 00:54:39.003240 systemd[1]: cri-containerd-4ac5b5d7033d98f55de678d9ebc62dfda50ba09263d2fc4a4a7309f20d87e762.scope: Deactivated successfully. 
Feb 9 00:54:39.029278 env[1119]: time="2024-02-09T00:54:39.029211469Z" level=info msg="shim disconnected" id=4ac5b5d7033d98f55de678d9ebc62dfda50ba09263d2fc4a4a7309f20d87e762 Feb 9 00:54:39.029278 env[1119]: time="2024-02-09T00:54:39.029279038Z" level=warning msg="cleaning up after shim disconnected" id=4ac5b5d7033d98f55de678d9ebc62dfda50ba09263d2fc4a4a7309f20d87e762 namespace=k8s.io Feb 9 00:54:39.029571 env[1119]: time="2024-02-09T00:54:39.029291242Z" level=info msg="cleaning up dead shim" Feb 9 00:54:39.036056 env[1119]: time="2024-02-09T00:54:39.036019707Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:54:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3890 runtime=io.containerd.runc.v2\n" Feb 9 00:54:39.409778 kubelet[1959]: E0209 00:54:39.409739 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:54:39.569545 kubelet[1959]: E0209 00:54:39.569514 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:54:39.572828 env[1119]: time="2024-02-09T00:54:39.572449888Z" level=info msg="CreateContainer within sandbox \"929d0eb2a13f8102de24c35813fcadbb68919df429745608be9ad0aa82870ee6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 00:54:39.599915 env[1119]: time="2024-02-09T00:54:39.599858294Z" level=info msg="CreateContainer within sandbox \"929d0eb2a13f8102de24c35813fcadbb68919df429745608be9ad0aa82870ee6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dcd6a9784ca3c8ddf5f70c11ed296103645b553b7b2fb7bbaaa188f01b8fea79\"" Feb 9 00:54:39.600424 env[1119]: time="2024-02-09T00:54:39.600332189Z" level=info msg="StartContainer for \"dcd6a9784ca3c8ddf5f70c11ed296103645b553b7b2fb7bbaaa188f01b8fea79\"" Feb 9 
00:54:39.613140 systemd[1]: Started cri-containerd-dcd6a9784ca3c8ddf5f70c11ed296103645b553b7b2fb7bbaaa188f01b8fea79.scope. Feb 9 00:54:39.640735 env[1119]: time="2024-02-09T00:54:39.640689285Z" level=info msg="StartContainer for \"dcd6a9784ca3c8ddf5f70c11ed296103645b553b7b2fb7bbaaa188f01b8fea79\" returns successfully" Feb 9 00:54:39.644103 systemd[1]: cri-containerd-dcd6a9784ca3c8ddf5f70c11ed296103645b553b7b2fb7bbaaa188f01b8fea79.scope: Deactivated successfully. Feb 9 00:54:39.662542 env[1119]: time="2024-02-09T00:54:39.662442273Z" level=info msg="shim disconnected" id=dcd6a9784ca3c8ddf5f70c11ed296103645b553b7b2fb7bbaaa188f01b8fea79 Feb 9 00:54:39.662542 env[1119]: time="2024-02-09T00:54:39.662495615Z" level=warning msg="cleaning up after shim disconnected" id=dcd6a9784ca3c8ddf5f70c11ed296103645b553b7b2fb7bbaaa188f01b8fea79 namespace=k8s.io Feb 9 00:54:39.662542 env[1119]: time="2024-02-09T00:54:39.662504973Z" level=info msg="cleaning up dead shim" Feb 9 00:54:39.669053 env[1119]: time="2024-02-09T00:54:39.669011525Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:54:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3951 runtime=io.containerd.runc.v2\n" Feb 9 00:54:39.990890 kubelet[1959]: I0209 00:54:39.990786 1959 setters.go:552] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T00:54:39Z","lastTransitionTime":"2024-02-09T00:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 9 00:54:40.410966 kubelet[1959]: I0209 00:54:40.410921 1959 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2b1a3919-d851-435c-a2eb-feba1d2f4b8d" path="/var/lib/kubelet/pods/2b1a3919-d851-435c-a2eb-feba1d2f4b8d/volumes" Feb 9 00:54:40.572356 kubelet[1959]: E0209 00:54:40.572330 1959 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:54:40.574064 env[1119]: time="2024-02-09T00:54:40.574026231Z" level=info msg="CreateContainer within sandbox \"929d0eb2a13f8102de24c35813fcadbb68919df429745608be9ad0aa82870ee6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 00:54:40.586654 env[1119]: time="2024-02-09T00:54:40.586604997Z" level=info msg="CreateContainer within sandbox \"929d0eb2a13f8102de24c35813fcadbb68919df429745608be9ad0aa82870ee6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a822a98dca82a40837f8ddac90f485c0cd1dba784170e721815284df9beb0a38\"" Feb 9 00:54:40.587123 env[1119]: time="2024-02-09T00:54:40.587099199Z" level=info msg="StartContainer for \"a822a98dca82a40837f8ddac90f485c0cd1dba784170e721815284df9beb0a38\"" Feb 9 00:54:40.601770 systemd[1]: Started cri-containerd-a822a98dca82a40837f8ddac90f485c0cd1dba784170e721815284df9beb0a38.scope. Feb 9 00:54:40.624132 systemd[1]: cri-containerd-a822a98dca82a40837f8ddac90f485c0cd1dba784170e721815284df9beb0a38.scope: Deactivated successfully. 
Feb 9 00:54:40.624690 env[1119]: time="2024-02-09T00:54:40.624522618Z" level=info msg="StartContainer for \"a822a98dca82a40837f8ddac90f485c0cd1dba784170e721815284df9beb0a38\" returns successfully" Feb 9 00:54:40.644829 env[1119]: time="2024-02-09T00:54:40.644779225Z" level=info msg="shim disconnected" id=a822a98dca82a40837f8ddac90f485c0cd1dba784170e721815284df9beb0a38 Feb 9 00:54:40.644829 env[1119]: time="2024-02-09T00:54:40.644824251Z" level=warning msg="cleaning up after shim disconnected" id=a822a98dca82a40837f8ddac90f485c0cd1dba784170e721815284df9beb0a38 namespace=k8s.io Feb 9 00:54:40.644829 env[1119]: time="2024-02-09T00:54:40.644833408Z" level=info msg="cleaning up dead shim" Feb 9 00:54:40.650609 env[1119]: time="2024-02-09T00:54:40.650574585Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:54:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4007 runtime=io.containerd.runc.v2\n" Feb 9 00:54:40.765990 systemd[1]: run-containerd-runc-k8s.io-a822a98dca82a40837f8ddac90f485c0cd1dba784170e721815284df9beb0a38-runc.Mf1hPU.mount: Deactivated successfully. Feb 9 00:54:40.766084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a822a98dca82a40837f8ddac90f485c0cd1dba784170e721815284df9beb0a38-rootfs.mount: Deactivated successfully. 
Feb 9 00:54:41.575738 kubelet[1959]: E0209 00:54:41.575705 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:54:41.578525 env[1119]: time="2024-02-09T00:54:41.578472901Z" level=info msg="CreateContainer within sandbox \"929d0eb2a13f8102de24c35813fcadbb68919df429745608be9ad0aa82870ee6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 00:54:41.589163 env[1119]: time="2024-02-09T00:54:41.589091659Z" level=info msg="CreateContainer within sandbox \"929d0eb2a13f8102de24c35813fcadbb68919df429745608be9ad0aa82870ee6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ab5b74ade912425a8f8c04771199990f6346e21938b5d273879ad811430f4d4f\"" Feb 9 00:54:41.589665 env[1119]: time="2024-02-09T00:54:41.589629525Z" level=info msg="StartContainer for \"ab5b74ade912425a8f8c04771199990f6346e21938b5d273879ad811430f4d4f\"" Feb 9 00:54:41.607610 systemd[1]: Started cri-containerd-ab5b74ade912425a8f8c04771199990f6346e21938b5d273879ad811430f4d4f.scope. Feb 9 00:54:41.629835 systemd[1]: cri-containerd-ab5b74ade912425a8f8c04771199990f6346e21938b5d273879ad811430f4d4f.scope: Deactivated successfully. 
Feb 9 00:54:41.632201 env[1119]: time="2024-02-09T00:54:41.632165481Z" level=info msg="StartContainer for \"ab5b74ade912425a8f8c04771199990f6346e21938b5d273879ad811430f4d4f\" returns successfully" Feb 9 00:54:41.648757 env[1119]: time="2024-02-09T00:54:41.648711365Z" level=info msg="shim disconnected" id=ab5b74ade912425a8f8c04771199990f6346e21938b5d273879ad811430f4d4f Feb 9 00:54:41.648757 env[1119]: time="2024-02-09T00:54:41.648754747Z" level=warning msg="cleaning up after shim disconnected" id=ab5b74ade912425a8f8c04771199990f6346e21938b5d273879ad811430f4d4f namespace=k8s.io Feb 9 00:54:41.648757 env[1119]: time="2024-02-09T00:54:41.648763094Z" level=info msg="cleaning up dead shim" Feb 9 00:54:41.654537 env[1119]: time="2024-02-09T00:54:41.654503275Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:54:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4062 runtime=io.containerd.runc.v2\n" Feb 9 00:54:41.765999 systemd[1]: run-containerd-runc-k8s.io-ab5b74ade912425a8f8c04771199990f6346e21938b5d273879ad811430f4d4f-runc.1zW3ov.mount: Deactivated successfully. Feb 9 00:54:41.766085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab5b74ade912425a8f8c04771199990f6346e21938b5d273879ad811430f4d4f-rootfs.mount: Deactivated successfully. 
Feb 9 00:54:42.579509 kubelet[1959]: E0209 00:54:42.579471 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:54:42.581475 env[1119]: time="2024-02-09T00:54:42.581434060Z" level=info msg="CreateContainer within sandbox \"929d0eb2a13f8102de24c35813fcadbb68919df429745608be9ad0aa82870ee6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 00:54:42.596470 env[1119]: time="2024-02-09T00:54:42.596402947Z" level=info msg="CreateContainer within sandbox \"929d0eb2a13f8102de24c35813fcadbb68919df429745608be9ad0aa82870ee6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"80a80d82514e8d8fd17fe8b28017761bf1ffde066661264495e5b8809f6fcc30\"" Feb 9 00:54:42.596893 env[1119]: time="2024-02-09T00:54:42.596862753Z" level=info msg="StartContainer for \"80a80d82514e8d8fd17fe8b28017761bf1ffde066661264495e5b8809f6fcc30\"" Feb 9 00:54:42.611940 systemd[1]: Started cri-containerd-80a80d82514e8d8fd17fe8b28017761bf1ffde066661264495e5b8809f6fcc30.scope. Feb 9 00:54:42.637494 env[1119]: time="2024-02-09T00:54:42.637447038Z" level=info msg="StartContainer for \"80a80d82514e8d8fd17fe8b28017761bf1ffde066661264495e5b8809f6fcc30\" returns successfully" Feb 9 00:54:42.766193 systemd[1]: run-containerd-runc-k8s.io-80a80d82514e8d8fd17fe8b28017761bf1ffde066661264495e5b8809f6fcc30-runc.y8KnzK.mount: Deactivated successfully. 
Feb 9 00:54:42.862366 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 00:54:43.583915 kubelet[1959]: E0209 00:54:43.583889 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:54:44.899865 kubelet[1959]: E0209 00:54:44.899829 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:54:45.233524 systemd-networkd[1007]: lxc_health: Link UP Feb 9 00:54:45.240290 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 00:54:45.241361 systemd-networkd[1007]: lxc_health: Gained carrier Feb 9 00:54:46.900316 kubelet[1959]: E0209 00:54:46.900272 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:54:46.918558 kubelet[1959]: I0209 00:54:46.918526 1959 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dvlpm" podStartSLOduration=8.918479936 podCreationTimestamp="2024-02-09 00:54:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:54:43.594776925 +0000 UTC m=+85.274171090" watchObservedRunningTime="2024-02-09 00:54:46.918479936 +0000 UTC m=+88.597874101" Feb 9 00:54:46.983547 systemd-networkd[1007]: lxc_health: Gained IPv6LL Feb 9 00:54:47.590393 kubelet[1959]: E0209 00:54:47.590361 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:54:49.736733 systemd[1]: 
run-containerd-runc-k8s.io-80a80d82514e8d8fd17fe8b28017761bf1ffde066661264495e5b8809f6fcc30-runc.AQOkJ6.mount: Deactivated successfully. Feb 9 00:54:51.870260 sshd[3777]: pam_unix(sshd:session): session closed for user core Feb 9 00:54:51.872741 systemd[1]: sshd@25-10.0.0.122:22-10.0.0.1:56660.service: Deactivated successfully. Feb 9 00:54:51.873624 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 00:54:51.874285 systemd-logind[1107]: Session 26 logged out. Waiting for processes to exit. Feb 9 00:54:51.874979 systemd-logind[1107]: Removed session 26. Feb 9 00:54:52.409441 kubelet[1959]: E0209 00:54:52.409398 1959 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"