Feb 9 00:42:54.832478 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 9 00:42:54.832502 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 00:42:54.832519 kernel: BIOS-provided physical RAM map:
Feb 9 00:42:54.832527 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 00:42:54.832535 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 9 00:42:54.832543 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 9 00:42:54.832552 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 9 00:42:54.832560 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 9 00:42:54.832568 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 9 00:42:54.832578 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 9 00:42:54.832586 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Feb 9 00:42:54.832594 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb 9 00:42:54.832602 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 9 00:42:54.832610 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 9 00:42:54.832620 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 9 00:42:54.832631 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 9 00:42:54.832639 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 9 00:42:54.832647 kernel: NX (Execute Disable) protection: active
Feb 9 00:42:54.832655 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable
Feb 9 00:42:54.832663 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable
Feb 9 00:42:54.832672 kernel: e820: update [mem 0x9b3ba018-0x9b3f6e57] usable ==> usable
Feb 9 00:42:54.832680 kernel: e820: update [mem 0x9b3ba018-0x9b3f6e57] usable ==> usable
Feb 9 00:42:54.832688 kernel: extended physical RAM map:
Feb 9 00:42:54.832696 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 00:42:54.832705 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 9 00:42:54.832715 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 9 00:42:54.832724 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 9 00:42:54.832733 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 9 00:42:54.832741 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 9 00:42:54.832750 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 9 00:42:54.832758 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b3ba017] usable
Feb 9 00:42:54.832767 kernel: reserve setup_data: [mem 0x000000009b3ba018-0x000000009b3f6e57] usable
Feb 9 00:42:54.832775 kernel: reserve setup_data: [mem 0x000000009b3f6e58-0x000000009b3f7017] usable
Feb 9 00:42:54.832784 kernel: reserve setup_data: [mem 0x000000009b3f7018-0x000000009b400c57] usable
Feb 9 00:42:54.832792 kernel: reserve setup_data: [mem 0x000000009b400c58-0x000000009c8eefff] usable
Feb 9 00:42:54.832801 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb 9 00:42:54.832811 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 9 00:42:54.832820 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 9 00:42:54.832828 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 9 00:42:54.832837 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 9 00:42:54.832849 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 9 00:42:54.832857 kernel: efi: EFI v2.70 by EDK II
Feb 9 00:42:54.832866 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b775018 RNG=0x9cb75018
Feb 9 00:42:54.832876 kernel: random: crng init done
Feb 9 00:42:54.832885 kernel: SMBIOS 2.8 present.
Feb 9 00:42:54.832895 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Feb 9 00:42:54.832904 kernel: Hypervisor detected: KVM
Feb 9 00:42:54.832913 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 9 00:42:54.832922 kernel: kvm-clock: cpu 0, msr 20faa001, primary cpu clock
Feb 9 00:42:54.832931 kernel: kvm-clock: using sched offset of 5049904256 cycles
Feb 9 00:42:54.832941 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 9 00:42:54.832951 kernel: tsc: Detected 2794.750 MHz processor
Feb 9 00:42:54.832963 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 00:42:54.832972 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 00:42:54.832982 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Feb 9 00:42:54.832991 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 00:42:54.833001 kernel: Using GB pages for direct mapping
Feb 9 00:42:54.833010 kernel: Secure boot disabled
Feb 9 00:42:54.833019 kernel: ACPI: Early table checksum verification disabled
Feb 9 00:42:54.833028 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 9 00:42:54.833038 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Feb 9 00:42:54.833049 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:42:54.833058 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:42:54.833067 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 9 00:42:54.833076 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:42:54.833086 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:42:54.833095 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 00:42:54.833104 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 9 00:42:54.833114 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Feb 9 00:42:54.833134 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Feb 9 00:42:54.833146 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 9 00:42:54.833156 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Feb 9 00:42:54.833165 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Feb 9 00:42:54.833174 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Feb 9 00:42:54.833184 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Feb 9 00:42:54.833193 kernel: No NUMA configuration found
Feb 9 00:42:54.833202 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Feb 9 00:42:54.833211 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Feb 9 00:42:54.833221 kernel: Zone ranges:
Feb 9 00:42:54.833232 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 00:42:54.833241 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Feb 9 00:42:54.833263 kernel: Normal empty
Feb 9 00:42:54.833276 kernel: Movable zone start for each node
Feb 9 00:42:54.833285 kernel: Early memory node ranges
Feb 9 00:42:54.833294 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 9 00:42:54.833306 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 9 00:42:54.833315 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 9 00:42:54.833324 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Feb 9 00:42:54.833336 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Feb 9 00:42:54.833345 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Feb 9 00:42:54.833355 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Feb 9 00:42:54.833364 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 00:42:54.833373 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 9 00:42:54.833390 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 9 00:42:54.833400 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 00:42:54.833409 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Feb 9 00:42:54.833479 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Feb 9 00:42:54.833492 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Feb 9 00:42:54.833501 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 9 00:42:54.833510 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 9 00:42:54.833519 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 9 00:42:54.833529 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 00:42:54.833538 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 9 00:42:54.833548 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 00:42:54.833557 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 9 00:42:54.833566 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 9 00:42:54.833578 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 00:42:54.833587 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 00:42:54.833597 kernel: TSC deadline timer available
Feb 9 00:42:54.833606 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 9 00:42:54.833616 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 9 00:42:54.833625 kernel: kvm-guest: setup PV sched yield
Feb 9 00:42:54.833634 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Feb 9 00:42:54.833644 kernel: Booting paravirtualized kernel on KVM
Feb 9 00:42:54.833653 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 00:42:54.833663 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb 9 00:42:54.833674 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb 9 00:42:54.833683 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb 9 00:42:54.833698 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 9 00:42:54.833709 kernel: kvm-guest: setup async PF for cpu 0
Feb 9 00:42:54.833718 kernel: kvm-guest: stealtime: cpu 0, msr 9ba1c0c0
Feb 9 00:42:54.833727 kernel: kvm-guest: PV spinlocks enabled
Feb 9 00:42:54.833736 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 00:42:54.833746 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Feb 9 00:42:54.833755 kernel: Policy zone: DMA32
Feb 9 00:42:54.833766 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 00:42:54.833776 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 00:42:54.833788 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 00:42:54.833798 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 00:42:54.833808 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 00:42:54.833819 kernel: Memory: 2405540K/2567000K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 161200K reserved, 0K cma-reserved)
Feb 9 00:42:54.833829 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 00:42:54.833840 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 00:42:54.833850 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 00:42:54.833860 kernel: rcu: Hierarchical RCU implementation.
Feb 9 00:42:54.833871 kernel: rcu: RCU event tracing is enabled.
Feb 9 00:42:54.833881 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 00:42:54.833891 kernel: Rude variant of Tasks RCU enabled.
Feb 9 00:42:54.833900 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 00:42:54.833910 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 00:42:54.833920 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 00:42:54.833932 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 9 00:42:54.833942 kernel: Console: colour dummy device 80x25
Feb 9 00:42:54.833951 kernel: printk: console [ttyS0] enabled
Feb 9 00:42:54.833961 kernel: ACPI: Core revision 20210730
Feb 9 00:42:54.833971 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 9 00:42:54.833980 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 00:42:54.833990 kernel: x2apic enabled
Feb 9 00:42:54.833999 kernel: Switched APIC routing to physical x2apic.
Feb 9 00:42:54.834010 kernel: kvm-guest: setup PV IPIs
Feb 9 00:42:54.834021 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 00:42:54.834031 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 9 00:42:54.834042 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 9 00:42:54.834051 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 9 00:42:54.834061 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 9 00:42:54.834071 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 9 00:42:54.834081 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 00:42:54.834091 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 00:42:54.834101 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 00:42:54.834112 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 00:42:54.834123 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 9 00:42:54.834161 kernel: RETBleed: Mitigation: untrained return thunk
Feb 9 00:42:54.834171 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 00:42:54.834181 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 00:42:54.834191 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 00:42:54.834200 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 00:42:54.834210 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 00:42:54.834220 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 00:42:54.834233 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 9 00:42:54.834242 kernel: Freeing SMP alternatives memory: 32K
Feb 9 00:42:54.834252 kernel: pid_max: default: 32768 minimum: 301
Feb 9 00:42:54.834262 kernel: LSM: Security Framework initializing
Feb 9 00:42:54.834272 kernel: SELinux: Initializing.
Feb 9 00:42:54.834282 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 00:42:54.834292 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 00:42:54.834302 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 9 00:42:54.834314 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 9 00:42:54.834324 kernel: ... version: 0
Feb 9 00:42:54.834334 kernel: ... bit width: 48
Feb 9 00:42:54.834343 kernel: ... generic registers: 6
Feb 9 00:42:54.834354 kernel: ... value mask: 0000ffffffffffff
Feb 9 00:42:54.834363 kernel: ... max period: 00007fffffffffff
Feb 9 00:42:54.834373 kernel: ... fixed-purpose events: 0
Feb 9 00:42:54.834390 kernel: ... event mask: 000000000000003f
Feb 9 00:42:54.834400 kernel: signal: max sigframe size: 1776
Feb 9 00:42:54.834409 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 00:42:54.834420 kernel: smp: Bringing up secondary CPUs ...
Feb 9 00:42:54.834429 kernel: x86: Booting SMP configuration:
Feb 9 00:42:54.834438 kernel: .... node #0, CPUs: #1
Feb 9 00:42:54.834447 kernel: kvm-clock: cpu 1, msr 20faa041, secondary cpu clock
Feb 9 00:42:54.834456 kernel: kvm-guest: setup async PF for cpu 1
Feb 9 00:42:54.834465 kernel: kvm-guest: stealtime: cpu 1, msr 9ba9c0c0
Feb 9 00:42:54.834474 kernel: #2
Feb 9 00:42:54.834484 kernel: kvm-clock: cpu 2, msr 20faa081, secondary cpu clock
Feb 9 00:42:54.834493 kernel: kvm-guest: setup async PF for cpu 2
Feb 9 00:42:54.834503 kernel: kvm-guest: stealtime: cpu 2, msr 9bb1c0c0
Feb 9 00:42:54.834512 kernel: #3
Feb 9 00:42:54.834522 kernel: kvm-clock: cpu 3, msr 20faa0c1, secondary cpu clock
Feb 9 00:42:54.834530 kernel: kvm-guest: setup async PF for cpu 3
Feb 9 00:42:54.834539 kernel: kvm-guest: stealtime: cpu 3, msr 9bb9c0c0
Feb 9 00:42:54.834549 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 00:42:54.834558 kernel: smpboot: Max logical packages: 1
Feb 9 00:42:54.834567 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 9 00:42:54.834576 kernel: devtmpfs: initialized
Feb 9 00:42:54.834586 kernel: x86/mm: Memory block size: 128MB
Feb 9 00:42:54.834596 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 9 00:42:54.834605 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 9 00:42:54.834614 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Feb 9 00:42:54.834624 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 9 00:42:54.834633 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 9 00:42:54.834643 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 00:42:54.834652 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 00:42:54.834662 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 00:42:54.834674 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 00:42:54.834683 kernel: audit: initializing netlink subsys (disabled)
Feb 9 00:42:54.834693 kernel: audit: type=2000 audit(1707439374.134:1): state=initialized audit_enabled=0 res=1
Feb 9 00:42:54.834703 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 00:42:54.834713 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 00:42:54.834722 kernel: cpuidle: using governor menu
Feb 9 00:42:54.834732 kernel: ACPI: bus type PCI registered
Feb 9 00:42:54.834742 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 00:42:54.834752 kernel: dca service started, version 1.12.1
Feb 9 00:42:54.834763 kernel: PCI: Using configuration type 1 for base access
Feb 9 00:42:54.834773 kernel: PCI: Using configuration type 1 for extended access
Feb 9 00:42:54.834783 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 00:42:54.834793 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 00:42:54.834803 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 00:42:54.834813 kernel: ACPI: Added _OSI(Module Device)
Feb 9 00:42:54.834822 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 00:42:54.834832 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 00:42:54.834842 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 00:42:54.834853 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 00:42:54.834862 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 00:42:54.834871 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 00:42:54.834881 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 00:42:54.834890 kernel: ACPI: Interpreter enabled
Feb 9 00:42:54.834899 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 9 00:42:54.834908 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 00:42:54.834917 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 00:42:54.834926 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 9 00:42:54.834937 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 00:42:54.835069 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 00:42:54.835086 kernel: acpiphp: Slot [3] registered
Feb 9 00:42:54.835096 kernel: acpiphp: Slot [4] registered
Feb 9 00:42:54.835106 kernel: acpiphp: Slot [5] registered
Feb 9 00:42:54.835116 kernel: acpiphp: Slot [6] registered
Feb 9 00:42:54.835136 kernel: acpiphp: Slot [7] registered
Feb 9 00:42:54.835146 kernel: acpiphp: Slot [8] registered
Feb 9 00:42:54.835156 kernel: acpiphp: Slot [9] registered
Feb 9 00:42:54.835168 kernel: acpiphp: Slot [10] registered
Feb 9 00:42:54.835178 kernel: acpiphp: Slot [11] registered
Feb 9 00:42:54.835188 kernel: acpiphp: Slot [12] registered
Feb 9 00:42:54.835198 kernel: acpiphp: Slot [13] registered
Feb 9 00:42:54.835207 kernel: acpiphp: Slot [14] registered
Feb 9 00:42:54.835216 kernel: acpiphp: Slot [15] registered
Feb 9 00:42:54.835225 kernel: acpiphp: Slot [16] registered
Feb 9 00:42:54.835234 kernel: acpiphp: Slot [17] registered
Feb 9 00:42:54.835244 kernel: acpiphp: Slot [18] registered
Feb 9 00:42:54.835261 kernel: acpiphp: Slot [19] registered
Feb 9 00:42:54.835270 kernel: acpiphp: Slot [20] registered
Feb 9 00:42:54.835280 kernel: acpiphp: Slot [21] registered
Feb 9 00:42:54.835290 kernel: acpiphp: Slot [22] registered
Feb 9 00:42:54.835301 kernel: acpiphp: Slot [23] registered
Feb 9 00:42:54.835312 kernel: acpiphp: Slot [24] registered
Feb 9 00:42:54.835324 kernel: acpiphp: Slot [25] registered
Feb 9 00:42:54.835334 kernel: acpiphp: Slot [26] registered
Feb 9 00:42:54.835344 kernel: acpiphp: Slot [27] registered
Feb 9 00:42:54.835354 kernel: acpiphp: Slot [28] registered
Feb 9 00:42:54.835365 kernel: acpiphp: Slot [29] registered
Feb 9 00:42:54.835381 kernel: acpiphp: Slot [30] registered
Feb 9 00:42:54.835391 kernel: acpiphp: Slot [31] registered
Feb 9 00:42:54.835401 kernel: PCI host bridge to bus 0000:00
Feb 9 00:42:54.835510 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 00:42:54.835600 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 00:42:54.835686 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 00:42:54.835777 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb 9 00:42:54.835863 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Feb 9 00:42:54.835950 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 00:42:54.836062 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 9 00:42:54.836185 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 9 00:42:54.836299 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 9 00:42:54.836415 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Feb 9 00:42:54.836519 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 9 00:42:54.836615 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 9 00:42:54.836714 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 9 00:42:54.836811 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 9 00:42:54.836916 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 9 00:42:54.837014 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 9 00:42:54.837114 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 9 00:42:54.837245 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb 9 00:42:54.837372 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 9 00:42:54.837484 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Feb 9 00:42:54.837620 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 9 00:42:54.837769 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Feb 9 00:42:54.837926 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 00:42:54.838069 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 00:42:54.838215 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf]
Feb 9 00:42:54.838351 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 9 00:42:54.838488 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Feb 9 00:42:54.838626 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 9 00:42:54.838770 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 9 00:42:54.838923 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 9 00:42:54.839077 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Feb 9 00:42:54.839268 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb 9 00:42:54.839432 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 9 00:42:54.839583 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Feb 9 00:42:54.839748 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Feb 9 00:42:54.839903 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 9 00:42:54.839921 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 9 00:42:54.839935 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 9 00:42:54.839946 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 9 00:42:54.839955 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 9 00:42:54.839965 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 9 00:42:54.839975 kernel: iommu: Default domain type: Translated
Feb 9 00:42:54.840001 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 00:42:54.840097 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 9 00:42:54.840217 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 00:42:54.840312 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 9 00:42:54.840328 kernel: vgaarb: loaded
Feb 9 00:42:54.840338 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 00:42:54.840348 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 00:42:54.840358 kernel: PTP clock support registered
Feb 9 00:42:54.840368 kernel: Registered efivars operations
Feb 9 00:42:54.840385 kernel: PCI: Using ACPI for IRQ routing
Feb 9 00:42:54.840395 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 00:42:54.840405 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 9 00:42:54.840414 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Feb 9 00:42:54.840425 kernel: e820: reserve RAM buffer [mem 0x9b3ba018-0x9bffffff]
Feb 9 00:42:54.840434 kernel: e820: reserve RAM buffer [mem 0x9b3f7018-0x9bffffff]
Feb 9 00:42:54.840444 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Feb 9 00:42:54.840454 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Feb 9 00:42:54.840463 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 9 00:42:54.840473 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 9 00:42:54.840483 kernel: clocksource: Switched to clocksource kvm-clock
Feb 9 00:42:54.840492 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 00:42:54.840502 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 00:42:54.840513 kernel: pnp: PnP ACPI init
Feb 9 00:42:54.840611 kernel: pnp 00:02: [dma 2]
Feb 9 00:42:54.840625 kernel: pnp: PnP ACPI: found 6 devices
Feb 9 00:42:54.840635 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 00:42:54.840645 kernel: NET: Registered PF_INET protocol family
Feb 9 00:42:54.840655 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 00:42:54.840665 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 00:42:54.840675 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 00:42:54.840687 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 00:42:54.840697 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 00:42:54.840706 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 00:42:54.840715 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 00:42:54.840725 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 00:42:54.840735 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 00:42:54.840745 kernel: NET: Registered PF_XDP protocol family
Feb 9 00:42:54.840838 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 9 00:42:54.840946 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 9 00:42:54.841030 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 00:42:54.841111 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 00:42:54.841258 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 00:42:54.841340 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb 9 00:42:54.841428 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Feb 9 00:42:54.841519 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 9 00:42:54.841613 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 9 00:42:54.841707 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 9 00:42:54.841720 kernel: PCI: CLS 0 bytes, default 64
Feb 9 00:42:54.841731 kernel: Initialise system trusted keyrings
Feb 9 00:42:54.841741 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 00:42:54.841751 kernel: Key type asymmetric registered
Feb 9 00:42:54.841761 kernel: Asymmetric key parser 'x509' registered
Feb 9 00:42:54.841770 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 00:42:54.841780 kernel: io scheduler mq-deadline registered
Feb 9 00:42:54.841790 kernel: io scheduler kyber registered
Feb 9 00:42:54.841801 kernel: io scheduler bfq registered
Feb 9 00:42:54.841811 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 00:42:54.841821 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 9 00:42:54.841831 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 9 00:42:54.841841 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 9 00:42:54.841852 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 00:42:54.841862 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 00:42:54.841872 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 9 00:42:54.841881 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 9 00:42:54.841893 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 9 00:42:54.841990 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb 9 00:42:54.842007 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 9 00:42:54.842087 kernel: rtc_cmos 00:05: registered as rtc0
Feb 9 00:42:54.842199 kernel: rtc_cmos 00:05: setting system clock to 2024-02-09T00:42:54 UTC (1707439374)
Feb 9 00:42:54.842283 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 9 00:42:54.842296 kernel: efifb: probing for efifb
Feb 9 00:42:54.842306 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 9 00:42:54.842316 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 9 00:42:54.842326 kernel: efifb: scrolling: redraw
Feb 9 00:42:54.842336 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 00:42:54.842346 kernel: Console: switching to colour frame buffer device 160x50
Feb 9 00:42:54.842357 kernel: fb0: EFI VGA frame buffer device
Feb 9 00:42:54.842370 kernel: pstore: Registered efi as persistent store backend
Feb 9 00:42:54.842387 kernel: NET: Registered PF_INET6 protocol family
Feb 9 00:42:54.842396 kernel: Segment Routing with IPv6
Feb 9 00:42:54.842406 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 00:42:54.842415 kernel: NET: Registered PF_PACKET protocol family
Feb 9 00:42:54.842425 kernel: Key type dns_resolver registered
Feb 9 00:42:54.842435 kernel: IPI shorthand broadcast: enabled
Feb 9 00:42:54.842445 kernel: sched_clock: Marking stable (377112722, 94295103)->(504647985, -33240160)
Feb 9 00:42:54.842455 kernel: registered taskstats version 1
Feb 9 00:42:54.842465 kernel: Loading compiled-in X.509 certificates
Feb 9 00:42:54.842477 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 9 00:42:54.842486 kernel: Key type .fscrypt registered
Feb 9 00:42:54.842496 kernel: Key type fscrypt-provisioning registered
Feb 9 00:42:54.842505 kernel: pstore: Using crash dump compression: deflate
Feb 9 00:42:54.842515 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 00:42:54.842525 kernel: ima: Allocated hash algorithm: sha1 Feb 9 00:42:54.842534 kernel: ima: No architecture policies found Feb 9 00:42:54.842545 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 00:42:54.842556 kernel: Write protecting the kernel read-only data: 28672k Feb 9 00:42:54.842568 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 00:42:54.842578 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 00:42:54.842587 kernel: Run /init as init process Feb 9 00:42:54.842597 kernel: with arguments: Feb 9 00:42:54.842607 kernel: /init Feb 9 00:42:54.842616 kernel: with environment: Feb 9 00:42:54.842625 kernel: HOME=/ Feb 9 00:42:54.842635 kernel: TERM=linux Feb 9 00:42:54.842645 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 00:42:54.842660 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 00:42:54.842672 systemd[1]: Detected virtualization kvm. Feb 9 00:42:54.842683 systemd[1]: Detected architecture x86-64. Feb 9 00:42:54.842693 systemd[1]: Running in initrd. Feb 9 00:42:54.842703 systemd[1]: No hostname configured, using default hostname. Feb 9 00:42:54.842713 systemd[1]: Hostname set to . Feb 9 00:42:54.842725 systemd[1]: Initializing machine ID from VM UUID. Feb 9 00:42:54.842736 systemd[1]: Queued start job for default target initrd.target. Feb 9 00:42:54.842746 systemd[1]: Started systemd-ask-password-console.path. Feb 9 00:42:54.842756 systemd[1]: Reached target cryptsetup.target. Feb 9 00:42:54.842767 systemd[1]: Reached target paths.target. Feb 9 00:42:54.842777 systemd[1]: Reached target slices.target. Feb 9 00:42:54.842787 systemd[1]: Reached target swap.target. 
Feb 9 00:42:54.842797 systemd[1]: Reached target timers.target. Feb 9 00:42:54.842809 systemd[1]: Listening on iscsid.socket. Feb 9 00:42:54.842820 systemd[1]: Listening on iscsiuio.socket. Feb 9 00:42:54.842830 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 00:42:54.842840 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 00:42:54.842851 systemd[1]: Listening on systemd-journald.socket. Feb 9 00:42:54.842862 systemd[1]: Listening on systemd-networkd.socket. Feb 9 00:42:54.842873 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 00:42:54.842883 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 00:42:54.842893 systemd[1]: Reached target sockets.target. Feb 9 00:42:54.842905 systemd[1]: Starting kmod-static-nodes.service... Feb 9 00:42:54.842915 systemd[1]: Finished network-cleanup.service. Feb 9 00:42:54.842925 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 00:42:54.842935 systemd[1]: Starting systemd-journald.service... Feb 9 00:42:54.842946 systemd[1]: Starting systemd-modules-load.service... Feb 9 00:42:54.842957 systemd[1]: Starting systemd-resolved.service... Feb 9 00:42:54.842967 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 00:42:54.843369 systemd[1]: Finished kmod-static-nodes.service. Feb 9 00:42:54.843390 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 00:42:54.843403 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 00:42:54.843414 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 00:42:54.843425 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 00:42:54.843436 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 00:42:54.843449 systemd-journald[197]: Journal started Feb 9 00:42:54.843501 systemd-journald[197]: Runtime Journal (/run/log/journal/054666b1e06349f2a08405ccc8691856) is 6.0M, max 48.4M, 42.4M free. 
Feb 9 00:42:54.831102 systemd-modules-load[198]: Inserted module 'overlay' Feb 9 00:42:54.845537 systemd[1]: Started systemd-journald.service. Feb 9 00:42:54.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:54.849151 kernel: audit: type=1130 audit(1707439374.845:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:54.849320 systemd-resolved[199]: Positive Trust Anchors: Feb 9 00:42:54.849334 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 00:42:54.849382 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 00:42:54.862321 kernel: audit: type=1130 audit(1707439374.855:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:54.862339 kernel: audit: type=1130 audit(1707439374.858:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:54.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:42:54.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:54.851569 systemd-resolved[199]: Defaulting to hostname 'linux'. Feb 9 00:42:54.852210 systemd[1]: Started systemd-resolved.service. Feb 9 00:42:54.865507 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 00:42:54.857282 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 00:42:54.858414 systemd[1]: Reached target nss-lookup.target. Feb 9 00:42:54.867504 kernel: Bridge firewalling registered Feb 9 00:42:54.861730 systemd[1]: Starting dracut-cmdline.service... Feb 9 00:42:54.867391 systemd-modules-load[198]: Inserted module 'br_netfilter' Feb 9 00:42:54.869430 dracut-cmdline[214]: dracut-dracut-053 Feb 9 00:42:54.871086 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 00:42:54.885150 kernel: SCSI subsystem initialized Feb 9 00:42:54.898084 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 00:42:54.898176 kernel: device-mapper: uevent: version 1.0.3 Feb 9 00:42:54.898188 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 00:42:54.901049 systemd-modules-load[198]: Inserted module 'dm_multipath' Feb 9 00:42:54.901628 systemd[1]: Finished systemd-modules-load.service. 
Feb 9 00:42:54.905366 kernel: audit: type=1130 audit(1707439374.902:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:54.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:54.903140 systemd[1]: Starting systemd-sysctl.service... Feb 9 00:42:54.910510 systemd[1]: Finished systemd-sysctl.service. Feb 9 00:42:54.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:54.914146 kernel: audit: type=1130 audit(1707439374.911:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:54.926156 kernel: Loading iSCSI transport class v2.0-870. Feb 9 00:42:54.936157 kernel: iscsi: registered transport (tcp) Feb 9 00:42:54.955151 kernel: iscsi: registered transport (qla4xxx) Feb 9 00:42:54.955207 kernel: QLogic iSCSI HBA Driver Feb 9 00:42:54.973894 systemd[1]: Finished dracut-cmdline.service. Feb 9 00:42:54.993269 kernel: audit: type=1130 audit(1707439374.974:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:54.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:54.975186 systemd[1]: Starting dracut-pre-udev.service... 
Feb 9 00:42:55.028157 kernel: raid6: avx2x4 gen() 31020 MB/s Feb 9 00:42:55.045150 kernel: raid6: avx2x4 xor() 8043 MB/s Feb 9 00:42:55.062152 kernel: raid6: avx2x2 gen() 32535 MB/s Feb 9 00:42:55.079149 kernel: raid6: avx2x2 xor() 19182 MB/s Feb 9 00:42:55.096146 kernel: raid6: avx2x1 gen() 26598 MB/s Feb 9 00:42:55.113148 kernel: raid6: avx2x1 xor() 15410 MB/s Feb 9 00:42:55.130142 kernel: raid6: sse2x4 gen() 14884 MB/s Feb 9 00:42:55.147146 kernel: raid6: sse2x4 xor() 7587 MB/s Feb 9 00:42:55.164145 kernel: raid6: sse2x2 gen() 16390 MB/s Feb 9 00:42:55.181139 kernel: raid6: sse2x2 xor() 9866 MB/s Feb 9 00:42:55.198152 kernel: raid6: sse2x1 gen() 12402 MB/s Feb 9 00:42:55.215156 kernel: raid6: sse2x1 xor() 7811 MB/s Feb 9 00:42:55.215175 kernel: raid6: using algorithm avx2x2 gen() 32535 MB/s Feb 9 00:42:55.215185 kernel: raid6: .... xor() 19182 MB/s, rmw enabled Feb 9 00:42:55.216143 kernel: raid6: using avx2x2 recovery algorithm Feb 9 00:42:55.227145 kernel: xor: automatically using best checksumming function avx Feb 9 00:42:55.313153 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 00:42:55.318761 systemd[1]: Finished dracut-pre-udev.service. Feb 9 00:42:55.322394 kernel: audit: type=1130 audit(1707439375.319:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:55.322414 kernel: audit: type=1334 audit(1707439375.321:9): prog-id=7 op=LOAD Feb 9 00:42:55.322423 kernel: audit: type=1334 audit(1707439375.322:10): prog-id=8 op=LOAD Feb 9 00:42:55.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:55.321000 audit: BPF prog-id=7 op=LOAD Feb 9 00:42:55.322000 audit: BPF prog-id=8 op=LOAD Feb 9 00:42:55.322656 systemd[1]: Starting systemd-udevd.service... 
Feb 9 00:42:55.333987 systemd-udevd[398]: Using default interface naming scheme 'v252'. Feb 9 00:42:55.337670 systemd[1]: Started systemd-udevd.service. Feb 9 00:42:55.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:55.338747 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 00:42:55.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:55.386734 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Feb 9 00:42:55.364993 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 00:42:55.387234 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 00:42:55.419503 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 00:42:55.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:55.445141 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 9 00:42:55.449123 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 00:42:55.449157 kernel: GPT:9289727 != 19775487 Feb 9 00:42:55.449169 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 00:42:55.449179 kernel: GPT:9289727 != 19775487 Feb 9 00:42:55.449187 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 00:42:55.449195 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 00:42:55.451141 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 00:42:55.464144 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 00:42:55.464182 kernel: AES CTR mode by8 optimization enabled Feb 9 00:42:55.464191 kernel: libata version 3.00 loaded. 
Feb 9 00:42:55.468142 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 9 00:42:55.469151 kernel: scsi host0: ata_piix Feb 9 00:42:55.470146 kernel: scsi host1: ata_piix Feb 9 00:42:55.470261 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 9 00:42:55.471673 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 9 00:42:55.488154 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (452) Feb 9 00:42:55.489472 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 00:42:55.497007 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 00:42:55.500249 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 00:42:55.502770 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 00:42:55.502830 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 00:42:55.537903 systemd[1]: Starting disk-uuid.service... Feb 9 00:42:55.628144 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 9 00:42:55.628170 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 9 00:42:55.659151 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 9 00:42:55.659290 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 00:42:55.676165 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 9 00:42:55.772309 disk-uuid[516]: Primary Header is updated. Feb 9 00:42:55.772309 disk-uuid[516]: Secondary Entries is updated. Feb 9 00:42:55.772309 disk-uuid[516]: Secondary Header is updated. Feb 9 00:42:55.775175 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 00:42:55.778161 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 00:42:56.822580 disk-uuid[531]: The operation has completed successfully. Feb 9 00:42:56.823666 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 00:42:56.842555 systemd[1]: disk-uuid.service: Deactivated successfully. 
Feb 9 00:42:56.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:56.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:56.842626 systemd[1]: Finished disk-uuid.service. Feb 9 00:42:56.849344 systemd[1]: Starting verity-setup.service... Feb 9 00:42:56.861155 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 9 00:42:56.875709 systemd[1]: Found device dev-mapper-usr.device. Feb 9 00:42:56.877514 systemd[1]: Mounting sysusr-usr.mount... Feb 9 00:42:56.879774 systemd[1]: Finished verity-setup.service. Feb 9 00:42:56.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:56.932033 systemd[1]: Mounted sysusr-usr.mount. Feb 9 00:42:56.933014 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 00:42:56.933101 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 00:42:56.933720 systemd[1]: Starting ignition-setup.service... Feb 9 00:42:56.935106 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 00:42:56.941869 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 00:42:56.941924 kernel: BTRFS info (device vda6): using free space tree Feb 9 00:42:56.941934 kernel: BTRFS info (device vda6): has skinny extents Feb 9 00:42:56.948716 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 00:42:56.987190 systemd[1]: Finished parse-ip-for-networkd.service. 
Feb 9 00:42:56.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:56.998000 audit: BPF prog-id=9 op=LOAD Feb 9 00:42:56.999169 systemd[1]: Starting systemd-networkd.service... Feb 9 00:42:57.017565 systemd-networkd[702]: lo: Link UP Feb 9 00:42:57.017573 systemd-networkd[702]: lo: Gained carrier Feb 9 00:42:57.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:57.017935 systemd-networkd[702]: Enumeration completed Feb 9 00:42:57.018007 systemd[1]: Started systemd-networkd.service. Feb 9 00:42:57.018179 systemd-networkd[702]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 00:42:57.018903 systemd-networkd[702]: eth0: Link UP Feb 9 00:42:57.018907 systemd-networkd[702]: eth0: Gained carrier Feb 9 00:42:57.019045 systemd[1]: Reached target network.target. Feb 9 00:42:57.020557 systemd[1]: Starting iscsiuio.service... Feb 9 00:42:57.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:57.023886 systemd[1]: Started iscsiuio.service. Feb 9 00:42:57.025470 systemd[1]: Starting iscsid.service... Feb 9 00:42:57.028143 iscsid[707]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 00:42:57.028143 iscsid[707]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 00:42:57.028143 iscsid[707]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 00:42:57.028143 iscsid[707]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 00:42:57.028143 iscsid[707]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 00:42:57.028143 iscsid[707]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 00:42:57.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:57.029147 systemd[1]: Started iscsid.service. Feb 9 00:42:57.032552 systemd[1]: Starting dracut-initqueue.service... Feb 9 00:42:57.040192 systemd-networkd[702]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 00:42:57.041000 systemd[1]: Finished dracut-initqueue.service. Feb 9 00:42:57.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:57.041715 systemd[1]: Reached target remote-fs-pre.target. Feb 9 00:42:57.042752 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 00:42:57.043353 systemd[1]: Reached target remote-fs.target. Feb 9 00:42:57.043864 systemd[1]: Starting dracut-pre-mount.service... Feb 9 00:42:57.050424 systemd[1]: Finished dracut-pre-mount.service. Feb 9 00:42:57.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:57.072703 systemd[1]: Finished ignition-setup.service. 
Feb 9 00:42:57.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:57.074073 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 00:42:57.109038 ignition[722]: Ignition 2.14.0 Feb 9 00:42:57.109046 ignition[722]: Stage: fetch-offline Feb 9 00:42:57.109099 ignition[722]: no configs at "/usr/lib/ignition/base.d" Feb 9 00:42:57.109107 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 00:42:57.109207 ignition[722]: parsed url from cmdline: "" Feb 9 00:42:57.109210 ignition[722]: no config URL provided Feb 9 00:42:57.109214 ignition[722]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 00:42:57.109220 ignition[722]: no config at "/usr/lib/ignition/user.ign" Feb 9 00:42:57.109237 ignition[722]: op(1): [started] loading QEMU firmware config module Feb 9 00:42:57.109241 ignition[722]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 9 00:42:57.116044 ignition[722]: op(1): [finished] loading QEMU firmware config module Feb 9 00:42:57.175467 ignition[722]: parsing config with SHA512: 7519f852bfb612994ef444b1cd81b0413766bc8641babd2d20fa8a5265d223e9893e7b7adaadb01d8a6e1034e22e3656e02612612c1100e9df63074f2c93701d Feb 9 00:42:57.207946 unknown[722]: fetched base config from "system" Feb 9 00:42:57.207961 unknown[722]: fetched user config from "qemu" Feb 9 00:42:57.208603 ignition[722]: fetch-offline: fetch-offline passed Feb 9 00:42:57.208665 ignition[722]: Ignition finished successfully Feb 9 00:42:57.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:57.209816 systemd[1]: Finished ignition-fetch-offline.service. 
Feb 9 00:42:57.211227 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 00:42:57.211893 systemd[1]: Starting ignition-kargs.service... Feb 9 00:42:57.222306 ignition[731]: Ignition 2.14.0 Feb 9 00:42:57.222322 ignition[731]: Stage: kargs Feb 9 00:42:57.222405 ignition[731]: no configs at "/usr/lib/ignition/base.d" Feb 9 00:42:57.222414 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 00:42:57.223608 ignition[731]: kargs: kargs passed Feb 9 00:42:57.224977 systemd[1]: Finished ignition-kargs.service. Feb 9 00:42:57.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:57.223646 ignition[731]: Ignition finished successfully Feb 9 00:42:57.226706 systemd[1]: Starting ignition-disks.service... Feb 9 00:42:57.234248 ignition[737]: Ignition 2.14.0 Feb 9 00:42:57.234257 ignition[737]: Stage: disks Feb 9 00:42:57.234372 ignition[737]: no configs at "/usr/lib/ignition/base.d" Feb 9 00:42:57.234384 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 00:42:57.235720 ignition[737]: disks: disks passed Feb 9 00:42:57.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:57.236373 systemd[1]: Finished ignition-disks.service. Feb 9 00:42:57.235758 ignition[737]: Ignition finished successfully Feb 9 00:42:57.237206 systemd[1]: Reached target initrd-root-device.target. Feb 9 00:42:57.238172 systemd[1]: Reached target local-fs-pre.target. Feb 9 00:42:57.238734 systemd[1]: Reached target local-fs.target. Feb 9 00:42:57.239251 systemd[1]: Reached target sysinit.target. Feb 9 00:42:57.239284 systemd[1]: Reached target basic.target. 
Feb 9 00:42:57.239981 systemd[1]: Starting systemd-fsck-root.service... Feb 9 00:42:57.251309 systemd-fsck[745]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 00:42:57.255914 systemd[1]: Finished systemd-fsck-root.service. Feb 9 00:42:57.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:57.258209 systemd[1]: Mounting sysroot.mount... Feb 9 00:42:57.264732 systemd[1]: Mounted sysroot.mount. Feb 9 00:42:57.266203 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 00:42:57.265250 systemd[1]: Reached target initrd-root-fs.target. Feb 9 00:42:57.266790 systemd[1]: Mounting sysroot-usr.mount... Feb 9 00:42:57.267454 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 00:42:57.267482 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 00:42:57.267498 systemd[1]: Reached target ignition-diskful.target. Feb 9 00:42:57.268895 systemd[1]: Mounted sysroot-usr.mount. Feb 9 00:42:57.270251 systemd[1]: Starting initrd-setup-root.service... Feb 9 00:42:57.274178 initrd-setup-root[755]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 00:42:57.277124 initrd-setup-root[763]: cut: /sysroot/etc/group: No such file or directory Feb 9 00:42:57.279973 initrd-setup-root[771]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 00:42:57.283104 initrd-setup-root[779]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 00:42:57.305028 systemd[1]: Finished initrd-setup-root.service. Feb 9 00:42:57.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:42:57.306533 systemd[1]: Starting ignition-mount.service... Feb 9 00:42:57.307238 systemd[1]: Starting sysroot-boot.service... Feb 9 00:42:57.313412 bash[797]: umount: /sysroot/usr/share/oem: not mounted. Feb 9 00:42:57.321368 ignition[798]: INFO : Ignition 2.14.0 Feb 9 00:42:57.321368 ignition[798]: INFO : Stage: mount Feb 9 00:42:57.322573 ignition[798]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 00:42:57.322573 ignition[798]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 00:42:57.323462 systemd[1]: Finished ignition-mount.service. Feb 9 00:42:57.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:57.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:42:57.325831 ignition[798]: INFO : mount: mount passed Feb 9 00:42:57.325831 ignition[798]: INFO : Ignition finished successfully Feb 9 00:42:57.324944 systemd[1]: Finished sysroot-boot.service. Feb 9 00:42:57.884865 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 00:42:57.892515 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806) Feb 9 00:42:57.892542 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 00:42:57.892552 kernel: BTRFS info (device vda6): using free space tree Feb 9 00:42:57.894146 kernel: BTRFS info (device vda6): has skinny extents Feb 9 00:42:57.896579 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 00:42:57.897902 systemd[1]: Starting ignition-files.service... 
Feb 9 00:42:57.909958 ignition[826]: INFO : Ignition 2.14.0 Feb 9 00:42:57.909958 ignition[826]: INFO : Stage: files Feb 9 00:42:57.911415 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 00:42:57.911415 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 00:42:57.911415 ignition[826]: DEBUG : files: compiled without relabeling support, skipping Feb 9 00:42:57.914522 ignition[826]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 00:42:57.914522 ignition[826]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 00:42:57.914522 ignition[826]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 00:42:57.914522 ignition[826]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 00:42:57.914522 ignition[826]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 00:42:57.914522 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 00:42:57.914522 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 00:42:57.913778 unknown[826]: wrote ssh authorized keys file for user: core Feb 9 00:42:57.958209 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 00:42:58.012336 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 00:42:58.013849 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 9 00:42:58.013849 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 9 00:42:58.351414 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 00:42:58.460929 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 9 00:42:58.462993 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 9 00:42:58.462993 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 9 00:42:58.462993 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 9 00:42:58.682257 systemd-networkd[702]: eth0: Gained IPv6LL Feb 9 00:42:58.764310 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 00:42:59.021731 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 9 00:42:59.021731 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 9 00:42:59.025523 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 00:42:59.025523 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 00:42:59.025523 ignition[826]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 00:42:59.025523 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl: attempt #1 Feb 9 00:42:59.098382 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 00:42:59.345321 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83 Feb 9 00:42:59.345321 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 00:42:59.348687 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 00:42:59.348687 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Feb 9 00:42:59.399305 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 00:43:00.011254 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Feb 9 00:43:00.013591 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 00:43:00.013591 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 00:43:00.013591 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Feb 9 00:43:00.063171 ignition[826]: INFO : files: createFilesystemsFiles: 
createFiles: op(9): GET result: OK Feb 9 00:43:00.338122 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Feb 9 00:43:00.340450 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 00:43:00.340450 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 00:43:00.340450 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 9 00:43:00.639112 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 9 00:43:00.728170 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 00:43:00.728170 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 9 00:43:00.730590 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 00:43:00.730590 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 00:43:00.733016 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 00:43:00.733016 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 00:43:00.733016 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 00:43:00.733016 ignition[826]: INFO 
: files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 00:43:00.733016 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 00:43:00.733016 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 00:43:00.733016 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 00:43:00.733016 ignition[826]: INFO : files: op(10): [started] processing unit "prepare-cni-plugins.service" Feb 9 00:43:00.733016 ignition[826]: INFO : files: op(10): op(11): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 00:43:00.733016 ignition[826]: INFO : files: op(10): op(11): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 00:43:00.733016 ignition[826]: INFO : files: op(10): [finished] processing unit "prepare-cni-plugins.service" Feb 9 00:43:00.733016 ignition[826]: INFO : files: op(12): [started] processing unit "prepare-critools.service" Feb 9 00:43:00.733016 ignition[826]: INFO : files: op(12): op(13): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 00:43:00.733016 ignition[826]: INFO : files: op(12): op(13): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 00:43:00.733016 ignition[826]: INFO : files: op(12): [finished] processing unit "prepare-critools.service" Feb 9 00:43:00.733016 ignition[826]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Feb 9 00:43:00.733016 ignition[826]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 00:43:00.756728 ignition[826]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 00:43:00.756728 ignition[826]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Feb 9 00:43:00.756728 ignition[826]: INFO : files: op(16): [started] processing unit "coreos-metadata.service" Feb 9 00:43:00.756728 ignition[826]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 00:43:00.756728 ignition[826]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 00:43:00.756728 ignition[826]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service" Feb 9 00:43:00.756728 ignition[826]: INFO : files: op(18): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 00:43:00.756728 ignition[826]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 00:43:00.756728 ignition[826]: INFO : files: op(19): [started] setting preset to enabled for "prepare-critools.service" Feb 9 00:43:00.756728 ignition[826]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 00:43:00.756728 ignition[826]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Feb 9 00:43:00.756728 ignition[826]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 00:43:00.756728 ignition[826]: INFO : files: op(1b): [started] setting preset to disabled for "coreos-metadata.service" Feb 9 00:43:00.756728 ignition[826]: INFO : files: op(1b): op(1c): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 00:43:00.820408 ignition[826]: INFO : files: op(1b): op(1c): [finished] 
removing enablement symlink(s) for "coreos-metadata.service" Feb 9 00:43:00.821739 ignition[826]: INFO : files: op(1b): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 00:43:00.821739 ignition[826]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 00:43:00.821739 ignition[826]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 00:43:00.821739 ignition[826]: INFO : files: files passed Feb 9 00:43:00.821739 ignition[826]: INFO : Ignition finished successfully Feb 9 00:43:00.834438 kernel: kauditd_printk_skb: 21 callbacks suppressed Feb 9 00:43:00.834462 kernel: audit: type=1130 audit(1707439380.823:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.834473 kernel: audit: type=1130 audit(1707439380.830:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.834484 kernel: audit: type=1130 audit(1707439380.834:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:43:00.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.821824 systemd[1]: Finished ignition-files.service. Feb 9 00:43:00.839560 kernel: audit: type=1131 audit(1707439380.834:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.824401 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 00:43:00.828226 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 00:43:00.842243 initrd-setup-root-after-ignition[849]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 00:43:00.828701 systemd[1]: Starting ignition-quench.service... Feb 9 00:43:00.844036 initrd-setup-root-after-ignition[852]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 00:43:00.830012 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 00:43:00.831283 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 00:43:00.831338 systemd[1]: Finished ignition-quench.service. Feb 9 00:43:00.834509 systemd[1]: Reached target ignition-complete.target. Feb 9 00:43:00.840020 systemd[1]: Starting initrd-parse-etc.service... Feb 9 00:43:00.851784 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 00:43:00.851862 systemd[1]: Finished initrd-parse-etc.service. 
Feb 9 00:43:00.858001 kernel: audit: type=1130 audit(1707439380.852:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.858018 kernel: audit: type=1131 audit(1707439380.852:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.853107 systemd[1]: Reached target initrd-fs.target. Feb 9 00:43:00.858015 systemd[1]: Reached target initrd.target. Feb 9 00:43:00.858567 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 00:43:00.859163 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 00:43:00.868356 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 00:43:00.872714 kernel: audit: type=1130 audit(1707439380.869:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.869723 systemd[1]: Starting initrd-cleanup.service... Feb 9 00:43:00.876964 systemd[1]: Stopped target network.target. Feb 9 00:43:00.877618 systemd[1]: Stopped target nss-lookup.target. 
Feb 9 00:43:00.878772 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 00:43:00.880190 systemd[1]: Stopped target timers.target. Feb 9 00:43:00.881627 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 00:43:00.886193 kernel: audit: type=1131 audit(1707439380.882:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.881715 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 00:43:00.882794 systemd[1]: Stopped target initrd.target. Feb 9 00:43:00.886284 systemd[1]: Stopped target basic.target. Feb 9 00:43:00.887377 systemd[1]: Stopped target ignition-complete.target. Feb 9 00:43:00.888498 systemd[1]: Stopped target ignition-diskful.target. Feb 9 00:43:00.889613 systemd[1]: Stopped target initrd-root-device.target. Feb 9 00:43:00.890910 systemd[1]: Stopped target remote-fs.target. Feb 9 00:43:00.892076 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 00:43:00.893271 systemd[1]: Stopped target sysinit.target. Feb 9 00:43:00.894489 systemd[1]: Stopped target local-fs.target. Feb 9 00:43:00.895598 systemd[1]: Stopped target local-fs-pre.target. Feb 9 00:43:00.896808 systemd[1]: Stopped target swap.target. Feb 9 00:43:00.902384 kernel: audit: type=1131 audit(1707439380.899:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:43:00.897995 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 00:43:00.898079 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 00:43:00.907190 kernel: audit: type=1131 audit(1707439380.903:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.899333 systemd[1]: Stopped target cryptsetup.target. Feb 9 00:43:00.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.902419 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 00:43:00.902500 systemd[1]: Stopped dracut-initqueue.service. Feb 9 00:43:00.903770 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 00:43:00.903848 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 00:43:00.907288 systemd[1]: Stopped target paths.target. Feb 9 00:43:00.908373 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 00:43:00.914186 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 00:43:00.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.914400 systemd[1]: Stopped target slices.target. 
Feb 9 00:43:00.914695 systemd[1]: Stopped target sockets.target. Feb 9 00:43:00.914807 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 00:43:00.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.914873 systemd[1]: Closed iscsid.socket. Feb 9 00:43:00.915047 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 00:43:00.915104 systemd[1]: Closed iscsiuio.socket. Feb 9 00:43:00.915449 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 00:43:00.915536 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 00:43:00.915786 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 00:43:00.915864 systemd[1]: Stopped ignition-files.service. Feb 9 00:43:00.932255 ignition[866]: INFO : Ignition 2.14.0 Feb 9 00:43:00.932255 ignition[866]: INFO : Stage: umount Feb 9 00:43:00.932255 ignition[866]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 00:43:00.932255 ignition[866]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 00:43:00.932255 ignition[866]: INFO : umount: umount passed Feb 9 00:43:00.932255 ignition[866]: INFO : Ignition finished successfully Feb 9 00:43:00.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:43:00.917009 systemd[1]: Stopping ignition-mount.service... Feb 9 00:43:00.918000 systemd[1]: Stopping sysroot-boot.service... Feb 9 00:43:00.919016 systemd[1]: Stopping systemd-networkd.service... Feb 9 00:43:00.920351 systemd[1]: Stopping systemd-resolved.service... Feb 9 00:43:00.921297 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 00:43:00.921415 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 00:43:00.922823 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 00:43:00.922924 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 00:43:00.924292 systemd-networkd[702]: eth0: DHCPv6 lease lost Feb 9 00:43:00.930694 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 00:43:00.930780 systemd[1]: Stopped systemd-resolved.service. Feb 9 00:43:00.934233 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 00:43:00.934750 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 00:43:00.934866 systemd[1]: Stopped systemd-networkd.service. Feb 9 00:43:00.938406 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 00:43:00.939542 systemd[1]: Stopped ignition-mount.service. Feb 9 00:43:00.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.946688 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 00:43:00.947385 systemd[1]: Stopped sysroot-boot.service. Feb 9 00:43:00.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.948000 audit: BPF prog-id=9 op=UNLOAD Feb 9 00:43:00.948000 audit: BPF prog-id=6 op=UNLOAD Feb 9 00:43:00.948867 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Feb 9 00:43:00.949707 systemd[1]: Closed systemd-networkd.socket. Feb 9 00:43:00.951058 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 00:43:00.951106 systemd[1]: Stopped ignition-disks.service. Feb 9 00:43:00.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.953239 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 00:43:00.953271 systemd[1]: Stopped ignition-kargs.service. Feb 9 00:43:00.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.954992 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 00:43:00.955024 systemd[1]: Stopped ignition-setup.service. Feb 9 00:43:00.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.957145 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 00:43:00.957188 systemd[1]: Stopped initrd-setup-root.service. Feb 9 00:43:00.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.960207 systemd[1]: Stopping network-cleanup.service... Feb 9 00:43:00.961504 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 00:43:00.961557 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 00:43:00.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:43:00.964030 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 00:43:00.964077 systemd[1]: Stopped systemd-sysctl.service. Feb 9 00:43:00.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.966474 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 00:43:00.967362 systemd[1]: Stopped systemd-modules-load.service. Feb 9 00:43:00.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.968940 systemd[1]: Stopping systemd-udevd.service... Feb 9 00:43:00.971091 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 00:43:00.971589 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 00:43:00.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.971654 systemd[1]: Finished initrd-cleanup.service. Feb 9 00:43:00.976046 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 00:43:00.976187 systemd[1]: Stopped network-cleanup.service. Feb 9 00:43:00.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.977764 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Feb 9 00:43:00.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.977875 systemd[1]: Stopped systemd-udevd.service. Feb 9 00:43:00.980190 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 00:43:00.980231 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 00:43:00.981278 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 00:43:00.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.981304 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 00:43:00.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.982624 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 00:43:00.982662 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 00:43:00.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.983948 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 00:43:00.983980 systemd[1]: Stopped dracut-cmdline.service. Feb 9 00:43:00.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:43:00.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.985440 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 00:43:00.985472 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 00:43:00.987032 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 00:43:00.988188 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 00:43:00.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:00.988237 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 00:43:00.989666 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 00:43:00.989700 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 00:43:00.991425 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 00:43:00.991503 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 00:43:00.993620 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 00:43:00.994157 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 00:43:00.994270 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 00:43:00.995703 systemd[1]: Reached target initrd-switch-root.target. Feb 9 00:43:00.997981 systemd[1]: Starting initrd-switch-root.service... Feb 9 00:43:01.009417 systemd[1]: Switching root. Feb 9 00:43:01.029170 iscsid[707]: iscsid shutting down. 
Feb 9 00:43:01.029762 systemd-journald[197]: Journal stopped Feb 9 00:43:05.445496 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Feb 9 00:43:05.445551 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 00:43:05.445566 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 00:43:05.445580 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 00:43:05.445589 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 00:43:05.445598 kernel: SELinux: policy capability open_perms=1 Feb 9 00:43:05.445610 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 00:43:05.445619 kernel: SELinux: policy capability always_check_network=0 Feb 9 00:43:05.445629 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 00:43:05.445638 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 00:43:05.445647 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 00:43:05.445658 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 00:43:05.445669 systemd[1]: Successfully loaded SELinux policy in 36.361ms. Feb 9 00:43:05.445683 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.488ms. Feb 9 00:43:05.445694 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 00:43:05.445704 systemd[1]: Detected virtualization kvm. Feb 9 00:43:05.445714 systemd[1]: Detected architecture x86-64. Feb 9 00:43:05.445724 systemd[1]: Detected first boot. Feb 9 00:43:05.445733 systemd[1]: Initializing machine ID from VM UUID. Feb 9 00:43:05.445744 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Feb 9 00:43:05.445754 systemd[1]: Populated /etc with preset unit settings. Feb 9 00:43:05.445764 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 00:43:05.445778 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 00:43:05.445789 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 00:43:05.445800 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 00:43:05.445809 systemd[1]: Stopped iscsiuio.service. Feb 9 00:43:05.445819 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 00:43:05.445831 systemd[1]: Stopped iscsid.service. Feb 9 00:43:05.445841 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 00:43:05.445851 systemd[1]: Stopped initrd-switch-root.service. Feb 9 00:43:05.445862 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 00:43:05.445872 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 00:43:05.445882 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 00:43:05.445892 systemd[1]: Created slice system-getty.slice. Feb 9 00:43:05.445903 systemd[1]: Created slice system-modprobe.slice. Feb 9 00:43:05.445913 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 00:43:05.445924 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 00:43:05.445933 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 00:43:05.445943 systemd[1]: Created slice user.slice. Feb 9 00:43:05.445955 systemd[1]: Started systemd-ask-password-console.path. Feb 9 00:43:05.445964 systemd[1]: Started systemd-ask-password-wall.path. 
Feb 9 00:43:05.445975 systemd[1]: Set up automount boot.automount. Feb 9 00:43:05.445985 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 00:43:05.445996 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 00:43:05.446006 systemd[1]: Stopped target initrd-fs.target. Feb 9 00:43:05.446016 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 00:43:05.446026 systemd[1]: Reached target integritysetup.target. Feb 9 00:43:05.446035 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 00:43:05.446045 systemd[1]: Reached target remote-fs.target. Feb 9 00:43:05.446055 systemd[1]: Reached target slices.target. Feb 9 00:43:05.446065 systemd[1]: Reached target swap.target. Feb 9 00:43:05.446074 systemd[1]: Reached target torcx.target. Feb 9 00:43:05.446084 systemd[1]: Reached target veritysetup.target. Feb 9 00:43:05.446095 systemd[1]: Listening on systemd-coredump.socket. Feb 9 00:43:05.446114 systemd[1]: Listening on systemd-initctl.socket. Feb 9 00:43:05.446124 systemd[1]: Listening on systemd-networkd.socket. Feb 9 00:43:05.446146 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 00:43:05.446157 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 00:43:05.446167 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 00:43:05.446177 systemd[1]: Mounting dev-hugepages.mount... Feb 9 00:43:05.446187 systemd[1]: Mounting dev-mqueue.mount... Feb 9 00:43:05.446197 systemd[1]: Mounting media.mount... Feb 9 00:43:05.446211 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 00:43:05.446222 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 00:43:05.446232 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 00:43:05.446242 systemd[1]: Mounting tmp.mount... Feb 9 00:43:05.446252 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 00:43:05.446262 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Feb 9 00:43:05.446272 systemd[1]: Starting kmod-static-nodes.service... Feb 9 00:43:05.446282 systemd[1]: Starting modprobe@configfs.service... Feb 9 00:43:05.446292 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 00:43:05.446303 systemd[1]: Starting modprobe@drm.service... Feb 9 00:43:05.446313 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 00:43:05.446323 systemd[1]: Starting modprobe@fuse.service... Feb 9 00:43:05.446333 systemd[1]: Starting modprobe@loop.service... Feb 9 00:43:05.446344 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 00:43:05.446354 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 00:43:05.446400 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 00:43:05.446411 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 00:43:05.446427 kernel: loop: module loaded Feb 9 00:43:05.446439 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 00:43:05.446448 kernel: fuse: init (API version 7.34) Feb 9 00:43:05.446475 systemd[1]: Stopped systemd-journald.service. Feb 9 00:43:05.446490 systemd[1]: Starting systemd-journald.service... Feb 9 00:43:05.446503 systemd[1]: Starting systemd-modules-load.service... Feb 9 00:43:05.446515 systemd[1]: Starting systemd-network-generator.service... Feb 9 00:43:05.446525 systemd[1]: Starting systemd-remount-fs.service... Feb 9 00:43:05.446535 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 00:43:05.446565 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 00:43:05.446575 systemd[1]: Stopped verity-setup.service. Feb 9 00:43:05.446585 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 00:43:05.446594 systemd[1]: Mounted dev-hugepages.mount. Feb 9 00:43:05.446605 systemd[1]: Mounted dev-mqueue.mount. Feb 9 00:43:05.446614 systemd[1]: Mounted media.mount. 
Feb 9 00:43:05.446626 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 00:43:05.446636 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 00:43:05.446648 systemd-journald[969]: Journal started Feb 9 00:43:05.446690 systemd-journald[969]: Runtime Journal (/run/log/journal/054666b1e06349f2a08405ccc8691856) is 6.0M, max 48.4M, 42.4M free. Feb 9 00:43:01.089000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 00:43:02.285000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 00:43:02.285000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 00:43:02.285000 audit: BPF prog-id=10 op=LOAD Feb 9 00:43:02.285000 audit: BPF prog-id=10 op=UNLOAD Feb 9 00:43:02.285000 audit: BPF prog-id=11 op=LOAD Feb 9 00:43:02.285000 audit: BPF prog-id=11 op=UNLOAD Feb 9 00:43:02.319000 audit[900]: AVC avc: denied { associate } for pid=900 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 00:43:02.319000 audit[900]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00018f8e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 00:43:02.319000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 00:43:02.320000 audit[900]: AVC avc: denied { associate } for pid=900 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 00:43:02.320000 audit[900]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00018f9b9 a2=1ed a3=0 items=2 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 00:43:02.320000 audit: CWD cwd="/" Feb 9 00:43:02.320000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:02.320000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:02.320000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 00:43:05.305000 audit: BPF prog-id=12 op=LOAD Feb 9 00:43:05.305000 audit: BPF prog-id=3 op=UNLOAD Feb 9 00:43:05.306000 audit: BPF prog-id=13 op=LOAD Feb 9 00:43:05.306000 audit: BPF prog-id=14 op=LOAD Feb 9 00:43:05.306000 audit: BPF prog-id=4 op=UNLOAD Feb 9 00:43:05.306000 audit: BPF prog-id=5 op=UNLOAD Feb 9 00:43:05.306000 audit: BPF 
prog-id=15 op=LOAD Feb 9 00:43:05.306000 audit: BPF prog-id=12 op=UNLOAD Feb 9 00:43:05.306000 audit: BPF prog-id=16 op=LOAD Feb 9 00:43:05.306000 audit: BPF prog-id=17 op=LOAD Feb 9 00:43:05.306000 audit: BPF prog-id=13 op=UNLOAD Feb 9 00:43:05.306000 audit: BPF prog-id=14 op=UNLOAD Feb 9 00:43:05.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.317000 audit: BPF prog-id=15 op=UNLOAD Feb 9 00:43:05.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:43:05.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.417000 audit: BPF prog-id=18 op=LOAD Feb 9 00:43:05.417000 audit: BPF prog-id=19 op=LOAD Feb 9 00:43:05.418000 audit: BPF prog-id=20 op=LOAD Feb 9 00:43:05.418000 audit: BPF prog-id=16 op=UNLOAD Feb 9 00:43:05.418000 audit: BPF prog-id=17 op=UNLOAD Feb 9 00:43:05.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.444000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 00:43:05.444000 audit[969]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd01ee4e70 a2=4000 a3=7ffd01ee4f0c items=0 ppid=1 pid=969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 00:43:05.444000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 00:43:05.303926 systemd[1]: Queued start job for default target multi-user.target. 
Feb 9 00:43:02.317502 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 00:43:05.303937 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 00:43:02.317723 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 00:43:05.307741 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 00:43:02.317746 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 00:43:02.317782 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:02Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 00:43:02.317795 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:02Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 00:43:02.317833 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:02Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 00:43:02.317849 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:02Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 00:43:02.318099 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:02Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 00:43:02.318155 /usr/lib/systemd/system-generators/torcx-generator[900]: 
time="2024-02-09T00:43:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 00:43:02.318171 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 00:43:02.318488 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 00:43:05.448731 systemd[1]: Started systemd-journald.service. Feb 9 00:43:02.318530 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 00:43:02.318553 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 00:43:02.318572 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 00:43:02.318591 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 00:43:02.318608 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 00:43:04.968254 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:04Z" level=debug msg="image 
unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 00:43:04.968528 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:04Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 00:43:04.968634 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:04Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 00:43:04.968795 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:04Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 00:43:04.968836 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:04Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 00:43:04.968897 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-09T00:43:04Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 00:43:05.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 00:43:05.450079 systemd[1]: Mounted tmp.mount. Feb 9 00:43:05.451150 systemd[1]: Finished kmod-static-nodes.service. Feb 9 00:43:05.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.452375 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 00:43:05.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.453396 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 00:43:05.453543 systemd[1]: Finished modprobe@configfs.service. Feb 9 00:43:05.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.454612 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 00:43:05.454725 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 00:43:05.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:43:05.455735 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 00:43:05.455872 systemd[1]: Finished modprobe@drm.service. Feb 9 00:43:05.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.456854 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 00:43:05.456990 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 00:43:05.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.458031 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 00:43:05.458229 systemd[1]: Finished modprobe@fuse.service. Feb 9 00:43:05.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.459212 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Feb 9 00:43:05.459346 systemd[1]: Finished modprobe@loop.service. Feb 9 00:43:05.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.460397 systemd[1]: Finished systemd-modules-load.service. Feb 9 00:43:05.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.461485 systemd[1]: Finished systemd-network-generator.service. Feb 9 00:43:05.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.462653 systemd[1]: Finished systemd-remount-fs.service. Feb 9 00:43:05.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.463838 systemd[1]: Reached target network-pre.target. Feb 9 00:43:05.465658 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 00:43:05.467570 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 00:43:05.468339 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 00:43:05.469668 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 00:43:05.471418 systemd[1]: Starting systemd-journal-flush.service... 
Feb 9 00:43:05.474971 systemd-journald[969]: Time spent on flushing to /var/log/journal/054666b1e06349f2a08405ccc8691856 is 13.797ms for 1188 entries. Feb 9 00:43:05.474971 systemd-journald[969]: System Journal (/var/log/journal/054666b1e06349f2a08405ccc8691856) is 8.0M, max 195.6M, 187.6M free. Feb 9 00:43:05.669404 systemd-journald[969]: Received client request to flush runtime journal. Feb 9 00:43:05.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.472430 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 00:43:05.473380 systemd[1]: Starting systemd-random-seed.service... Feb 9 00:43:05.474241 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Feb 9 00:43:05.475281 systemd[1]: Starting systemd-sysctl.service... Feb 9 00:43:05.477790 systemd[1]: Starting systemd-sysusers.service... Feb 9 00:43:05.670494 udevadm[1003]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 00:43:05.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:05.481624 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 00:43:05.482604 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 00:43:05.504612 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 00:43:05.506376 systemd[1]: Starting systemd-udev-settle.service... Feb 9 00:43:05.515617 systemd[1]: Finished systemd-sysctl.service. Feb 9 00:43:05.523790 systemd[1]: Finished systemd-sysusers.service. Feb 9 00:43:05.525577 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 00:43:05.569502 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 00:43:05.652958 systemd[1]: Finished systemd-random-seed.service. Feb 9 00:43:05.653748 systemd[1]: Reached target first-boot-complete.target. Feb 9 00:43:05.670536 systemd[1]: Finished systemd-journal-flush.service. Feb 9 00:43:06.013968 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 00:43:06.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:06.015366 kernel: kauditd_printk_skb: 101 callbacks suppressed Feb 9 00:43:06.015442 kernel: audit: type=1130 audit(1707439386.014:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 00:43:06.017000 audit: BPF prog-id=21 op=LOAD Feb 9 00:43:06.018452 kernel: audit: type=1334 audit(1707439386.017:135): prog-id=21 op=LOAD Feb 9 00:43:06.018529 kernel: audit: type=1334 audit(1707439386.018:136): prog-id=22 op=LOAD Feb 9 00:43:06.018000 audit: BPF prog-id=22 op=LOAD Feb 9 00:43:06.018000 audit: BPF prog-id=7 op=UNLOAD Feb 9 00:43:06.019394 systemd[1]: Starting systemd-udevd.service... Feb 9 00:43:06.018000 audit: BPF prog-id=8 op=UNLOAD Feb 9 00:43:06.020144 kernel: audit: type=1334 audit(1707439386.018:137): prog-id=7 op=UNLOAD Feb 9 00:43:06.020176 kernel: audit: type=1334 audit(1707439386.018:138): prog-id=8 op=UNLOAD Feb 9 00:43:06.035694 systemd-udevd[1008]: Using default interface naming scheme 'v252'. Feb 9 00:43:06.048216 systemd[1]: Started systemd-udevd.service. Feb 9 00:43:06.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:06.054303 kernel: audit: type=1130 audit(1707439386.048:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:06.054391 kernel: audit: type=1334 audit(1707439386.051:140): prog-id=23 op=LOAD Feb 9 00:43:06.051000 audit: BPF prog-id=23 op=LOAD Feb 9 00:43:06.053683 systemd[1]: Starting systemd-networkd.service... 
Feb 9 00:43:06.057000 audit: BPF prog-id=24 op=LOAD Feb 9 00:43:06.058000 audit: BPF prog-id=25 op=LOAD Feb 9 00:43:06.059593 kernel: audit: type=1334 audit(1707439386.057:141): prog-id=24 op=LOAD Feb 9 00:43:06.059638 kernel: audit: type=1334 audit(1707439386.058:142): prog-id=25 op=LOAD Feb 9 00:43:06.059655 kernel: audit: type=1334 audit(1707439386.059:143): prog-id=26 op=LOAD Feb 9 00:43:06.059000 audit: BPF prog-id=26 op=LOAD Feb 9 00:43:06.060205 systemd[1]: Starting systemd-userdbd.service... Feb 9 00:43:06.072683 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 9 00:43:06.086420 systemd[1]: Started systemd-userdbd.service. Feb 9 00:43:06.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:06.111404 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 00:43:06.117174 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 00:43:06.125184 kernel: ACPI: button: Power Button [PWRF] Feb 9 00:43:06.129313 systemd-networkd[1017]: lo: Link UP Feb 9 00:43:06.129325 systemd-networkd[1017]: lo: Gained carrier Feb 9 00:43:06.129683 systemd-networkd[1017]: Enumeration completed Feb 9 00:43:06.129785 systemd-networkd[1017]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 00:43:06.129789 systemd[1]: Started systemd-networkd.service. Feb 9 00:43:06.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 00:43:06.130811 systemd-networkd[1017]: eth0: Link UP Feb 9 00:43:06.130815 systemd-networkd[1017]: eth0: Gained carrier Feb 9 00:43:06.136000 audit[1012]: AVC avc: denied { confidentiality } for pid=1012 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 00:43:06.140242 systemd-networkd[1017]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 00:43:06.136000 audit[1012]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f07caa4a20 a1=32194 a2=7f18ef0febc5 a3=5 items=108 ppid=1008 pid=1012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 00:43:06.136000 audit: CWD cwd="/" Feb 9 00:43:06.136000 audit: PATH item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=1 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=2 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=3 name=(null) inode=15403 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=4 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: 
PATH item=5 name=(null) inode=15404 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=6 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=7 name=(null) inode=15405 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=8 name=(null) inode=15405 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=9 name=(null) inode=15406 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=10 name=(null) inode=15405 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=11 name=(null) inode=15407 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=12 name=(null) inode=15405 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=13 name=(null) inode=15408 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=14 name=(null) inode=15405 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=15 name=(null) inode=15409 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=16 name=(null) inode=15405 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=17 name=(null) inode=15410 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=18 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=19 name=(null) inode=15411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=20 name=(null) inode=15411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=21 name=(null) inode=15412 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=22 name=(null) inode=15411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=23 name=(null) inode=15413 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=24 name=(null) inode=15411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=25 name=(null) inode=15414 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=26 name=(null) inode=15411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=27 name=(null) inode=15415 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=28 name=(null) inode=15411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=29 name=(null) inode=15416 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=30 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=31 name=(null) inode=15417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=32 name=(null) inode=15417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=33 name=(null) inode=15418 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=34 name=(null) inode=15417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=35 name=(null) inode=15419 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=36 name=(null) inode=15417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=37 name=(null) inode=15420 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=38 name=(null) inode=15417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=39 name=(null) inode=15421 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=40 name=(null) inode=15417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=41 name=(null) inode=15422 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=42 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=43 name=(null) inode=15423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=44 name=(null) inode=15423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=45 name=(null) inode=15424 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=46 name=(null) inode=15423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=47 name=(null) inode=15425 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=48 name=(null) inode=15423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=49 name=(null) inode=15426 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=50 name=(null) inode=15423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH 
item=51 name=(null) inode=15427 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=52 name=(null) inode=15423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=53 name=(null) inode=15428 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=54 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=55 name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=56 name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=57 name=(null) inode=15430 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=58 name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=59 name=(null) inode=15431 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=60 name=(null) inode=15429 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=61 name=(null) inode=15432 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=62 name=(null) inode=15432 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=63 name=(null) inode=15433 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=64 name=(null) inode=15432 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=65 name=(null) inode=15434 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=66 name=(null) inode=15432 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=67 name=(null) inode=15435 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=68 name=(null) inode=15432 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=69 name=(null) inode=15436 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=70 name=(null) inode=15432 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=71 name=(null) inode=15437 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=72 name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=73 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=74 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=75 name=(null) inode=15439 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=76 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=77 name=(null) inode=15440 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=78 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=79 name=(null) inode=15441 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=80 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=81 name=(null) inode=15442 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=82 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=83 name=(null) inode=15443 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=84 name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=85 name=(null) inode=15444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=86 name=(null) inode=15444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=87 name=(null) inode=15445 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 9 00:43:06.136000 audit: PATH item=88 name=(null) inode=15444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=89 name=(null) inode=15446 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=90 name=(null) inode=15444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=91 name=(null) inode=15447 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=92 name=(null) inode=15444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=93 name=(null) inode=15448 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=94 name=(null) inode=15444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=95 name=(null) inode=15449 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=96 name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=97 
name=(null) inode=15450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=98 name=(null) inode=15450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=99 name=(null) inode=15451 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=100 name=(null) inode=15450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=101 name=(null) inode=15452 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=102 name=(null) inode=15450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=103 name=(null) inode=15453 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=104 name=(null) inode=15450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=105 name=(null) inode=15454 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=106 name=(null) inode=15450 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PATH item=107 name=(null) inode=15455 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 00:43:06.136000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 00:43:06.164147 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Feb 9 00:43:06.169151 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 9 00:43:06.175145 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 00:43:06.214777 kernel: kvm: Nested Virtualization enabled Feb 9 00:43:06.214867 kernel: SVM: kvm: Nested Paging enabled Feb 9 00:43:06.215072 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 9 00:43:06.215109 kernel: SVM: Virtual GIF supported Feb 9 00:43:06.231144 kernel: EDAC MC: Ver: 3.0.0 Feb 9 00:43:06.252719 systemd[1]: Finished systemd-udev-settle.service. Feb 9 00:43:06.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:06.254939 systemd[1]: Starting lvm2-activation-early.service... Feb 9 00:43:06.261891 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 00:43:06.291299 systemd[1]: Finished lvm2-activation-early.service. Feb 9 00:43:06.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:06.292155 systemd[1]: Reached target cryptsetup.target. Feb 9 00:43:06.293844 systemd[1]: Starting lvm2-activation.service... 
Feb 9 00:43:06.298389 lvm[1045]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 00:43:06.327294 systemd[1]: Finished lvm2-activation.service. Feb 9 00:43:06.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:06.328124 systemd[1]: Reached target local-fs-pre.target. Feb 9 00:43:06.328805 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 00:43:06.328832 systemd[1]: Reached target local-fs.target. Feb 9 00:43:06.329480 systemd[1]: Reached target machines.target. Feb 9 00:43:06.331222 systemd[1]: Starting ldconfig.service... Feb 9 00:43:06.331989 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 00:43:06.332039 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 00:43:06.332950 systemd[1]: Starting systemd-boot-update.service... Feb 9 00:43:06.334634 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 00:43:06.337438 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 00:43:06.339387 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 00:43:06.339452 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 00:43:06.342326 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 00:43:06.342925 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1047 (bootctl) Feb 9 00:43:06.344645 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
Feb 9 00:43:06.347242 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 00:43:06.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:06.358712 systemd-tmpfiles[1050]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 00:43:06.359540 systemd-tmpfiles[1050]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 00:43:06.360928 systemd-tmpfiles[1050]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 00:43:06.851253 systemd-fsck[1055]: fsck.fat 4.2 (2021-01-31) Feb 9 00:43:06.851253 systemd-fsck[1055]: /dev/vda1: 790 files, 115355/258078 clusters Feb 9 00:43:06.852556 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 00:43:06.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:06.854912 systemd[1]: Mounting boot.mount... Feb 9 00:43:06.936762 ldconfig[1046]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 00:43:06.971536 systemd[1]: Mounted boot.mount. Feb 9 00:43:06.982851 systemd[1]: Finished systemd-boot-update.service. Feb 9 00:43:06.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:07.025070 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Feb 9 00:43:07.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:07.027002 systemd[1]: Starting audit-rules.service... Feb 9 00:43:07.028448 systemd[1]: Starting clean-ca-certificates.service... Feb 9 00:43:07.030085 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 00:43:07.032111 systemd[1]: Starting systemd-resolved.service... Feb 9 00:43:07.031000 audit: BPF prog-id=27 op=LOAD Feb 9 00:43:07.033000 audit: BPF prog-id=28 op=LOAD Feb 9 00:43:07.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:07.034439 systemd[1]: Starting systemd-timesyncd.service... Feb 9 00:43:07.035844 systemd[1]: Starting systemd-update-utmp.service... Feb 9 00:43:07.036878 systemd[1]: Finished clean-ca-certificates.service. Feb 9 00:43:07.037950 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 00:43:07.041000 audit[1070]: SYSTEM_BOOT pid=1070 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 00:43:07.043896 systemd[1]: Finished systemd-update-utmp.service. Feb 9 00:43:07.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 00:43:07.064687 systemd[1]: Finished systemd-journal-catalog-update.service. 
Feb 9 00:43:07.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 00:43:07.514275 systemd-networkd[1017]: eth0: Gained IPv6LL
Feb 9 00:43:07.593000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 00:43:07.593000 audit[1079]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd5b2f9040 a2=420 a3=0 items=0 ppid=1059 pid=1079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 00:43:07.593000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 00:43:07.594162 augenrules[1079]: No rules
Feb 9 00:43:07.594919 systemd[1]: Finished audit-rules.service.
Feb 9 00:43:07.600850 systemd[1]: Started systemd-timesyncd.service.
Feb 9 00:43:07.601549 systemd-timesyncd[1069]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 9 00:43:07.601586 systemd-timesyncd[1069]: Initial clock synchronization to Fri 2024-02-09 00:43:07.637711 UTC.
Feb 9 00:43:07.602010 systemd[1]: Reached target time-set.target.
Feb 9 00:43:07.603220 systemd[1]: Finished ldconfig.service.
Feb 9 00:43:07.604643 systemd-resolved[1063]: Positive Trust Anchors:
Feb 9 00:43:07.604654 systemd-resolved[1063]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 00:43:07.604680 systemd-resolved[1063]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 00:43:07.605097 systemd[1]: Starting systemd-update-done.service...
Feb 9 00:43:07.612631 systemd-resolved[1063]: Defaulting to hostname 'linux'.
Feb 9 00:43:07.613199 systemd[1]: Finished systemd-update-done.service.
Feb 9 00:43:07.614035 systemd[1]: Started systemd-resolved.service.
Feb 9 00:43:07.614758 systemd[1]: Reached target network.target.
Feb 9 00:43:07.615388 systemd[1]: Reached target nss-lookup.target.
Feb 9 00:43:07.616037 systemd[1]: Reached target sysinit.target.
Feb 9 00:43:07.616900 systemd[1]: Started motdgen.path.
Feb 9 00:43:07.617489 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 00:43:07.618419 systemd[1]: Started logrotate.timer.
Feb 9 00:43:07.619087 systemd[1]: Started mdadm.timer.
Feb 9 00:43:07.619666 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 00:43:07.620600 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 00:43:07.620628 systemd[1]: Reached target paths.target.
Feb 9 00:43:07.621332 systemd[1]: Reached target timers.target.
Feb 9 00:43:07.622285 systemd[1]: Listening on dbus.socket.
Feb 9 00:43:07.623913 systemd[1]: Starting docker.socket...
Feb 9 00:43:07.627040 systemd[1]: Listening on sshd.socket.
Feb 9 00:43:07.628104 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 00:43:07.628512 systemd[1]: Listening on docker.socket.
Feb 9 00:43:07.629662 systemd[1]: Reached target sockets.target.
Feb 9 00:43:07.630756 systemd[1]: Reached target basic.target.
Feb 9 00:43:07.631907 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 00:43:07.631932 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 00:43:07.633060 systemd[1]: Starting containerd.service...
Feb 9 00:43:07.634858 systemd[1]: Starting dbus.service...
Feb 9 00:43:07.636607 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 00:43:07.638546 systemd[1]: Starting extend-filesystems.service...
Feb 9 00:43:07.640080 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 00:43:07.641362 jq[1092]: false
Feb 9 00:43:07.641652 systemd[1]: Starting motdgen.service...
Feb 9 00:43:07.643239 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 00:43:07.644980 systemd[1]: Starting prepare-critools.service...
Feb 9 00:43:07.648594 systemd[1]: Starting prepare-helm.service...
Feb 9 00:43:07.650640 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 00:43:07.652480 systemd[1]: Starting sshd-keygen.service...
Feb 9 00:43:07.652824 extend-filesystems[1093]: Found sr0
Feb 9 00:43:07.656495 extend-filesystems[1093]: Found vda
Feb 9 00:43:07.656495 extend-filesystems[1093]: Found vda1
Feb 9 00:43:07.656495 extend-filesystems[1093]: Found vda2
Feb 9 00:43:07.656495 extend-filesystems[1093]: Found vda3
Feb 9 00:43:07.656495 extend-filesystems[1093]: Found usr
Feb 9 00:43:07.656495 extend-filesystems[1093]: Found vda4
Feb 9 00:43:07.656495 extend-filesystems[1093]: Found vda6
Feb 9 00:43:07.656495 extend-filesystems[1093]: Found vda7
Feb 9 00:43:07.656495 extend-filesystems[1093]: Found vda9
Feb 9 00:43:07.656495 extend-filesystems[1093]: Checking size of /dev/vda9
Feb 9 00:43:07.744056 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 9 00:43:07.744118 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 9 00:43:07.654256 dbus-daemon[1091]: [system] SELinux support is enabled
Feb 9 00:43:07.657531 systemd[1]: Starting systemd-logind.service...
Feb 9 00:43:07.771374 extend-filesystems[1093]: Resized partition /dev/vda9
Feb 9 00:43:07.658302 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 00:43:07.772275 extend-filesystems[1116]: resize2fs 1.46.5 (30-Dec-2021)
Feb 9 00:43:07.772275 extend-filesystems[1116]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 9 00:43:07.772275 extend-filesystems[1116]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 9 00:43:07.772275 extend-filesystems[1116]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 9 00:43:07.658352 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 00:43:07.775948 extend-filesystems[1093]: Resized filesystem in /dev/vda9
Feb 9 00:43:07.658802 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 00:43:07.776884 jq[1112]: true
Feb 9 00:43:07.659449 systemd[1]: Starting update-engine.service...
Feb 9 00:43:07.661514 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 00:43:07.777208 tar[1120]: ./
Feb 9 00:43:07.777208 tar[1120]: ./loopback
Feb 9 00:43:07.663523 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 00:43:07.777507 tar[1121]: crictl
Feb 9 00:43:07.664451 systemd[1]: Started dbus.service.
Feb 9 00:43:07.777735 tar[1122]: linux-amd64/helm
Feb 9 00:43:07.668382 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 00:43:07.777968 jq[1123]: true
Feb 9 00:43:07.672378 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 00:43:07.672517 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 00:43:07.672739 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 00:43:07.778365 bash[1142]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 00:43:07.672855 systemd[1]: Finished motdgen.service.
Feb 9 00:43:07.675537 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 00:43:07.675682 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 00:43:07.685536 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 00:43:07.685553 systemd[1]: Reached target system-config.target.
Feb 9 00:43:07.686559 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 00:43:07.686573 systemd[1]: Reached target user-config.target.
Feb 9 00:43:07.732551 systemd-logind[1106]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 9 00:43:07.732566 systemd-logind[1106]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 9 00:43:07.734962 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 00:43:07.735100 systemd[1]: Finished extend-filesystems.service.
Feb 9 00:43:07.741801 systemd-logind[1106]: New seat seat0.
Feb 9 00:43:07.744310 systemd[1]: Started systemd-logind.service.
Feb 9 00:43:07.770200 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 00:43:07.797388 tar[1120]: ./bandwidth
Feb 9 00:43:07.800410 update_engine[1111]: I0209 00:43:07.800246 1111 main.cc:92] Flatcar Update Engine starting
Feb 9 00:43:07.802578 systemd[1]: Started update-engine.service.
Feb 9 00:43:07.803011 update_engine[1111]: I0209 00:43:07.802995 1111 update_check_scheduler.cc:74] Next update check in 3m29s
Feb 9 00:43:07.805045 systemd[1]: Started locksmithd.service.
Feb 9 00:43:07.833834 env[1124]: time="2024-02-09T00:43:07.833754697Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 00:43:07.875666 tar[1120]: ./ptp
Feb 9 00:43:07.922003 env[1124]: time="2024-02-09T00:43:07.912213086Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 00:43:07.922003 env[1124]: time="2024-02-09T00:43:07.912428130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 00:43:07.939214 tar[1120]: ./vlan
Feb 9 00:43:07.982086 tar[1120]: ./host-device
Feb 9 00:43:08.009217 env[1124]: time="2024-02-09T00:43:08.001669447Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 00:43:08.009217 env[1124]: time="2024-02-09T00:43:08.001726234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 00:43:08.009217 env[1124]: time="2024-02-09T00:43:08.002073725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 00:43:08.009217 env[1124]: time="2024-02-09T00:43:08.002114992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 00:43:08.009217 env[1124]: time="2024-02-09T00:43:08.002155287Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 00:43:08.009217 env[1124]: time="2024-02-09T00:43:08.002167769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 00:43:08.009217 env[1124]: time="2024-02-09T00:43:08.002257459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 00:43:08.009217 env[1124]: time="2024-02-09T00:43:08.002494387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 00:43:08.009217 env[1124]: time="2024-02-09T00:43:08.002632407Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 00:43:08.009217 env[1124]: time="2024-02-09T00:43:08.002657440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 00:43:08.010240 env[1124]: time="2024-02-09T00:43:08.002715541Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 00:43:08.010240 env[1124]: time="2024-02-09T00:43:08.002729026Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 00:43:08.013909 env[1124]: time="2024-02-09T00:43:08.013869296Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 00:43:08.013963 env[1124]: time="2024-02-09T00:43:08.013915456Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 00:43:08.013963 env[1124]: time="2024-02-09T00:43:08.013931774Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 00:43:08.014004 env[1124]: time="2024-02-09T00:43:08.013971548Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 00:43:08.014004 env[1124]: time="2024-02-09T00:43:08.013989573Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 00:43:08.014045 env[1124]: time="2024-02-09T00:43:08.014005227Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 00:43:08.014045 env[1124]: time="2024-02-09T00:43:08.014025120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 00:43:08.014045 env[1124]: time="2024-02-09T00:43:08.014041075Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 00:43:08.014112 env[1124]: time="2024-02-09T00:43:08.014057735Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 00:43:08.014112 env[1124]: time="2024-02-09T00:43:08.014082427Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 00:43:08.014112 env[1124]: time="2024-02-09T00:43:08.014096695Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 00:43:08.014184 env[1124]: time="2024-02-09T00:43:08.014111928Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 00:43:08.014271 env[1124]: time="2024-02-09T00:43:08.014244716Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 00:43:08.014475 env[1124]: time="2024-02-09T00:43:08.014448990Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 00:43:08.014738 env[1124]: time="2024-02-09T00:43:08.014712177Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 00:43:08.014780 env[1124]: time="2024-02-09T00:43:08.014745614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 00:43:08.014780 env[1124]: time="2024-02-09T00:43:08.014762876Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 00:43:08.014823 env[1124]: time="2024-02-09T00:43:08.014812039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 00:43:08.014846 env[1124]: time="2024-02-09T00:43:08.014827704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 00:43:08.014867 env[1124]: time="2024-02-09T00:43:08.014844262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 00:43:08.014867 env[1124]: time="2024-02-09T00:43:08.014859124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 00:43:08.014906 env[1124]: time="2024-02-09T00:43:08.014876465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 00:43:08.014906 env[1124]: time="2024-02-09T00:43:08.014891778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 00:43:08.014944 env[1124]: time="2024-02-09T00:43:08.014906951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 00:43:08.014944 env[1124]: time="2024-02-09T00:43:08.014921902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 00:43:08.014944 env[1124]: time="2024-02-09T00:43:08.014938501Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 00:43:08.015085 env[1124]: time="2024-02-09T00:43:08.015055846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 00:43:08.015126 env[1124]: time="2024-02-09T00:43:08.015086714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 00:43:08.015126 env[1124]: time="2024-02-09T00:43:08.015102458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 00:43:08.015126 env[1124]: time="2024-02-09T00:43:08.015117752Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 00:43:08.015206 env[1124]: time="2024-02-09T00:43:08.015144532Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 00:43:08.015206 env[1124]: time="2024-02-09T00:43:08.015158740Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 00:43:08.015206 env[1124]: time="2024-02-09T00:43:08.015182770Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 00:43:08.015266 env[1124]: time="2024-02-09T00:43:08.015221299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 00:43:08.015515 env[1124]: time="2024-02-09T00:43:08.015449863Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 00:43:08.015515 env[1124]: time="2024-02-09T00:43:08.015520675Z" level=info msg="Connect containerd service"
Feb 9 00:43:08.016459 env[1124]: time="2024-02-09T00:43:08.015561212Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 00:43:08.016459 env[1124]: time="2024-02-09T00:43:08.016115923Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 00:43:08.016767 env[1124]: time="2024-02-09T00:43:08.016739197Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 00:43:08.016809 env[1124]: time="2024-02-09T00:43:08.016786803Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 00:43:08.017031 env[1124]: time="2024-02-09T00:43:08.016847243Z" level=info msg="containerd successfully booted in 0.183963s"
Feb 9 00:43:08.016946 systemd[1]: Started containerd.service.
Feb 9 00:43:08.020210 env[1124]: time="2024-02-09T00:43:08.020170797Z" level=info msg="Start subscribing containerd event"
Feb 9 00:43:08.020261 env[1124]: time="2024-02-09T00:43:08.020219468Z" level=info msg="Start recovering state"
Feb 9 00:43:08.020286 env[1124]: time="2024-02-09T00:43:08.020274937Z" level=info msg="Start event monitor"
Feb 9 00:43:08.020307 env[1124]: time="2024-02-09T00:43:08.020299919Z" level=info msg="Start snapshots syncer"
Feb 9 00:43:08.020327 env[1124]: time="2024-02-09T00:43:08.020311518Z" level=info msg="Start cni network conf syncer for default"
Feb 9 00:43:08.020327 env[1124]: time="2024-02-09T00:43:08.020322081Z" level=info msg="Start streaming server"
Feb 9 00:43:08.067205 tar[1120]: ./tuning
Feb 9 00:43:08.116560 tar[1120]: ./vrf
Feb 9 00:43:08.186264 tar[1120]: ./sbr
Feb 9 00:43:08.232102 tar[1120]: ./tap
Feb 9 00:43:08.288599 tar[1120]: ./dhcp
Feb 9 00:43:08.352311 sshd_keygen[1118]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 00:43:08.407320 systemd[1]: Finished sshd-keygen.service.
Feb 9 00:43:08.410988 systemd[1]: Starting issuegen.service...
Feb 9 00:43:08.416767 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 00:43:08.416909 systemd[1]: Finished issuegen.service.
Feb 9 00:43:08.418775 systemd[1]: Starting systemd-user-sessions.service...
Feb 9 00:43:08.427140 systemd[1]: Finished systemd-user-sessions.service.
Feb 9 00:43:08.429059 systemd[1]: Started getty@tty1.service.
Feb 9 00:43:08.430540 systemd[1]: Started serial-getty@ttyS0.service.
Feb 9 00:43:08.431329 systemd[1]: Reached target getty.target.
Feb 9 00:43:08.470323 tar[1120]: ./static
Feb 9 00:43:08.474747 locksmithd[1152]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 00:43:08.475960 systemd[1]: Finished prepare-critools.service.
Feb 9 00:43:08.482123 tar[1122]: linux-amd64/LICENSE
Feb 9 00:43:08.482219 tar[1122]: linux-amd64/README.md
Feb 9 00:43:08.486542 systemd[1]: Finished prepare-helm.service.
Feb 9 00:43:08.495793 tar[1120]: ./firewall
Feb 9 00:43:08.529076 tar[1120]: ./macvlan
Feb 9 00:43:08.558877 tar[1120]: ./dummy
Feb 9 00:43:08.588928 tar[1120]: ./bridge
Feb 9 00:43:08.622701 tar[1120]: ./ipvlan
Feb 9 00:43:08.653987 tar[1120]: ./portmap
Feb 9 00:43:08.683520 tar[1120]: ./host-local
Feb 9 00:43:08.719395 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 00:43:08.720371 systemd[1]: Reached target multi-user.target.
Feb 9 00:43:08.722153 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 00:43:08.729004 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 00:43:08.729167 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 00:43:08.730005 systemd[1]: Startup finished in 567ms (kernel) + 6.346s (initrd) + 7.678s (userspace) = 14.592s.
Feb 9 00:43:08.934882 systemd[1]: Created slice system-sshd.slice.
Feb 9 00:43:08.935783 systemd[1]: Started sshd@0-10.0.0.31:22-10.0.0.1:60872.service.
Feb 9 00:43:08.976836 sshd[1180]: Accepted publickey for core from 10.0.0.1 port 60872 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:43:08.978204 sshd[1180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:43:08.984952 systemd[1]: Created slice user-500.slice.
Feb 9 00:43:08.985946 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 00:43:08.987348 systemd-logind[1106]: New session 1 of user core.
Feb 9 00:43:08.993057 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 00:43:08.994258 systemd[1]: Starting user@500.service...
Feb 9 00:43:08.996685 (systemd)[1183]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:43:09.062619 systemd[1183]: Queued start job for default target default.target.
Feb 9 00:43:09.063028 systemd[1183]: Reached target paths.target.
Feb 9 00:43:09.063047 systemd[1183]: Reached target sockets.target.
Feb 9 00:43:09.063060 systemd[1183]: Reached target timers.target.
Feb 9 00:43:09.063070 systemd[1183]: Reached target basic.target.
Feb 9 00:43:09.063102 systemd[1183]: Reached target default.target.
Feb 9 00:43:09.063121 systemd[1183]: Startup finished in 60ms.
Feb 9 00:43:09.063200 systemd[1]: Started user@500.service.
Feb 9 00:43:09.064023 systemd[1]: Started session-1.scope.
Feb 9 00:43:09.115822 systemd[1]: Started sshd@1-10.0.0.31:22-10.0.0.1:60880.service.
Feb 9 00:43:09.158053 sshd[1192]: Accepted publickey for core from 10.0.0.1 port 60880 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:43:09.159266 sshd[1192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:43:09.162980 systemd-logind[1106]: New session 2 of user core.
Feb 9 00:43:09.163863 systemd[1]: Started session-2.scope.
Feb 9 00:43:09.220025 sshd[1192]: pam_unix(sshd:session): session closed for user core
Feb 9 00:43:09.223245 systemd[1]: sshd@1-10.0.0.31:22-10.0.0.1:60880.service: Deactivated successfully.
Feb 9 00:43:09.223906 systemd[1]: session-2.scope: Deactivated successfully.
Feb 9 00:43:09.224407 systemd-logind[1106]: Session 2 logged out. Waiting for processes to exit.
Feb 9 00:43:09.225742 systemd[1]: Started sshd@2-10.0.0.31:22-10.0.0.1:60886.service.
Feb 9 00:43:09.226469 systemd-logind[1106]: Removed session 2.
Feb 9 00:43:09.268612 sshd[1198]: Accepted publickey for core from 10.0.0.1 port 60886 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:43:09.269878 sshd[1198]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:43:09.273324 systemd-logind[1106]: New session 3 of user core.
Feb 9 00:43:09.274016 systemd[1]: Started session-3.scope.
Feb 9 00:43:09.323966 sshd[1198]: pam_unix(sshd:session): session closed for user core
Feb 9 00:43:09.327425 systemd[1]: Started sshd@3-10.0.0.31:22-10.0.0.1:60900.service.
Feb 9 00:43:09.327830 systemd[1]: sshd@2-10.0.0.31:22-10.0.0.1:60886.service: Deactivated successfully.
Feb 9 00:43:09.328332 systemd[1]: session-3.scope: Deactivated successfully.
Feb 9 00:43:09.328840 systemd-logind[1106]: Session 3 logged out. Waiting for processes to exit.
Feb 9 00:43:09.329681 systemd-logind[1106]: Removed session 3.
Feb 9 00:43:09.370527 sshd[1203]: Accepted publickey for core from 10.0.0.1 port 60900 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:43:09.371711 sshd[1203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:43:09.374885 systemd-logind[1106]: New session 4 of user core.
Feb 9 00:43:09.375567 systemd[1]: Started session-4.scope.
Feb 9 00:43:09.428622 sshd[1203]: pam_unix(sshd:session): session closed for user core
Feb 9 00:43:09.431068 systemd[1]: sshd@3-10.0.0.31:22-10.0.0.1:60900.service: Deactivated successfully.
Feb 9 00:43:09.431552 systemd[1]: session-4.scope: Deactivated successfully.
Feb 9 00:43:09.432011 systemd-logind[1106]: Session 4 logged out. Waiting for processes to exit.
Feb 9 00:43:09.432851 systemd[1]: Started sshd@4-10.0.0.31:22-10.0.0.1:60910.service.
Feb 9 00:43:09.433559 systemd-logind[1106]: Removed session 4.
Feb 9 00:43:09.474608 sshd[1210]: Accepted publickey for core from 10.0.0.1 port 60910 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I
Feb 9 00:43:09.475723 sshd[1210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 00:43:09.478845 systemd-logind[1106]: New session 5 of user core.
Feb 9 00:43:09.479507 systemd[1]: Started session-5.scope.
Feb 9 00:43:09.533924 sudo[1213]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 00:43:09.534080 sudo[1213]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 00:43:10.073145 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 00:43:10.651726 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 00:43:10.652021 systemd[1]: Reached target network-online.target.
Feb 9 00:43:10.653230 systemd[1]: Starting docker.service...
Feb 9 00:43:10.764287 env[1231]: time="2024-02-09T00:43:10.764230862Z" level=info msg="Starting up"
Feb 9 00:43:10.765303 env[1231]: time="2024-02-09T00:43:10.765276947Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 00:43:10.765303 env[1231]: time="2024-02-09T00:43:10.765292095Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 00:43:10.765380 env[1231]: time="2024-02-09T00:43:10.765314803Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 00:43:10.765380 env[1231]: time="2024-02-09T00:43:10.765337841Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 00:43:10.767086 env[1231]: time="2024-02-09T00:43:10.767050697Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 00:43:10.767086 env[1231]: time="2024-02-09T00:43:10.767077672Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 00:43:10.767184 env[1231]: time="2024-02-09T00:43:10.767099285Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 00:43:10.767184 env[1231]: time="2024-02-09T00:43:10.767108962Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 00:43:10.772066 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2418119887-merged.mount: Deactivated successfully.
Feb 9 00:43:11.546506 env[1231]: time="2024-02-09T00:43:11.546456188Z" level=info msg="Loading containers: start."
Feb 9 00:43:11.865155 kernel: Initializing XFRM netlink socket
Feb 9 00:43:11.893952 env[1231]: time="2024-02-09T00:43:11.893906571Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 9 00:43:11.941982 systemd-networkd[1017]: docker0: Link UP
Feb 9 00:43:11.951758 env[1231]: time="2024-02-09T00:43:11.951704487Z" level=info msg="Loading containers: done."
Feb 9 00:43:11.961689 env[1231]: time="2024-02-09T00:43:11.961643721Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 9 00:43:11.961906 env[1231]: time="2024-02-09T00:43:11.961778545Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 9 00:43:11.961906 env[1231]: time="2024-02-09T00:43:11.961858212Z" level=info msg="Daemon has completed initialization"
Feb 9 00:43:11.978178 systemd[1]: Started docker.service.
Feb 9 00:43:11.981763 env[1231]: time="2024-02-09T00:43:11.981708516Z" level=info msg="API listen on /run/docker.sock"
Feb 9 00:43:11.997520 systemd[1]: Reloading.
Feb 9 00:43:12.059731 /usr/lib/systemd/system-generators/torcx-generator[1373]: time="2024-02-09T00:43:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 00:43:12.059754 /usr/lib/systemd/system-generators/torcx-generator[1373]: time="2024-02-09T00:43:12Z" level=info msg="torcx already run"
Feb 9 00:43:12.118857 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 00:43:12.118872 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 00:43:12.138207 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 00:43:12.204760 systemd[1]: Started kubelet.service.
Feb 9 00:43:12.250257 kubelet[1414]: E0209 00:43:12.250206 1414 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Feb 9 00:43:12.254489 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 00:43:12.254604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 00:43:12.568875 env[1124]: time="2024-02-09T00:43:12.568749073Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\""
Feb 9 00:43:14.530109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount436453028.mount: Deactivated successfully.
Feb 9 00:43:18.320332 env[1124]: time="2024-02-09T00:43:18.320261893Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:18.323976 env[1124]: time="2024-02-09T00:43:18.323922499Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:18.325803 env[1124]: time="2024-02-09T00:43:18.325760521Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:18.327618 env[1124]: time="2024-02-09T00:43:18.327576154Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:18.328445 env[1124]: time="2024-02-09T00:43:18.328388498Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5\"" Feb 9 00:43:18.362728 env[1124]: time="2024-02-09T00:43:18.362687472Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\"" Feb 9 00:43:20.922463 env[1124]: time="2024-02-09T00:43:20.922394812Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:20.924208 env[1124]: time="2024-02-09T00:43:20.924175414Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 00:43:20.926279 env[1124]: time="2024-02-09T00:43:20.926239892Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:20.929725 env[1124]: time="2024-02-09T00:43:20.929685095Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:20.930887 env[1124]: time="2024-02-09T00:43:20.930781141Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0\"" Feb 9 00:43:20.940732 env[1124]: time="2024-02-09T00:43:20.940686699Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\"" Feb 9 00:43:22.291692 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 00:43:22.291905 systemd[1]: Stopped kubelet.service. Feb 9 00:43:22.320672 systemd[1]: Started kubelet.service. Feb 9 00:43:22.390633 kubelet[1454]: E0209 00:43:22.390573 1454 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 00:43:22.394222 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 00:43:22.394338 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 9 00:43:23.710464 env[1124]: time="2024-02-09T00:43:23.710381446Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:23.727826 env[1124]: time="2024-02-09T00:43:23.727747878Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:23.731061 env[1124]: time="2024-02-09T00:43:23.731007897Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:23.776332 env[1124]: time="2024-02-09T00:43:23.776269416Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:23.777375 env[1124]: time="2024-02-09T00:43:23.777281040Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975\"" Feb 9 00:43:23.791570 env[1124]: time="2024-02-09T00:43:23.791519535Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 9 00:43:24.955815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4272479629.mount: Deactivated successfully. 
Feb 9 00:43:26.250059 env[1124]: time="2024-02-09T00:43:26.249977819Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:26.353570 env[1124]: time="2024-02-09T00:43:26.353519908Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:26.363444 env[1124]: time="2024-02-09T00:43:26.363407523Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:26.366781 env[1124]: time="2024-02-09T00:43:26.366741541Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:26.367291 env[1124]: time="2024-02-09T00:43:26.367230614Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 9 00:43:26.377491 env[1124]: time="2024-02-09T00:43:26.377437703Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 00:43:26.916288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3589594762.mount: Deactivated successfully. 
Feb 9 00:43:26.921869 env[1124]: time="2024-02-09T00:43:26.921807692Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:26.923449 env[1124]: time="2024-02-09T00:43:26.923423213Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:26.924856 env[1124]: time="2024-02-09T00:43:26.924824321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:26.926285 env[1124]: time="2024-02-09T00:43:26.926233488Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:26.926639 env[1124]: time="2024-02-09T00:43:26.926607083Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 00:43:26.939934 env[1124]: time="2024-02-09T00:43:26.939885702Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Feb 9 00:43:28.062609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4007454768.mount: Deactivated successfully. Feb 9 00:43:32.541480 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 00:43:32.541686 systemd[1]: Stopped kubelet.service. Feb 9 00:43:32.543060 systemd[1]: Started kubelet.service. 
Feb 9 00:43:32.702341 kubelet[1481]: E0209 00:43:32.598616 1481 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 00:43:32.600512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 00:43:32.600624 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 00:43:32.986333 env[1124]: time="2024-02-09T00:43:32.986198000Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:32.988278 env[1124]: time="2024-02-09T00:43:32.988237723Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:32.989681 env[1124]: time="2024-02-09T00:43:32.989651001Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:32.991037 env[1124]: time="2024-02-09T00:43:32.991002413Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:32.991610 env[1124]: time="2024-02-09T00:43:32.991564627Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\"" Feb 9 00:43:33.001645 env[1124]: time="2024-02-09T00:43:33.001605828Z" level=info msg="PullImage
\"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 00:43:33.645333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3437060077.mount: Deactivated successfully. Feb 9 00:43:35.155312 env[1124]: time="2024-02-09T00:43:35.155221095Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:35.159805 env[1124]: time="2024-02-09T00:43:35.159770413Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:35.161349 env[1124]: time="2024-02-09T00:43:35.161322417Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:35.162781 env[1124]: time="2024-02-09T00:43:35.162748866Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:35.163232 env[1124]: time="2024-02-09T00:43:35.163208579Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 9 00:43:36.967743 systemd[1]: Stopped kubelet.service. Feb 9 00:43:36.980089 systemd[1]: Reloading. 
Feb 9 00:43:37.041281 /usr/lib/systemd/system-generators/torcx-generator[1593]: time="2024-02-09T00:43:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 00:43:37.041311 /usr/lib/systemd/system-generators/torcx-generator[1593]: time="2024-02-09T00:43:37Z" level=info msg="torcx already run" Feb 9 00:43:37.101331 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 00:43:37.101346 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 00:43:37.119839 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 00:43:37.189955 systemd[1]: Started kubelet.service. Feb 9 00:43:37.241803 kubelet[1634]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 00:43:37.241803 kubelet[1634]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 00:43:37.241803 kubelet[1634]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 00:43:37.241803 kubelet[1634]: I0209 00:43:37.241776 1634 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 00:43:37.532363 kubelet[1634]: I0209 00:43:37.532249 1634 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 00:43:37.532363 kubelet[1634]: I0209 00:43:37.532291 1634 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 00:43:37.532547 kubelet[1634]: I0209 00:43:37.532530 1634 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 00:43:37.535730 kubelet[1634]: I0209 00:43:37.535699 1634 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 00:43:37.536582 kubelet[1634]: E0209 00:43:37.536554 1634 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.31:6443: connect: connection refused Feb 9 00:43:37.538922 kubelet[1634]: I0209 00:43:37.538904 1634 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 00:43:37.539086 kubelet[1634]: I0209 00:43:37.539069 1634 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 00:43:37.539160 kubelet[1634]: I0209 00:43:37.539145 1634 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 00:43:37.539263 kubelet[1634]: I0209 00:43:37.539167 1634 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 00:43:37.539263 kubelet[1634]: I0209 00:43:37.539177 1634 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 00:43:37.539263 kubelet[1634]: I0209 00:43:37.539251 1634 state_mem.go:36] "Initialized new in-memory state store" Feb 9 
00:43:37.544190 kubelet[1634]: I0209 00:43:37.544164 1634 kubelet.go:405] "Attempting to sync node with API server" Feb 9 00:43:37.544190 kubelet[1634]: I0209 00:43:37.544190 1634 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 00:43:37.544374 kubelet[1634]: I0209 00:43:37.544213 1634 kubelet.go:309] "Adding apiserver pod source" Feb 9 00:43:37.544374 kubelet[1634]: I0209 00:43:37.544237 1634 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 00:43:37.545459 kubelet[1634]: W0209 00:43:37.545410 1634 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 9 00:43:37.545574 kubelet[1634]: W0209 00:43:37.545507 1634 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 9 00:43:37.545647 kubelet[1634]: E0209 00:43:37.545585 1634 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 9 00:43:37.545647 kubelet[1634]: E0209 00:43:37.545590 1634 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 9 00:43:37.545647 kubelet[1634]: I0209 00:43:37.545457 1634 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 00:43:37.546193 kubelet[1634]: 
W0209 00:43:37.546168 1634 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 00:43:37.546895 kubelet[1634]: I0209 00:43:37.546862 1634 server.go:1168] "Started kubelet" Feb 9 00:43:37.547466 kubelet[1634]: I0209 00:43:37.547439 1634 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 00:43:37.547705 kubelet[1634]: I0209 00:43:37.547676 1634 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 00:43:37.551471 kubelet[1634]: E0209 00:43:37.551447 1634 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 00:43:37.551546 kubelet[1634]: E0209 00:43:37.551478 1634 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 00:43:37.551696 kubelet[1634]: E0209 00:43:37.551545 1634 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b20b1a9660b2be", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 0, 43, 37, 546814142, time.Local), 
LastTimestamp:time.Date(2024, time.February, 9, 0, 43, 37, 546814142, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.31:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.31:6443: connect: connection refused'(may retry after sleeping) Feb 9 00:43:37.553101 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 00:43:37.553178 kubelet[1634]: I0209 00:43:37.552231 1634 server.go:461] "Adding debug handlers to kubelet server" Feb 9 00:43:37.553279 kubelet[1634]: I0209 00:43:37.553236 1634 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 00:43:37.555084 kubelet[1634]: E0209 00:43:37.555053 1634 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 00:43:37.555392 kubelet[1634]: I0209 00:43:37.555365 1634 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 00:43:37.555492 kubelet[1634]: I0209 00:43:37.555477 1634 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 00:43:37.555874 kubelet[1634]: W0209 00:43:37.555832 1634 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 9 00:43:37.555926 kubelet[1634]: E0209 00:43:37.555882 1634 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 9 00:43:37.556344 kubelet[1634]: E0209 00:43:37.556319 1634 controller.go:146] "Failed to ensure lease exists, 
will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="200ms" Feb 9 00:43:37.567448 kubelet[1634]: I0209 00:43:37.567419 1634 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 00:43:37.568228 kubelet[1634]: I0209 00:43:37.568208 1634 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 00:43:37.568294 kubelet[1634]: I0209 00:43:37.568236 1634 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 00:43:37.568294 kubelet[1634]: I0209 00:43:37.568268 1634 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 00:43:37.568357 kubelet[1634]: E0209 00:43:37.568319 1634 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 00:43:37.572958 kubelet[1634]: W0209 00:43:37.572905 1634 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 9 00:43:37.573047 kubelet[1634]: E0209 00:43:37.572966 1634 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 9 00:43:37.576546 kubelet[1634]: I0209 00:43:37.576524 1634 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 00:43:37.576546 kubelet[1634]: I0209 00:43:37.576542 1634 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 00:43:37.576648 kubelet[1634]: I0209 00:43:37.576562 1634 state_mem.go:36] "Initialized new in-memory state store" Feb 9 00:43:37.581732 
kubelet[1634]: I0209 00:43:37.581706 1634 policy_none.go:49] "None policy: Start" Feb 9 00:43:37.582157 kubelet[1634]: I0209 00:43:37.582144 1634 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 00:43:37.582212 kubelet[1634]: I0209 00:43:37.582163 1634 state_mem.go:35] "Initializing new in-memory state store" Feb 9 00:43:37.589313 systemd[1]: Created slice kubepods.slice. Feb 9 00:43:37.592614 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 00:43:37.594700 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 00:43:37.600643 kubelet[1634]: I0209 00:43:37.600617 1634 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 00:43:37.600876 kubelet[1634]: I0209 00:43:37.600853 1634 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 00:43:37.606176 kubelet[1634]: E0209 00:43:37.606147 1634 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 00:43:37.657598 kubelet[1634]: I0209 00:43:37.657571 1634 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 00:43:37.657933 kubelet[1634]: E0209 00:43:37.657910 1634 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Feb 9 00:43:37.669138 kubelet[1634]: I0209 00:43:37.669094 1634 topology_manager.go:212] "Topology Admit Handler" Feb 9 00:43:37.669953 kubelet[1634]: I0209 00:43:37.669931 1634 topology_manager.go:212] "Topology Admit Handler" Feb 9 00:43:37.670842 kubelet[1634]: I0209 00:43:37.670821 1634 topology_manager.go:212] "Topology Admit Handler" Feb 9 00:43:37.675233 systemd[1]: Created slice kubepods-burstable-pod49bf5c686a5c0211d9f14d41ea3c51ac.slice. 
Feb 9 00:43:37.688315 systemd[1]: Created slice kubepods-burstable-pod7709ea05d7cdf82b0d7e594b61a10331.slice. Feb 9 00:43:37.697278 systemd[1]: Created slice kubepods-burstable-pod2b0e94b38682f4e439413801d3cc54db.slice. Feb 9 00:43:37.756864 kubelet[1634]: I0209 00:43:37.756805 1634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49bf5c686a5c0211d9f14d41ea3c51ac-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"49bf5c686a5c0211d9f14d41ea3c51ac\") " pod="kube-system/kube-apiserver-localhost" Feb 9 00:43:37.756864 kubelet[1634]: I0209 00:43:37.756845 1634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:43:37.756864 kubelet[1634]: I0209 00:43:37.756863 1634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:43:37.756864 kubelet[1634]: I0209 00:43:37.756879 1634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49bf5c686a5c0211d9f14d41ea3c51ac-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"49bf5c686a5c0211d9f14d41ea3c51ac\") " pod="kube-system/kube-apiserver-localhost" Feb 9 00:43:37.757105 kubelet[1634]: I0209 00:43:37.756896 1634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/49bf5c686a5c0211d9f14d41ea3c51ac-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"49bf5c686a5c0211d9f14d41ea3c51ac\") " pod="kube-system/kube-apiserver-localhost" Feb 9 00:43:37.757105 kubelet[1634]: I0209 00:43:37.756915 1634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:43:37.757105 kubelet[1634]: I0209 00:43:37.756931 1634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:43:37.757105 kubelet[1634]: I0209 00:43:37.756947 1634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:43:37.757105 kubelet[1634]: I0209 00:43:37.756993 1634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b0e94b38682f4e439413801d3cc54db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2b0e94b38682f4e439413801d3cc54db\") " pod="kube-system/kube-scheduler-localhost" Feb 9 00:43:37.757311 kubelet[1634]: E0209 00:43:37.757177 1634 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="400ms" Feb 9 00:43:37.859565 kubelet[1634]: I0209 00:43:37.859537 1634 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 00:43:37.859873 kubelet[1634]: E0209 00:43:37.859849 1634 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Feb 9 00:43:37.986738 kubelet[1634]: E0209 00:43:37.986697 1634 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:37.987529 env[1124]: time="2024-02-09T00:43:37.987472655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:49bf5c686a5c0211d9f14d41ea3c51ac,Namespace:kube-system,Attempt:0,}" Feb 9 00:43:37.995699 kubelet[1634]: E0209 00:43:37.995675 1634 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:37.996246 env[1124]: time="2024-02-09T00:43:37.996196895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7709ea05d7cdf82b0d7e594b61a10331,Namespace:kube-system,Attempt:0,}" Feb 9 00:43:37.999430 kubelet[1634]: E0209 00:43:37.999378 1634 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:37.999828 env[1124]: time="2024-02-09T00:43:37.999789650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2b0e94b38682f4e439413801d3cc54db,Namespace:kube-system,Attempt:0,}" Feb 9 00:43:38.158373 
kubelet[1634]: E0209 00:43:38.158236 1634 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="800ms" Feb 9 00:43:38.261682 kubelet[1634]: I0209 00:43:38.261640 1634 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 00:43:38.262082 kubelet[1634]: E0209 00:43:38.262059 1634 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Feb 9 00:43:38.563279 kubelet[1634]: W0209 00:43:38.563235 1634 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 9 00:43:38.563279 kubelet[1634]: E0209 00:43:38.563281 1634 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 9 00:43:38.595770 kubelet[1634]: W0209 00:43:38.595709 1634 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 9 00:43:38.595770 kubelet[1634]: E0209 00:43:38.595760 1634 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 9 00:43:38.609476 kubelet[1634]: 
W0209 00:43:38.609444 1634 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 9 00:43:38.609476 kubelet[1634]: E0209 00:43:38.609472 1634 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 9 00:43:38.671375 kubelet[1634]: W0209 00:43:38.671328 1634 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 9 00:43:38.671375 kubelet[1634]: E0209 00:43:38.671380 1634 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 9 00:43:38.959225 kubelet[1634]: E0209 00:43:38.959055 1634 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="1.6s" Feb 9 00:43:39.065407 kubelet[1634]: I0209 00:43:39.065372 1634 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 00:43:39.065843 kubelet[1634]: E0209 00:43:39.065811 1634 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Feb 9 00:43:39.379198 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4169454172.mount: Deactivated successfully. Feb 9 00:43:39.408677 env[1124]: time="2024-02-09T00:43:39.408620202Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:39.422549 env[1124]: time="2024-02-09T00:43:39.422493561Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:39.425575 env[1124]: time="2024-02-09T00:43:39.425534031Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:39.427333 env[1124]: time="2024-02-09T00:43:39.427294650Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:39.432256 env[1124]: time="2024-02-09T00:43:39.432220871Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:39.436740 env[1124]: time="2024-02-09T00:43:39.436706081Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:39.438758 env[1124]: time="2024-02-09T00:43:39.438725895Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:39.441567 env[1124]: time="2024-02-09T00:43:39.441505025Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:39.443371 env[1124]: time="2024-02-09T00:43:39.443347633Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:39.448660 env[1124]: time="2024-02-09T00:43:39.448619949Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:39.450094 env[1124]: time="2024-02-09T00:43:39.450061793Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:39.455541 env[1124]: time="2024-02-09T00:43:39.455518541Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:43:39.516221 env[1124]: time="2024-02-09T00:43:39.515039183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:43:39.516221 env[1124]: time="2024-02-09T00:43:39.515086145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:43:39.516221 env[1124]: time="2024-02-09T00:43:39.515099214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:43:39.516221 env[1124]: time="2024-02-09T00:43:39.515234568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7568cf7a417d806b70ceb5a36030570e55f6565096fde7036dd89a117c30151a pid=1688 runtime=io.containerd.runc.v2 Feb 9 00:43:39.533800 kubelet[1634]: E0209 00:43:39.533668 1634 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b20b1a9660b2be", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 0, 43, 37, 546814142, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 0, 43, 37, 546814142, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.31:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.31:6443: connect: connection refused'(may retry after sleeping) Feb 9 00:43:39.535689 env[1124]: time="2024-02-09T00:43:39.535519489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:43:39.535689 env[1124]: time="2024-02-09T00:43:39.535567474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:43:39.535689 env[1124]: time="2024-02-09T00:43:39.535577155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:43:39.535808 env[1124]: time="2024-02-09T00:43:39.535761817Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/793339848ae2715d7e0ebece763fcb07627a88c8907dbe6d3ac3e432578eceb3 pid=1675 runtime=io.containerd.runc.v2 Feb 9 00:43:39.551222 systemd[1]: Started cri-containerd-793339848ae2715d7e0ebece763fcb07627a88c8907dbe6d3ac3e432578eceb3.scope. Feb 9 00:43:39.557579 systemd[1]: Started cri-containerd-7568cf7a417d806b70ceb5a36030570e55f6565096fde7036dd89a117c30151a.scope. Feb 9 00:43:39.559080 env[1124]: time="2024-02-09T00:43:39.558917117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:43:39.559080 env[1124]: time="2024-02-09T00:43:39.558959860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:43:39.559080 env[1124]: time="2024-02-09T00:43:39.558969641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:43:39.559351 env[1124]: time="2024-02-09T00:43:39.559284498Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/37043617a31bfdda4751e75de31614352cd55e08e2741a5a9ee5e87dc85fce02 pid=1727 runtime=io.containerd.runc.v2 Feb 9 00:43:39.568433 kubelet[1634]: E0209 00:43:39.568294 1634 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.31:6443: connect: connection refused Feb 9 00:43:39.617919 systemd[1]: Started cri-containerd-37043617a31bfdda4751e75de31614352cd55e08e2741a5a9ee5e87dc85fce02.scope. Feb 9 00:43:39.644639 env[1124]: time="2024-02-09T00:43:39.644524890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7709ea05d7cdf82b0d7e594b61a10331,Namespace:kube-system,Attempt:0,} returns sandbox id \"7568cf7a417d806b70ceb5a36030570e55f6565096fde7036dd89a117c30151a\"" Feb 9 00:43:39.646467 kubelet[1634]: E0209 00:43:39.646426 1634 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:39.651967 env[1124]: time="2024-02-09T00:43:39.651936100Z" level=info msg="CreateContainer within sandbox \"7568cf7a417d806b70ceb5a36030570e55f6565096fde7036dd89a117c30151a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 00:43:39.653008 env[1124]: time="2024-02-09T00:43:39.652979505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:49bf5c686a5c0211d9f14d41ea3c51ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"793339848ae2715d7e0ebece763fcb07627a88c8907dbe6d3ac3e432578eceb3\"" 
Feb 9 00:43:39.653614 kubelet[1634]: E0209 00:43:39.653541 1634 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:39.655021 env[1124]: time="2024-02-09T00:43:39.654994579Z" level=info msg="CreateContainer within sandbox \"793339848ae2715d7e0ebece763fcb07627a88c8907dbe6d3ac3e432578eceb3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 00:43:39.657483 env[1124]: time="2024-02-09T00:43:39.657455204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2b0e94b38682f4e439413801d3cc54db,Namespace:kube-system,Attempt:0,} returns sandbox id \"37043617a31bfdda4751e75de31614352cd55e08e2741a5a9ee5e87dc85fce02\"" Feb 9 00:43:39.658029 kubelet[1634]: E0209 00:43:39.658007 1634 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:39.659494 env[1124]: time="2024-02-09T00:43:39.659470799Z" level=info msg="CreateContainer within sandbox \"37043617a31bfdda4751e75de31614352cd55e08e2741a5a9ee5e87dc85fce02\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 00:43:39.697659 env[1124]: time="2024-02-09T00:43:39.697581428Z" level=info msg="CreateContainer within sandbox \"7568cf7a417d806b70ceb5a36030570e55f6565096fde7036dd89a117c30151a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c66464006489e7b2eeba77c889c927ea9d58891a248877e681779d31675a605b\"" Feb 9 00:43:39.698591 env[1124]: time="2024-02-09T00:43:39.698559160Z" level=info msg="StartContainer for \"c66464006489e7b2eeba77c889c927ea9d58891a248877e681779d31675a605b\"" Feb 9 00:43:39.699241 env[1124]: time="2024-02-09T00:43:39.699182389Z" level=info msg="CreateContainer within sandbox \"793339848ae2715d7e0ebece763fcb07627a88c8907dbe6d3ac3e432578eceb3\" 
for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ddfb57cd1b71f1281dbd8c7b77951c1f624090e602b2a56c11cd07dbf0424407\"" Feb 9 00:43:39.699794 env[1124]: time="2024-02-09T00:43:39.699746850Z" level=info msg="StartContainer for \"ddfb57cd1b71f1281dbd8c7b77951c1f624090e602b2a56c11cd07dbf0424407\"" Feb 9 00:43:39.702410 env[1124]: time="2024-02-09T00:43:39.702369558Z" level=info msg="CreateContainer within sandbox \"37043617a31bfdda4751e75de31614352cd55e08e2741a5a9ee5e87dc85fce02\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"143c945060441afc02050787322d94693f711fbe50d7bd01ba3bb455fff9a62a\"" Feb 9 00:43:39.702964 env[1124]: time="2024-02-09T00:43:39.702935974Z" level=info msg="StartContainer for \"143c945060441afc02050787322d94693f711fbe50d7bd01ba3bb455fff9a62a\"" Feb 9 00:43:39.713509 systemd[1]: Started cri-containerd-c66464006489e7b2eeba77c889c927ea9d58891a248877e681779d31675a605b.scope. Feb 9 00:43:39.731299 systemd[1]: Started cri-containerd-ddfb57cd1b71f1281dbd8c7b77951c1f624090e602b2a56c11cd07dbf0424407.scope. Feb 9 00:43:39.736752 systemd[1]: Started cri-containerd-143c945060441afc02050787322d94693f711fbe50d7bd01ba3bb455fff9a62a.scope. 
Feb 9 00:43:39.762184 env[1124]: time="2024-02-09T00:43:39.762114089Z" level=info msg="StartContainer for \"c66464006489e7b2eeba77c889c927ea9d58891a248877e681779d31675a605b\" returns successfully" Feb 9 00:43:39.789246 env[1124]: time="2024-02-09T00:43:39.789202723Z" level=info msg="StartContainer for \"ddfb57cd1b71f1281dbd8c7b77951c1f624090e602b2a56c11cd07dbf0424407\" returns successfully" Feb 9 00:43:39.790653 env[1124]: time="2024-02-09T00:43:39.790631499Z" level=info msg="StartContainer for \"143c945060441afc02050787322d94693f711fbe50d7bd01ba3bb455fff9a62a\" returns successfully" Feb 9 00:43:40.620599 kubelet[1634]: E0209 00:43:40.620561 1634 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:40.622837 kubelet[1634]: E0209 00:43:40.622813 1634 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:40.624414 kubelet[1634]: E0209 00:43:40.624393 1634 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:40.667739 kubelet[1634]: I0209 00:43:40.667717 1634 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 00:43:41.547275 kubelet[1634]: I0209 00:43:41.547236 1634 apiserver.go:52] "Watching apiserver" Feb 9 00:43:41.558164 kubelet[1634]: E0209 00:43:41.558119 1634 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 9 00:43:41.600063 kubelet[1634]: I0209 00:43:41.600008 1634 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 00:43:41.631243 kubelet[1634]: E0209 00:43:41.631198 1634 kubelet.go:1856] "Failed creating a mirror pod for" 
err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 9 00:43:41.631743 kubelet[1634]: E0209 00:43:41.631705 1634 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:41.656350 kubelet[1634]: I0209 00:43:41.656299 1634 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 00:43:41.684235 kubelet[1634]: I0209 00:43:41.684176 1634 reconciler.go:41] "Reconciler: start to sync state" Feb 9 00:43:44.296820 systemd[1]: Reloading. Feb 9 00:43:44.355391 /usr/lib/systemd/system-generators/torcx-generator[1930]: time="2024-02-09T00:43:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 00:43:44.355418 /usr/lib/systemd/system-generators/torcx-generator[1930]: time="2024-02-09T00:43:44Z" level=info msg="torcx already run" Feb 9 00:43:44.435709 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 00:43:44.435728 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 00:43:44.457992 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 00:43:44.552555 systemd[1]: Stopping kubelet.service... Feb 9 00:43:44.571617 systemd[1]: kubelet.service: Deactivated successfully. 
Feb 9 00:43:44.571854 systemd[1]: Stopped kubelet.service. Feb 9 00:43:44.573633 systemd[1]: Started kubelet.service. Feb 9 00:43:44.629056 kubelet[1972]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 00:43:44.629056 kubelet[1972]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 00:43:44.629056 kubelet[1972]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 00:43:44.629557 kubelet[1972]: I0209 00:43:44.629097 1972 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 00:43:44.634223 kubelet[1972]: I0209 00:43:44.634182 1972 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 00:43:44.634223 kubelet[1972]: I0209 00:43:44.634218 1972 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 00:43:44.634502 kubelet[1972]: I0209 00:43:44.634489 1972 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 00:43:44.635980 kubelet[1972]: I0209 00:43:44.635958 1972 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 00:43:44.636983 kubelet[1972]: I0209 00:43:44.636917 1972 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 00:43:44.640681 kubelet[1972]: I0209 00:43:44.640647 1972 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 00:43:44.640886 kubelet[1972]: I0209 00:43:44.640865 1972 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 00:43:44.640986 kubelet[1972]: I0209 00:43:44.640964 1972 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 00:43:44.641112 kubelet[1972]: I0209 00:43:44.640993 1972 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 00:43:44.641112 kubelet[1972]: I0209 00:43:44.641006 1972 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 00:43:44.641112 kubelet[1972]: I0209 00:43:44.641043 1972 state_mem.go:36] "Initialized new in-memory state store" Feb 9 
00:43:44.645017 kubelet[1972]: I0209 00:43:44.644979 1972 kubelet.go:405] "Attempting to sync node with API server" Feb 9 00:43:44.645017 kubelet[1972]: I0209 00:43:44.645010 1972 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 00:43:44.645170 kubelet[1972]: I0209 00:43:44.645035 1972 kubelet.go:309] "Adding apiserver pod source" Feb 9 00:43:44.645170 kubelet[1972]: I0209 00:43:44.645053 1972 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 00:43:44.648591 kubelet[1972]: I0209 00:43:44.648563 1972 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 00:43:44.649220 kubelet[1972]: I0209 00:43:44.649193 1972 server.go:1168] "Started kubelet" Feb 9 00:43:44.650947 kubelet[1972]: I0209 00:43:44.650925 1972 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 00:43:44.651241 kubelet[1972]: I0209 00:43:44.651224 1972 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 00:43:44.651407 kubelet[1972]: I0209 00:43:44.651388 1972 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 00:43:44.651948 kubelet[1972]: I0209 00:43:44.651926 1972 server.go:461] "Adding debug handlers to kubelet server" Feb 9 00:43:44.658233 kubelet[1972]: E0209 00:43:44.658192 1972 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 00:43:44.658347 kubelet[1972]: E0209 00:43:44.658247 1972 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 00:43:44.667429 kubelet[1972]: I0209 00:43:44.667402 1972 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 00:43:44.667671 kubelet[1972]: I0209 00:43:44.667656 1972 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 00:43:44.673910 kubelet[1972]: I0209 00:43:44.673836 1972 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 00:43:44.674657 kubelet[1972]: I0209 00:43:44.674628 1972 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 00:43:44.674657 kubelet[1972]: I0209 00:43:44.674660 1972 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 00:43:44.674804 kubelet[1972]: I0209 00:43:44.674683 1972 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 00:43:44.674804 kubelet[1972]: E0209 00:43:44.674758 1972 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 00:43:44.707235 kubelet[1972]: I0209 00:43:44.707200 1972 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 00:43:44.707235 kubelet[1972]: I0209 00:43:44.707222 1972 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 00:43:44.707235 kubelet[1972]: I0209 00:43:44.707237 1972 state_mem.go:36] "Initialized new in-memory state store" Feb 9 00:43:44.707439 kubelet[1972]: I0209 00:43:44.707394 1972 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 00:43:44.707439 kubelet[1972]: I0209 00:43:44.707408 1972 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 00:43:44.707439 kubelet[1972]: I0209 00:43:44.707414 1972 policy_none.go:49] "None policy: Start" Feb 9 00:43:44.708043 kubelet[1972]: I0209 00:43:44.708027 1972 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 00:43:44.708043 kubelet[1972]: I0209 
00:43:44.708043 1972 state_mem.go:35] "Initializing new in-memory state store" Feb 9 00:43:44.708225 kubelet[1972]: I0209 00:43:44.708210 1972 state_mem.go:75] "Updated machine memory state" Feb 9 00:43:44.712642 kubelet[1972]: I0209 00:43:44.712616 1972 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 00:43:44.713209 kubelet[1972]: I0209 00:43:44.712931 1972 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 00:43:44.771982 kubelet[1972]: I0209 00:43:44.771941 1972 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 00:43:44.775050 kubelet[1972]: I0209 00:43:44.775021 1972 topology_manager.go:212] "Topology Admit Handler" Feb 9 00:43:44.775121 kubelet[1972]: I0209 00:43:44.775104 1972 topology_manager.go:212] "Topology Admit Handler" Feb 9 00:43:44.775171 kubelet[1972]: I0209 00:43:44.775151 1972 topology_manager.go:212] "Topology Admit Handler" Feb 9 00:43:44.909205 kubelet[1972]: I0209 00:43:44.909085 1972 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 00:43:44.909349 kubelet[1972]: I0209 00:43:44.909239 1972 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 00:43:44.969509 kubelet[1972]: I0209 00:43:44.969468 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:43:44.969509 kubelet[1972]: I0209 00:43:44.969528 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b0e94b38682f4e439413801d3cc54db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"2b0e94b38682f4e439413801d3cc54db\") " pod="kube-system/kube-scheduler-localhost" Feb 9 00:43:44.969716 kubelet[1972]: I0209 00:43:44.969559 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49bf5c686a5c0211d9f14d41ea3c51ac-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"49bf5c686a5c0211d9f14d41ea3c51ac\") " pod="kube-system/kube-apiserver-localhost" Feb 9 00:43:44.969716 kubelet[1972]: I0209 00:43:44.969580 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:43:44.969716 kubelet[1972]: I0209 00:43:44.969602 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:43:44.969716 kubelet[1972]: I0209 00:43:44.969630 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:43:44.969716 kubelet[1972]: I0209 00:43:44.969650 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49bf5c686a5c0211d9f14d41ea3c51ac-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"49bf5c686a5c0211d9f14d41ea3c51ac\") " pod="kube-system/kube-apiserver-localhost" Feb 9 00:43:44.969936 kubelet[1972]: I0209 00:43:44.969671 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49bf5c686a5c0211d9f14d41ea3c51ac-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"49bf5c686a5c0211d9f14d41ea3c51ac\") " pod="kube-system/kube-apiserver-localhost" Feb 9 00:43:44.969936 kubelet[1972]: I0209 00:43:44.969703 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 00:43:44.997439 sudo[2004]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 00:43:44.997611 sudo[2004]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 00:43:45.106557 kubelet[1972]: E0209 00:43:45.106503 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:45.106557 kubelet[1972]: E0209 00:43:45.106522 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:45.106810 kubelet[1972]: E0209 00:43:45.106773 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:45.472794 sudo[2004]: pam_unix(sudo:session): session closed for user root Feb 9 00:43:45.649390 kubelet[1972]: I0209 00:43:45.649302 1972 
apiserver.go:52] "Watching apiserver" Feb 9 00:43:45.668305 kubelet[1972]: I0209 00:43:45.668259 1972 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 00:43:45.673509 kubelet[1972]: I0209 00:43:45.673473 1972 reconciler.go:41] "Reconciler: start to sync state" Feb 9 00:43:45.684154 kubelet[1972]: E0209 00:43:45.684104 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:45.684524 kubelet[1972]: E0209 00:43:45.684498 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:45.936985 kubelet[1972]: E0209 00:43:45.936944 1972 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 00:43:45.937561 kubelet[1972]: E0209 00:43:45.937535 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:45.939342 kubelet[1972]: I0209 00:43:45.939300 1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.939216753 podCreationTimestamp="2024-02-09 00:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:43:45.939219609 +0000 UTC m=+1.362317329" watchObservedRunningTime="2024-02-09 00:43:45.939216753 +0000 UTC m=+1.362314463" Feb 9 00:43:45.997907 kubelet[1972]: I0209 00:43:45.997845 1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9977960989999999 
podCreationTimestamp="2024-02-09 00:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:43:45.997636346 +0000 UTC m=+1.420734056" watchObservedRunningTime="2024-02-09 00:43:45.997796099 +0000 UTC m=+1.420893799" Feb 9 00:43:46.406947 kubelet[1972]: I0209 00:43:46.406900 1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.406821473 podCreationTimestamp="2024-02-09 00:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:43:46.354460095 +0000 UTC m=+1.777557805" watchObservedRunningTime="2024-02-09 00:43:46.406821473 +0000 UTC m=+1.829919183" Feb 9 00:43:46.685968 kubelet[1972]: E0209 00:43:46.685855 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:46.867962 sudo[1213]: pam_unix(sudo:session): session closed for user root Feb 9 00:43:46.869463 sshd[1210]: pam_unix(sshd:session): session closed for user core Feb 9 00:43:46.871591 systemd[1]: sshd@4-10.0.0.31:22-10.0.0.1:60910.service: Deactivated successfully. Feb 9 00:43:46.872348 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 00:43:46.872492 systemd[1]: session-5.scope: Consumed 3.606s CPU time. Feb 9 00:43:46.872920 systemd-logind[1106]: Session 5 logged out. Waiting for processes to exit. Feb 9 00:43:46.873653 systemd-logind[1106]: Removed session 5. 
Feb 9 00:43:48.682194 kubelet[1972]: E0209 00:43:48.682160 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:49.472449 kubelet[1972]: E0209 00:43:49.472406 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:49.691988 kubelet[1972]: E0209 00:43:49.691922 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:53.140836 update_engine[1111]: I0209 00:43:53.140772 1111 update_attempter.cc:509] Updating boot flags... Feb 9 00:43:54.253933 kubelet[1972]: E0209 00:43:54.253897 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:54.699631 kubelet[1972]: E0209 00:43:54.699600 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:58.370071 kubelet[1972]: I0209 00:43:58.370040 1972 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 00:43:58.370485 env[1124]: time="2024-02-09T00:43:58.370445492Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 00:43:58.370661 kubelet[1972]: I0209 00:43:58.370606 1972 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 00:43:58.687446 kubelet[1972]: E0209 00:43:58.687341 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:58.942695 kubelet[1972]: I0209 00:43:58.942566 1972 topology_manager.go:212] "Topology Admit Handler" Feb 9 00:43:58.948445 systemd[1]: Created slice kubepods-besteffort-pod6b45eca7_5bd7_4abb_aa0f_37778f4d248b.slice. Feb 9 00:43:58.970894 kubelet[1972]: I0209 00:43:58.970847 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6b45eca7-5bd7-4abb-aa0f-37778f4d248b-kube-proxy\") pod \"kube-proxy-9c62t\" (UID: \"6b45eca7-5bd7-4abb-aa0f-37778f4d248b\") " pod="kube-system/kube-proxy-9c62t" Feb 9 00:43:58.970894 kubelet[1972]: I0209 00:43:58.970907 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b45eca7-5bd7-4abb-aa0f-37778f4d248b-xtables-lock\") pod \"kube-proxy-9c62t\" (UID: \"6b45eca7-5bd7-4abb-aa0f-37778f4d248b\") " pod="kube-system/kube-proxy-9c62t" Feb 9 00:43:58.971148 kubelet[1972]: I0209 00:43:58.970934 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b45eca7-5bd7-4abb-aa0f-37778f4d248b-lib-modules\") pod \"kube-proxy-9c62t\" (UID: \"6b45eca7-5bd7-4abb-aa0f-37778f4d248b\") " pod="kube-system/kube-proxy-9c62t" Feb 9 00:43:58.971148 kubelet[1972]: I0209 00:43:58.970960 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn5sh\" (UniqueName: 
\"kubernetes.io/projected/6b45eca7-5bd7-4abb-aa0f-37778f4d248b-kube-api-access-dn5sh\") pod \"kube-proxy-9c62t\" (UID: \"6b45eca7-5bd7-4abb-aa0f-37778f4d248b\") " pod="kube-system/kube-proxy-9c62t" Feb 9 00:43:58.971505 kubelet[1972]: I0209 00:43:58.971480 1972 topology_manager.go:212] "Topology Admit Handler" Feb 9 00:43:58.977795 systemd[1]: Created slice kubepods-burstable-pod8c693ed1_debc_4c55_8424_5609186a78aa.slice. Feb 9 00:43:59.071431 kubelet[1972]: I0209 00:43:59.071377 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-cilium-cgroup\") pod \"cilium-zt9nh\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " pod="kube-system/cilium-zt9nh" Feb 9 00:43:59.071431 kubelet[1972]: I0209 00:43:59.071421 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c693ed1-debc-4c55-8424-5609186a78aa-cilium-config-path\") pod \"cilium-zt9nh\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " pod="kube-system/cilium-zt9nh" Feb 9 00:43:59.071674 kubelet[1972]: I0209 00:43:59.071501 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8c693ed1-debc-4c55-8424-5609186a78aa-hubble-tls\") pod \"cilium-zt9nh\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " pod="kube-system/cilium-zt9nh" Feb 9 00:43:59.071674 kubelet[1972]: I0209 00:43:59.071521 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-cilium-run\") pod \"cilium-zt9nh\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " pod="kube-system/cilium-zt9nh" Feb 9 00:43:59.071674 kubelet[1972]: I0209 00:43:59.071538 1972 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-host-proc-sys-net\") pod \"cilium-zt9nh\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " pod="kube-system/cilium-zt9nh" Feb 9 00:43:59.071674 kubelet[1972]: I0209 00:43:59.071571 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-lib-modules\") pod \"cilium-zt9nh\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " pod="kube-system/cilium-zt9nh" Feb 9 00:43:59.071674 kubelet[1972]: I0209 00:43:59.071642 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-cni-path\") pod \"cilium-zt9nh\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " pod="kube-system/cilium-zt9nh" Feb 9 00:43:59.071674 kubelet[1972]: I0209 00:43:59.071658 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-xtables-lock\") pod \"cilium-zt9nh\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " pod="kube-system/cilium-zt9nh" Feb 9 00:43:59.071865 kubelet[1972]: I0209 00:43:59.071673 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lc6w\" (UniqueName: \"kubernetes.io/projected/8c693ed1-debc-4c55-8424-5609186a78aa-kube-api-access-9lc6w\") pod \"cilium-zt9nh\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " pod="kube-system/cilium-zt9nh" Feb 9 00:43:59.071865 kubelet[1972]: I0209 00:43:59.071699 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-hostproc\") pod \"cilium-zt9nh\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " pod="kube-system/cilium-zt9nh" Feb 9 00:43:59.071865 kubelet[1972]: I0209 00:43:59.071714 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-etc-cni-netd\") pod \"cilium-zt9nh\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " pod="kube-system/cilium-zt9nh" Feb 9 00:43:59.071865 kubelet[1972]: I0209 00:43:59.071729 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-bpf-maps\") pod \"cilium-zt9nh\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " pod="kube-system/cilium-zt9nh" Feb 9 00:43:59.071865 kubelet[1972]: I0209 00:43:59.071751 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8c693ed1-debc-4c55-8424-5609186a78aa-clustermesh-secrets\") pod \"cilium-zt9nh\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " pod="kube-system/cilium-zt9nh" Feb 9 00:43:59.071865 kubelet[1972]: I0209 00:43:59.071767 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-host-proc-sys-kernel\") pod \"cilium-zt9nh\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " pod="kube-system/cilium-zt9nh" Feb 9 00:43:59.256584 kubelet[1972]: E0209 00:43:59.255968 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:59.256948 env[1124]: time="2024-02-09T00:43:59.256902938Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9c62t,Uid:6b45eca7-5bd7-4abb-aa0f-37778f4d248b,Namespace:kube-system,Attempt:0,}" Feb 9 00:43:59.278201 env[1124]: time="2024-02-09T00:43:59.278108490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:43:59.278201 env[1124]: time="2024-02-09T00:43:59.278175271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:43:59.278201 env[1124]: time="2024-02-09T00:43:59.278189418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:43:59.278404 env[1124]: time="2024-02-09T00:43:59.278371805Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/263fd33e63880c70ab389005edc066beafe4ba8445583fac4edde5563c731dc0 pid=2081 runtime=io.containerd.runc.v2 Feb 9 00:43:59.285970 kubelet[1972]: E0209 00:43:59.285922 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:59.287049 env[1124]: time="2024-02-09T00:43:59.286997684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zt9nh,Uid:8c693ed1-debc-4c55-8424-5609186a78aa,Namespace:kube-system,Attempt:0,}" Feb 9 00:43:59.290228 systemd[1]: Started cri-containerd-263fd33e63880c70ab389005edc066beafe4ba8445583fac4edde5563c731dc0.scope. Feb 9 00:43:59.323926 kubelet[1972]: I0209 00:43:59.323365 1972 topology_manager.go:212] "Topology Admit Handler" Feb 9 00:43:59.329597 systemd[1]: Created slice kubepods-besteffort-pod10143a08_930f_4cf6_82ca_e2fed827cd75.slice. 
Feb 9 00:43:59.335597 env[1124]: time="2024-02-09T00:43:59.335532225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9c62t,Uid:6b45eca7-5bd7-4abb-aa0f-37778f4d248b,Namespace:kube-system,Attempt:0,} returns sandbox id \"263fd33e63880c70ab389005edc066beafe4ba8445583fac4edde5563c731dc0\"" Feb 9 00:43:59.337681 kubelet[1972]: E0209 00:43:59.336792 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:59.337899 env[1124]: time="2024-02-09T00:43:59.337487926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:43:59.337899 env[1124]: time="2024-02-09T00:43:59.337530610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:43:59.337899 env[1124]: time="2024-02-09T00:43:59.337542743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:43:59.338044 env[1124]: time="2024-02-09T00:43:59.337876938Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1 pid=2122 runtime=io.containerd.runc.v2 Feb 9 00:43:59.347454 env[1124]: time="2024-02-09T00:43:59.347385396Z" level=info msg="CreateContainer within sandbox \"263fd33e63880c70ab389005edc066beafe4ba8445583fac4edde5563c731dc0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 00:43:59.352605 systemd[1]: Started cri-containerd-8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1.scope. 
Feb 9 00:43:59.374601 kubelet[1972]: I0209 00:43:59.374541 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10143a08-930f-4cf6-82ca-e2fed827cd75-cilium-config-path\") pod \"cilium-operator-574c4bb98d-mp4np\" (UID: \"10143a08-930f-4cf6-82ca-e2fed827cd75\") " pod="kube-system/cilium-operator-574c4bb98d-mp4np" Feb 9 00:43:59.374601 kubelet[1972]: I0209 00:43:59.374602 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swvf5\" (UniqueName: \"kubernetes.io/projected/10143a08-930f-4cf6-82ca-e2fed827cd75-kube-api-access-swvf5\") pod \"cilium-operator-574c4bb98d-mp4np\" (UID: \"10143a08-930f-4cf6-82ca-e2fed827cd75\") " pod="kube-system/cilium-operator-574c4bb98d-mp4np" Feb 9 00:43:59.375203 env[1124]: time="2024-02-09T00:43:59.375161239Z" level=info msg="CreateContainer within sandbox \"263fd33e63880c70ab389005edc066beafe4ba8445583fac4edde5563c731dc0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"002da1cc4890ef287cb8732b70b091c04357883e6cbb4f03e7dd88d49bb4be9a\"" Feb 9 00:43:59.377177 env[1124]: time="2024-02-09T00:43:59.375910367Z" level=info msg="StartContainer for \"002da1cc4890ef287cb8732b70b091c04357883e6cbb4f03e7dd88d49bb4be9a\"" Feb 9 00:43:59.380081 env[1124]: time="2024-02-09T00:43:59.380046190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zt9nh,Uid:8c693ed1-debc-4c55-8424-5609186a78aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1\"" Feb 9 00:43:59.380684 kubelet[1972]: E0209 00:43:59.380658 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:59.381774 env[1124]: time="2024-02-09T00:43:59.381748234Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 00:43:59.396235 systemd[1]: Started cri-containerd-002da1cc4890ef287cb8732b70b091c04357883e6cbb4f03e7dd88d49bb4be9a.scope. Feb 9 00:43:59.430733 env[1124]: time="2024-02-09T00:43:59.430678430Z" level=info msg="StartContainer for \"002da1cc4890ef287cb8732b70b091c04357883e6cbb4f03e7dd88d49bb4be9a\" returns successfully" Feb 9 00:43:59.634251 kubelet[1972]: E0209 00:43:59.634220 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:59.634839 env[1124]: time="2024-02-09T00:43:59.634793218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-mp4np,Uid:10143a08-930f-4cf6-82ca-e2fed827cd75,Namespace:kube-system,Attempt:0,}" Feb 9 00:43:59.653738 env[1124]: time="2024-02-09T00:43:59.653652234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:43:59.653738 env[1124]: time="2024-02-09T00:43:59.653714656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:43:59.653965 env[1124]: time="2024-02-09T00:43:59.653929046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:43:59.654220 env[1124]: time="2024-02-09T00:43:59.654187913Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a126fdbb23d9bfb30c0e468040115ec4aa5ab9940f4531b89e71f76ca01efd74 pid=2276 runtime=io.containerd.runc.v2 Feb 9 00:43:59.664591 systemd[1]: Started cri-containerd-a126fdbb23d9bfb30c0e468040115ec4aa5ab9940f4531b89e71f76ca01efd74.scope. 
Feb 9 00:43:59.702663 env[1124]: time="2024-02-09T00:43:59.702598752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-mp4np,Uid:10143a08-930f-4cf6-82ca-e2fed827cd75,Namespace:kube-system,Attempt:0,} returns sandbox id \"a126fdbb23d9bfb30c0e468040115ec4aa5ab9940f4531b89e71f76ca01efd74\"" Feb 9 00:43:59.703422 kubelet[1972]: E0209 00:43:59.703401 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:43:59.710677 kubelet[1972]: E0209 00:43:59.710632 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:08.230382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1966854266.mount: Deactivated successfully. Feb 9 00:44:11.984672 systemd[1]: Started sshd@5-10.0.0.31:22-10.0.0.1:55274.service. Feb 9 00:44:12.042754 sshd[2353]: Accepted publickey for core from 10.0.0.1 port 55274 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:12.044208 sshd[2353]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:12.048958 systemd-logind[1106]: New session 6 of user core. Feb 9 00:44:12.049254 systemd[1]: Started session-6.scope. Feb 9 00:44:13.441626 sshd[2353]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:13.444240 systemd[1]: sshd@5-10.0.0.31:22-10.0.0.1:55274.service: Deactivated successfully. Feb 9 00:44:13.444952 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 00:44:13.445502 systemd-logind[1106]: Session 6 logged out. Waiting for processes to exit. Feb 9 00:44:13.446161 systemd-logind[1106]: Removed session 6. 
Feb 9 00:44:13.451836 env[1124]: time="2024-02-09T00:44:13.451781061Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:44:13.453869 env[1124]: time="2024-02-09T00:44:13.453825449Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:44:13.455749 env[1124]: time="2024-02-09T00:44:13.455710516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:44:13.456189 env[1124]: time="2024-02-09T00:44:13.456167458Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 00:44:13.456778 env[1124]: time="2024-02-09T00:44:13.456747339Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 00:44:13.457874 env[1124]: time="2024-02-09T00:44:13.457843608Z" level=info msg="CreateContainer within sandbox \"8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 00:44:13.469973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184236490.mount: Deactivated successfully. 
Feb 9 00:44:13.470992 env[1124]: time="2024-02-09T00:44:13.470944837Z" level=info msg="CreateContainer within sandbox \"8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2\"" Feb 9 00:44:13.471621 env[1124]: time="2024-02-09T00:44:13.471593742Z" level=info msg="StartContainer for \"9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2\"" Feb 9 00:44:13.488618 systemd[1]: Started cri-containerd-9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2.scope. Feb 9 00:44:13.513613 env[1124]: time="2024-02-09T00:44:13.513561945Z" level=info msg="StartContainer for \"9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2\" returns successfully" Feb 9 00:44:13.519710 systemd[1]: cri-containerd-9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2.scope: Deactivated successfully. Feb 9 00:44:13.721119 env[1124]: time="2024-02-09T00:44:13.720956893Z" level=info msg="shim disconnected" id=9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2 Feb 9 00:44:13.721119 env[1124]: time="2024-02-09T00:44:13.721009165Z" level=warning msg="cleaning up after shim disconnected" id=9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2 namespace=k8s.io Feb 9 00:44:13.721119 env[1124]: time="2024-02-09T00:44:13.721018805Z" level=info msg="cleaning up dead shim" Feb 9 00:44:13.728652 env[1124]: time="2024-02-09T00:44:13.728601808Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:44:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2417 runtime=io.containerd.runc.v2\n" Feb 9 00:44:13.734818 kubelet[1972]: E0209 00:44:13.734781 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:13.738849 env[1124]: 
time="2024-02-09T00:44:13.738808719Z" level=info msg="CreateContainer within sandbox \"8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 00:44:13.750372 kubelet[1972]: I0209 00:44:13.750331 1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9c62t" podStartSLOduration=15.750291089 podCreationTimestamp="2024-02-09 00:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:43:59.72042797 +0000 UTC m=+15.143525680" watchObservedRunningTime="2024-02-09 00:44:13.750291089 +0000 UTC m=+29.173388799" Feb 9 00:44:13.754594 env[1124]: time="2024-02-09T00:44:13.754541821Z" level=info msg="CreateContainer within sandbox \"8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718\"" Feb 9 00:44:13.755344 env[1124]: time="2024-02-09T00:44:13.755280874Z" level=info msg="StartContainer for \"51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718\"" Feb 9 00:44:13.769029 systemd[1]: Started cri-containerd-51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718.scope. Feb 9 00:44:13.794881 env[1124]: time="2024-02-09T00:44:13.794827171Z" level=info msg="StartContainer for \"51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718\" returns successfully" Feb 9 00:44:13.803664 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 00:44:13.803915 systemd[1]: Stopped systemd-sysctl.service. Feb 9 00:44:13.804099 systemd[1]: Stopping systemd-sysctl.service... Feb 9 00:44:13.805652 systemd[1]: Starting systemd-sysctl.service... 
Feb 9 00:44:13.806658 systemd[1]: cri-containerd-51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718.scope: Deactivated successfully. Feb 9 00:44:13.817182 systemd[1]: Finished systemd-sysctl.service. Feb 9 00:44:13.831581 env[1124]: time="2024-02-09T00:44:13.831517797Z" level=info msg="shim disconnected" id=51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718 Feb 9 00:44:13.831581 env[1124]: time="2024-02-09T00:44:13.831580630Z" level=warning msg="cleaning up after shim disconnected" id=51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718 namespace=k8s.io Feb 9 00:44:13.831581 env[1124]: time="2024-02-09T00:44:13.831589858Z" level=info msg="cleaning up dead shim" Feb 9 00:44:13.838475 env[1124]: time="2024-02-09T00:44:13.838419191Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:44:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2481 runtime=io.containerd.runc.v2\n" Feb 9 00:44:14.468707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2-rootfs.mount: Deactivated successfully. Feb 9 00:44:14.682265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1277319491.mount: Deactivated successfully. Feb 9 00:44:14.738408 kubelet[1972]: E0209 00:44:14.737269 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:14.747211 env[1124]: time="2024-02-09T00:44:14.742327861Z" level=info msg="CreateContainer within sandbox \"8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 00:44:14.755810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1950951933.mount: Deactivated successfully. 
Feb 9 00:44:14.761078 env[1124]: time="2024-02-09T00:44:14.761037501Z" level=info msg="CreateContainer within sandbox \"8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5\"" Feb 9 00:44:14.762448 env[1124]: time="2024-02-09T00:44:14.762416408Z" level=info msg="StartContainer for \"71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5\"" Feb 9 00:44:14.784949 systemd[1]: Started cri-containerd-71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5.scope. Feb 9 00:44:14.817397 systemd[1]: cri-containerd-71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5.scope: Deactivated successfully. Feb 9 00:44:14.825140 env[1124]: time="2024-02-09T00:44:14.825059451Z" level=info msg="StartContainer for \"71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5\" returns successfully" Feb 9 00:44:14.931718 env[1124]: time="2024-02-09T00:44:14.931667320Z" level=info msg="shim disconnected" id=71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5 Feb 9 00:44:14.931718 env[1124]: time="2024-02-09T00:44:14.931713040Z" level=warning msg="cleaning up after shim disconnected" id=71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5 namespace=k8s.io Feb 9 00:44:14.931718 env[1124]: time="2024-02-09T00:44:14.931721756Z" level=info msg="cleaning up dead shim" Feb 9 00:44:14.939326 env[1124]: time="2024-02-09T00:44:14.939272631Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:44:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2536 runtime=io.containerd.runc.v2\n" Feb 9 00:44:15.489712 env[1124]: time="2024-02-09T00:44:15.489652714Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 00:44:15.493029 env[1124]: time="2024-02-09T00:44:15.492990882Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:44:15.494348 env[1124]: time="2024-02-09T00:44:15.494319980Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 00:44:15.494662 env[1124]: time="2024-02-09T00:44:15.494627078Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 00:44:15.496322 env[1124]: time="2024-02-09T00:44:15.496296989Z" level=info msg="CreateContainer within sandbox \"a126fdbb23d9bfb30c0e468040115ec4aa5ab9940f4531b89e71f76ca01efd74\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 00:44:15.508115 env[1124]: time="2024-02-09T00:44:15.508055819Z" level=info msg="CreateContainer within sandbox \"a126fdbb23d9bfb30c0e468040115ec4aa5ab9940f4531b89e71f76ca01efd74\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966\"" Feb 9 00:44:15.508814 env[1124]: time="2024-02-09T00:44:15.508567144Z" level=info msg="StartContainer for \"9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966\"" Feb 9 00:44:15.526667 systemd[1]: Started cri-containerd-9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966.scope. 
Feb 9 00:44:15.551033 env[1124]: time="2024-02-09T00:44:15.550980660Z" level=info msg="StartContainer for \"9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966\" returns successfully" Feb 9 00:44:15.740886 kubelet[1972]: E0209 00:44:15.740770 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:15.742820 kubelet[1972]: E0209 00:44:15.742788 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:15.744490 env[1124]: time="2024-02-09T00:44:15.744455475Z" level=info msg="CreateContainer within sandbox \"8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 00:44:15.865906 kubelet[1972]: I0209 00:44:15.865858 1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-mp4np" podStartSLOduration=1.075144652 podCreationTimestamp="2024-02-09 00:43:59 +0000 UTC" firstStartedPulling="2024-02-09 00:43:59.704234276 +0000 UTC m=+15.127331986" lastFinishedPulling="2024-02-09 00:44:15.494899939 +0000 UTC m=+30.917997649" observedRunningTime="2024-02-09 00:44:15.865773974 +0000 UTC m=+31.288871694" watchObservedRunningTime="2024-02-09 00:44:15.865810315 +0000 UTC m=+31.288908015" Feb 9 00:44:16.104880 env[1124]: time="2024-02-09T00:44:16.104818324Z" level=info msg="CreateContainer within sandbox \"8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3\"" Feb 9 00:44:16.106926 env[1124]: time="2024-02-09T00:44:16.106892338Z" level=info msg="StartContainer for 
\"92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3\"" Feb 9 00:44:16.138160 systemd[1]: Started cri-containerd-92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3.scope. Feb 9 00:44:16.209655 systemd[1]: cri-containerd-92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3.scope: Deactivated successfully. Feb 9 00:44:16.211877 env[1124]: time="2024-02-09T00:44:16.211777490Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c693ed1_debc_4c55_8424_5609186a78aa.slice/cri-containerd-92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3.scope/memory.events\": no such file or directory" Feb 9 00:44:16.228572 env[1124]: time="2024-02-09T00:44:16.228505631Z" level=info msg="StartContainer for \"92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3\" returns successfully" Feb 9 00:44:16.468535 systemd[1]: run-containerd-runc-k8s.io-9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966-runc.PUOGHG.mount: Deactivated successfully. 
Feb 9 00:44:16.576964 env[1124]: time="2024-02-09T00:44:16.576893176Z" level=info msg="shim disconnected" id=92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3 Feb 9 00:44:16.576964 env[1124]: time="2024-02-09T00:44:16.576960567Z" level=warning msg="cleaning up after shim disconnected" id=92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3 namespace=k8s.io Feb 9 00:44:16.576964 env[1124]: time="2024-02-09T00:44:16.576980175Z" level=info msg="cleaning up dead shim" Feb 9 00:44:16.593813 env[1124]: time="2024-02-09T00:44:16.593759816Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:44:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2630 runtime=io.containerd.runc.v2\n" Feb 9 00:44:16.747374 kubelet[1972]: E0209 00:44:16.747248 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:16.747374 kubelet[1972]: E0209 00:44:16.747246 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:16.749076 env[1124]: time="2024-02-09T00:44:16.749021938Z" level=info msg="CreateContainer within sandbox \"8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 00:44:16.967287 env[1124]: time="2024-02-09T00:44:16.967198299Z" level=info msg="CreateContainer within sandbox \"8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84\"" Feb 9 00:44:16.968024 env[1124]: time="2024-02-09T00:44:16.967977925Z" level=info msg="StartContainer for \"59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84\"" Feb 9 00:44:16.984729 systemd[1]: 
Started cri-containerd-59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84.scope. Feb 9 00:44:17.062328 env[1124]: time="2024-02-09T00:44:17.062264750Z" level=info msg="StartContainer for \"59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84\" returns successfully" Feb 9 00:44:17.195923 kubelet[1972]: I0209 00:44:17.195890 1972 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 00:44:17.293187 kubelet[1972]: I0209 00:44:17.293121 1972 topology_manager.go:212] "Topology Admit Handler" Feb 9 00:44:17.293591 kubelet[1972]: I0209 00:44:17.293561 1972 topology_manager.go:212] "Topology Admit Handler" Feb 9 00:44:17.299268 systemd[1]: Created slice kubepods-burstable-podad8ee71b_fa05_4b6c_af4e_1395c5a49123.slice. Feb 9 00:44:17.305469 systemd[1]: Created slice kubepods-burstable-pode74609ec_d453_490b_9f2e_e445053f6ba4.slice. Feb 9 00:44:17.317723 kubelet[1972]: I0209 00:44:17.317605 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad8ee71b-fa05-4b6c-af4e-1395c5a49123-config-volume\") pod \"coredns-5d78c9869d-lwlv7\" (UID: \"ad8ee71b-fa05-4b6c-af4e-1395c5a49123\") " pod="kube-system/coredns-5d78c9869d-lwlv7" Feb 9 00:44:17.317949 kubelet[1972]: I0209 00:44:17.317934 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5jz5\" (UniqueName: \"kubernetes.io/projected/e74609ec-d453-490b-9f2e-e445053f6ba4-kube-api-access-f5jz5\") pod \"coredns-5d78c9869d-2qjnh\" (UID: \"e74609ec-d453-490b-9f2e-e445053f6ba4\") " pod="kube-system/coredns-5d78c9869d-2qjnh" Feb 9 00:44:17.318087 kubelet[1972]: I0209 00:44:17.318073 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e74609ec-d453-490b-9f2e-e445053f6ba4-config-volume\") pod 
\"coredns-5d78c9869d-2qjnh\" (UID: \"e74609ec-d453-490b-9f2e-e445053f6ba4\") " pod="kube-system/coredns-5d78c9869d-2qjnh" Feb 9 00:44:17.318240 kubelet[1972]: I0209 00:44:17.318226 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq762\" (UniqueName: \"kubernetes.io/projected/ad8ee71b-fa05-4b6c-af4e-1395c5a49123-kube-api-access-lq762\") pod \"coredns-5d78c9869d-lwlv7\" (UID: \"ad8ee71b-fa05-4b6c-af4e-1395c5a49123\") " pod="kube-system/coredns-5d78c9869d-lwlv7" Feb 9 00:44:17.602386 kubelet[1972]: E0209 00:44:17.602267 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:17.602818 env[1124]: time="2024-02-09T00:44:17.602772202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-lwlv7,Uid:ad8ee71b-fa05-4b6c-af4e-1395c5a49123,Namespace:kube-system,Attempt:0,}" Feb 9 00:44:17.609938 kubelet[1972]: E0209 00:44:17.609906 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:17.610479 env[1124]: time="2024-02-09T00:44:17.610440701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-2qjnh,Uid:e74609ec-d453-490b-9f2e-e445053f6ba4,Namespace:kube-system,Attempt:0,}" Feb 9 00:44:17.751956 kubelet[1972]: E0209 00:44:17.751922 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:18.445970 systemd[1]: Started sshd@6-10.0.0.31:22-10.0.0.1:60348.service. 
Feb 9 00:44:18.486654 sshd[2802]: Accepted publickey for core from 10.0.0.1 port 60348 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:18.487861 sshd[2802]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:18.491518 systemd-logind[1106]: New session 7 of user core. Feb 9 00:44:18.492564 systemd[1]: Started session-7.scope. Feb 9 00:44:18.610124 sshd[2802]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:18.612258 systemd[1]: sshd@6-10.0.0.31:22-10.0.0.1:60348.service: Deactivated successfully. Feb 9 00:44:18.613058 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 00:44:18.613710 systemd-logind[1106]: Session 7 logged out. Waiting for processes to exit. Feb 9 00:44:18.614539 systemd-logind[1106]: Removed session 7. Feb 9 00:44:18.753823 kubelet[1972]: E0209 00:44:18.753699 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:19.755286 kubelet[1972]: E0209 00:44:19.755250 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:19.853856 systemd-networkd[1017]: cilium_host: Link UP Feb 9 00:44:19.853956 systemd-networkd[1017]: cilium_net: Link UP Feb 9 00:44:19.853959 systemd-networkd[1017]: cilium_net: Gained carrier Feb 9 00:44:19.854258 systemd-networkd[1017]: cilium_host: Gained carrier Feb 9 00:44:19.863293 systemd-networkd[1017]: cilium_host: Gained IPv6LL Feb 9 00:44:19.864200 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 00:44:19.925925 systemd-networkd[1017]: cilium_vxlan: Link UP Feb 9 00:44:19.925932 systemd-networkd[1017]: cilium_vxlan: Gained carrier Feb 9 00:44:20.136165 kernel: NET: Registered PF_ALG protocol family Feb 9 00:44:20.661594 systemd-networkd[1017]: 
lxc_health: Link UP Feb 9 00:44:20.687910 systemd-networkd[1017]: lxc_health: Gained carrier Feb 9 00:44:20.688150 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 00:44:20.794249 systemd-networkd[1017]: cilium_net: Gained IPv6LL Feb 9 00:44:20.972976 systemd-networkd[1017]: lxcd63ebd45b9e8: Link UP Feb 9 00:44:20.980149 kernel: eth0: renamed from tmp7e9b5 Feb 9 00:44:20.990780 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 00:44:20.990880 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd63ebd45b9e8: link becomes ready Feb 9 00:44:20.998525 systemd-networkd[1017]: lxcd63ebd45b9e8: Gained carrier Feb 9 00:44:20.998701 systemd-networkd[1017]: lxc6fdbc38d2edc: Link UP Feb 9 00:44:21.020153 kernel: eth0: renamed from tmp33606 Feb 9 00:44:21.030304 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6fdbc38d2edc: link becomes ready Feb 9 00:44:21.030177 systemd-networkd[1017]: lxc6fdbc38d2edc: Gained carrier Feb 9 00:44:21.289125 kubelet[1972]: E0209 00:44:21.288986 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:21.308598 kubelet[1972]: I0209 00:44:21.308394 1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zt9nh" podStartSLOduration=9.233114021 podCreationTimestamp="2024-02-09 00:43:58 +0000 UTC" firstStartedPulling="2024-02-09 00:43:59.381361916 +0000 UTC m=+14.804459627" lastFinishedPulling="2024-02-09 00:44:13.456574512 +0000 UTC m=+28.879672222" observedRunningTime="2024-02-09 00:44:17.763866808 +0000 UTC m=+33.186964518" watchObservedRunningTime="2024-02-09 00:44:21.308326616 +0000 UTC m=+36.731424327" Feb 9 00:44:21.565343 systemd-networkd[1017]: cilium_vxlan: Gained IPv6LL Feb 9 00:44:21.759398 kubelet[1972]: E0209 00:44:21.759338 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:22.330353 systemd-networkd[1017]: lxc_health: Gained IPv6LL Feb 9 00:44:22.760818 kubelet[1972]: E0209 00:44:22.760707 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:22.970970 systemd-networkd[1017]: lxcd63ebd45b9e8: Gained IPv6LL Feb 9 00:44:22.971331 systemd-networkd[1017]: lxc6fdbc38d2edc: Gained IPv6LL Feb 9 00:44:23.614957 systemd[1]: Started sshd@7-10.0.0.31:22-10.0.0.1:60356.service. Feb 9 00:44:23.658438 sshd[3193]: Accepted publickey for core from 10.0.0.1 port 60356 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:23.659779 sshd[3193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:23.663554 systemd-logind[1106]: New session 8 of user core. Feb 9 00:44:23.664552 systemd[1]: Started session-8.scope. Feb 9 00:44:23.791958 sshd[3193]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:23.794476 systemd[1]: sshd@7-10.0.0.31:22-10.0.0.1:60356.service: Deactivated successfully. Feb 9 00:44:23.795326 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 00:44:23.795969 systemd-logind[1106]: Session 8 logged out. Waiting for processes to exit. Feb 9 00:44:23.796888 systemd-logind[1106]: Removed session 8. Feb 9 00:44:24.749954 env[1124]: time="2024-02-09T00:44:24.749867704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:44:24.749954 env[1124]: time="2024-02-09T00:44:24.749926417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:44:24.749954 env[1124]: time="2024-02-09T00:44:24.749939192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:44:24.750461 env[1124]: time="2024-02-09T00:44:24.750150009Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e9b5323b555e18244ab4982db2b71e25d423c7cbe5c683231ac777d29ead2ce pid=3221 runtime=io.containerd.runc.v2 Feb 9 00:44:24.760564 env[1124]: time="2024-02-09T00:44:24.760182943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:44:24.760564 env[1124]: time="2024-02-09T00:44:24.760232349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:44:24.760564 env[1124]: time="2024-02-09T00:44:24.760243149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:44:24.760564 env[1124]: time="2024-02-09T00:44:24.760403699Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/33606b474e22a57861fdc891a8cfeae478442b949e345452ce29127b90d7d48a pid=3244 runtime=io.containerd.runc.v2 Feb 9 00:44:24.769846 systemd[1]: Started cri-containerd-7e9b5323b555e18244ab4982db2b71e25d423c7cbe5c683231ac777d29ead2ce.scope. Feb 9 00:44:24.782786 systemd[1]: Started cri-containerd-33606b474e22a57861fdc891a8cfeae478442b949e345452ce29127b90d7d48a.scope. 
Feb 9 00:44:24.785826 systemd-resolved[1063]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 00:44:24.796731 systemd-resolved[1063]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 00:44:24.826478 env[1124]: time="2024-02-09T00:44:24.826423826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-2qjnh,Uid:e74609ec-d453-490b-9f2e-e445053f6ba4,Namespace:kube-system,Attempt:0,} returns sandbox id \"33606b474e22a57861fdc891a8cfeae478442b949e345452ce29127b90d7d48a\"" Feb 9 00:44:24.827040 env[1124]: time="2024-02-09T00:44:24.827009006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-lwlv7,Uid:ad8ee71b-fa05-4b6c-af4e-1395c5a49123,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e9b5323b555e18244ab4982db2b71e25d423c7cbe5c683231ac777d29ead2ce\"" Feb 9 00:44:24.827304 kubelet[1972]: E0209 00:44:24.827276 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:24.829102 env[1124]: time="2024-02-09T00:44:24.829074605Z" level=info msg="CreateContainer within sandbox \"33606b474e22a57861fdc891a8cfeae478442b949e345452ce29127b90d7d48a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 00:44:24.829362 kubelet[1972]: E0209 00:44:24.829332 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:24.831109 env[1124]: time="2024-02-09T00:44:24.831082381Z" level=info msg="CreateContainer within sandbox \"7e9b5323b555e18244ab4982db2b71e25d423c7cbe5c683231ac777d29ead2ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 00:44:25.753376 systemd[1]: 
run-containerd-runc-k8s.io-33606b474e22a57861fdc891a8cfeae478442b949e345452ce29127b90d7d48a-runc.ytRybh.mount: Deactivated successfully. Feb 9 00:44:26.176262 env[1124]: time="2024-02-09T00:44:26.176201011Z" level=info msg="CreateContainer within sandbox \"7e9b5323b555e18244ab4982db2b71e25d423c7cbe5c683231ac777d29ead2ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7359855c629d8b47212d4b00b14eeaf50130ef6e1d8f7f93209892a889864221\"" Feb 9 00:44:26.176993 env[1124]: time="2024-02-09T00:44:26.176829654Z" level=info msg="StartContainer for \"7359855c629d8b47212d4b00b14eeaf50130ef6e1d8f7f93209892a889864221\"" Feb 9 00:44:26.194201 systemd[1]: Started cri-containerd-7359855c629d8b47212d4b00b14eeaf50130ef6e1d8f7f93209892a889864221.scope. Feb 9 00:44:26.309298 env[1124]: time="2024-02-09T00:44:26.309236238Z" level=info msg="CreateContainer within sandbox \"33606b474e22a57861fdc891a8cfeae478442b949e345452ce29127b90d7d48a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b4cc5f930aedc6e644e68575901dca60d448845b48ebc90c462c0adaa4575585\"" Feb 9 00:44:26.309956 env[1124]: time="2024-02-09T00:44:26.309933292Z" level=info msg="StartContainer for \"b4cc5f930aedc6e644e68575901dca60d448845b48ebc90c462c0adaa4575585\"" Feb 9 00:44:26.332048 systemd[1]: Started cri-containerd-b4cc5f930aedc6e644e68575901dca60d448845b48ebc90c462c0adaa4575585.scope. Feb 9 00:44:26.488664 env[1124]: time="2024-02-09T00:44:26.488543582Z" level=info msg="StartContainer for \"7359855c629d8b47212d4b00b14eeaf50130ef6e1d8f7f93209892a889864221\" returns successfully" Feb 9 00:44:26.703158 env[1124]: time="2024-02-09T00:44:26.703048542Z" level=info msg="StartContainer for \"b4cc5f930aedc6e644e68575901dca60d448845b48ebc90c462c0adaa4575585\" returns successfully" Feb 9 00:44:26.753228 systemd[1]: run-containerd-runc-k8s.io-b4cc5f930aedc6e644e68575901dca60d448845b48ebc90c462c0adaa4575585-runc.2cskrF.mount: Deactivated successfully. 
Feb 9 00:44:26.774781 kubelet[1972]: E0209 00:44:26.774749 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:26.780159 kubelet[1972]: E0209 00:44:26.779411 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:26.861609 kubelet[1972]: I0209 00:44:26.860699 1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-2qjnh" podStartSLOduration=27.860665715 podCreationTimestamp="2024-02-09 00:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:44:26.860058104 +0000 UTC m=+42.283155814" watchObservedRunningTime="2024-02-09 00:44:26.860665715 +0000 UTC m=+42.283763425" Feb 9 00:44:27.647891 kubelet[1972]: I0209 00:44:27.647840 1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-lwlv7" podStartSLOduration=28.647788396 podCreationTimestamp="2024-02-09 00:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:44:27.026518241 +0000 UTC m=+42.449615951" watchObservedRunningTime="2024-02-09 00:44:27.647788396 +0000 UTC m=+43.070886106" Feb 9 00:44:27.781325 kubelet[1972]: E0209 00:44:27.781277 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:27.781699 kubelet[1972]: E0209 00:44:27.781375 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 
9 00:44:28.782230 kubelet[1972]: E0209 00:44:28.782201 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:28.782671 kubelet[1972]: E0209 00:44:28.782204 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:44:28.795791 systemd[1]: Started sshd@8-10.0.0.31:22-10.0.0.1:42008.service. Feb 9 00:44:28.849409 sshd[3381]: Accepted publickey for core from 10.0.0.1 port 42008 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:28.850677 sshd[3381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:28.854320 systemd-logind[1106]: New session 9 of user core. Feb 9 00:44:28.855038 systemd[1]: Started session-9.scope. Feb 9 00:44:28.970430 sshd[3381]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:28.972399 systemd[1]: sshd@8-10.0.0.31:22-10.0.0.1:42008.service: Deactivated successfully. Feb 9 00:44:28.973203 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 00:44:28.973809 systemd-logind[1106]: Session 9 logged out. Waiting for processes to exit. Feb 9 00:44:28.974547 systemd-logind[1106]: Removed session 9. Feb 9 00:44:33.979187 systemd[1]: Started sshd@9-10.0.0.31:22-10.0.0.1:42024.service. Feb 9 00:44:34.075192 sshd[3397]: Accepted publickey for core from 10.0.0.1 port 42024 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:34.076936 sshd[3397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:34.098301 systemd-logind[1106]: New session 10 of user core. Feb 9 00:44:34.099099 systemd[1]: Started session-10.scope. 
Feb 9 00:44:34.553230 sshd[3397]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:34.556920 systemd[1]: sshd@9-10.0.0.31:22-10.0.0.1:42024.service: Deactivated successfully. Feb 9 00:44:34.557750 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 00:44:34.561116 systemd[1]: Started sshd@10-10.0.0.31:22-10.0.0.1:42028.service. Feb 9 00:44:34.562552 systemd-logind[1106]: Session 10 logged out. Waiting for processes to exit. Feb 9 00:44:34.564969 systemd-logind[1106]: Removed session 10. Feb 9 00:44:34.661862 sshd[3411]: Accepted publickey for core from 10.0.0.1 port 42028 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:34.666544 sshd[3411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:34.680072 systemd-logind[1106]: New session 11 of user core. Feb 9 00:44:34.680369 systemd[1]: Started session-11.scope. Feb 9 00:44:35.547658 sshd[3411]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:35.550799 systemd[1]: Started sshd@11-10.0.0.31:22-10.0.0.1:42042.service. Feb 9 00:44:35.552816 systemd[1]: sshd@10-10.0.0.31:22-10.0.0.1:42028.service: Deactivated successfully. Feb 9 00:44:35.553423 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 00:44:35.554291 systemd-logind[1106]: Session 11 logged out. Waiting for processes to exit. Feb 9 00:44:35.555292 systemd-logind[1106]: Removed session 11. Feb 9 00:44:35.595302 sshd[3422]: Accepted publickey for core from 10.0.0.1 port 42042 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:35.596494 sshd[3422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:35.599901 systemd-logind[1106]: New session 12 of user core. Feb 9 00:44:35.600687 systemd[1]: Started session-12.scope. 
Feb 9 00:44:35.793091 sshd[3422]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:35.795542 systemd[1]: sshd@11-10.0.0.31:22-10.0.0.1:42042.service: Deactivated successfully. Feb 9 00:44:35.796480 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 00:44:35.797218 systemd-logind[1106]: Session 12 logged out. Waiting for processes to exit. Feb 9 00:44:35.798018 systemd-logind[1106]: Removed session 12. Feb 9 00:44:40.793747 systemd[1]: Started sshd@12-10.0.0.31:22-10.0.0.1:57114.service. Feb 9 00:44:40.835365 sshd[3437]: Accepted publickey for core from 10.0.0.1 port 57114 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:40.836631 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:40.840335 systemd-logind[1106]: New session 13 of user core. Feb 9 00:44:40.841371 systemd[1]: Started session-13.scope. Feb 9 00:44:40.948782 sshd[3437]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:40.951585 systemd[1]: sshd@12-10.0.0.31:22-10.0.0.1:57114.service: Deactivated successfully. Feb 9 00:44:40.952433 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 00:44:40.953333 systemd-logind[1106]: Session 13 logged out. Waiting for processes to exit. Feb 9 00:44:40.954033 systemd-logind[1106]: Removed session 13. Feb 9 00:44:45.953505 systemd[1]: Started sshd@13-10.0.0.31:22-10.0.0.1:57118.service. Feb 9 00:44:45.992449 sshd[3453]: Accepted publickey for core from 10.0.0.1 port 57118 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:45.993557 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:45.996680 systemd-logind[1106]: New session 14 of user core. Feb 9 00:44:45.997535 systemd[1]: Started session-14.scope. 
Feb 9 00:44:46.104320 sshd[3453]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:46.106351 systemd[1]: sshd@13-10.0.0.31:22-10.0.0.1:57118.service: Deactivated successfully. Feb 9 00:44:46.107052 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 00:44:46.107701 systemd-logind[1106]: Session 14 logged out. Waiting for processes to exit. Feb 9 00:44:46.108421 systemd-logind[1106]: Removed session 14. Feb 9 00:44:51.116843 systemd[1]: Started sshd@14-10.0.0.31:22-10.0.0.1:55804.service. Feb 9 00:44:51.197812 sshd[3466]: Accepted publickey for core from 10.0.0.1 port 55804 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:51.200193 sshd[3466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:51.237616 systemd[1]: Started session-15.scope. Feb 9 00:44:51.239756 systemd-logind[1106]: New session 15 of user core. Feb 9 00:44:51.557583 sshd[3466]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:51.566308 systemd[1]: Started sshd@15-10.0.0.31:22-10.0.0.1:55820.service. Feb 9 00:44:51.575521 systemd[1]: sshd@14-10.0.0.31:22-10.0.0.1:55804.service: Deactivated successfully. Feb 9 00:44:51.576524 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 00:44:51.578018 systemd-logind[1106]: Session 15 logged out. Waiting for processes to exit. Feb 9 00:44:51.579069 systemd-logind[1106]: Removed session 15. Feb 9 00:44:51.651797 sshd[3478]: Accepted publickey for core from 10.0.0.1 port 55820 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:51.655934 sshd[3478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:51.662285 systemd[1]: Started session-16.scope. Feb 9 00:44:51.663331 systemd-logind[1106]: New session 16 of user core. 
Feb 9 00:44:53.087619 sshd[3478]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:53.091698 systemd[1]: sshd@15-10.0.0.31:22-10.0.0.1:55820.service: Deactivated successfully. Feb 9 00:44:53.092452 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 00:44:53.093254 systemd-logind[1106]: Session 16 logged out. Waiting for processes to exit. Feb 9 00:44:53.094569 systemd[1]: Started sshd@16-10.0.0.31:22-10.0.0.1:55836.service. Feb 9 00:44:53.095479 systemd-logind[1106]: Removed session 16. Feb 9 00:44:53.200808 sshd[3490]: Accepted publickey for core from 10.0.0.1 port 55836 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:53.202453 sshd[3490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:53.214057 systemd-logind[1106]: New session 17 of user core. Feb 9 00:44:53.215028 systemd[1]: Started session-17.scope. Feb 9 00:44:54.877673 systemd[1]: Started sshd@17-10.0.0.31:22-10.0.0.1:55838.service. Feb 9 00:44:54.878209 sshd[3490]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:54.906590 systemd[1]: sshd@16-10.0.0.31:22-10.0.0.1:55836.service: Deactivated successfully. Feb 9 00:44:54.907967 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 00:44:54.914693 systemd-logind[1106]: Session 17 logged out. Waiting for processes to exit. Feb 9 00:44:54.916600 systemd-logind[1106]: Removed session 17. Feb 9 00:44:54.992450 sshd[3516]: Accepted publickey for core from 10.0.0.1 port 55838 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:54.993011 sshd[3516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:55.015043 systemd-logind[1106]: New session 18 of user core. Feb 9 00:44:55.016249 systemd[1]: Started session-18.scope. Feb 9 00:44:55.877495 sshd[3516]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:55.885006 systemd[1]: Started sshd@18-10.0.0.31:22-10.0.0.1:55840.service. 
Feb 9 00:44:55.893053 systemd[1]: sshd@17-10.0.0.31:22-10.0.0.1:55838.service: Deactivated successfully. Feb 9 00:44:55.894744 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 00:44:55.898471 systemd-logind[1106]: Session 18 logged out. Waiting for processes to exit. Feb 9 00:44:55.913616 systemd-logind[1106]: Removed session 18. Feb 9 00:44:55.986033 sshd[3527]: Accepted publickey for core from 10.0.0.1 port 55840 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:44:55.996561 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:44:56.017496 systemd[1]: Started session-19.scope. Feb 9 00:44:56.019682 systemd-logind[1106]: New session 19 of user core. Feb 9 00:44:56.332431 sshd[3527]: pam_unix(sshd:session): session closed for user core Feb 9 00:44:56.337753 systemd[1]: sshd@18-10.0.0.31:22-10.0.0.1:55840.service: Deactivated successfully. Feb 9 00:44:56.338827 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 00:44:56.339335 systemd-logind[1106]: Session 19 logged out. Waiting for processes to exit. Feb 9 00:44:56.340854 systemd-logind[1106]: Removed session 19. Feb 9 00:45:01.344274 systemd[1]: Started sshd@19-10.0.0.31:22-10.0.0.1:52010.service. Feb 9 00:45:01.432565 sshd[3544]: Accepted publickey for core from 10.0.0.1 port 52010 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:45:01.437532 sshd[3544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:45:01.448767 systemd-logind[1106]: New session 20 of user core. Feb 9 00:45:01.455636 systemd[1]: Started session-20.scope. Feb 9 00:45:01.668763 sshd[3544]: pam_unix(sshd:session): session closed for user core Feb 9 00:45:01.674343 systemd[1]: sshd@19-10.0.0.31:22-10.0.0.1:52010.service: Deactivated successfully. Feb 9 00:45:01.675339 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 00:45:01.677069 systemd-logind[1106]: Session 20 logged out. 
Waiting for processes to exit. Feb 9 00:45:01.679053 systemd-logind[1106]: Removed session 20. Feb 9 00:45:03.675772 kubelet[1972]: E0209 00:45:03.675738 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:45:03.676221 kubelet[1972]: E0209 00:45:03.675795 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:45:05.675668 kubelet[1972]: E0209 00:45:05.675630 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:45:06.668002 systemd[1]: Started sshd@20-10.0.0.31:22-10.0.0.1:36234.service. Feb 9 00:45:06.706872 sshd[3562]: Accepted publickey for core from 10.0.0.1 port 36234 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:45:06.708026 sshd[3562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:45:06.711224 systemd-logind[1106]: New session 21 of user core. Feb 9 00:45:06.712155 systemd[1]: Started session-21.scope. Feb 9 00:45:06.810729 sshd[3562]: pam_unix(sshd:session): session closed for user core Feb 9 00:45:06.812684 systemd[1]: sshd@20-10.0.0.31:22-10.0.0.1:36234.service: Deactivated successfully. Feb 9 00:45:06.813447 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 00:45:06.814185 systemd-logind[1106]: Session 21 logged out. Waiting for processes to exit. Feb 9 00:45:06.814792 systemd-logind[1106]: Removed session 21. Feb 9 00:45:11.815457 systemd[1]: Started sshd@21-10.0.0.31:22-10.0.0.1:36250.service. 
Feb 9 00:45:11.855378 sshd[3575]: Accepted publickey for core from 10.0.0.1 port 36250 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:45:11.856554 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:45:11.860505 systemd-logind[1106]: New session 22 of user core. Feb 9 00:45:11.861513 systemd[1]: Started session-22.scope. Feb 9 00:45:11.962002 sshd[3575]: pam_unix(sshd:session): session closed for user core Feb 9 00:45:11.963946 systemd[1]: sshd@21-10.0.0.31:22-10.0.0.1:36250.service: Deactivated successfully. Feb 9 00:45:11.964737 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 00:45:11.965270 systemd-logind[1106]: Session 22 logged out. Waiting for processes to exit. Feb 9 00:45:11.965899 systemd-logind[1106]: Removed session 22. Feb 9 00:45:16.965906 systemd[1]: Started sshd@22-10.0.0.31:22-10.0.0.1:40622.service. Feb 9 00:45:17.003983 sshd[3588]: Accepted publickey for core from 10.0.0.1 port 40622 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:45:17.004855 sshd[3588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:45:17.007853 systemd-logind[1106]: New session 23 of user core. Feb 9 00:45:17.008722 systemd[1]: Started session-23.scope. Feb 9 00:45:17.107431 sshd[3588]: pam_unix(sshd:session): session closed for user core Feb 9 00:45:17.109741 systemd[1]: sshd@22-10.0.0.31:22-10.0.0.1:40622.service: Deactivated successfully. Feb 9 00:45:17.110458 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 00:45:17.110959 systemd-logind[1106]: Session 23 logged out. Waiting for processes to exit. Feb 9 00:45:17.111576 systemd-logind[1106]: Removed session 23. Feb 9 00:45:22.111790 systemd[1]: Started sshd@23-10.0.0.31:22-10.0.0.1:40638.service. 
Feb 9 00:45:22.150521 sshd[3601]: Accepted publickey for core from 10.0.0.1 port 40638 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:45:22.151716 sshd[3601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:45:22.155044 systemd-logind[1106]: New session 24 of user core. Feb 9 00:45:22.155957 systemd[1]: Started session-24.scope. Feb 9 00:45:22.264258 sshd[3601]: pam_unix(sshd:session): session closed for user core Feb 9 00:45:22.266910 systemd[1]: sshd@23-10.0.0.31:22-10.0.0.1:40638.service: Deactivated successfully. Feb 9 00:45:22.267427 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 00:45:22.268074 systemd-logind[1106]: Session 24 logged out. Waiting for processes to exit. Feb 9 00:45:22.268938 systemd[1]: Started sshd@24-10.0.0.31:22-10.0.0.1:40646.service. Feb 9 00:45:22.269551 systemd-logind[1106]: Removed session 24. Feb 9 00:45:22.310084 sshd[3614]: Accepted publickey for core from 10.0.0.1 port 40646 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:45:22.311349 sshd[3614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:45:22.314526 systemd-logind[1106]: New session 25 of user core. Feb 9 00:45:22.315313 systemd[1]: Started session-25.scope. 
Feb 9 00:45:22.676513 kubelet[1972]: E0209 00:45:22.676472 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:45:23.655863 env[1124]: time="2024-02-09T00:45:23.655819234Z" level=info msg="StopContainer for \"9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966\" with timeout 30 (s)" Feb 9 00:45:23.656211 env[1124]: time="2024-02-09T00:45:23.656181648Z" level=info msg="Stop container \"9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966\" with signal terminated" Feb 9 00:45:23.670096 env[1124]: time="2024-02-09T00:45:23.669992524Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 00:45:23.670325 systemd[1]: cri-containerd-9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966.scope: Deactivated successfully. Feb 9 00:45:23.674124 env[1124]: time="2024-02-09T00:45:23.674081296Z" level=info msg="StopContainer for \"59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84\" with timeout 1 (s)" Feb 9 00:45:23.674681 env[1124]: time="2024-02-09T00:45:23.674640754Z" level=info msg="Stop container \"59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84\" with signal terminated" Feb 9 00:45:23.680527 systemd-networkd[1017]: lxc_health: Link DOWN Feb 9 00:45:23.680534 systemd-networkd[1017]: lxc_health: Lost carrier Feb 9 00:45:23.687406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966-rootfs.mount: Deactivated successfully. 
Feb 9 00:45:23.701878 env[1124]: time="2024-02-09T00:45:23.701825489Z" level=info msg="shim disconnected" id=9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966 Feb 9 00:45:23.701878 env[1124]: time="2024-02-09T00:45:23.701874391Z" level=warning msg="cleaning up after shim disconnected" id=9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966 namespace=k8s.io Feb 9 00:45:23.702081 env[1124]: time="2024-02-09T00:45:23.701886063Z" level=info msg="cleaning up dead shim" Feb 9 00:45:23.708068 systemd[1]: cri-containerd-59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84.scope: Deactivated successfully. Feb 9 00:45:23.708330 systemd[1]: cri-containerd-59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84.scope: Consumed 6.765s CPU time. Feb 9 00:45:23.709626 env[1124]: time="2024-02-09T00:45:23.709604008Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3673 runtime=io.containerd.runc.v2\n" Feb 9 00:45:23.712476 env[1124]: time="2024-02-09T00:45:23.712441703Z" level=info msg="StopContainer for \"9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966\" returns successfully" Feb 9 00:45:23.713054 env[1124]: time="2024-02-09T00:45:23.713028382Z" level=info msg="StopPodSandbox for \"a126fdbb23d9bfb30c0e468040115ec4aa5ab9940f4531b89e71f76ca01efd74\"" Feb 9 00:45:23.713110 env[1124]: time="2024-02-09T00:45:23.713084539Z" level=info msg="Container to stop \"9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 00:45:23.714553 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a126fdbb23d9bfb30c0e468040115ec4aa5ab9940f4531b89e71f76ca01efd74-shm.mount: Deactivated successfully. Feb 9 00:45:23.720395 systemd[1]: cri-containerd-a126fdbb23d9bfb30c0e468040115ec4aa5ab9940f4531b89e71f76ca01efd74.scope: Deactivated successfully. 
Feb 9 00:45:23.726223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84-rootfs.mount: Deactivated successfully. Feb 9 00:45:23.732583 env[1124]: time="2024-02-09T00:45:23.732534277Z" level=info msg="shim disconnected" id=59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84 Feb 9 00:45:23.732798 env[1124]: time="2024-02-09T00:45:23.732780242Z" level=warning msg="cleaning up after shim disconnected" id=59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84 namespace=k8s.io Feb 9 00:45:23.732873 env[1124]: time="2024-02-09T00:45:23.732854663Z" level=info msg="cleaning up dead shim" Feb 9 00:45:23.739273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a126fdbb23d9bfb30c0e468040115ec4aa5ab9940f4531b89e71f76ca01efd74-rootfs.mount: Deactivated successfully. Feb 9 00:45:23.739743 env[1124]: time="2024-02-09T00:45:23.739679268Z" level=info msg="shim disconnected" id=a126fdbb23d9bfb30c0e468040115ec4aa5ab9940f4531b89e71f76ca01efd74 Feb 9 00:45:23.739743 env[1124]: time="2024-02-09T00:45:23.739737077Z" level=warning msg="cleaning up after shim disconnected" id=a126fdbb23d9bfb30c0e468040115ec4aa5ab9940f4531b89e71f76ca01efd74 namespace=k8s.io Feb 9 00:45:23.739743 env[1124]: time="2024-02-09T00:45:23.739745403Z" level=info msg="cleaning up dead shim" Feb 9 00:45:23.739976 env[1124]: time="2024-02-09T00:45:23.739936805Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3715 runtime=io.containerd.runc.v2\n" Feb 9 00:45:23.742361 env[1124]: time="2024-02-09T00:45:23.742318498Z" level=info msg="StopContainer for \"59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84\" returns successfully" Feb 9 00:45:23.742969 env[1124]: time="2024-02-09T00:45:23.742948540Z" level=info msg="StopPodSandbox for \"8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1\"" Feb 9 
00:45:23.743096 env[1124]: time="2024-02-09T00:45:23.743074528Z" level=info msg="Container to stop \"71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 00:45:23.743270 env[1124]: time="2024-02-09T00:45:23.743243858Z" level=info msg="Container to stop \"92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 00:45:23.743357 env[1124]: time="2024-02-09T00:45:23.743335703Z" level=info msg="Container to stop \"59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 00:45:23.743440 env[1124]: time="2024-02-09T00:45:23.743419751Z" level=info msg="Container to stop \"9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 00:45:23.743521 env[1124]: time="2024-02-09T00:45:23.743499902Z" level=info msg="Container to stop \"51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 00:45:23.746055 env[1124]: time="2024-02-09T00:45:23.746019036Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3727 runtime=io.containerd.runc.v2\n" Feb 9 00:45:23.746293 env[1124]: time="2024-02-09T00:45:23.746258378Z" level=info msg="TearDown network for sandbox \"a126fdbb23d9bfb30c0e468040115ec4aa5ab9940f4531b89e71f76ca01efd74\" successfully" Feb 9 00:45:23.746293 env[1124]: time="2024-02-09T00:45:23.746280822Z" level=info msg="StopPodSandbox for \"a126fdbb23d9bfb30c0e468040115ec4aa5ab9940f4531b89e71f76ca01efd74\" returns successfully" Feb 9 00:45:23.748748 systemd[1]: cri-containerd-8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1.scope: Deactivated 
successfully. Feb 9 00:45:23.771612 env[1124]: time="2024-02-09T00:45:23.771557307Z" level=info msg="shim disconnected" id=8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1 Feb 9 00:45:23.771612 env[1124]: time="2024-02-09T00:45:23.771605929Z" level=warning msg="cleaning up after shim disconnected" id=8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1 namespace=k8s.io Feb 9 00:45:23.771612 env[1124]: time="2024-02-09T00:45:23.771614736Z" level=info msg="cleaning up dead shim" Feb 9 00:45:23.778332 env[1124]: time="2024-02-09T00:45:23.778280631Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3757 runtime=io.containerd.runc.v2\n" Feb 9 00:45:23.778588 env[1124]: time="2024-02-09T00:45:23.778563135Z" level=info msg="TearDown network for sandbox \"8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1\" successfully" Feb 9 00:45:23.778653 env[1124]: time="2024-02-09T00:45:23.778587993Z" level=info msg="StopPodSandbox for \"8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1\" returns successfully" Feb 9 00:45:23.868607 kubelet[1972]: I0209 00:45:23.868553 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swvf5\" (UniqueName: \"kubernetes.io/projected/10143a08-930f-4cf6-82ca-e2fed827cd75-kube-api-access-swvf5\") pod \"10143a08-930f-4cf6-82ca-e2fed827cd75\" (UID: \"10143a08-930f-4cf6-82ca-e2fed827cd75\") " Feb 9 00:45:23.868607 kubelet[1972]: I0209 00:45:23.868604 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c693ed1-debc-4c55-8424-5609186a78aa-cilium-config-path\") pod \"8c693ed1-debc-4c55-8424-5609186a78aa\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " Feb 9 00:45:23.868607 kubelet[1972]: I0209 00:45:23.868622 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for 
volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-host-proc-sys-kernel\") pod \"8c693ed1-debc-4c55-8424-5609186a78aa\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " Feb 9 00:45:23.869051 kubelet[1972]: I0209 00:45:23.868639 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-cilium-cgroup\") pod \"8c693ed1-debc-4c55-8424-5609186a78aa\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " Feb 9 00:45:23.869051 kubelet[1972]: I0209 00:45:23.868654 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-xtables-lock\") pod \"8c693ed1-debc-4c55-8424-5609186a78aa\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " Feb 9 00:45:23.869051 kubelet[1972]: I0209 00:45:23.868669 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-etc-cni-netd\") pod \"8c693ed1-debc-4c55-8424-5609186a78aa\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " Feb 9 00:45:23.869051 kubelet[1972]: I0209 00:45:23.868685 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-bpf-maps\") pod \"8c693ed1-debc-4c55-8424-5609186a78aa\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " Feb 9 00:45:23.869051 kubelet[1972]: I0209 00:45:23.868717 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10143a08-930f-4cf6-82ca-e2fed827cd75-cilium-config-path\") pod \"10143a08-930f-4cf6-82ca-e2fed827cd75\" (UID: \"10143a08-930f-4cf6-82ca-e2fed827cd75\") " Feb 9 00:45:23.869051 kubelet[1972]: 
I0209 00:45:23.868738 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8c693ed1-debc-4c55-8424-5609186a78aa-hubble-tls\") pod \"8c693ed1-debc-4c55-8424-5609186a78aa\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " Feb 9 00:45:23.869220 kubelet[1972]: I0209 00:45:23.868759 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-host-proc-sys-net\") pod \"8c693ed1-debc-4c55-8424-5609186a78aa\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " Feb 9 00:45:23.869220 kubelet[1972]: I0209 00:45:23.868752 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8c693ed1-debc-4c55-8424-5609186a78aa" (UID: "8c693ed1-debc-4c55-8424-5609186a78aa"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:23.869220 kubelet[1972]: I0209 00:45:23.868785 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lc6w\" (UniqueName: \"kubernetes.io/projected/8c693ed1-debc-4c55-8424-5609186a78aa-kube-api-access-9lc6w\") pod \"8c693ed1-debc-4c55-8424-5609186a78aa\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " Feb 9 00:45:23.869220 kubelet[1972]: I0209 00:45:23.868872 1972 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:23.869220 kubelet[1972]: W0209 00:45:23.869020 1972 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/8c693ed1-debc-4c55-8424-5609186a78aa/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 00:45:23.869220 kubelet[1972]: I0209 00:45:23.869078 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8c693ed1-debc-4c55-8424-5609186a78aa" (UID: "8c693ed1-debc-4c55-8424-5609186a78aa"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:23.869473 kubelet[1972]: I0209 00:45:23.869109 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8c693ed1-debc-4c55-8424-5609186a78aa" (UID: "8c693ed1-debc-4c55-8424-5609186a78aa"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:23.869473 kubelet[1972]: I0209 00:45:23.869121 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8c693ed1-debc-4c55-8424-5609186a78aa" (UID: "8c693ed1-debc-4c55-8424-5609186a78aa"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:23.869473 kubelet[1972]: I0209 00:45:23.869152 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8c693ed1-debc-4c55-8424-5609186a78aa" (UID: "8c693ed1-debc-4c55-8424-5609186a78aa"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:23.869473 kubelet[1972]: W0209 00:45:23.869261 1972 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/10143a08-930f-4cf6-82ca-e2fed827cd75/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 00:45:23.871635 kubelet[1972]: I0209 00:45:23.871611 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c693ed1-debc-4c55-8424-5609186a78aa-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8c693ed1-debc-4c55-8424-5609186a78aa" (UID: "8c693ed1-debc-4c55-8424-5609186a78aa"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 00:45:23.871758 kubelet[1972]: I0209 00:45:23.871616 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10143a08-930f-4cf6-82ca-e2fed827cd75-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "10143a08-930f-4cf6-82ca-e2fed827cd75" (UID: "10143a08-930f-4cf6-82ca-e2fed827cd75"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 00:45:23.871837 kubelet[1972]: I0209 00:45:23.871633 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8c693ed1-debc-4c55-8424-5609186a78aa" (UID: "8c693ed1-debc-4c55-8424-5609186a78aa"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:23.871968 kubelet[1972]: I0209 00:45:23.871948 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10143a08-930f-4cf6-82ca-e2fed827cd75-kube-api-access-swvf5" (OuterVolumeSpecName: "kube-api-access-swvf5") pod "10143a08-930f-4cf6-82ca-e2fed827cd75" (UID: "10143a08-930f-4cf6-82ca-e2fed827cd75"). InnerVolumeSpecName "kube-api-access-swvf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 00:45:23.872047 kubelet[1972]: I0209 00:45:23.871981 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c693ed1-debc-4c55-8424-5609186a78aa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8c693ed1-debc-4c55-8424-5609186a78aa" (UID: "8c693ed1-debc-4c55-8424-5609186a78aa"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 00:45:23.873928 kubelet[1972]: I0209 00:45:23.873896 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c693ed1-debc-4c55-8424-5609186a78aa-kube-api-access-9lc6w" (OuterVolumeSpecName: "kube-api-access-9lc6w") pod "8c693ed1-debc-4c55-8424-5609186a78aa" (UID: "8c693ed1-debc-4c55-8424-5609186a78aa"). InnerVolumeSpecName "kube-api-access-9lc6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 00:45:23.915802 kubelet[1972]: I0209 00:45:23.914474 1972 scope.go:115] "RemoveContainer" containerID="9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966" Feb 9 00:45:23.917057 env[1124]: time="2024-02-09T00:45:23.916995833Z" level=info msg="RemoveContainer for \"9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966\"" Feb 9 00:45:23.918033 systemd[1]: Removed slice kubepods-besteffort-pod10143a08_930f_4cf6_82ca_e2fed827cd75.slice. Feb 9 00:45:23.921358 env[1124]: time="2024-02-09T00:45:23.921186407Z" level=info msg="RemoveContainer for \"9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966\" returns successfully" Feb 9 00:45:23.921500 kubelet[1972]: I0209 00:45:23.921466 1972 scope.go:115] "RemoveContainer" containerID="9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966" Feb 9 00:45:23.921782 env[1124]: time="2024-02-09T00:45:23.921682085Z" level=error msg="ContainerStatus for \"9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966\": not found" Feb 9 00:45:23.921989 kubelet[1972]: E0209 00:45:23.921959 1972 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966\": not found" containerID="9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966" Feb 9 00:45:23.922063 kubelet[1972]: I0209 00:45:23.922009 1972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966} err="failed to get container status \"9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"9ff2fc94c34a62aab81514b70009d2d8c47f0eaea7da97dc97409e1673e2a966\": not found" Feb 9 00:45:23.922063 kubelet[1972]: I0209 00:45:23.922028 1972 scope.go:115] "RemoveContainer" containerID="59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84" Feb 9 00:45:23.923223 env[1124]: time="2024-02-09T00:45:23.923196718Z" level=info msg="RemoveContainer for \"59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84\"" Feb 9 00:45:23.927027 env[1124]: time="2024-02-09T00:45:23.926989942Z" level=info msg="RemoveContainer for \"59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84\" returns successfully" Feb 9 00:45:23.927622 kubelet[1972]: I0209 00:45:23.927294 1972 scope.go:115] "RemoveContainer" containerID="92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3" Feb 9 00:45:23.928602 env[1124]: time="2024-02-09T00:45:23.928575179Z" level=info msg="RemoveContainer for \"92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3\"" Feb 9 00:45:23.931106 env[1124]: time="2024-02-09T00:45:23.931077090Z" level=info msg="RemoveContainer for \"92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3\" returns successfully" Feb 9 00:45:23.931235 kubelet[1972]: I0209 00:45:23.931217 1972 scope.go:115] "RemoveContainer" containerID="71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5" Feb 9 00:45:23.932062 env[1124]: time="2024-02-09T00:45:23.932035984Z" level=info msg="RemoveContainer for \"71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5\"" Feb 9 00:45:23.934900 env[1124]: time="2024-02-09T00:45:23.934874822Z" level=info msg="RemoveContainer for \"71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5\" returns successfully" Feb 9 00:45:23.936789 kubelet[1972]: I0209 00:45:23.936764 1972 scope.go:115] "RemoveContainer" containerID="51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718" Feb 9 00:45:23.937762 env[1124]: 
time="2024-02-09T00:45:23.937732997Z" level=info msg="RemoveContainer for \"51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718\"" Feb 9 00:45:23.943526 env[1124]: time="2024-02-09T00:45:23.943501334Z" level=info msg="RemoveContainer for \"51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718\" returns successfully" Feb 9 00:45:23.943669 kubelet[1972]: I0209 00:45:23.943631 1972 scope.go:115] "RemoveContainer" containerID="9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2" Feb 9 00:45:23.944390 env[1124]: time="2024-02-09T00:45:23.944367972Z" level=info msg="RemoveContainer for \"9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2\"" Feb 9 00:45:23.947051 env[1124]: time="2024-02-09T00:45:23.947022713Z" level=info msg="RemoveContainer for \"9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2\" returns successfully" Feb 9 00:45:23.947171 kubelet[1972]: I0209 00:45:23.947148 1972 scope.go:115] "RemoveContainer" containerID="59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84" Feb 9 00:45:23.947341 env[1124]: time="2024-02-09T00:45:23.947286922Z" level=error msg="ContainerStatus for \"59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84\": not found" Feb 9 00:45:23.947449 kubelet[1972]: E0209 00:45:23.947433 1972 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84\": not found" containerID="59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84" Feb 9 00:45:23.947491 kubelet[1972]: I0209 00:45:23.947473 1972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd 
ID:59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84} err="failed to get container status \"59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84\": rpc error: code = NotFound desc = an error occurred when try to find container \"59cc9d0f44e5f0d03515d97ed77eaf2ec715b3abbab75e8f678e8af64d0e4e84\": not found" Feb 9 00:45:23.947491 kubelet[1972]: I0209 00:45:23.947485 1972 scope.go:115] "RemoveContainer" containerID="92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3" Feb 9 00:45:23.947697 env[1124]: time="2024-02-09T00:45:23.947643306Z" level=error msg="ContainerStatus for \"92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3\": not found" Feb 9 00:45:23.947793 kubelet[1972]: E0209 00:45:23.947783 1972 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3\": not found" containerID="92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3" Feb 9 00:45:23.947821 kubelet[1972]: I0209 00:45:23.947802 1972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3} err="failed to get container status \"92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"92c944a6e2e3f84511833de1c82e97ddb1c2a08be082cfea8d0079be768612b3\": not found" Feb 9 00:45:23.947821 kubelet[1972]: I0209 00:45:23.947808 1972 scope.go:115] "RemoveContainer" containerID="71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5" Feb 9 00:45:23.947968 env[1124]: time="2024-02-09T00:45:23.947931370Z" level=error 
msg="ContainerStatus for \"71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5\": not found" Feb 9 00:45:23.948110 kubelet[1972]: E0209 00:45:23.948089 1972 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5\": not found" containerID="71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5" Feb 9 00:45:23.948176 kubelet[1972]: I0209 00:45:23.948146 1972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5} err="failed to get container status \"71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"71533b882720c5a5601a90ec173779ce47fbceded2b8a59e063f5d75b189d4d5\": not found" Feb 9 00:45:23.948176 kubelet[1972]: I0209 00:45:23.948163 1972 scope.go:115] "RemoveContainer" containerID="51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718" Feb 9 00:45:23.948362 env[1124]: time="2024-02-09T00:45:23.948324835Z" level=error msg="ContainerStatus for \"51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718\": not found" Feb 9 00:45:23.948455 kubelet[1972]: E0209 00:45:23.948441 1972 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718\": not found" 
containerID="51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718" Feb 9 00:45:23.948482 kubelet[1972]: I0209 00:45:23.948469 1972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718} err="failed to get container status \"51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718\": rpc error: code = NotFound desc = an error occurred when try to find container \"51d1dccc3af466fae28ddbb5931c8317151822f213b47fd92422f3a7abf6a718\": not found" Feb 9 00:45:23.948482 kubelet[1972]: I0209 00:45:23.948478 1972 scope.go:115] "RemoveContainer" containerID="9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2" Feb 9 00:45:23.948661 env[1124]: time="2024-02-09T00:45:23.948611917Z" level=error msg="ContainerStatus for \"9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2\": not found" Feb 9 00:45:23.948763 kubelet[1972]: E0209 00:45:23.948751 1972 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2\": not found" containerID="9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2" Feb 9 00:45:23.948802 kubelet[1972]: I0209 00:45:23.948781 1972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2} err="failed to get container status \"9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c9a8d4ab41f8de1d00a6c2249daba1e710ef5da1ed7a99283a3d9737bb20fb2\": not found" Feb 9 00:45:23.969044 kubelet[1972]: 
I0209 00:45:23.969010 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-hostproc\") pod \"8c693ed1-debc-4c55-8424-5609186a78aa\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " Feb 9 00:45:23.969044 kubelet[1972]: I0209 00:45:23.969049 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8c693ed1-debc-4c55-8424-5609186a78aa-clustermesh-secrets\") pod \"8c693ed1-debc-4c55-8424-5609186a78aa\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " Feb 9 00:45:23.969217 kubelet[1972]: I0209 00:45:23.969065 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-cilium-run\") pod \"8c693ed1-debc-4c55-8424-5609186a78aa\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " Feb 9 00:45:23.969217 kubelet[1972]: I0209 00:45:23.969082 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-lib-modules\") pod \"8c693ed1-debc-4c55-8424-5609186a78aa\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " Feb 9 00:45:23.969217 kubelet[1972]: I0209 00:45:23.969097 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-cni-path\") pod \"8c693ed1-debc-4c55-8424-5609186a78aa\" (UID: \"8c693ed1-debc-4c55-8424-5609186a78aa\") " Feb 9 00:45:23.969217 kubelet[1972]: I0209 00:45:23.969134 1972 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:23.969217 kubelet[1972]: I0209 
00:45:23.969143 1972 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:23.969217 kubelet[1972]: I0209 00:45:23.969151 1972 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:23.969217 kubelet[1972]: I0209 00:45:23.969142 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-hostproc" (OuterVolumeSpecName: "hostproc") pod "8c693ed1-debc-4c55-8424-5609186a78aa" (UID: "8c693ed1-debc-4c55-8424-5609186a78aa"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:23.969380 kubelet[1972]: I0209 00:45:23.969178 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-cni-path" (OuterVolumeSpecName: "cni-path") pod "8c693ed1-debc-4c55-8424-5609186a78aa" (UID: "8c693ed1-debc-4c55-8424-5609186a78aa"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:23.969380 kubelet[1972]: I0209 00:45:23.969179 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8c693ed1-debc-4c55-8424-5609186a78aa" (UID: "8c693ed1-debc-4c55-8424-5609186a78aa"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:23.969380 kubelet[1972]: I0209 00:45:23.969161 1972 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:23.969380 kubelet[1972]: I0209 00:45:23.969239 1972 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10143a08-930f-4cf6-82ca-e2fed827cd75-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:23.969380 kubelet[1972]: I0209 00:45:23.969261 1972 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8c693ed1-debc-4c55-8424-5609186a78aa-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:23.969380 kubelet[1972]: I0209 00:45:23.969274 1972 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:23.969507 kubelet[1972]: I0209 00:45:23.969159 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8c693ed1-debc-4c55-8424-5609186a78aa" (UID: "8c693ed1-debc-4c55-8424-5609186a78aa"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:23.969507 kubelet[1972]: I0209 00:45:23.969288 1972 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9lc6w\" (UniqueName: \"kubernetes.io/projected/8c693ed1-debc-4c55-8424-5609186a78aa-kube-api-access-9lc6w\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:23.969507 kubelet[1972]: I0209 00:45:23.969304 1972 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-swvf5\" (UniqueName: \"kubernetes.io/projected/10143a08-930f-4cf6-82ca-e2fed827cd75-kube-api-access-swvf5\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:23.969507 kubelet[1972]: I0209 00:45:23.969317 1972 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c693ed1-debc-4c55-8424-5609186a78aa-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:23.971988 kubelet[1972]: I0209 00:45:23.971967 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c693ed1-debc-4c55-8424-5609186a78aa-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8c693ed1-debc-4c55-8424-5609186a78aa" (UID: "8c693ed1-debc-4c55-8424-5609186a78aa"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 00:45:24.070251 kubelet[1972]: I0209 00:45:24.070210 1972 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:24.070251 kubelet[1972]: I0209 00:45:24.070242 1972 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:24.070251 kubelet[1972]: I0209 00:45:24.070250 1972 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:24.070251 kubelet[1972]: I0209 00:45:24.070260 1972 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8c693ed1-debc-4c55-8424-5609186a78aa-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:24.070481 kubelet[1972]: I0209 00:45:24.070267 1972 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8c693ed1-debc-4c55-8424-5609186a78aa-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:24.223224 systemd[1]: Removed slice kubepods-burstable-pod8c693ed1_debc_4c55_8424_5609186a78aa.slice. Feb 9 00:45:24.223297 systemd[1]: kubepods-burstable-pod8c693ed1_debc_4c55_8424_5609186a78aa.slice: Consumed 6.857s CPU time. Feb 9 00:45:24.655319 systemd[1]: var-lib-kubelet-pods-10143a08\x2d930f\x2d4cf6\x2d82ca\x2de2fed827cd75-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dswvf5.mount: Deactivated successfully. 
Feb 9 00:45:24.655431 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1-rootfs.mount: Deactivated successfully. Feb 9 00:45:24.655507 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c269179f1c7dc6c6a5ba4c88f1aa2a0644bd9f00267f51a42746c4efaf0bcb1-shm.mount: Deactivated successfully. Feb 9 00:45:24.655579 systemd[1]: var-lib-kubelet-pods-8c693ed1\x2ddebc\x2d4c55\x2d8424\x2d5609186a78aa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9lc6w.mount: Deactivated successfully. Feb 9 00:45:24.655690 systemd[1]: var-lib-kubelet-pods-8c693ed1\x2ddebc\x2d4c55\x2d8424\x2d5609186a78aa-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 00:45:24.655772 systemd[1]: var-lib-kubelet-pods-8c693ed1\x2ddebc\x2d4c55\x2d8424\x2d5609186a78aa-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 00:45:24.678198 kubelet[1972]: I0209 00:45:24.678168 1972 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=10143a08-930f-4cf6-82ca-e2fed827cd75 path="/var/lib/kubelet/pods/10143a08-930f-4cf6-82ca-e2fed827cd75/volumes" Feb 9 00:45:24.678517 kubelet[1972]: I0209 00:45:24.678503 1972 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=8c693ed1-debc-4c55-8424-5609186a78aa path="/var/lib/kubelet/pods/8c693ed1-debc-4c55-8424-5609186a78aa/volumes" Feb 9 00:45:24.742595 kubelet[1972]: E0209 00:45:24.742571 1972 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 00:45:25.620185 sshd[3614]: pam_unix(sshd:session): session closed for user core Feb 9 00:45:25.623291 systemd[1]: sshd@24-10.0.0.31:22-10.0.0.1:40646.service: Deactivated successfully. Feb 9 00:45:25.623905 systemd[1]: session-25.scope: Deactivated successfully. 
Feb 9 00:45:25.624518 systemd-logind[1106]: Session 25 logged out. Waiting for processes to exit. Feb 9 00:45:25.625592 systemd[1]: Started sshd@25-10.0.0.31:22-10.0.0.1:40656.service. Feb 9 00:45:25.626681 systemd-logind[1106]: Removed session 25. Feb 9 00:45:25.668491 sshd[3777]: Accepted publickey for core from 10.0.0.1 port 40656 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:45:25.669818 sshd[3777]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:45:25.673651 systemd-logind[1106]: New session 26 of user core. Feb 9 00:45:25.674375 systemd[1]: Started session-26.scope. Feb 9 00:45:26.101363 sshd[3777]: pam_unix(sshd:session): session closed for user core Feb 9 00:45:26.105583 systemd[1]: Started sshd@26-10.0.0.31:22-10.0.0.1:54358.service. Feb 9 00:45:26.106181 systemd[1]: sshd@25-10.0.0.31:22-10.0.0.1:40656.service: Deactivated successfully. Feb 9 00:45:26.106975 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 00:45:26.109308 systemd-logind[1106]: Session 26 logged out. Waiting for processes to exit. Feb 9 00:45:26.110901 systemd-logind[1106]: Removed session 26. Feb 9 00:45:26.151563 sshd[3789]: Accepted publickey for core from 10.0.0.1 port 54358 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:45:26.152905 sshd[3789]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:45:26.158483 systemd[1]: Started session-27.scope. Feb 9 00:45:26.159468 systemd-logind[1106]: New session 27 of user core. 
Feb 9 00:45:26.174864 kubelet[1972]: I0209 00:45:26.174827 1972 topology_manager.go:212] "Topology Admit Handler" Feb 9 00:45:26.175339 kubelet[1972]: E0209 00:45:26.175322 1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10143a08-930f-4cf6-82ca-e2fed827cd75" containerName="cilium-operator" Feb 9 00:45:26.175454 kubelet[1972]: E0209 00:45:26.175438 1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8c693ed1-debc-4c55-8424-5609186a78aa" containerName="clean-cilium-state" Feb 9 00:45:26.175553 kubelet[1972]: E0209 00:45:26.175537 1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8c693ed1-debc-4c55-8424-5609186a78aa" containerName="apply-sysctl-overwrites" Feb 9 00:45:26.175640 kubelet[1972]: E0209 00:45:26.175624 1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8c693ed1-debc-4c55-8424-5609186a78aa" containerName="mount-bpf-fs" Feb 9 00:45:26.175735 kubelet[1972]: E0209 00:45:26.175720 1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8c693ed1-debc-4c55-8424-5609186a78aa" containerName="cilium-agent" Feb 9 00:45:26.175822 kubelet[1972]: E0209 00:45:26.175806 1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8c693ed1-debc-4c55-8424-5609186a78aa" containerName="mount-cgroup" Feb 9 00:45:26.175930 kubelet[1972]: I0209 00:45:26.175913 1972 memory_manager.go:346] "RemoveStaleState removing state" podUID="8c693ed1-debc-4c55-8424-5609186a78aa" containerName="cilium-agent" Feb 9 00:45:26.176024 kubelet[1972]: I0209 00:45:26.176007 1972 memory_manager.go:346] "RemoveStaleState removing state" podUID="10143a08-930f-4cf6-82ca-e2fed827cd75" containerName="cilium-operator" Feb 9 00:45:26.182590 systemd[1]: Created slice kubepods-burstable-pod03c349b5_1bda_4911_a75e_c883b63f2942.slice. 
Feb 9 00:45:26.281024 kubelet[1972]: I0209 00:45:26.280971 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03c349b5-1bda-4911-a75e-c883b63f2942-clustermesh-secrets\") pod \"cilium-jm6gp\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " pod="kube-system/cilium-jm6gp" Feb 9 00:45:26.281024 kubelet[1972]: I0209 00:45:26.281031 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/03c349b5-1bda-4911-a75e-c883b63f2942-cilium-ipsec-secrets\") pod \"cilium-jm6gp\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " pod="kube-system/cilium-jm6gp" Feb 9 00:45:26.281228 kubelet[1972]: I0209 00:45:26.281062 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-cilium-run\") pod \"cilium-jm6gp\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " pod="kube-system/cilium-jm6gp" Feb 9 00:45:26.281228 kubelet[1972]: I0209 00:45:26.281085 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-lib-modules\") pod \"cilium-jm6gp\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " pod="kube-system/cilium-jm6gp" Feb 9 00:45:26.281228 kubelet[1972]: I0209 00:45:26.281112 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-host-proc-sys-kernel\") pod \"cilium-jm6gp\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " pod="kube-system/cilium-jm6gp" Feb 9 00:45:26.281228 kubelet[1972]: I0209 00:45:26.281153 1972 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-cni-path\") pod \"cilium-jm6gp\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " pod="kube-system/cilium-jm6gp" Feb 9 00:45:26.281228 kubelet[1972]: I0209 00:45:26.281178 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw7qj\" (UniqueName: \"kubernetes.io/projected/03c349b5-1bda-4911-a75e-c883b63f2942-kube-api-access-vw7qj\") pod \"cilium-jm6gp\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " pod="kube-system/cilium-jm6gp" Feb 9 00:45:26.281228 kubelet[1972]: I0209 00:45:26.281199 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03c349b5-1bda-4911-a75e-c883b63f2942-hubble-tls\") pod \"cilium-jm6gp\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " pod="kube-system/cilium-jm6gp" Feb 9 00:45:26.281367 kubelet[1972]: I0209 00:45:26.281223 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-bpf-maps\") pod \"cilium-jm6gp\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " pod="kube-system/cilium-jm6gp" Feb 9 00:45:26.281367 kubelet[1972]: I0209 00:45:26.281245 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-cilium-cgroup\") pod \"cilium-jm6gp\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " pod="kube-system/cilium-jm6gp" Feb 9 00:45:26.281367 kubelet[1972]: I0209 00:45:26.281268 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-hostproc\") pod \"cilium-jm6gp\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " pod="kube-system/cilium-jm6gp" Feb 9 00:45:26.281367 kubelet[1972]: I0209 00:45:26.281295 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-xtables-lock\") pod \"cilium-jm6gp\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " pod="kube-system/cilium-jm6gp" Feb 9 00:45:26.281367 kubelet[1972]: I0209 00:45:26.281318 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-etc-cni-netd\") pod \"cilium-jm6gp\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " pod="kube-system/cilium-jm6gp" Feb 9 00:45:26.281367 kubelet[1972]: I0209 00:45:26.281342 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03c349b5-1bda-4911-a75e-c883b63f2942-cilium-config-path\") pod \"cilium-jm6gp\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " pod="kube-system/cilium-jm6gp" Feb 9 00:45:26.281506 kubelet[1972]: I0209 00:45:26.281364 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-host-proc-sys-net\") pod \"cilium-jm6gp\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " pod="kube-system/cilium-jm6gp" Feb 9 00:45:26.293359 sshd[3789]: pam_unix(sshd:session): session closed for user core Feb 9 00:45:26.296971 systemd[1]: Started sshd@27-10.0.0.31:22-10.0.0.1:54364.service. Feb 9 00:45:26.299401 systemd[1]: sshd@26-10.0.0.31:22-10.0.0.1:54358.service: Deactivated successfully. 
Feb 9 00:45:26.300150 systemd-logind[1106]: Session 27 logged out. Waiting for processes to exit. Feb 9 00:45:26.300184 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 00:45:26.300992 systemd-logind[1106]: Removed session 27. Feb 9 00:45:26.335941 sshd[3802]: Accepted publickey for core from 10.0.0.1 port 54364 ssh2: RSA SHA256:sOEWoh+zUh4IcZRssM/naEKndpgE1eGtzyZR5MeTB1I Feb 9 00:45:26.337424 sshd[3802]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 00:45:26.341252 systemd-logind[1106]: New session 28 of user core. Feb 9 00:45:26.342002 systemd[1]: Started session-28.scope. Feb 9 00:45:26.486109 kubelet[1972]: E0209 00:45:26.486003 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:45:26.486589 env[1124]: time="2024-02-09T00:45:26.486530207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jm6gp,Uid:03c349b5-1bda-4911-a75e-c883b63f2942,Namespace:kube-system,Attempt:0,}" Feb 9 00:45:26.498647 env[1124]: time="2024-02-09T00:45:26.498585125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:45:26.498647 env[1124]: time="2024-02-09T00:45:26.498620973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:45:26.498647 env[1124]: time="2024-02-09T00:45:26.498630371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:45:26.498852 env[1124]: time="2024-02-09T00:45:26.498800922Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4da73933e65206b81a006d3a43a71aff6507bf8b8d40fc1b7b99c62418a81530 pid=3824 runtime=io.containerd.runc.v2 Feb 9 00:45:26.510101 systemd[1]: Started cri-containerd-4da73933e65206b81a006d3a43a71aff6507bf8b8d40fc1b7b99c62418a81530.scope. Feb 9 00:45:26.531638 env[1124]: time="2024-02-09T00:45:26.531579002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jm6gp,Uid:03c349b5-1bda-4911-a75e-c883b63f2942,Namespace:kube-system,Attempt:0,} returns sandbox id \"4da73933e65206b81a006d3a43a71aff6507bf8b8d40fc1b7b99c62418a81530\"" Feb 9 00:45:26.532473 kubelet[1972]: E0209 00:45:26.532432 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:45:26.535396 env[1124]: time="2024-02-09T00:45:26.535356884Z" level=info msg="CreateContainer within sandbox \"4da73933e65206b81a006d3a43a71aff6507bf8b8d40fc1b7b99c62418a81530\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 00:45:26.547389 env[1124]: time="2024-02-09T00:45:26.547334434Z" level=info msg="CreateContainer within sandbox \"4da73933e65206b81a006d3a43a71aff6507bf8b8d40fc1b7b99c62418a81530\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fcf48b89bf5b8a14771f2f3f1a63ee46eac9ac248081c83fc20cdd07d6cb2ac7\"" Feb 9 00:45:26.549227 env[1124]: time="2024-02-09T00:45:26.549189972Z" level=info msg="StartContainer for \"fcf48b89bf5b8a14771f2f3f1a63ee46eac9ac248081c83fc20cdd07d6cb2ac7\"" Feb 9 00:45:26.564418 systemd[1]: Started cri-containerd-fcf48b89bf5b8a14771f2f3f1a63ee46eac9ac248081c83fc20cdd07d6cb2ac7.scope. 
Feb 9 00:45:26.574458 systemd[1]: cri-containerd-fcf48b89bf5b8a14771f2f3f1a63ee46eac9ac248081c83fc20cdd07d6cb2ac7.scope: Deactivated successfully. Feb 9 00:45:26.574860 systemd[1]: Stopped cri-containerd-fcf48b89bf5b8a14771f2f3f1a63ee46eac9ac248081c83fc20cdd07d6cb2ac7.scope. Feb 9 00:45:26.588580 env[1124]: time="2024-02-09T00:45:26.588526589Z" level=info msg="shim disconnected" id=fcf48b89bf5b8a14771f2f3f1a63ee46eac9ac248081c83fc20cdd07d6cb2ac7 Feb 9 00:45:26.588768 env[1124]: time="2024-02-09T00:45:26.588584589Z" level=warning msg="cleaning up after shim disconnected" id=fcf48b89bf5b8a14771f2f3f1a63ee46eac9ac248081c83fc20cdd07d6cb2ac7 namespace=k8s.io Feb 9 00:45:26.588768 env[1124]: time="2024-02-09T00:45:26.588597754Z" level=info msg="cleaning up dead shim" Feb 9 00:45:26.596833 env[1124]: time="2024-02-09T00:45:26.596779680Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3883 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T00:45:26Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fcf48b89bf5b8a14771f2f3f1a63ee46eac9ac248081c83fc20cdd07d6cb2ac7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 00:45:26.597109 env[1124]: time="2024-02-09T00:45:26.597006279Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Feb 9 00:45:26.597258 env[1124]: time="2024-02-09T00:45:26.597219291Z" level=error msg="Failed to pipe stdout of container \"fcf48b89bf5b8a14771f2f3f1a63ee46eac9ac248081c83fc20cdd07d6cb2ac7\"" error="reading from a closed fifo" Feb 9 00:45:26.598672 env[1124]: time="2024-02-09T00:45:26.598623185Z" level=error msg="Failed to pipe stderr of container \"fcf48b89bf5b8a14771f2f3f1a63ee46eac9ac248081c83fc20cdd07d6cb2ac7\"" error="reading from a closed fifo" Feb 9 00:45:26.600816 env[1124]: time="2024-02-09T00:45:26.600778680Z" level=error 
msg="StartContainer for \"fcf48b89bf5b8a14771f2f3f1a63ee46eac9ac248081c83fc20cdd07d6cb2ac7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 00:45:26.601072 kubelet[1972]: E0209 00:45:26.601047 1972 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fcf48b89bf5b8a14771f2f3f1a63ee46eac9ac248081c83fc20cdd07d6cb2ac7" Feb 9 00:45:26.601262 kubelet[1972]: E0209 00:45:26.601237 1972 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 00:45:26.601262 kubelet[1972]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 00:45:26.601262 kubelet[1972]: rm /hostbin/cilium-mount Feb 9 00:45:26.601336 kubelet[1972]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vw7qj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-jm6gp_kube-system(03c349b5-1bda-4911-a75e-c883b63f2942): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 00:45:26.601336 kubelet[1972]: E0209 00:45:26.601289 1972 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable 
to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jm6gp" podUID=03c349b5-1bda-4911-a75e-c883b63f2942 Feb 9 00:45:26.927078 env[1124]: time="2024-02-09T00:45:26.927020708Z" level=info msg="StopPodSandbox for \"4da73933e65206b81a006d3a43a71aff6507bf8b8d40fc1b7b99c62418a81530\"" Feb 9 00:45:26.927250 env[1124]: time="2024-02-09T00:45:26.927097012Z" level=info msg="Container to stop \"fcf48b89bf5b8a14771f2f3f1a63ee46eac9ac248081c83fc20cdd07d6cb2ac7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 00:45:26.933205 systemd[1]: cri-containerd-4da73933e65206b81a006d3a43a71aff6507bf8b8d40fc1b7b99c62418a81530.scope: Deactivated successfully. Feb 9 00:45:26.957239 env[1124]: time="2024-02-09T00:45:26.957176350Z" level=info msg="shim disconnected" id=4da73933e65206b81a006d3a43a71aff6507bf8b8d40fc1b7b99c62418a81530 Feb 9 00:45:26.957239 env[1124]: time="2024-02-09T00:45:26.957234901Z" level=warning msg="cleaning up after shim disconnected" id=4da73933e65206b81a006d3a43a71aff6507bf8b8d40fc1b7b99c62418a81530 namespace=k8s.io Feb 9 00:45:26.957507 env[1124]: time="2024-02-09T00:45:26.957248507Z" level=info msg="cleaning up dead shim" Feb 9 00:45:26.962949 env[1124]: time="2024-02-09T00:45:26.962907554Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3913 runtime=io.containerd.runc.v2\n" Feb 9 00:45:26.963196 env[1124]: time="2024-02-09T00:45:26.963173497Z" level=info msg="TearDown network for sandbox \"4da73933e65206b81a006d3a43a71aff6507bf8b8d40fc1b7b99c62418a81530\" successfully" Feb 9 00:45:26.963249 env[1124]: time="2024-02-09T00:45:26.963195749Z" level=info msg="StopPodSandbox for \"4da73933e65206b81a006d3a43a71aff6507bf8b8d40fc1b7b99c62418a81530\" returns successfully" Feb 9 00:45:27.085217 kubelet[1972]: I0209 00:45:27.085166 1972 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/03c349b5-1bda-4911-a75e-c883b63f2942-cilium-ipsec-secrets\") pod \"03c349b5-1bda-4911-a75e-c883b63f2942\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " Feb 9 00:45:27.085217 kubelet[1972]: I0209 00:45:27.085202 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-cilium-run\") pod \"03c349b5-1bda-4911-a75e-c883b63f2942\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " Feb 9 00:45:27.085217 kubelet[1972]: I0209 00:45:27.085221 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-host-proc-sys-net\") pod \"03c349b5-1bda-4911-a75e-c883b63f2942\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " Feb 9 00:45:27.085651 kubelet[1972]: I0209 00:45:27.085235 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-cilium-cgroup\") pod \"03c349b5-1bda-4911-a75e-c883b63f2942\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " Feb 9 00:45:27.085651 kubelet[1972]: I0209 00:45:27.085249 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-lib-modules\") pod \"03c349b5-1bda-4911-a75e-c883b63f2942\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " Feb 9 00:45:27.085651 kubelet[1972]: I0209 00:45:27.085258 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "03c349b5-1bda-4911-a75e-c883b63f2942" (UID: 
"03c349b5-1bda-4911-a75e-c883b63f2942"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:27.085651 kubelet[1972]: I0209 00:45:27.085270 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03c349b5-1bda-4911-a75e-c883b63f2942-hubble-tls\") pod \"03c349b5-1bda-4911-a75e-c883b63f2942\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " Feb 9 00:45:27.085651 kubelet[1972]: I0209 00:45:27.085326 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-hostproc\") pod \"03c349b5-1bda-4911-a75e-c883b63f2942\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " Feb 9 00:45:27.085651 kubelet[1972]: I0209 00:45:27.085350 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-cni-path\") pod \"03c349b5-1bda-4911-a75e-c883b63f2942\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " Feb 9 00:45:27.085651 kubelet[1972]: I0209 00:45:27.085370 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-etc-cni-netd\") pod \"03c349b5-1bda-4911-a75e-c883b63f2942\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " Feb 9 00:45:27.085651 kubelet[1972]: I0209 00:45:27.085398 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw7qj\" (UniqueName: \"kubernetes.io/projected/03c349b5-1bda-4911-a75e-c883b63f2942-kube-api-access-vw7qj\") pod \"03c349b5-1bda-4911-a75e-c883b63f2942\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " Feb 9 00:45:27.085651 kubelet[1972]: I0209 00:45:27.085423 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started 
for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-xtables-lock\") pod \"03c349b5-1bda-4911-a75e-c883b63f2942\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " Feb 9 00:45:27.085651 kubelet[1972]: I0209 00:45:27.085447 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-host-proc-sys-kernel\") pod \"03c349b5-1bda-4911-a75e-c883b63f2942\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " Feb 9 00:45:27.085651 kubelet[1972]: I0209 00:45:27.085474 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-bpf-maps\") pod \"03c349b5-1bda-4911-a75e-c883b63f2942\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " Feb 9 00:45:27.085651 kubelet[1972]: I0209 00:45:27.085504 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03c349b5-1bda-4911-a75e-c883b63f2942-cilium-config-path\") pod \"03c349b5-1bda-4911-a75e-c883b63f2942\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " Feb 9 00:45:27.085651 kubelet[1972]: I0209 00:45:27.085500 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "03c349b5-1bda-4911-a75e-c883b63f2942" (UID: "03c349b5-1bda-4911-a75e-c883b63f2942"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:27.085651 kubelet[1972]: I0209 00:45:27.085536 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03c349b5-1bda-4911-a75e-c883b63f2942-clustermesh-secrets\") pod \"03c349b5-1bda-4911-a75e-c883b63f2942\" (UID: \"03c349b5-1bda-4911-a75e-c883b63f2942\") " Feb 9 00:45:27.085651 kubelet[1972]: I0209 00:45:27.085535 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-hostproc" (OuterVolumeSpecName: "hostproc") pod "03c349b5-1bda-4911-a75e-c883b63f2942" (UID: "03c349b5-1bda-4911-a75e-c883b63f2942"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:27.086266 kubelet[1972]: I0209 00:45:27.085557 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-cni-path" (OuterVolumeSpecName: "cni-path") pod "03c349b5-1bda-4911-a75e-c883b63f2942" (UID: "03c349b5-1bda-4911-a75e-c883b63f2942"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:27.086266 kubelet[1972]: I0209 00:45:27.085577 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "03c349b5-1bda-4911-a75e-c883b63f2942" (UID: "03c349b5-1bda-4911-a75e-c883b63f2942"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:27.086266 kubelet[1972]: I0209 00:45:27.085593 1972 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:27.086266 kubelet[1972]: I0209 00:45:27.085599 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "03c349b5-1bda-4911-a75e-c883b63f2942" (UID: "03c349b5-1bda-4911-a75e-c883b63f2942"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:27.086266 kubelet[1972]: I0209 00:45:27.085610 1972 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:27.086266 kubelet[1972]: I0209 00:45:27.085623 1972 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:27.086266 kubelet[1972]: I0209 00:45:27.085640 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "03c349b5-1bda-4911-a75e-c883b63f2942" (UID: "03c349b5-1bda-4911-a75e-c883b63f2942"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:27.086266 kubelet[1972]: W0209 00:45:27.085791 1972 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/03c349b5-1bda-4911-a75e-c883b63f2942/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 00:45:27.088010 kubelet[1972]: I0209 00:45:27.086692 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "03c349b5-1bda-4911-a75e-c883b63f2942" (UID: "03c349b5-1bda-4911-a75e-c883b63f2942"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:27.088010 kubelet[1972]: I0209 00:45:27.087321 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03c349b5-1bda-4911-a75e-c883b63f2942-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "03c349b5-1bda-4911-a75e-c883b63f2942" (UID: "03c349b5-1bda-4911-a75e-c883b63f2942"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 00:45:27.088010 kubelet[1972]: I0209 00:45:27.087346 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "03c349b5-1bda-4911-a75e-c883b63f2942" (UID: "03c349b5-1bda-4911-a75e-c883b63f2942"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:27.088010 kubelet[1972]: I0209 00:45:27.087653 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "03c349b5-1bda-4911-a75e-c883b63f2942" (UID: "03c349b5-1bda-4911-a75e-c883b63f2942"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 00:45:27.088010 kubelet[1972]: I0209 00:45:27.087968 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03c349b5-1bda-4911-a75e-c883b63f2942-kube-api-access-vw7qj" (OuterVolumeSpecName: "kube-api-access-vw7qj") pod "03c349b5-1bda-4911-a75e-c883b63f2942" (UID: "03c349b5-1bda-4911-a75e-c883b63f2942"). InnerVolumeSpecName "kube-api-access-vw7qj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 00:45:27.088262 kubelet[1972]: I0209 00:45:27.088178 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03c349b5-1bda-4911-a75e-c883b63f2942-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "03c349b5-1bda-4911-a75e-c883b63f2942" (UID: "03c349b5-1bda-4911-a75e-c883b63f2942"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 00:45:27.088809 kubelet[1972]: I0209 00:45:27.088782 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03c349b5-1bda-4911-a75e-c883b63f2942-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "03c349b5-1bda-4911-a75e-c883b63f2942" (UID: "03c349b5-1bda-4911-a75e-c883b63f2942"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 00:45:27.089161 kubelet[1972]: I0209 00:45:27.089119 1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03c349b5-1bda-4911-a75e-c883b63f2942-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "03c349b5-1bda-4911-a75e-c883b63f2942" (UID: "03c349b5-1bda-4911-a75e-c883b63f2942"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 00:45:27.186043 kubelet[1972]: I0209 00:45:27.185966 1972 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:27.186043 kubelet[1972]: I0209 00:45:27.185996 1972 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03c349b5-1bda-4911-a75e-c883b63f2942-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:27.186043 kubelet[1972]: I0209 00:45:27.186008 1972 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:27.186043 kubelet[1972]: I0209 00:45:27.186019 1972 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:27.186043 kubelet[1972]: I0209 00:45:27.186031 1972 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vw7qj\" (UniqueName: \"kubernetes.io/projected/03c349b5-1bda-4911-a75e-c883b63f2942-kube-api-access-vw7qj\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:27.186043 kubelet[1972]: I0209 00:45:27.186044 1972 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03c349b5-1bda-4911-a75e-c883b63f2942-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:27.186420 kubelet[1972]: I0209 00:45:27.186054 1972 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:27.186420 kubelet[1972]: I0209 
00:45:27.186064 1972 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:27.186420 kubelet[1972]: I0209 00:45:27.186075 1972 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03c349b5-1bda-4911-a75e-c883b63f2942-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:27.186420 kubelet[1972]: I0209 00:45:27.186086 1972 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/03c349b5-1bda-4911-a75e-c883b63f2942-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:27.186420 kubelet[1972]: I0209 00:45:27.186097 1972 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:27.186420 kubelet[1972]: I0209 00:45:27.186110 1972 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03c349b5-1bda-4911-a75e-c883b63f2942-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 00:45:27.387606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4da73933e65206b81a006d3a43a71aff6507bf8b8d40fc1b7b99c62418a81530-rootfs.mount: Deactivated successfully. Feb 9 00:45:27.387693 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4da73933e65206b81a006d3a43a71aff6507bf8b8d40fc1b7b99c62418a81530-shm.mount: Deactivated successfully. Feb 9 00:45:27.387745 systemd[1]: var-lib-kubelet-pods-03c349b5\x2d1bda\x2d4911\x2da75e\x2dc883b63f2942-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvw7qj.mount: Deactivated successfully. 
Feb 9 00:45:27.387796 systemd[1]: var-lib-kubelet-pods-03c349b5\x2d1bda\x2d4911\x2da75e\x2dc883b63f2942-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 00:45:27.387851 systemd[1]: var-lib-kubelet-pods-03c349b5\x2d1bda\x2d4911\x2da75e\x2dc883b63f2942-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 00:45:27.387896 systemd[1]: var-lib-kubelet-pods-03c349b5\x2d1bda\x2d4911\x2da75e\x2dc883b63f2942-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 00:45:27.728199 kubelet[1972]: I0209 00:45:27.728162 1972 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 00:45:27.728097818 +0000 UTC m=+103.151195528 LastTransitionTime:2024-02-09 00:45:27.728097818 +0000 UTC m=+103.151195528 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 00:45:27.929980 kubelet[1972]: I0209 00:45:27.929950 1972 scope.go:115] "RemoveContainer" containerID="fcf48b89bf5b8a14771f2f3f1a63ee46eac9ac248081c83fc20cdd07d6cb2ac7" Feb 9 00:45:27.930953 env[1124]: time="2024-02-09T00:45:27.930913175Z" level=info msg="RemoveContainer for \"fcf48b89bf5b8a14771f2f3f1a63ee46eac9ac248081c83fc20cdd07d6cb2ac7\"" Feb 9 00:45:27.933805 systemd[1]: Removed slice kubepods-burstable-pod03c349b5_1bda_4911_a75e_c883b63f2942.slice. 
Feb 9 00:45:28.011252 env[1124]: time="2024-02-09T00:45:28.011114930Z" level=info msg="RemoveContainer for \"fcf48b89bf5b8a14771f2f3f1a63ee46eac9ac248081c83fc20cdd07d6cb2ac7\" returns successfully" Feb 9 00:45:28.026985 kubelet[1972]: I0209 00:45:28.026633 1972 topology_manager.go:212] "Topology Admit Handler" Feb 9 00:45:28.026985 kubelet[1972]: E0209 00:45:28.026714 1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="03c349b5-1bda-4911-a75e-c883b63f2942" containerName="mount-cgroup" Feb 9 00:45:28.026985 kubelet[1972]: I0209 00:45:28.026744 1972 memory_manager.go:346] "RemoveStaleState removing state" podUID="03c349b5-1bda-4911-a75e-c883b63f2942" containerName="mount-cgroup" Feb 9 00:45:28.038756 systemd[1]: Created slice kubepods-burstable-pod962b166f_4c8e_48ae_bd1d_7b7ac4f40407.slice. Feb 9 00:45:28.192316 kubelet[1972]: I0209 00:45:28.192279 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh92p\" (UniqueName: \"kubernetes.io/projected/962b166f-4c8e-48ae-bd1d-7b7ac4f40407-kube-api-access-vh92p\") pod \"cilium-kvlgx\" (UID: \"962b166f-4c8e-48ae-bd1d-7b7ac4f40407\") " pod="kube-system/cilium-kvlgx" Feb 9 00:45:28.192316 kubelet[1972]: I0209 00:45:28.192329 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/962b166f-4c8e-48ae-bd1d-7b7ac4f40407-host-proc-sys-net\") pod \"cilium-kvlgx\" (UID: \"962b166f-4c8e-48ae-bd1d-7b7ac4f40407\") " pod="kube-system/cilium-kvlgx" Feb 9 00:45:28.192837 kubelet[1972]: I0209 00:45:28.192415 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/962b166f-4c8e-48ae-bd1d-7b7ac4f40407-cilium-run\") pod \"cilium-kvlgx\" (UID: \"962b166f-4c8e-48ae-bd1d-7b7ac4f40407\") " pod="kube-system/cilium-kvlgx" Feb 9 00:45:28.192837 kubelet[1972]: I0209 
00:45:28.192457 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/962b166f-4c8e-48ae-bd1d-7b7ac4f40407-cilium-cgroup\") pod \"cilium-kvlgx\" (UID: \"962b166f-4c8e-48ae-bd1d-7b7ac4f40407\") " pod="kube-system/cilium-kvlgx" Feb 9 00:45:28.192837 kubelet[1972]: I0209 00:45:28.192476 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/962b166f-4c8e-48ae-bd1d-7b7ac4f40407-xtables-lock\") pod \"cilium-kvlgx\" (UID: \"962b166f-4c8e-48ae-bd1d-7b7ac4f40407\") " pod="kube-system/cilium-kvlgx" Feb 9 00:45:28.192837 kubelet[1972]: I0209 00:45:28.192535 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/962b166f-4c8e-48ae-bd1d-7b7ac4f40407-clustermesh-secrets\") pod \"cilium-kvlgx\" (UID: \"962b166f-4c8e-48ae-bd1d-7b7ac4f40407\") " pod="kube-system/cilium-kvlgx" Feb 9 00:45:28.192837 kubelet[1972]: I0209 00:45:28.192559 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/962b166f-4c8e-48ae-bd1d-7b7ac4f40407-cilium-config-path\") pod \"cilium-kvlgx\" (UID: \"962b166f-4c8e-48ae-bd1d-7b7ac4f40407\") " pod="kube-system/cilium-kvlgx" Feb 9 00:45:28.192837 kubelet[1972]: I0209 00:45:28.192585 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/962b166f-4c8e-48ae-bd1d-7b7ac4f40407-host-proc-sys-kernel\") pod \"cilium-kvlgx\" (UID: \"962b166f-4c8e-48ae-bd1d-7b7ac4f40407\") " pod="kube-system/cilium-kvlgx" Feb 9 00:45:28.192837 kubelet[1972]: I0209 00:45:28.192605 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/962b166f-4c8e-48ae-bd1d-7b7ac4f40407-hubble-tls\") pod \"cilium-kvlgx\" (UID: \"962b166f-4c8e-48ae-bd1d-7b7ac4f40407\") " pod="kube-system/cilium-kvlgx" Feb 9 00:45:28.192837 kubelet[1972]: I0209 00:45:28.192632 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/962b166f-4c8e-48ae-bd1d-7b7ac4f40407-cni-path\") pod \"cilium-kvlgx\" (UID: \"962b166f-4c8e-48ae-bd1d-7b7ac4f40407\") " pod="kube-system/cilium-kvlgx" Feb 9 00:45:28.192837 kubelet[1972]: I0209 00:45:28.192653 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/962b166f-4c8e-48ae-bd1d-7b7ac4f40407-lib-modules\") pod \"cilium-kvlgx\" (UID: \"962b166f-4c8e-48ae-bd1d-7b7ac4f40407\") " pod="kube-system/cilium-kvlgx" Feb 9 00:45:28.192837 kubelet[1972]: I0209 00:45:28.192703 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/962b166f-4c8e-48ae-bd1d-7b7ac4f40407-hostproc\") pod \"cilium-kvlgx\" (UID: \"962b166f-4c8e-48ae-bd1d-7b7ac4f40407\") " pod="kube-system/cilium-kvlgx" Feb 9 00:45:28.192837 kubelet[1972]: I0209 00:45:28.192724 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/962b166f-4c8e-48ae-bd1d-7b7ac4f40407-bpf-maps\") pod \"cilium-kvlgx\" (UID: \"962b166f-4c8e-48ae-bd1d-7b7ac4f40407\") " pod="kube-system/cilium-kvlgx" Feb 9 00:45:28.192837 kubelet[1972]: I0209 00:45:28.192764 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/962b166f-4c8e-48ae-bd1d-7b7ac4f40407-etc-cni-netd\") pod \"cilium-kvlgx\" (UID: 
\"962b166f-4c8e-48ae-bd1d-7b7ac4f40407\") " pod="kube-system/cilium-kvlgx" Feb 9 00:45:28.192837 kubelet[1972]: I0209 00:45:28.192784 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/962b166f-4c8e-48ae-bd1d-7b7ac4f40407-cilium-ipsec-secrets\") pod \"cilium-kvlgx\" (UID: \"962b166f-4c8e-48ae-bd1d-7b7ac4f40407\") " pod="kube-system/cilium-kvlgx" Feb 9 00:45:28.341372 kubelet[1972]: E0209 00:45:28.341331 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:45:28.341892 env[1124]: time="2024-02-09T00:45:28.341840128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kvlgx,Uid:962b166f-4c8e-48ae-bd1d-7b7ac4f40407,Namespace:kube-system,Attempt:0,}" Feb 9 00:45:28.353350 env[1124]: time="2024-02-09T00:45:28.353275220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 00:45:28.353350 env[1124]: time="2024-02-09T00:45:28.353330805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 00:45:28.353350 env[1124]: time="2024-02-09T00:45:28.353344911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 00:45:28.353630 env[1124]: time="2024-02-09T00:45:28.353559517Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c00d8e418f9d25fe63c4fb503bf4b857f316e3d2b5fbd1d8b1bad3ab0639ff8 pid=3940 runtime=io.containerd.runc.v2 Feb 9 00:45:28.365403 systemd[1]: Started cri-containerd-6c00d8e418f9d25fe63c4fb503bf4b857f316e3d2b5fbd1d8b1bad3ab0639ff8.scope. 
Feb 9 00:45:28.390217 env[1124]: time="2024-02-09T00:45:28.390165498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kvlgx,Uid:962b166f-4c8e-48ae-bd1d-7b7ac4f40407,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c00d8e418f9d25fe63c4fb503bf4b857f316e3d2b5fbd1d8b1bad3ab0639ff8\"" Feb 9 00:45:28.391096 kubelet[1972]: E0209 00:45:28.391076 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 00:45:28.393281 env[1124]: time="2024-02-09T00:45:28.393234237Z" level=info msg="CreateContainer within sandbox \"6c00d8e418f9d25fe63c4fb503bf4b857f316e3d2b5fbd1d8b1bad3ab0639ff8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 00:45:28.405843 env[1124]: time="2024-02-09T00:45:28.405780668Z" level=info msg="CreateContainer within sandbox \"6c00d8e418f9d25fe63c4fb503bf4b857f316e3d2b5fbd1d8b1bad3ab0639ff8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d4aed946ad17f89fdf0ca00955ad28bc494561f5115a4839d4e5f8f7343887f5\"" Feb 9 00:45:28.406301 env[1124]: time="2024-02-09T00:45:28.406267758Z" level=info msg="StartContainer for \"d4aed946ad17f89fdf0ca00955ad28bc494561f5115a4839d4e5f8f7343887f5\"" Feb 9 00:45:28.420930 systemd[1]: Started cri-containerd-d4aed946ad17f89fdf0ca00955ad28bc494561f5115a4839d4e5f8f7343887f5.scope. Feb 9 00:45:28.441999 env[1124]: time="2024-02-09T00:45:28.441958810Z" level=info msg="StartContainer for \"d4aed946ad17f89fdf0ca00955ad28bc494561f5115a4839d4e5f8f7343887f5\" returns successfully" Feb 9 00:45:28.448624 systemd[1]: cri-containerd-d4aed946ad17f89fdf0ca00955ad28bc494561f5115a4839d4e5f8f7343887f5.scope: Deactivated successfully. 
Feb 9 00:45:28.478897 env[1124]: time="2024-02-09T00:45:28.478833499Z" level=info msg="shim disconnected" id=d4aed946ad17f89fdf0ca00955ad28bc494561f5115a4839d4e5f8f7343887f5
Feb 9 00:45:28.479202 env[1124]: time="2024-02-09T00:45:28.479177679Z" level=warning msg="cleaning up after shim disconnected" id=d4aed946ad17f89fdf0ca00955ad28bc494561f5115a4839d4e5f8f7343887f5 namespace=k8s.io
Feb 9 00:45:28.479293 env[1124]: time="2024-02-09T00:45:28.479271416Z" level=info msg="cleaning up dead shim"
Feb 9 00:45:28.490258 env[1124]: time="2024-02-09T00:45:28.490195171Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4022 runtime=io.containerd.runc.v2\n"
Feb 9 00:45:28.678211 kubelet[1972]: I0209 00:45:28.678090 1972 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=03c349b5-1bda-4911-a75e-c883b63f2942 path="/var/lib/kubelet/pods/03c349b5-1bda-4911-a75e-c883b63f2942/volumes"
Feb 9 00:45:28.933899 kubelet[1972]: E0209 00:45:28.933785 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:28.935575 env[1124]: time="2024-02-09T00:45:28.935536619Z" level=info msg="CreateContainer within sandbox \"6c00d8e418f9d25fe63c4fb503bf4b857f316e3d2b5fbd1d8b1bad3ab0639ff8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 00:45:29.001781 env[1124]: time="2024-02-09T00:45:29.001731640Z" level=info msg="CreateContainer within sandbox \"6c00d8e418f9d25fe63c4fb503bf4b857f316e3d2b5fbd1d8b1bad3ab0639ff8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fc09dd92b00269bcbf3bda410b53138301d7dac847f74ea0e6d854e7bcb8f5e8\""
Feb 9 00:45:29.002373 env[1124]: time="2024-02-09T00:45:29.002314281Z" level=info msg="StartContainer for \"fc09dd92b00269bcbf3bda410b53138301d7dac847f74ea0e6d854e7bcb8f5e8\""
Feb 9 00:45:29.018139 systemd[1]: Started cri-containerd-fc09dd92b00269bcbf3bda410b53138301d7dac847f74ea0e6d854e7bcb8f5e8.scope.
Feb 9 00:45:29.042003 env[1124]: time="2024-02-09T00:45:29.041956953Z" level=info msg="StartContainer for \"fc09dd92b00269bcbf3bda410b53138301d7dac847f74ea0e6d854e7bcb8f5e8\" returns successfully"
Feb 9 00:45:29.045769 systemd[1]: cri-containerd-fc09dd92b00269bcbf3bda410b53138301d7dac847f74ea0e6d854e7bcb8f5e8.scope: Deactivated successfully.
Feb 9 00:45:29.068761 env[1124]: time="2024-02-09T00:45:29.068711771Z" level=info msg="shim disconnected" id=fc09dd92b00269bcbf3bda410b53138301d7dac847f74ea0e6d854e7bcb8f5e8
Feb 9 00:45:29.068761 env[1124]: time="2024-02-09T00:45:29.068758439Z" level=warning msg="cleaning up after shim disconnected" id=fc09dd92b00269bcbf3bda410b53138301d7dac847f74ea0e6d854e7bcb8f5e8 namespace=k8s.io
Feb 9 00:45:29.068761 env[1124]: time="2024-02-09T00:45:29.068767116Z" level=info msg="cleaning up dead shim"
Feb 9 00:45:29.076165 env[1124]: time="2024-02-09T00:45:29.076092428Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4085 runtime=io.containerd.runc.v2\n"
Feb 9 00:45:29.388005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4aed946ad17f89fdf0ca00955ad28bc494561f5115a4839d4e5f8f7343887f5-rootfs.mount: Deactivated successfully.
Feb 9 00:45:29.693267 kubelet[1972]: W0209 00:45:29.693150 1972 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03c349b5_1bda_4911_a75e_c883b63f2942.slice/cri-containerd-fcf48b89bf5b8a14771f2f3f1a63ee46eac9ac248081c83fc20cdd07d6cb2ac7.scope WatchSource:0}: container "fcf48b89bf5b8a14771f2f3f1a63ee46eac9ac248081c83fc20cdd07d6cb2ac7" in namespace "k8s.io": not found
Feb 9 00:45:29.743465 kubelet[1972]: E0209 00:45:29.743435 1972 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 00:45:29.937014 kubelet[1972]: E0209 00:45:29.936984 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:29.938491 env[1124]: time="2024-02-09T00:45:29.938453644Z" level=info msg="CreateContainer within sandbox \"6c00d8e418f9d25fe63c4fb503bf4b857f316e3d2b5fbd1d8b1bad3ab0639ff8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 00:45:29.953835 env[1124]: time="2024-02-09T00:45:29.953718319Z" level=info msg="CreateContainer within sandbox \"6c00d8e418f9d25fe63c4fb503bf4b857f316e3d2b5fbd1d8b1bad3ab0639ff8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"734ad80aa30858b0b4a9ae13487cfcb376f555b8c896d2fb4653728cbd4288c3\""
Feb 9 00:45:29.954253 env[1124]: time="2024-02-09T00:45:29.954225969Z" level=info msg="StartContainer for \"734ad80aa30858b0b4a9ae13487cfcb376f555b8c896d2fb4653728cbd4288c3\""
Feb 9 00:45:29.969559 systemd[1]: Started cri-containerd-734ad80aa30858b0b4a9ae13487cfcb376f555b8c896d2fb4653728cbd4288c3.scope.
Feb 9 00:45:29.992101 env[1124]: time="2024-02-09T00:45:29.992054181Z" level=info msg="StartContainer for \"734ad80aa30858b0b4a9ae13487cfcb376f555b8c896d2fb4653728cbd4288c3\" returns successfully"
Feb 9 00:45:29.993073 systemd[1]: cri-containerd-734ad80aa30858b0b4a9ae13487cfcb376f555b8c896d2fb4653728cbd4288c3.scope: Deactivated successfully.
Feb 9 00:45:30.014126 env[1124]: time="2024-02-09T00:45:30.014067377Z" level=info msg="shim disconnected" id=734ad80aa30858b0b4a9ae13487cfcb376f555b8c896d2fb4653728cbd4288c3
Feb 9 00:45:30.014126 env[1124]: time="2024-02-09T00:45:30.014119405Z" level=warning msg="cleaning up after shim disconnected" id=734ad80aa30858b0b4a9ae13487cfcb376f555b8c896d2fb4653728cbd4288c3 namespace=k8s.io
Feb 9 00:45:30.014375 env[1124]: time="2024-02-09T00:45:30.014147828Z" level=info msg="cleaning up dead shim"
Feb 9 00:45:30.020619 env[1124]: time="2024-02-09T00:45:30.020565265Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4143 runtime=io.containerd.runc.v2\n"
Feb 9 00:45:30.387884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-734ad80aa30858b0b4a9ae13487cfcb376f555b8c896d2fb4653728cbd4288c3-rootfs.mount: Deactivated successfully.
Feb 9 00:45:30.939580 kubelet[1972]: E0209 00:45:30.939558 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:30.941978 env[1124]: time="2024-02-09T00:45:30.941557532Z" level=info msg="CreateContainer within sandbox \"6c00d8e418f9d25fe63c4fb503bf4b857f316e3d2b5fbd1d8b1bad3ab0639ff8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 00:45:31.211643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1460912904.mount: Deactivated successfully.
Feb 9 00:45:31.216191 env[1124]: time="2024-02-09T00:45:31.216161535Z" level=info msg="CreateContainer within sandbox \"6c00d8e418f9d25fe63c4fb503bf4b857f316e3d2b5fbd1d8b1bad3ab0639ff8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9eca330dd5dd1db0953a75d7f65d80e1ae3abd5072838a27c7360b0d7971c5a1\""
Feb 9 00:45:31.216614 env[1124]: time="2024-02-09T00:45:31.216585145Z" level=info msg="StartContainer for \"9eca330dd5dd1db0953a75d7f65d80e1ae3abd5072838a27c7360b0d7971c5a1\""
Feb 9 00:45:31.230836 systemd[1]: Started cri-containerd-9eca330dd5dd1db0953a75d7f65d80e1ae3abd5072838a27c7360b0d7971c5a1.scope.
Feb 9 00:45:31.250449 systemd[1]: cri-containerd-9eca330dd5dd1db0953a75d7f65d80e1ae3abd5072838a27c7360b0d7971c5a1.scope: Deactivated successfully.
Feb 9 00:45:31.252720 env[1124]: time="2024-02-09T00:45:31.252672856Z" level=info msg="StartContainer for \"9eca330dd5dd1db0953a75d7f65d80e1ae3abd5072838a27c7360b0d7971c5a1\" returns successfully"
Feb 9 00:45:31.270968 env[1124]: time="2024-02-09T00:45:31.270915162Z" level=info msg="shim disconnected" id=9eca330dd5dd1db0953a75d7f65d80e1ae3abd5072838a27c7360b0d7971c5a1
Feb 9 00:45:31.270968 env[1124]: time="2024-02-09T00:45:31.270962752Z" level=warning msg="cleaning up after shim disconnected" id=9eca330dd5dd1db0953a75d7f65d80e1ae3abd5072838a27c7360b0d7971c5a1 namespace=k8s.io
Feb 9 00:45:31.271157 env[1124]: time="2024-02-09T00:45:31.270974083Z" level=info msg="cleaning up dead shim"
Feb 9 00:45:31.276712 env[1124]: time="2024-02-09T00:45:31.276674182Z" level=warning msg="cleanup warnings time=\"2024-02-09T00:45:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4198 runtime=io.containerd.runc.v2\n"
Feb 9 00:45:31.387837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9eca330dd5dd1db0953a75d7f65d80e1ae3abd5072838a27c7360b0d7971c5a1-rootfs.mount: Deactivated successfully.
Feb 9 00:45:31.944701 kubelet[1972]: E0209 00:45:31.944658 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:31.946906 env[1124]: time="2024-02-09T00:45:31.946866552Z" level=info msg="CreateContainer within sandbox \"6c00d8e418f9d25fe63c4fb503bf4b857f316e3d2b5fbd1d8b1bad3ab0639ff8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 00:45:31.966369 env[1124]: time="2024-02-09T00:45:31.966315197Z" level=info msg="CreateContainer within sandbox \"6c00d8e418f9d25fe63c4fb503bf4b857f316e3d2b5fbd1d8b1bad3ab0639ff8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"db3c89dca10a6d264dff2b1743a689bc6276878d5b9235bc34ade7dd570ff824\""
Feb 9 00:45:31.966971 env[1124]: time="2024-02-09T00:45:31.966943334Z" level=info msg="StartContainer for \"db3c89dca10a6d264dff2b1743a689bc6276878d5b9235bc34ade7dd570ff824\""
Feb 9 00:45:31.982874 systemd[1]: Started cri-containerd-db3c89dca10a6d264dff2b1743a689bc6276878d5b9235bc34ade7dd570ff824.scope.
Feb 9 00:45:32.008373 env[1124]: time="2024-02-09T00:45:32.008319134Z" level=info msg="StartContainer for \"db3c89dca10a6d264dff2b1743a689bc6276878d5b9235bc34ade7dd570ff824\" returns successfully"
Feb 9 00:45:32.381159 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 9 00:45:32.801123 kubelet[1972]: W0209 00:45:32.801086 1972 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod962b166f_4c8e_48ae_bd1d_7b7ac4f40407.slice/cri-containerd-d4aed946ad17f89fdf0ca00955ad28bc494561f5115a4839d4e5f8f7343887f5.scope WatchSource:0}: task d4aed946ad17f89fdf0ca00955ad28bc494561f5115a4839d4e5f8f7343887f5 not found: not found
Feb 9 00:45:32.949193 kubelet[1972]: E0209 00:45:32.949168 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:33.033802 kubelet[1972]: I0209 00:45:33.033740 1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kvlgx" podStartSLOduration=5.033686634 podCreationTimestamp="2024-02-09 00:45:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 00:45:33.032769222 +0000 UTC m=+108.455866962" watchObservedRunningTime="2024-02-09 00:45:33.033686634 +0000 UTC m=+108.456784344"
Feb 9 00:45:34.342570 kubelet[1972]: E0209 00:45:34.342535 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:35.229669 systemd-networkd[1017]: lxc_health: Link UP
Feb 9 00:45:35.236754 systemd-networkd[1017]: lxc_health: Gained carrier
Feb 9 00:45:35.237450 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 00:45:35.909113 kubelet[1972]: W0209 00:45:35.909052 1972 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod962b166f_4c8e_48ae_bd1d_7b7ac4f40407.slice/cri-containerd-fc09dd92b00269bcbf3bda410b53138301d7dac847f74ea0e6d854e7bcb8f5e8.scope WatchSource:0}: task fc09dd92b00269bcbf3bda410b53138301d7dac847f74ea0e6d854e7bcb8f5e8 not found: not found
Feb 9 00:45:36.314375 systemd-networkd[1017]: lxc_health: Gained IPv6LL
Feb 9 00:45:36.344219 kubelet[1972]: E0209 00:45:36.344170 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:36.955592 kubelet[1972]: E0209 00:45:36.955562 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:37.958021 kubelet[1972]: E0209 00:45:37.957846 1972 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 00:45:39.018047 kubelet[1972]: W0209 00:45:39.017998 1972 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod962b166f_4c8e_48ae_bd1d_7b7ac4f40407.slice/cri-containerd-734ad80aa30858b0b4a9ae13487cfcb376f555b8c896d2fb4653728cbd4288c3.scope WatchSource:0}: task 734ad80aa30858b0b4a9ae13487cfcb376f555b8c896d2fb4653728cbd4288c3 not found: not found
Feb 9 00:45:41.507420 sshd[3802]: pam_unix(sshd:session): session closed for user core
Feb 9 00:45:41.509473 systemd[1]: sshd@27-10.0.0.31:22-10.0.0.1:54364.service: Deactivated successfully.
Feb 9 00:45:41.510266 systemd[1]: session-28.scope: Deactivated successfully.
Feb 9 00:45:41.510804 systemd-logind[1106]: Session 28 logged out. Waiting for processes to exit.
Feb 9 00:45:41.511557 systemd-logind[1106]: Removed session 28.
Feb 9 00:45:42.127511 kubelet[1972]: W0209 00:45:42.127446 1972 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod962b166f_4c8e_48ae_bd1d_7b7ac4f40407.slice/cri-containerd-9eca330dd5dd1db0953a75d7f65d80e1ae3abd5072838a27c7360b0d7971c5a1.scope WatchSource:0}: task 9eca330dd5dd1db0953a75d7f65d80e1ae3abd5072838a27c7360b0d7971c5a1 not found: not found