Feb 9 19:45:15.857907 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024 Feb 9 19:45:15.857932 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:45:15.857944 kernel: BIOS-provided physical RAM map: Feb 9 19:45:15.857951 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 9 19:45:15.857958 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Feb 9 19:45:15.857965 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 9 19:45:15.857974 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Feb 9 19:45:15.857982 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 9 19:45:15.857989 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Feb 9 19:45:15.857998 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Feb 9 19:45:15.858006 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Feb 9 19:45:15.858013 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Feb 9 19:45:15.858021 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Feb 9 19:45:15.858029 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 9 19:45:15.858039 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Feb 9 19:45:15.858049 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Feb 9 19:45:15.858057 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 9 19:45:15.858065 kernel: NX (Execute Disable) protection: active Feb 9 19:45:15.858073 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable Feb 9 19:45:15.858082 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable Feb 9 19:45:15.858090 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable Feb 9 19:45:15.858098 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable Feb 9 19:45:15.858106 kernel: extended physical RAM map: Feb 9 19:45:15.858114 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 9 19:45:15.858122 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Feb 9 19:45:15.858132 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 9 19:45:15.858140 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Feb 9 19:45:15.858149 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 9 19:45:15.858157 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Feb 9 19:45:15.858165 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Feb 9 19:45:15.858174 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b1aa017] usable Feb 9 19:45:15.858193 kernel: reserve setup_data: [mem 0x000000009b1aa018-0x000000009b1e6e57] usable Feb 9 19:45:15.858202 kernel: reserve setup_data: [mem 0x000000009b1e6e58-0x000000009b3f7017] usable Feb 9 19:45:15.858211 kernel: reserve setup_data: [mem 0x000000009b3f7018-0x000000009b400c57] 
usable Feb 9 19:45:15.858219 kernel: reserve setup_data: [mem 0x000000009b400c58-0x000000009c8eefff] usable Feb 9 19:45:15.858228 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Feb 9 19:45:15.858238 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Feb 9 19:45:15.858246 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 9 19:45:15.858254 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Feb 9 19:45:15.858263 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Feb 9 19:45:15.858275 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 9 19:45:15.858284 kernel: efi: EFI v2.70 by EDK II Feb 9 19:45:15.858293 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018 Feb 9 19:45:15.858304 kernel: random: crng init done Feb 9 19:45:15.858312 kernel: SMBIOS 2.8 present. Feb 9 19:45:15.858321 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Feb 9 19:45:15.858331 kernel: Hypervisor detected: KVM Feb 9 19:45:15.858340 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 9 19:45:15.858348 kernel: kvm-clock: cpu 0, msr 6bfaa001, primary cpu clock Feb 9 19:45:15.858357 kernel: kvm-clock: using sched offset of 3921312996 cycles Feb 9 19:45:15.858367 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 9 19:45:15.858377 kernel: tsc: Detected 2794.750 MHz processor Feb 9 19:45:15.858388 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 19:45:15.858397 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 19:45:15.858406 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Feb 9 19:45:15.858415 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 19:45:15.858424 kernel: Using GB pages for direct mapping Feb 9 19:45:15.858432 kernel: Secure boot disabled Feb 9 19:45:15.858441 kernel: ACPI: Early table checksum verification disabled Feb 9 19:45:15.858449 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Feb 9 19:45:15.858458 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Feb 9 19:45:15.858469 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:45:15.858478 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:45:15.858487 kernel: ACPI: FACS 0x000000009CBDD000 000040 Feb 9 19:45:15.858497 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:45:15.858506 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:45:15.858515 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:45:15.858525 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Feb 9 19:45:15.858534 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Feb 9 19:45:15.858543 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Feb 9 19:45:15.858554 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Feb 9 19:45:15.858563 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Feb 9 19:45:15.858572 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Feb 9 19:45:15.858581 kernel: ACPI: Reserving WAET table memory at [mem 
0x9cb77000-0x9cb77027] Feb 9 19:45:15.858590 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Feb 9 19:45:15.858599 kernel: No NUMA configuration found Feb 9 19:45:15.858608 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Feb 9 19:45:15.858618 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Feb 9 19:45:15.858627 kernel: Zone ranges: Feb 9 19:45:15.858682 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 19:45:15.858692 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Feb 9 19:45:15.858701 kernel: Normal empty Feb 9 19:45:15.858711 kernel: Movable zone start for each node Feb 9 19:45:15.858720 kernel: Early memory node ranges Feb 9 19:45:15.858729 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 9 19:45:15.858739 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Feb 9 19:45:15.858751 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Feb 9 19:45:15.858761 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Feb 9 19:45:15.858774 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Feb 9 19:45:15.858784 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Feb 9 19:45:15.858793 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Feb 9 19:45:15.858803 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 19:45:15.858814 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 9 19:45:15.858824 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Feb 9 19:45:15.858834 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 19:45:15.858843 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Feb 9 19:45:15.858853 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Feb 9 19:45:15.858864 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Feb 9 19:45:15.858873 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 9 19:45:15.858883 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 9 19:45:15.858893 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 9 19:45:15.858902 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 9 19:45:15.858912 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 9 19:45:15.858921 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 9 19:45:15.858931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 9 19:45:15.858940 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 9 19:45:15.858951 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 19:45:15.858960 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 9 19:45:15.858970 kernel: TSC deadline timer available Feb 9 19:45:15.858979 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 9 19:45:15.858988 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 9 19:45:15.858998 kernel: kvm-guest: setup PV sched yield Feb 9 19:45:15.859007 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Feb 9 19:45:15.859017 kernel: Booting paravirtualized kernel on KVM Feb 9 19:45:15.859026 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 19:45:15.859036 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Feb 9 19:45:15.859047 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Feb 9 19:45:15.859057 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Feb 
9 19:45:15.859071 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 9 19:45:15.859082 kernel: kvm-guest: setup async PF for cpu 0 Feb 9 19:45:15.859092 kernel: kvm-guest: stealtime: cpu 0, msr 9ae1c0c0 Feb 9 19:45:15.859102 kernel: kvm-guest: PV spinlocks enabled Feb 9 19:45:15.859112 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 9 19:45:15.859122 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Feb 9 19:45:15.859132 kernel: Policy zone: DMA32 Feb 9 19:45:15.859143 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:45:15.859154 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 19:45:15.859165 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 19:45:15.859175 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 19:45:15.859196 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 19:45:15.859207 kernel: Memory: 2400436K/2567000K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 166304K reserved, 0K cma-reserved) Feb 9 19:45:15.859218 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 9 19:45:15.859230 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 19:45:15.859239 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 19:45:15.859248 kernel: rcu: Hierarchical RCU implementation. Feb 9 19:45:15.859258 kernel: rcu: RCU event tracing is enabled. Feb 9 19:45:15.859267 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 9 19:45:15.859276 kernel: Rude variant of Tasks RCU enabled. Feb 9 19:45:15.859285 kernel: Tracing variant of Tasks RCU enabled. Feb 9 19:45:15.859297 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 9 19:45:15.859307 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 9 19:45:15.859317 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 9 19:45:15.859338 kernel: Console: colour dummy device 80x25 Feb 9 19:45:15.859349 kernel: printk: console [ttyS0] enabled Feb 9 19:45:15.859358 kernel: ACPI: Core revision 20210730 Feb 9 19:45:15.859366 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 9 19:45:15.859374 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 19:45:15.859382 kernel: x2apic enabled Feb 9 19:45:15.859391 kernel: Switched APIC routing to physical x2apic. Feb 9 19:45:15.859399 kernel: kvm-guest: setup PV IPIs Feb 9 19:45:15.859409 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 9 19:45:15.859417 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 9 19:45:15.859425 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Feb 9 19:45:15.859433 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 9 19:45:15.859441 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 9 19:45:15.859450 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 9 19:45:15.859458 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 19:45:15.859466 kernel: Spectre V2 : Mitigation: Retpolines Feb 9 19:45:15.859474 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 19:45:15.859483 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 9 19:45:15.859491 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 9 19:45:15.859502 kernel: RETBleed: Mitigation: untrained return thunk Feb 9 19:45:15.859510 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 9 19:45:15.859518 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 9 19:45:15.859527 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 9 19:45:15.859538 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 9 19:45:15.859546 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 9 19:45:15.859554 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 9 19:45:15.859564 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 9 19:45:15.859572 kernel: Freeing SMP alternatives memory: 32K Feb 9 19:45:15.859580 kernel: pid_max: default: 32768 minimum: 301 Feb 9 19:45:15.859588 kernel: LSM: Security Framework initializing Feb 9 19:45:15.859596 kernel: SELinux: Initializing. Feb 9 19:45:15.859604 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 19:45:15.859613 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 19:45:15.859621 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 9 19:45:15.859630 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 9 19:45:15.859645 kernel: ... version: 0 Feb 9 19:45:15.859653 kernel: ... bit width: 48 Feb 9 19:45:15.859662 kernel: ... generic registers: 6 Feb 9 19:45:15.859670 kernel: ... value mask: 0000ffffffffffff Feb 9 19:45:15.859678 kernel: ... max period: 00007fffffffffff Feb 9 19:45:15.859686 kernel: ... fixed-purpose events: 0 Feb 9 19:45:15.859694 kernel: ... event mask: 000000000000003f Feb 9 19:45:15.859702 kernel: signal: max sigframe size: 1776 Feb 9 19:45:15.859710 kernel: rcu: Hierarchical SRCU implementation. Feb 9 19:45:15.859719 kernel: smp: Bringing up secondary CPUs ... Feb 9 19:45:15.859727 kernel: x86: Booting SMP configuration: Feb 9 19:45:15.859735 kernel: .... 
node #0, CPUs: #1 Feb 9 19:45:15.859743 kernel: kvm-clock: cpu 1, msr 6bfaa041, secondary cpu clock Feb 9 19:45:15.859751 kernel: kvm-guest: setup async PF for cpu 1 Feb 9 19:45:15.859759 kernel: kvm-guest: stealtime: cpu 1, msr 9ae9c0c0 Feb 9 19:45:15.859767 kernel: #2 Feb 9 19:45:15.859776 kernel: kvm-clock: cpu 2, msr 6bfaa081, secondary cpu clock Feb 9 19:45:15.859784 kernel: kvm-guest: setup async PF for cpu 2 Feb 9 19:45:15.859793 kernel: kvm-guest: stealtime: cpu 2, msr 9af1c0c0 Feb 9 19:45:15.859801 kernel: #3 Feb 9 19:45:15.859809 kernel: kvm-clock: cpu 3, msr 6bfaa0c1, secondary cpu clock Feb 9 19:45:15.859817 kernel: kvm-guest: setup async PF for cpu 3 Feb 9 19:45:15.859825 kernel: kvm-guest: stealtime: cpu 3, msr 9af9c0c0 Feb 9 19:45:15.859833 kernel: smp: Brought up 1 node, 4 CPUs Feb 9 19:45:15.859841 kernel: smpboot: Max logical packages: 1 Feb 9 19:45:15.859850 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Feb 9 19:45:15.859858 kernel: devtmpfs: initialized Feb 9 19:45:15.859867 kernel: x86/mm: Memory block size: 128MB Feb 9 19:45:15.859875 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Feb 9 19:45:15.859884 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Feb 9 19:45:15.859892 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Feb 9 19:45:15.859900 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Feb 9 19:45:15.859909 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Feb 9 19:45:15.859917 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 19:45:15.859925 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 9 19:45:15.859935 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 19:45:15.859945 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 19:45:15.859953 kernel: audit: initializing netlink subsys (disabled) Feb 9 19:45:15.859961 kernel: audit: type=2000 audit(1707507914.580:1): state=initialized audit_enabled=0 res=1 Feb 9 19:45:15.859971 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 19:45:15.859979 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 19:45:15.859988 kernel: cpuidle: using governor menu Feb 9 19:45:15.859996 kernel: ACPI: bus type PCI registered Feb 9 19:45:15.860004 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 19:45:15.860012 kernel: dca service started, version 1.12.1 Feb 9 19:45:15.860038 kernel: PCI: Using configuration type 1 for base access Feb 9 19:45:15.860047 kernel: PCI: Using configuration type 1 for extended access Feb 9 19:45:15.860055 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 9 19:45:15.860063 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 19:45:15.860071 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 19:45:15.860083 kernel: ACPI: Added _OSI(Module Device) Feb 9 19:45:15.860092 kernel: ACPI: Added _OSI(Processor Device) Feb 9 19:45:15.860101 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 19:45:15.860110 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 19:45:15.860122 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 19:45:15.860135 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 19:45:15.860145 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 19:45:15.860211 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 19:45:15.860221 kernel: ACPI: Interpreter enabled Feb 9 19:45:15.860231 kernel: ACPI: PM: (supports S0 S3 S5) Feb 9 19:45:15.860240 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 19:45:15.860250 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 19:45:15.860260 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 9 19:45:15.860273 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 9 19:45:15.860425 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 9 19:45:15.860443 kernel: acpiphp: Slot [3] registered Feb 9 19:45:15.860453 kernel: acpiphp: Slot [4] registered Feb 9 19:45:15.860463 kernel: acpiphp: Slot [5] registered Feb 9 19:45:15.860473 kernel: acpiphp: Slot [6] registered Feb 9 19:45:15.860482 kernel: acpiphp: Slot [7] registered Feb 9 19:45:15.860491 kernel: acpiphp: Slot [8] registered Feb 9 19:45:15.860501 kernel: acpiphp: Slot [9] registered Feb 9 19:45:15.860514 kernel: acpiphp: Slot [10] registered Feb 9 19:45:15.860523 kernel: acpiphp: Slot [11] registered Feb 9 19:45:15.860533 kernel: acpiphp: Slot [12] registered Feb 9 19:45:15.860543 kernel: acpiphp: Slot [13] registered Feb 9 19:45:15.860553 kernel: acpiphp: Slot [14] registered Feb 9 19:45:15.860563 kernel: acpiphp: Slot [15] registered Feb 9 19:45:15.860573 kernel: acpiphp: Slot [16] registered Feb 9 19:45:15.860582 kernel: acpiphp: Slot [17] registered Feb 9 19:45:15.860592 kernel: acpiphp: Slot [18] registered Feb 9 19:45:15.860603 kernel: acpiphp: Slot [19] registered Feb 9 19:45:15.860613 kernel: acpiphp: Slot [20] registered Feb 9 19:45:15.860623 kernel: acpiphp: Slot [21] registered Feb 9 19:45:15.860633 kernel: acpiphp: Slot [22] registered Feb 9 19:45:15.860652 kernel: acpiphp: Slot [23] registered Feb 9 19:45:15.860662 kernel: acpiphp: Slot [24] registered Feb 9 19:45:15.860672 kernel: acpiphp: Slot [25] registered Feb 9 19:45:15.860681 kernel: acpiphp: Slot [26] registered Feb 9 19:45:15.860691 kernel: acpiphp: Slot [27] registered Feb 9 19:45:15.860701 kernel: acpiphp: Slot [28] registered Feb 9 19:45:15.860713 kernel: acpiphp: Slot [29] registered Feb 9 19:45:15.860723 kernel: acpiphp: Slot [30] registered Feb 9 19:45:15.860733 kernel: acpiphp: Slot [31] registered Feb 9 19:45:15.860743 kernel: PCI host bridge to bus 0000:00 Feb 9 19:45:15.860848 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 9 19:45:15.860937 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 9 19:45:15.861017 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 9 19:45:15.861098 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Feb 9 19:45:15.861227 kernel: pci_bus 0000:00: 
root bus resource [mem 0x800000000-0x87fffffff window] Feb 9 19:45:15.861323 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 9 19:45:15.861428 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 9 19:45:15.861528 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 9 19:45:15.861646 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 9 19:45:15.861743 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Feb 9 19:45:15.861838 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 9 19:45:15.861929 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 9 19:45:15.862029 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 9 19:45:15.862120 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 9 19:45:15.862242 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 9 19:45:15.862339 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 9 19:45:15.862436 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Feb 9 19:45:15.862536 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Feb 9 19:45:15.862630 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Feb 9 19:45:15.862734 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Feb 9 19:45:15.862827 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Feb 9 19:45:15.862921 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Feb 9 19:45:15.863015 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 19:45:15.863120 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Feb 9 19:45:15.863229 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Feb 9 19:45:15.863328 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Feb 9 19:45:15.863424 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Feb 9 19:45:15.863527 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 9 19:45:15.863626 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 9 19:45:15.863737 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Feb 9 19:45:15.863837 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Feb 9 19:45:15.863938 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Feb 9 19:45:15.864033 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Feb 9 19:45:15.864127 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Feb 9 19:45:15.864262 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Feb 9 19:45:15.864359 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Feb 9 19:45:15.864373 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 9 19:45:15.864386 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 9 19:45:15.864396 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 9 19:45:15.864406 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 9 19:45:15.864416 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 9 19:45:15.864426 kernel: iommu: Default domain type: Translated Feb 9 19:45:15.864435 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 19:45:15.864528 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 9 19:45:15.864621 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 19:45:15.864726 kernel: pci 
0000:00:02.0: vgaarb: bridge control possible Feb 9 19:45:15.864743 kernel: vgaarb: loaded Feb 9 19:45:15.864753 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 19:45:15.864763 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 19:45:15.864773 kernel: PTP clock support registered Feb 9 19:45:15.864783 kernel: Registered efivars operations Feb 9 19:45:15.864792 kernel: PCI: Using ACPI for IRQ routing Feb 9 19:45:15.864802 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 19:45:15.864812 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Feb 9 19:45:15.864821 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Feb 9 19:45:15.864833 kernel: e820: reserve RAM buffer [mem 0x9b1aa018-0x9bffffff] Feb 9 19:45:15.864843 kernel: e820: reserve RAM buffer [mem 0x9b3f7018-0x9bffffff] Feb 9 19:45:15.864852 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Feb 9 19:45:15.864862 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Feb 9 19:45:15.864871 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 9 19:45:15.864881 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 9 19:45:15.864890 kernel: clocksource: Switched to clocksource kvm-clock Feb 9 19:45:15.864900 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 19:45:15.864910 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 19:45:15.864921 kernel: pnp: PnP ACPI init Feb 9 19:45:15.865019 kernel: pnp 00:02: [dma 2] Feb 9 19:45:15.865034 kernel: pnp: PnP ACPI: found 6 devices Feb 9 19:45:15.865044 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 19:45:15.865054 kernel: NET: Registered PF_INET protocol family Feb 9 19:45:15.865064 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 19:45:15.865074 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 9 19:45:15.865085 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 19:45:15.865097 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 19:45:15.865107 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 9 19:45:15.865117 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 9 19:45:15.865127 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 19:45:15.865136 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 19:45:15.865146 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 19:45:15.865156 kernel: NET: Registered PF_XDP protocol family Feb 9 19:45:15.865266 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Feb 9 19:45:15.865382 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Feb 9 19:45:15.865492 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 19:45:15.865616 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 19:45:15.865726 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 19:45:15.865813 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Feb 9 19:45:15.865901 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Feb 9 19:45:15.866002 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 9 19:45:15.866106 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 9 19:45:15.866236 kernel: pci 
0000:00:01.0: Activating ISA DMA hang workarounds Feb 9 19:45:15.866253 kernel: PCI: CLS 0 bytes, default 64 Feb 9 19:45:15.866264 kernel: Initialise system trusted keyrings Feb 9 19:45:15.866274 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 9 19:45:15.866285 kernel: Key type asymmetric registered Feb 9 19:45:15.866295 kernel: Asymmetric key parser 'x509' registered Feb 9 19:45:15.866306 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 19:45:15.866317 kernel: io scheduler mq-deadline registered Feb 9 19:45:15.866327 kernel: io scheduler kyber registered Feb 9 19:45:15.866341 kernel: io scheduler bfq registered Feb 9 19:45:15.866352 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 19:45:15.866363 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 9 19:45:15.866374 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 9 19:45:15.866385 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 9 19:45:15.866395 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 19:45:15.866406 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 19:45:15.866416 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 9 19:45:15.866426 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 9 19:45:15.866438 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 9 19:45:15.866539 kernel: rtc_cmos 00:05: RTC can wake from S4 Feb 9 19:45:15.866556 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 9 19:45:15.866648 kernel: rtc_cmos 00:05: registered as rtc0 Feb 9 19:45:15.866742 kernel: rtc_cmos 00:05: setting system clock to 2024-02-09T19:45:15 UTC (1707507915) Feb 9 19:45:15.866833 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Feb 9 19:45:15.866847 kernel: efifb: probing for efifb Feb 9 19:45:15.866857 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Feb 9 19:45:15.866867 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Feb 9 19:45:15.866877 kernel: efifb: scrolling: redraw Feb 9 19:45:15.866887 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 9 19:45:15.866897 kernel: Console: switching to colour frame buffer device 160x50 Feb 9 19:45:15.866907 kernel: fb0: EFI VGA frame buffer device Feb 9 19:45:15.866919 kernel: pstore: Registered efi as persistent store backend Feb 9 19:45:15.866928 kernel: NET: Registered PF_INET6 protocol family Feb 9 19:45:15.866938 kernel: Segment Routing with IPv6 Feb 9 19:45:15.866948 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 19:45:15.866957 kernel: NET: Registered PF_PACKET protocol family Feb 9 19:45:15.866967 kernel: Key type dns_resolver registered Feb 9 19:45:15.866976 kernel: IPI shorthand broadcast: enabled Feb 9 19:45:15.866986 kernel: sched_clock: Marking stable (352416265, 90088366)->(467296791, -24792160) Feb 9 19:45:15.866996 kernel: registered taskstats version 1 Feb 9 19:45:15.867005 kernel: Loading compiled-in X.509 certificates Feb 9 19:45:15.867017 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 9 19:45:15.867027 kernel: Key type .fscrypt registered Feb 9 19:45:15.867036 kernel: Key type fscrypt-provisioning registered Feb 9 19:45:15.867047 kernel: pstore: Using crash dump compression: deflate Feb 9 19:45:15.867058 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 9 19:45:15.867068 kernel: ima: Allocated hash algorithm: sha1 Feb 9 19:45:15.867079 kernel: ima: No architecture policies found Feb 9 19:45:15.867089 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 19:45:15.867101 kernel: Write protecting the kernel read-only data: 28672k Feb 9 19:45:15.867112 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 19:45:15.867122 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 19:45:15.867133 kernel: Run /init as init process Feb 9 19:45:15.867143 kernel: with arguments: Feb 9 19:45:15.867153 kernel: /init Feb 9 19:45:15.867163 kernel: with environment: Feb 9 19:45:15.867176 kernel: HOME=/ Feb 9 19:45:15.867198 kernel: TERM=linux Feb 9 19:45:15.867208 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 19:45:15.867223 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:45:15.867237 systemd[1]: Detected virtualization kvm. Feb 9 19:45:15.867248 systemd[1]: Detected architecture x86-64. Feb 9 19:45:15.867259 systemd[1]: Running in initrd. Feb 9 19:45:15.867270 systemd[1]: No hostname configured, using default hostname. Feb 9 19:45:15.867280 systemd[1]: Hostname set to <localhost>. Feb 9 19:45:15.867294 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:45:15.867307 systemd[1]: Queued start job for default target initrd.target. Feb 9 19:45:15.867318 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:45:15.867328 systemd[1]: Reached target cryptsetup.target. Feb 9 19:45:15.867340 systemd[1]: Reached target paths.target. Feb 9 19:45:15.867351 systemd[1]: Reached target slices.target. Feb 9 19:45:15.867362 systemd[1]: Reached target swap.target. Feb 9 19:45:15.867373 systemd[1]: Reached target timers.target. Feb 9 19:45:15.867386 systemd[1]: Listening on iscsid.socket. Feb 9 19:45:15.867397 systemd[1]: Listening on iscsiuio.socket. Feb 9 19:45:15.867408 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:45:15.867419 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:45:15.867429 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:45:15.867440 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:45:15.867451 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:45:15.867461 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:45:15.867473 systemd[1]: Reached target sockets.target. Feb 9 19:45:15.867486 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:45:15.867497 systemd[1]: Finished network-cleanup.service. Feb 9 19:45:15.867509 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 19:45:15.867520 systemd[1]: Starting systemd-journald.service... Feb 9 19:45:15.867531 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:45:15.867542 systemd[1]: Starting systemd-resolved.service... Feb 9 19:45:15.867553 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 19:45:15.867565 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:45:15.867576 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 19:45:15.867589 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 19:45:15.867599 systemd[1]: Starting dracut-cmdline-ask.service... 
Feb 9 19:45:15.867610 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:45:15.867623 systemd-journald[197]: Journal started Feb 9 19:45:15.867687 systemd-journald[197]: Runtime Journal (/run/log/journal/0ea1c2b867d84dbb98763e8b60b9c545) is 6.0M, max 48.4M, 42.4M free. Feb 9 19:45:15.857900 systemd-modules-load[198]: Inserted module 'overlay' Feb 9 19:45:15.869293 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:45:15.870253 systemd[1]: Started systemd-journald.service. Feb 9 19:45:15.874873 kernel: audit: type=1130 audit(1707507915.869:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:15.874916 kernel: audit: type=1130 audit(1707507915.869:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:15.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:15.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:15.875777 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:45:15.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:15.878129 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:45:15.883121 kernel: audit: type=1130 audit(1707507915.876:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:15.886202 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 19:45:15.888019 dracut-cmdline[215]: dracut-dracut-053 Feb 9 19:45:15.888720 kernel: Bridge firewalling registered Feb 9 19:45:15.888180 systemd-modules-load[198]: Inserted module 'br_netfilter' Feb 9 19:45:15.890980 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:45:15.900677 systemd-resolved[199]: Positive Trust Anchors: Feb 9 19:45:15.900691 systemd-resolved[199]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:45:15.900728 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:45:15.908313 kernel: SCSI subsystem initialized Feb 9 19:45:15.909824 systemd-resolved[199]: Defaulting to hostname 'linux'. Feb 9 19:45:15.911367 systemd[1]: Started systemd-resolved.service. Feb 9 19:45:15.914371 kernel: audit: type=1130 audit(1707507915.911:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:15.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:15.911508 systemd[1]: Reached target nss-lookup.target. Feb 9 19:45:15.921649 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 19:45:15.921679 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:45:15.922563 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:45:15.925217 systemd-modules-load[198]: Inserted module 'dm_multipath' Feb 9 19:45:15.925843 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:45:15.927312 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:45:15.930515 kernel: audit: type=1130 audit(1707507915.926:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:15.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:15.936259 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:45:15.939383 kernel: audit: type=1130 audit(1707507915.936:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:15.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:15.953216 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:45:15.965202 kernel: iscsi: registered transport (tcp) Feb 9 19:45:15.985400 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:45:15.985459 kernel: QLogic iSCSI HBA Driver Feb 9 19:45:16.007334 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:45:16.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:16.008332 systemd[1]: Starting dracut-pre-udev.service... 
Feb 9 19:45:16.012147 kernel: audit: type=1130 audit(1707507916.007:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:16.052203 kernel: raid6: avx2x4 gen() 27779 MB/s Feb 9 19:45:16.069199 kernel: raid6: avx2x4 xor() 6810 MB/s Feb 9 19:45:16.086197 kernel: raid6: avx2x2 gen() 24596 MB/s Feb 9 19:45:16.103197 kernel: raid6: avx2x2 xor() 15285 MB/s Feb 9 19:45:16.120196 kernel: raid6: avx2x1 gen() 19003 MB/s Feb 9 19:45:16.137196 kernel: raid6: avx2x1 xor() 11802 MB/s Feb 9 19:45:16.154196 kernel: raid6: sse2x4 gen() 11497 MB/s Feb 9 19:45:16.171195 kernel: raid6: sse2x4 xor() 5670 MB/s Feb 9 19:45:16.188199 kernel: raid6: sse2x2 gen() 12807 MB/s Feb 9 19:45:16.205198 kernel: raid6: sse2x2 xor() 8291 MB/s Feb 9 19:45:16.222201 kernel: raid6: sse2x1 gen() 10097 MB/s Feb 9 19:45:16.239213 kernel: raid6: sse2x1 xor() 6613 MB/s Feb 9 19:45:16.239226 kernel: raid6: using algorithm avx2x4 gen() 27779 MB/s Feb 9 19:45:16.239235 kernel: raid6: .... xor() 6810 MB/s, rmw enabled Feb 9 19:45:16.240199 kernel: raid6: using avx2x2 recovery algorithm Feb 9 19:45:16.251201 kernel: xor: automatically using best checksumming function avx Feb 9 19:45:16.337210 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:45:16.343681 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:45:16.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:16.344000 audit: BPF prog-id=7 op=LOAD Feb 9 19:45:16.347131 systemd[1]: Starting systemd-udevd.service... Feb 9 19:45:16.347541 kernel: audit: type=1130 audit(1707507916.343:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:16.347557 kernel: audit: type=1334 audit(1707507916.344:10): prog-id=7 op=LOAD Feb 9 19:45:16.346000 audit: BPF prog-id=8 op=LOAD Feb 9 19:45:16.358495 systemd-udevd[400]: Using default interface naming scheme 'v252'. Feb 9 19:45:16.362447 systemd[1]: Started systemd-udevd.service. Feb 9 19:45:16.363246 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:45:16.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:16.372871 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Feb 9 19:45:16.393401 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:45:16.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:16.394243 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:45:16.427283 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:45:16.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:16.453214 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:45:16.453264 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 9 19:45:16.456638 kernel: GPT:Primary header thinks Alt. 
header is not at the end of the disk. Feb 9 19:45:16.456660 kernel: GPT:9289727 != 19775487 Feb 9 19:45:16.456669 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 19:45:16.458482 kernel: GPT:9289727 != 19775487 Feb 9 19:45:16.458501 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 19:45:16.458511 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:45:16.465392 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 19:45:16.465417 kernel: AES CTR mode by8 optimization enabled Feb 9 19:45:16.485206 kernel: libata version 3.00 loaded. Feb 9 19:45:16.502227 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 9 19:45:16.503197 kernel: scsi host0: ata_piix Feb 9 19:45:16.504389 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:45:16.506716 kernel: scsi host1: ata_piix Feb 9 19:45:16.506826 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (455) Feb 9 19:45:16.506836 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 9 19:45:16.506845 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 9 19:45:16.510856 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:45:16.513164 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:45:16.517472 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:45:16.523921 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:45:16.525733 systemd[1]: Starting disk-uuid.service... Feb 9 19:45:16.530863 disk-uuid[534]: Primary Header is updated. Feb 9 19:45:16.530863 disk-uuid[534]: Secondary Entries is updated. Feb 9 19:45:16.530863 disk-uuid[534]: Secondary Header is updated. Feb 9 19:45:16.533548 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:45:16.536203 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:45:16.659217 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 9 19:45:16.659271 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 9 19:45:16.686206 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 9 19:45:16.686365 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 19:45:16.703260 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 9 19:45:17.538208 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:45:17.538482 disk-uuid[536]: The operation has completed successfully. Feb 9 19:45:17.557770 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:45:17.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:17.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:17.557843 systemd[1]: Finished disk-uuid.service. Feb 9 19:45:17.565388 systemd[1]: Starting verity-setup.service... Feb 9 19:45:17.576209 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 9 19:45:17.595787 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:45:17.597641 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:45:17.600109 systemd[1]: Finished verity-setup.service. Feb 9 19:45:17.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 19:45:17.656973 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:45:17.658033 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:45:17.657125 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:45:17.657723 systemd[1]: Starting ignition-setup.service... Feb 9 19:45:17.659362 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:45:17.665483 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:45:17.665515 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:45:17.665525 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:45:17.672803 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:45:17.679617 systemd[1]: Finished ignition-setup.service. Feb 9 19:45:17.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:17.680892 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:45:17.715366 ignition[634]: Ignition 2.14.0 Feb 9 19:45:17.715391 ignition[634]: Stage: fetch-offline Feb 9 19:45:17.715446 ignition[634]: no configs at "/usr/lib/ignition/base.d" Feb 9 19:45:17.715458 ignition[634]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:45:17.715590 ignition[634]: parsed url from cmdline: "" Feb 9 19:45:17.715605 ignition[634]: no config URL provided Feb 9 19:45:17.715611 ignition[634]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:45:17.715621 ignition[634]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:45:17.715642 ignition[634]: op(1): [started] loading QEMU firmware config module Feb 9 19:45:17.715648 ignition[634]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 9 19:45:17.720713 ignition[634]: op(1): [finished] loading QEMU firmware config module Feb 9 19:45:17.727198 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:45:17.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:17.728000 audit: BPF prog-id=9 op=LOAD Feb 9 19:45:17.728854 systemd[1]: Starting systemd-networkd.service... Feb 9 19:45:17.784390 ignition[634]: parsing config with SHA512: b46b715fd8cbb24c12f993a5c82102c963309d74aad4f1fbb73a9a1cd16bb3f89482b9aff626fa855e2e0241e7dbcfcac7dc05e1e0b09c4867ac044fce4cd921 Feb 9 19:45:17.808433 systemd-networkd[716]: lo: Link UP Feb 9 19:45:17.808454 systemd-networkd[716]: lo: Gained carrier Feb 9 19:45:17.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:17.808973 systemd-networkd[716]: Enumeration completed Feb 9 19:45:17.809237 systemd-networkd[716]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:45:17.809312 systemd[1]: Started systemd-networkd.service. Feb 9 19:45:17.810346 systemd[1]: Reached target network.target. Feb 9 19:45:17.810665 systemd-networkd[716]: eth0: Link UP Feb 9 19:45:17.810669 systemd-networkd[716]: eth0: Gained carrier Feb 9 19:45:17.812470 systemd[1]: Starting iscsiuio.service... Feb 9 19:45:17.818493 systemd[1]: Started iscsiuio.service. 
Feb 9 19:45:17.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:17.819982 systemd[1]: Starting iscsid.service... Feb 9 19:45:17.824521 iscsid[722]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:45:17.824521 iscsid[722]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 19:45:17.824521 iscsid[722]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:45:17.824521 iscsid[722]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:45:17.824521 iscsid[722]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:45:17.824521 iscsid[722]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:45:17.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:17.825677 systemd[1]: Started iscsid.service. Feb 9 19:45:17.827266 systemd-networkd[716]: eth0: DHCPv4 address 10.0.0.68/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 19:45:17.830956 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:45:17.837350 unknown[634]: fetched base config from "system" Feb 9 19:45:17.837361 unknown[634]: fetched user config from "qemu" Feb 9 19:45:17.839138 ignition[634]: fetch-offline: fetch-offline passed Feb 9 19:45:17.839224 ignition[634]: Ignition finished successfully Feb 9 19:45:17.841102 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:45:17.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:17.841385 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:45:17.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:17.843170 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:45:17.844381 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:45:17.845019 systemd[1]: Reached target remote-fs.target. Feb 9 19:45:17.847904 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:45:17.847994 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 19:45:17.848723 systemd[1]: Starting ignition-kargs.service... 
Feb 9 19:45:17.857357 ignition[732]: Ignition 2.14.0 Feb 9 19:45:17.857366 ignition[732]: Stage: kargs Feb 9 19:45:17.857447 ignition[732]: no configs at "/usr/lib/ignition/base.d" Feb 9 19:45:17.857455 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:45:17.858561 ignition[732]: kargs: kargs passed Feb 9 19:45:17.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:17.859758 systemd[1]: Finished ignition-kargs.service. Feb 9 19:45:17.858607 ignition[732]: Ignition finished successfully Feb 9 19:45:17.861449 systemd[1]: Starting ignition-disks.service... Feb 9 19:45:17.866039 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:45:17.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:17.867762 ignition[740]: Ignition 2.14.0 Feb 9 19:45:17.867772 ignition[740]: Stage: disks Feb 9 19:45:17.867856 ignition[740]: no configs at "/usr/lib/ignition/base.d" Feb 9 19:45:17.867865 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:45:17.868979 ignition[740]: disks: disks passed Feb 9 19:45:17.869012 ignition[740]: Ignition finished successfully Feb 9 19:45:17.871403 systemd[1]: Finished ignition-disks.service. Feb 9 19:45:17.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:17.872502 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:45:17.873157 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:45:17.874412 systemd[1]: Reached target local-fs.target. Feb 9 19:45:17.874475 systemd[1]: Reached target sysinit.target. Feb 9 19:45:17.874771 systemd[1]: Reached target basic.target. Feb 9 19:45:17.875943 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:45:17.884761 systemd-fsck[750]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 19:45:17.890075 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:45:17.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:17.892783 systemd[1]: Mounting sysroot.mount... Feb 9 19:45:17.898999 systemd[1]: Mounted sysroot.mount. Feb 9 19:45:17.900636 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:45:17.899636 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:45:17.901426 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:45:17.902135 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 19:45:17.902164 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:45:17.902196 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:45:17.904449 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:45:17.906300 systemd[1]: Starting initrd-setup-root.service... 
Feb 9 19:45:17.910417 initrd-setup-root[760]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:45:17.913242 initrd-setup-root[768]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:45:17.916043 initrd-setup-root[776]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:45:17.918703 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:45:17.944631 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:45:17.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:17.945940 systemd[1]: Starting ignition-mount.service... Feb 9 19:45:17.947039 systemd[1]: Starting sysroot-boot.service... Feb 9 19:45:17.950798 bash[801]: umount: /sysroot/usr/share/oem: not mounted. Feb 9 19:45:17.959688 ignition[802]: INFO : Ignition 2.14.0 Feb 9 19:45:17.959688 ignition[802]: INFO : Stage: mount Feb 9 19:45:17.961064 ignition[802]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 19:45:17.961064 ignition[802]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:45:17.961064 ignition[802]: INFO : mount: mount passed Feb 9 19:45:17.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:17.964434 ignition[802]: INFO : Ignition finished successfully Feb 9 19:45:17.961799 systemd[1]: Finished ignition-mount.service. Feb 9 19:45:17.967523 systemd[1]: Finished sysroot-boot.service. Feb 9 19:45:17.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:18.605499 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:45:18.612291 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812) Feb 9 19:45:18.612326 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:45:18.612336 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:45:18.613368 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:45:18.616367 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:45:18.616967 systemd[1]: Starting ignition-files.service... 
Feb 9 19:45:18.629541 ignition[832]: INFO : Ignition 2.14.0 Feb 9 19:45:18.629541 ignition[832]: INFO : Stage: files Feb 9 19:45:18.630766 ignition[832]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 19:45:18.630766 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:45:18.633217 ignition[832]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:45:18.634209 ignition[832]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:45:18.634209 ignition[832]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:45:18.636359 ignition[832]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:45:18.637408 ignition[832]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:45:18.638550 unknown[832]: wrote ssh authorized keys file for user: core Feb 9 19:45:18.639326 ignition[832]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:45:18.640302 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:45:18.640302 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 19:45:18.766721 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 19:45:18.874799 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:45:18.874799 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:45:18.877751 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 19:45:19.222508 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 19:45:19.299651 ignition[832]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 19:45:19.315253 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:45:19.316684 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:45:19.316684 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 19:45:19.322283 systemd-networkd[716]: eth0: Gained IPv6LL Feb 9 19:45:19.627506 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 19:45:19.737319 ignition[832]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 19:45:19.739595 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
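Each GET in the files stage is followed by a digest comparison before the payload is accepted; the "file matches expected sum" lines record that check passing. The same verification, sketched in Python against the crictl sum logged above (this mirrors the check the log shows, not Ignition's internal implementation):

    import hashlib
    import urllib.request

    # URL and expected digest copied verbatim from the log entries above.
    URL = ("https://github.com/kubernetes-sigs/cri-tools/releases/download/"
           "v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz")
    EXPECTED = ("a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c4"
                "7458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449")

    with urllib.request.urlopen(URL) as resp:
        digest = hashlib.sha512(resp.read()).hexdigest()

    # The file is only kept when the digest matches.
    assert digest == EXPECTED, "sha512 mismatch, discarding download"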
"/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:45:19.739595 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:45:19.739595 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:45:19.739595 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:45:19.739595 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 9 19:45:19.806069 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 19:45:20.217369 ignition[832]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 9 19:45:20.217369 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:45:20.217369 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:45:20.221920 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 19:45:20.266692 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 19:45:20.760395 ignition[832]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 19:45:20.763017 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:45:20.763017 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:45:20.763017 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 19:45:20.810926 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 9 19:45:21.072902 ignition[832]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 19:45:21.072902 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:45:21.076219 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:45:21.076219 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 9 19:45:21.470433 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 9 19:45:21.530200 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:45:21.531556 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] 
writing file "/sysroot/home/core/install.sh" Feb 9 19:45:21.532858 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:45:21.534048 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:45:21.535219 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:45:21.535219 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:45:21.537551 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:45:21.537551 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:45:21.537551 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:45:21.541076 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:45:21.541076 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:45:21.541076 ignition[832]: INFO : files: op(10): [started] processing unit "prepare-critools.service" Feb 9 19:45:21.544384 ignition[832]: INFO : files: op(10): op(11): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:45:21.544384 ignition[832]: INFO : files: op(10): op(11): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:45:21.544384 ignition[832]: INFO : files: op(10): [finished] processing unit "prepare-critools.service" Feb 9 19:45:21.544384 ignition[832]: INFO : files: op(12): [started] processing unit "prepare-helm.service" Feb 9 19:45:21.548771 ignition[832]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:45:21.548771 ignition[832]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:45:21.548771 ignition[832]: INFO : files: op(12): [finished] processing unit "prepare-helm.service" Feb 9 19:45:21.548771 ignition[832]: INFO : files: op(14): [started] processing unit "coreos-metadata.service" Feb 9 19:45:21.548771 ignition[832]: INFO : files: op(14): op(15): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 19:45:21.548771 ignition[832]: INFO : files: op(14): op(15): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 19:45:21.548771 ignition[832]: INFO : files: op(14): [finished] processing unit "coreos-metadata.service" Feb 9 19:45:21.548771 ignition[832]: INFO : files: op(16): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:45:21.548771 ignition[832]: INFO : files: op(16): op(17): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:45:21.558847 ignition[832]: INFO : files: op(16): op(17): [finished] writing unit "prepare-cni-plugins.service" at 
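The bodies of the prepare-*.service units being written here are not shown in the log. Purely as an illustrative sketch, a unit of this kind is typically a oneshot that unpacks one of the tarballs fetched earlier, e.g. for prepare-helm.service:

    [Unit]
    Description=Unpack helm release tarball

    [Service]
    Type=oneshot
    RemainAfterExit=true
    # Tarball path matches the file written by the files stage above.
    ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 \
        -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz linux-amd64/helm

    [Install]
    WantedBy=multi-user.target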
"/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:45:21.558847 ignition[832]: INFO : files: op(16): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:45:21.558847 ignition[832]: INFO : files: op(18): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:45:21.558847 ignition[832]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:45:21.558847 ignition[832]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Feb 9 19:45:21.558847 ignition[832]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 19:45:21.558847 ignition[832]: INFO : files: op(1a): [started] setting preset to disabled for "coreos-metadata.service" Feb 9 19:45:21.558847 ignition[832]: INFO : files: op(1a): op(1b): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 19:45:21.583255 ignition[832]: INFO : files: op(1a): op(1b): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 19:45:21.584390 ignition[832]: INFO : files: op(1a): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 19:45:21.584390 ignition[832]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:45:21.584390 ignition[832]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:45:21.584390 ignition[832]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:45:21.584390 ignition[832]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:45:21.584390 ignition[832]: INFO : files: files passed Feb 9 19:45:21.584390 ignition[832]: INFO : Ignition finished successfully Feb 9 19:45:21.591179 systemd[1]: Finished ignition-files.service. Feb 9 19:45:21.595720 kernel: kauditd_printk_skb: 22 callbacks suppressed Feb 9 19:45:21.595747 kernel: audit: type=1130 audit(1707507921.591:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.592576 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 19:45:21.596287 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:45:21.596783 systemd[1]: Starting ignition-quench.service... Feb 9 19:45:21.599813 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:45:21.606003 kernel: audit: type=1130 audit(1707507921.600:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.606019 kernel: audit: type=1131 audit(1707507921.600:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:21.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.599893 systemd[1]: Finished ignition-quench.service. Feb 9 19:45:21.607555 initrd-setup-root-after-ignition[857]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 19:45:21.609730 initrd-setup-root-after-ignition[859]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:45:21.610371 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:45:21.614236 kernel: audit: type=1130 audit(1707507921.610:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.611021 systemd[1]: Reached target ignition-complete.target. Feb 9 19:45:21.615696 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:45:21.628182 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:45:21.628286 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:45:21.633345 kernel: audit: type=1130 audit(1707507921.628:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.633366 kernel: audit: type=1131 audit(1707507921.628:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.629132 systemd[1]: Reached target initrd-fs.target. Feb 9 19:45:21.634429 systemd[1]: Reached target initrd.target. Feb 9 19:45:21.635486 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:45:21.636846 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:45:21.646749 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:45:21.649756 kernel: audit: type=1130 audit(1707507921.646:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.649771 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:45:21.659112 systemd[1]: Stopped target nss-lookup.target. 
Feb 9 19:45:21.659770 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:45:21.660930 systemd[1]: Stopped target timers.target. Feb 9 19:45:21.662014 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:45:21.665618 kernel: audit: type=1131 audit(1707507921.662:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.662101 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:45:21.663147 systemd[1]: Stopped target initrd.target. Feb 9 19:45:21.666203 systemd[1]: Stopped target basic.target. Feb 9 19:45:21.667230 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:45:21.668319 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:45:21.669376 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:45:21.670555 systemd[1]: Stopped target remote-fs.target. Feb 9 19:45:21.671662 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:45:21.672832 systemd[1]: Stopped target sysinit.target. Feb 9 19:45:21.673884 systemd[1]: Stopped target local-fs.target. Feb 9 19:45:21.674992 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:45:21.676054 systemd[1]: Stopped target swap.target. Feb 9 19:45:21.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.677057 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:45:21.681735 kernel: audit: type=1131 audit(1707507921.677:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.677151 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:45:21.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.678232 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:45:21.685819 kernel: audit: type=1131 audit(1707507921.682:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.681100 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:45:21.681175 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:45:21.682363 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:45:21.682439 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:45:21.685393 systemd[1]: Stopped target paths.target. Feb 9 19:45:21.686342 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:45:21.691227 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:45:21.691348 systemd[1]: Stopped target slices.target. Feb 9 19:45:21.692911 systemd[1]: Stopped target sockets.target. 
Feb 9 19:45:21.693938 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:45:21.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.694018 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:45:21.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.695135 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:45:21.695221 systemd[1]: Stopped ignition-files.service. Feb 9 19:45:21.697139 systemd[1]: Stopping ignition-mount.service... Feb 9 19:45:21.698309 systemd[1]: Stopping iscsid.service... Feb 9 19:45:21.700522 iscsid[722]: iscsid shutting down. Feb 9 19:45:21.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.699631 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:45:21.707345 ignition[872]: INFO : Ignition 2.14.0 Feb 9 19:45:21.707345 ignition[872]: INFO : Stage: umount Feb 9 19:45:21.707345 ignition[872]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 19:45:21.707345 ignition[872]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:45:21.707345 ignition[872]: INFO : umount: umount passed Feb 9 19:45:21.707345 ignition[872]: INFO : Ignition finished successfully Feb 9 19:45:21.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.699708 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:45:21.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.699829 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:45:21.703483 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:45:21.703591 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:45:21.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:21.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.706016 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:45:21.706102 systemd[1]: Stopped ignition-mount.service. Feb 9 19:45:21.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.707451 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:45:21.708455 systemd[1]: Stopped ignition-disks.service. Feb 9 19:45:21.711331 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:45:21.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.711477 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:45:21.713658 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:45:21.713690 systemd[1]: Stopped ignition-setup.service. Feb 9 19:45:21.715396 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:45:21.715909 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 19:45:21.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.716017 systemd[1]: Stopped iscsid.service. Feb 9 19:45:21.717135 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:45:21.717216 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:45:21.718153 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:45:21.718228 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:45:21.720115 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:45:21.720151 systemd[1]: Closed iscsid.socket. Feb 9 19:45:21.720760 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:45:21.720796 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:45:21.721933 systemd[1]: Stopping iscsiuio.service... Feb 9 19:45:21.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.725054 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 19:45:21.725117 systemd[1]: Stopped iscsiuio.service. Feb 9 19:45:21.725820 systemd[1]: Stopped target network.target. Feb 9 19:45:21.726936 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:45:21.726963 systemd[1]: Closed iscsiuio.socket. Feb 9 19:45:21.727087 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:45:21.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:21.741000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:45:21.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.727181 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:45:21.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.732233 systemd-networkd[716]: eth0: DHCPv6 lease lost Feb 9 19:45:21.733347 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:45:21.743000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:45:21.733431 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:45:21.734607 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:45:21.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.734673 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:45:21.737263 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:45:21.737287 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:45:21.738731 systemd[1]: Stopping network-cleanup.service... Feb 9 19:45:21.739219 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:45:21.739252 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:45:21.740382 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:45:21.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.740415 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:45:21.741534 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:45:21.741564 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:45:21.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.741691 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:45:21.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.742949 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:45:21.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.745410 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:45:21.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.745475 systemd[1]: Stopped network-cleanup.service. 
Feb 9 19:45:21.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.750591 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:45:21.750703 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:45:21.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:21.751663 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:45:21.751694 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:45:21.752925 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:45:21.752978 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:45:21.754079 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:45:21.754109 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:45:21.755376 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:45:21.755405 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:45:21.756509 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:45:21.756541 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:45:21.757675 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:45:21.758253 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 19:45:21.758289 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 19:45:21.759795 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:45:21.759824 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:45:21.761001 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:45:21.761033 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:45:21.762311 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 19:45:21.762747 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:45:21.762809 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:45:21.763519 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:45:21.765041 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:45:21.778798 systemd[1]: Switching root. Feb 9 19:45:21.796005 systemd-journald[197]: Journal stopped Feb 9 19:45:24.717980 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Feb 9 19:45:24.718029 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:45:24.718042 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 19:45:24.718052 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:45:24.718063 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:45:24.718075 kernel: SELinux: policy capability open_perms=1 Feb 9 19:45:24.718085 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:45:24.718094 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:45:24.718103 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:45:24.718112 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:45:24.718122 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:45:24.718131 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:45:24.718140 systemd[1]: Successfully loaded SELinux policy in 35.258ms. Feb 9 19:45:24.718158 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.563ms. Feb 9 19:45:24.718171 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:45:24.718220 systemd[1]: Detected virtualization kvm. Feb 9 19:45:24.718232 systemd[1]: Detected architecture x86-64. Feb 9 19:45:24.718245 systemd[1]: Detected first boot. Feb 9 19:45:24.718255 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:45:24.718266 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:45:24.718276 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:45:24.718287 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:45:24.718306 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:45:24.718317 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:45:24.718328 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 19:45:24.718338 systemd[1]: Stopped initrd-switch-root.service. Feb 9 19:45:24.718348 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 19:45:24.718358 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:45:24.718368 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:45:24.718378 systemd[1]: Created slice system-getty.slice. Feb 9 19:45:24.718390 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:45:24.718399 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:45:24.718409 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:45:24.718419 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:45:24.718436 systemd[1]: Created slice user.slice. Feb 9 19:45:24.718446 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:45:24.718456 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:45:24.718466 systemd[1]: Set up automount boot.automount. Feb 9 19:45:24.718476 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:45:24.718488 systemd[1]: Stopped target initrd-switch-root.target. 
Feb 9 19:45:24.718500 systemd[1]: Stopped target initrd-fs.target. Feb 9 19:45:24.718510 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 19:45:24.718519 systemd[1]: Reached target integritysetup.target. Feb 9 19:45:24.718530 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:45:24.718540 systemd[1]: Reached target remote-fs.target. Feb 9 19:45:24.718550 systemd[1]: Reached target slices.target. Feb 9 19:45:24.718560 systemd[1]: Reached target swap.target. Feb 9 19:45:24.718571 systemd[1]: Reached target torcx.target. Feb 9 19:45:24.718581 systemd[1]: Reached target veritysetup.target. Feb 9 19:45:24.718591 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:45:24.718601 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:45:24.718611 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:45:24.718621 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:45:24.718631 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:45:24.718640 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:45:24.718650 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:45:24.718662 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:45:24.718672 systemd[1]: Mounting media.mount... Feb 9 19:45:24.718682 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:45:24.718691 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:45:24.718701 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:45:24.718711 systemd[1]: Mounting tmp.mount... Feb 9 19:45:24.718720 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:45:24.718730 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:45:24.718740 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:45:24.718751 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:45:24.718761 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:45:24.718771 systemd[1]: Starting modprobe@drm.service... Feb 9 19:45:24.718781 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:45:24.718791 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:45:24.718806 systemd[1]: Starting modprobe@loop.service... Feb 9 19:45:24.718817 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:45:24.718827 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 19:45:24.718837 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 19:45:24.718847 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 19:45:24.718857 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 19:45:24.718867 systemd[1]: Stopped systemd-journald.service. Feb 9 19:45:24.718877 kernel: fuse: init (API version 7.34) Feb 9 19:45:24.718886 kernel: loop: module loaded Feb 9 19:45:24.718897 systemd[1]: Starting systemd-journald.service... Feb 9 19:45:24.718907 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:45:24.718917 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:45:24.718927 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:45:24.718937 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:45:24.718947 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 19:45:24.718957 systemd[1]: Stopped verity-setup.service. Feb 9 19:45:24.718967 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
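The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse, and modprobe@loop jobs above are all instances of systemd's modprobe@.service template, which loads the kernel module named by the instance specifier. Approximately as follows; the unit shipped on this image may differ in detail:

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    # "-" prefix: a missing module is not treated as a failure.
    ExecStart=-/sbin/modprobe -abq %I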
Feb 9 19:45:24.718979 systemd-journald[987]: Journal started Feb 9 19:45:24.719015 systemd-journald[987]: Runtime Journal (/run/log/journal/0ea1c2b867d84dbb98763e8b60b9c545) is 6.0M, max 48.4M, 42.4M free. Feb 9 19:45:21.851000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 19:45:22.485000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:45:22.485000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:45:22.486000 audit: BPF prog-id=10 op=LOAD Feb 9 19:45:22.486000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:45:22.486000 audit: BPF prog-id=11 op=LOAD Feb 9 19:45:22.486000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:45:22.516000 audit[906]: AVC avc: denied { associate } for pid=906 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:45:22.516000 audit[906]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001c58b2 a1=c000146de0 a2=c00014f0c0 a3=32 items=0 ppid=889 pid=906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:22.516000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:45:22.518000 audit[906]: AVC avc: denied { associate } for pid=906 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 19:45:22.518000 audit[906]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001c5989 a2=1ed a3=0 items=2 ppid=889 pid=906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:22.518000 audit: CWD cwd="/" Feb 9 19:45:22.518000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:22.518000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:45:22.518000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:45:24.617000 audit: BPF prog-id=12 op=LOAD Feb 9 19:45:24.617000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:45:24.617000 audit: BPF prog-id=13 op=LOAD Feb 9 19:45:24.617000 audit: BPF prog-id=14 op=LOAD Feb 9 19:45:24.617000 audit: BPF prog-id=4 
op=UNLOAD
Feb 9 19:45:24.617000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 19:45:24.618000 audit: BPF prog-id=15 op=LOAD
Feb 9 19:45:24.618000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 19:45:24.618000 audit: BPF prog-id=16 op=LOAD
Feb 9 19:45:24.618000 audit: BPF prog-id=17 op=LOAD
Feb 9 19:45:24.618000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 19:45:24.618000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 19:45:24.618000 audit: BPF prog-id=18 op=LOAD
Feb 9 19:45:24.618000 audit: BPF prog-id=15 op=UNLOAD
Feb 9 19:45:24.618000 audit: BPF prog-id=19 op=LOAD
Feb 9 19:45:24.618000 audit: BPF prog-id=20 op=LOAD
Feb 9 19:45:24.618000 audit: BPF prog-id=16 op=UNLOAD
Feb 9 19:45:24.618000 audit: BPF prog-id=17 op=UNLOAD
Feb 9 19:45:24.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.631000 audit: BPF prog-id=18 op=UNLOAD
Feb 9 19:45:24.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.700000 audit: BPF prog-id=21 op=LOAD
Feb 9 19:45:24.700000 audit: BPF prog-id=22 op=LOAD
Feb 9 19:45:24.700000 audit: BPF prog-id=23 op=LOAD
Feb 9 19:45:24.700000 audit: BPF prog-id=19 op=UNLOAD
Feb 9 19:45:24.700000 audit: BPF prog-id=20 op=UNLOAD
Feb 9 19:45:24.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.715000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 19:45:24.715000 audit[987]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc60c3e500 a2=4000 a3=7ffc60c3e59c items=0 ppid=1 pid=987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:45:24.715000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 19:45:24.615930 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 19:45:22.515097 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:45:24.615940 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 9 19:45:22.515358 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:22Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 19:45:24.619449 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 19:45:22.515381 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:22Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 19:45:22.515416 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:22Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 19:45:22.515429 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:22Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 19:45:22.515464 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:22Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 19:45:22.515489 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:22Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 19:45:22.515717 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:22Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 19:45:22.515761 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:22Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 19:45:22.515777 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:22Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 19:45:22.516115 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:22Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 19:45:22.516156 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:22Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 19:45:22.516177 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:22Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 19:45:24.721201 systemd[1]: Started systemd-journald.service.
Feb 9 19:45:22.516210 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:22Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 19:45:24.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:22.516231 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:22Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 19:45:22.516249 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:22Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 19:45:24.367787 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:24Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:45:24.368121 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:24Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:45:24.368249 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:24Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:45:24.368404 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:24Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:45:24.368456 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:24Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 19:45:24.368507 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:45:24Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 19:45:24.721985 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 19:45:24.722906 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 19:45:24.723756 systemd[1]: Mounted media.mount.
Feb 9 19:45:24.724591 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 19:45:24.725490 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 19:45:24.726417 systemd[1]: Mounted tmp.mount.
Feb 9 19:45:24.727323 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 19:45:24.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.728530 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:45:24.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.729637 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 19:45:24.729856 systemd[1]: Finished modprobe@configfs.service.
Feb 9 19:45:24.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.730901 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 19:45:24.731134 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 19:45:24.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.732206 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 19:45:24.732351 systemd[1]: Finished modprobe@drm.service.
Feb 9 19:45:24.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.733393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 19:45:24.733675 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 19:45:24.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.734689 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 19:45:24.734888 systemd[1]: Finished modprobe@fuse.service.
Feb 9 19:45:24.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.735753 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 19:45:24.735907 systemd[1]: Finished modprobe@loop.service.
Feb 9 19:45:24.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.736971 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:45:24.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.738040 systemd[1]: Finished systemd-network-generator.service.
Feb 9 19:45:24.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.739180 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 19:45:24.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.740481 systemd[1]: Reached target network-pre.target.
Feb 9 19:45:24.742209 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 19:45:24.744039 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 19:45:24.744712 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 19:45:24.745893 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 19:45:24.747561 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 19:45:24.748350 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 19:45:24.749249 systemd[1]: Starting systemd-random-seed.service...
Feb 9 19:45:24.752035 systemd-journald[987]: Time spent on flushing to /var/log/journal/0ea1c2b867d84dbb98763e8b60b9c545 is 18.221ms for 1199 entries.
Feb 9 19:45:24.752035 systemd-journald[987]: System Journal (/var/log/journal/0ea1c2b867d84dbb98763e8b60b9c545) is 8.0M, max 195.6M, 187.6M free.
Feb 9 19:45:24.786584 systemd-journald[987]: Received client request to flush runtime journal.
Feb 9 19:45:24.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.749974 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 19:45:24.750973 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:45:24.753149 systemd[1]: Starting systemd-sysusers.service...
Feb 9 19:45:24.756708 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 19:45:24.787678 udevadm[1011]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 9 19:45:24.757509 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 19:45:24.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:24.758326 systemd[1]: Finished systemd-random-seed.service.
Feb 9 19:45:24.759636 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:45:24.760757 systemd[1]: Reached target first-boot-complete.target.
Feb 9 19:45:24.762645 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 19:45:24.763736 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:45:24.767000 systemd[1]: Finished systemd-sysusers.service.
Feb 9 19:45:24.770565 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:45:24.783567 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:45:24.787174 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 19:45:25.150736 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 19:45:25.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:25.151000 audit: BPF prog-id=24 op=LOAD
Feb 9 19:45:25.151000 audit: BPF prog-id=25 op=LOAD
Feb 9 19:45:25.151000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 19:45:25.151000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 19:45:25.152863 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:45:25.168038 systemd-udevd[1015]: Using default interface naming scheme 'v252'.
Feb 9 19:45:25.180695 systemd[1]: Started systemd-udevd.service.
Feb 9 19:45:25.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:25.181000 audit: BPF prog-id=26 op=LOAD
Feb 9 19:45:25.182850 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:45:25.188000 audit: BPF prog-id=27 op=LOAD
Feb 9 19:45:25.188000 audit: BPF prog-id=28 op=LOAD
Feb 9 19:45:25.188000 audit: BPF prog-id=29 op=LOAD
Feb 9 19:45:25.189502 systemd[1]: Starting systemd-userdbd.service...
Feb 9 19:45:25.207551 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 9 19:45:25.216273 systemd[1]: Started systemd-userdbd.service.
Feb 9 19:45:25.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:25.230902 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:45:25.248204 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 9 19:45:25.255000 audit[1040]: AVC avc: denied { confidentiality } for pid=1040 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:45:25.255000 audit[1040]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557d8900e5d0 a1=32194 a2=7fe47ecc8bc5 a3=5 items=108 ppid=1015 pid=1040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:45:25.255000 audit: CWD cwd="/"
Feb 9 19:45:25.255000 audit: PATH item=0 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=1 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=2 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=3 name=(null) inode=13848 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=4 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=5 name=(null) inode=13849 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=6 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=7 name=(null) inode=13850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=8 name=(null) inode=13850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=9 name=(null) inode=13851 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=10 name=(null) inode=13850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=11 name=(null) inode=13852 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=12 name=(null) inode=13850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=13 name=(null) inode=13853 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=14 name=(null) inode=13850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=15 name=(null) inode=13854 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=16 name=(null) inode=13850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=17 name=(null) inode=13855 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=18 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=19 name=(null) inode=13856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=20 name=(null) inode=13856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=21 name=(null) inode=13857 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=22 name=(null) inode=13856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=23 name=(null) inode=13858 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=24 name=(null) inode=13856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=25 name=(null) inode=13859 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=26 name=(null) inode=13856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=27 name=(null) inode=13860 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=28 name=(null) inode=13856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=29 name=(null) inode=13861 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=30 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=31 name=(null) inode=13862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=32 name=(null) inode=13862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=33 name=(null) inode=13863 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=34 name=(null) inode=13862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=35 name=(null) inode=13864 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=36 name=(null) inode=13862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=37 name=(null) inode=13865 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=38 name=(null) inode=13862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=39 name=(null) inode=13866 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=40 name=(null) inode=13862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=41 name=(null) inode=13867 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=42 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=43 name=(null) inode=13868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=44 name=(null) inode=13868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=45 name=(null) inode=13869 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=46 name=(null) inode=13868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=47 name=(null) inode=13870 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=48 name=(null) inode=13868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=49 name=(null) inode=13871 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=50 name=(null) inode=13868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=51 name=(null) inode=13872 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=52 name=(null) inode=13868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=53 name=(null) inode=13873 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=54 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=55 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=56 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=57 name=(null) inode=13875 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=58 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=59 name=(null) inode=13876 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=60 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=61 name=(null) inode=13877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=62 name=(null) inode=13877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=63 name=(null) inode=13878 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=64 name=(null) inode=13877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=65 name=(null) inode=13879 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=66 name=(null) inode=13877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=67 name=(null) inode=13880 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=68 name=(null) inode=13877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=69 name=(null) inode=13881 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=70 name=(null) inode=13877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=71 name=(null) inode=13882 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=72 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=73 name=(null) inode=13883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=74 name=(null) inode=13883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=75 name=(null) inode=13884 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=76 name=(null) inode=13883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=77 name=(null) inode=13885 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=78 name=(null) inode=13883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=79 name=(null) inode=13886 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=80 name=(null) inode=13883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=81 name=(null) inode=13887 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=82 name=(null) inode=13883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=83 name=(null) inode=13888 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=84 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=85 name=(null) inode=13889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=86 name=(null) inode=13889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=87 name=(null) inode=13890 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=88 name=(null) inode=13889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=89 name=(null) inode=13891 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=90 name=(null) inode=13889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=91 name=(null) inode=13892 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=92 name=(null) inode=13889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=93 name=(null) inode=13893 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=94 name=(null) inode=13889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=95 name=(null) inode=13894 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=96 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=97 name=(null) inode=13895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=98 name=(null) inode=13895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=99 name=(null) inode=13896 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=100 name=(null) inode=13895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=101 name=(null) inode=13897 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=102 name=(null) inode=13895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=103 name=(null) inode=13898 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=104 name=(null) inode=13895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=105 name=(null) inode=13899 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=106 name=(null) inode=13895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PATH item=107 name=(null) inode=13900 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:45:25.255000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 19:45:25.270202 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0
Feb 9 19:45:25.272288 systemd-networkd[1022]: lo: Link UP
Feb 9 19:45:25.272299 systemd-networkd[1022]: lo: Gained carrier
Feb 9 19:45:25.272655 systemd-networkd[1022]: Enumeration completed
Feb 9 19:45:25.272756 systemd[1]: Started systemd-networkd.service.
Feb 9 19:45:25.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:25.273880 systemd-networkd[1022]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:45:25.274720 systemd-networkd[1022]: eth0: Link UP
Feb 9 19:45:25.274731 systemd-networkd[1022]: eth0: Gained carrier
Feb 9 19:45:25.276213 kernel: ACPI: button: Power Button [PWRF]
Feb 9 19:45:25.299219 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 9 19:45:25.300322 systemd-networkd[1022]: eth0: DHCPv4 address 10.0.0.68/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 19:45:25.343215 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 19:45:25.379514 kernel: kvm: Nested Virtualization enabled
Feb 9 19:45:25.379605 kernel: SVM: kvm: Nested Paging enabled
Feb 9 19:45:25.379619 kernel: SVM: Virtual VMLOAD VMSAVE supported
Feb 9 19:45:25.379631 kernel: SVM: Virtual GIF supported
Feb 9 19:45:25.394222 kernel: EDAC MC: Ver: 3.0.0
Feb 9 19:45:25.409629 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 19:45:25.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:25.411517 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 19:45:25.418446 lvm[1051]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:45:25.445272 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 19:45:25.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:25.446071 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:45:25.447690 systemd[1]: Starting lvm2-activation.service...
Feb 9 19:45:25.451869 lvm[1052]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:45:25.482285 systemd[1]: Finished lvm2-activation.service.
Feb 9 19:45:25.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:25.483040 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:45:25.483646 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 19:45:25.483668 systemd[1]: Reached target local-fs.target.
Feb 9 19:45:25.484495 systemd[1]: Reached target machines.target.
Feb 9 19:45:25.486593 systemd[1]: Starting ldconfig.service...
Feb 9 19:45:25.487530 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 19:45:25.487600 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:45:25.488706 systemd[1]: Starting systemd-boot-update.service...
Feb 9 19:45:25.490220 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 19:45:25.491796 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 19:45:25.492561 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:45:25.492599 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:45:25.493371 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 19:45:25.494287 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1054 (bootctl)
Feb 9 19:45:25.495281 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 19:45:25.498489 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 19:45:25.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:25.502505 systemd-tmpfiles[1057]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 19:45:25.505127 systemd-tmpfiles[1057]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 19:45:25.506419 systemd-tmpfiles[1057]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 19:45:25.531319 systemd-fsck[1062]: fsck.fat 4.2 (2021-01-31)
Feb 9 19:45:25.531319 systemd-fsck[1062]: /dev/vda1: 790 files, 115362/258078 clusters
Feb 9 19:45:25.532923 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 19:45:25.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:25.535901 systemd[1]: Mounting boot.mount...
Feb 9 19:45:25.548713 systemd[1]: Mounted boot.mount.
Feb 9 19:45:25.771520 systemd[1]: Finished systemd-boot-update.service.
Feb 9 19:45:25.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:25.809890 ldconfig[1053]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 19:45:25.821193 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 19:45:25.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:25.823447 systemd[1]: Starting audit-rules.service...
Feb 9 19:45:25.824948 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 19:45:25.826516 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 19:45:25.827000 audit: BPF prog-id=30 op=LOAD
Feb 9 19:45:25.828800 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:45:25.829000 audit: BPF prog-id=31 op=LOAD
Feb 9 19:45:25.831099 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 19:45:25.832925 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 19:45:25.834025 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 19:45:25.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:25.835046 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 19:45:25.837000 audit[1078]: SYSTEM_BOOT pid=1078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:25.840740 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 19:45:25.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:25.842259 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 19:45:25.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:25.857000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 19:45:25.857000 audit[1087]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe4814c2b0 a2=420 a3=0 items=0 ppid=1067 pid=1087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:45:25.857000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 19:45:25.858495 augenrules[1087]: No rules
Feb 9 19:45:25.859387 systemd[1]: Finished audit-rules.service.
Feb 9 19:45:25.879082 systemd-resolved[1071]: Positive Trust Anchors:
Feb 9 19:45:25.879094 systemd-resolved[1071]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:45:25.879120 systemd-resolved[1071]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:45:25.880910 systemd[1]: Started systemd-timesyncd.service.
Feb 9 19:45:25.881718 systemd[1]: Reached target time-set.target.
Feb 9 19:45:25.340648 systemd-timesyncd[1077]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 9 19:45:25.567293 systemd-journald[987]: Time jumped backwards, rotating.
Feb 9 19:45:25.340691 systemd-timesyncd[1077]: Initial clock synchronization to Fri 2024-02-09 19:45:25.340589 UTC.
Feb 9 19:45:25.346004 systemd-resolved[1071]: Defaulting to hostname 'linux'.
Feb 9 19:45:25.347619 systemd[1]: Started systemd-resolved.service.
Feb 9 19:45:25.348238 systemd[1]: Reached target network.target.
Feb 9 19:45:25.348752 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:45:25.551132 systemd[1]: Finished ldconfig.service.
Feb 9 19:45:25.553050 systemd[1]: Starting systemd-update-done.service...
Feb 9 19:45:25.559252 systemd[1]: Finished systemd-update-done.service.
Feb 9 19:45:25.560231 systemd[1]: Reached target sysinit.target.
Feb 9 19:45:25.561253 systemd[1]: Started motdgen.path.
Feb 9 19:45:25.561912 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 19:45:25.562987 systemd[1]: Started logrotate.timer.
Feb 9 19:45:25.563947 systemd[1]: Started mdadm.timer.
Feb 9 19:45:25.564653 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 19:45:25.565439 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 19:45:25.565469 systemd[1]: Reached target paths.target.
Feb 9 19:45:25.566177 systemd[1]: Reached target timers.target.
Feb 9 19:45:25.567171 systemd[1]: Listening on dbus.socket.
Feb 9 19:45:25.568695 systemd[1]: Starting docker.socket...
Feb 9 19:45:25.571341 systemd[1]: Listening on sshd.socket.
Feb 9 19:45:25.572034 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:45:25.573240 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 19:45:25.573794 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 19:45:25.574675 systemd[1]: Listening on docker.socket.
Feb 9 19:45:25.575486 systemd[1]: Reached target sockets.target.
Feb 9 19:45:25.576299 systemd[1]: Reached target basic.target.
Feb 9 19:45:25.577125 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:45:25.577157 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:45:25.578264 systemd[1]: Starting containerd.service...
Feb 9 19:45:25.579956 systemd[1]: Starting dbus.service...
Feb 9 19:45:25.581471 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 19:45:25.583586 systemd[1]: Starting extend-filesystems.service...
Feb 9 19:45:25.584399 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 19:45:25.585263 jq[1100]: false
Feb 9 19:45:25.585447 systemd[1]: Starting motdgen.service...
Feb 9 19:45:25.587172 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 19:45:25.590836 systemd[1]: Starting prepare-critools.service...
Feb 9 19:45:25.592518 systemd[1]: Starting prepare-helm.service...
Feb 9 19:45:25.594590 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 19:45:25.596264 dbus-daemon[1099]: [system] SELinux support is enabled
Feb 9 19:45:25.597111 systemd[1]: Starting sshd-keygen.service...
Feb 9 19:45:25.599190 extend-filesystems[1101]: Found sr0
Feb 9 19:45:25.599932 extend-filesystems[1101]: Found vda
Feb 9 19:45:25.599932 extend-filesystems[1101]: Found vda1
Feb 9 19:45:25.599932 extend-filesystems[1101]: Found vda2
Feb 9 19:45:25.601589 extend-filesystems[1101]: Found vda3
Feb 9 19:45:25.601589 extend-filesystems[1101]: Found usr
Feb 9 19:45:25.601589 extend-filesystems[1101]: Found vda4
Feb 9 19:45:25.601589 extend-filesystems[1101]: Found vda6
Feb 9 19:45:25.601589 extend-filesystems[1101]: Found vda7
Feb 9 19:45:25.601589 extend-filesystems[1101]: Found vda9
Feb 9 19:45:25.601589 extend-filesystems[1101]: Checking size of /dev/vda9
Feb 9 19:45:25.606119 systemd[1]: Starting systemd-logind.service...
Feb 9 19:45:25.606798 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:45:25.606860 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 19:45:25.607337 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 19:45:25.607989 systemd[1]: Starting update-engine.service...
Feb 9 19:45:25.610783 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 19:45:25.615569 systemd[1]: Started dbus.service.
Feb 9 19:45:25.619546 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 19:45:25.619720 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 19:45:25.619995 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 19:45:25.620140 systemd[1]: Finished motdgen.service.
Feb 9 19:45:25.622966 extend-filesystems[1101]: Resized partition /dev/vda9
Feb 9 19:45:25.627607 tar[1127]: ./
Feb 9 19:45:25.627607 tar[1127]: ./macvlan
Feb 9 19:45:25.623672 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 19:45:25.627876 jq[1123]: true
Feb 9 19:45:25.624159 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 19:45:25.632144 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 19:45:25.632169 systemd[1]: Reached target system-config.target.
Feb 9 19:45:25.632869 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 19:45:25.632884 systemd[1]: Reached target user-config.target.
Feb 9 19:45:25.633029 tar[1129]: linux-amd64/helm
Feb 9 19:45:25.635042 tar[1128]: crictl
Feb 9 19:45:25.642949 extend-filesystems[1130]: resize2fs 1.46.5 (30-Dec-2021)
Feb 9 19:45:25.645752 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 9 19:45:25.648126 jq[1134]: true
Feb 9 19:45:25.661668 update_engine[1122]: I0209 19:45:25.661481 1122 main.cc:92] Flatcar Update Engine starting
Feb 9 19:45:25.661956 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 9 19:45:25.666931 systemd[1]: Started update-engine.service.
Feb 9 19:45:25.667114 update_engine[1122]: I0209 19:45:25.667093 1122 update_check_scheduler.cc:74] Next update check in 4m12s
Feb 9 19:45:25.680462 systemd[1]: Started locksmithd.service.
Feb 9 19:45:25.681280 systemd-logind[1121]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 9 19:45:25.685595 extend-filesystems[1130]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 9 19:45:25.685595 extend-filesystems[1130]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 9 19:45:25.685595 extend-filesystems[1130]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 9 19:45:25.681301 systemd-logind[1121]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 9 19:45:25.690444 extend-filesystems[1101]: Resized filesystem in /dev/vda9
Feb 9 19:45:25.686098 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 19:45:25.686252 systemd[1]: Finished extend-filesystems.service.
Feb 9 19:45:25.686412 systemd-logind[1121]: New seat seat0.
Feb 9 19:45:25.688346 systemd[1]: Started systemd-logind.service.
Feb 9 19:45:25.709011 env[1135]: time="2024-02-09T19:45:25.708967173Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 19:45:25.714048 bash[1160]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 19:45:25.714789 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 19:45:25.741414 tar[1127]: ./static
Feb 9 19:45:25.769071 env[1135]: time="2024-02-09T19:45:25.769017578Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 19:45:25.769402 env[1135]: time="2024-02-09T19:45:25.769382502Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:45:25.770866 env[1135]: time="2024-02-09T19:45:25.770705312Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:45:25.770866 env[1135]: time="2024-02-09T19:45:25.770758462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:45:25.771155 env[1135]: time="2024-02-09T19:45:25.771135709Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:45:25.771240 env[1135]: time="2024-02-09T19:45:25.771221340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 19:45:25.771314 env[1135]: time="2024-02-09T19:45:25.771292754Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 19:45:25.771383 env[1135]: time="2024-02-09T19:45:25.771364729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 19:45:25.771515 env[1135]: time="2024-02-09T19:45:25.771497758Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:45:25.771817 env[1135]: time="2024-02-09T19:45:25.771800997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:45:25.772010 env[1135]: time="2024-02-09T19:45:25.771990532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:45:25.772084 env[1135]: time="2024-02-09T19:45:25.772065172Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 19:45:25.772189 env[1135]: time="2024-02-09T19:45:25.772171862Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 19:45:25.772263 env[1135]: time="2024-02-09T19:45:25.772245230Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 19:45:25.778266 env[1135]: time="2024-02-09T19:45:25.778225212Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 19:45:25.778308 env[1135]: time="2024-02-09T19:45:25.778279594Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 19:45:25.778308 env[1135]: time="2024-02-09T19:45:25.778295013Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 19:45:25.778365 env[1135]: time="2024-02-09T19:45:25.778336631Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 19:45:25.778403 env[1135]: time="2024-02-09T19:45:25.778365626Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 19:45:25.778403 env[1135]: time="2024-02-09T19:45:25.778385323Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 19:45:25.778456 env[1135]: time="2024-02-09T19:45:25.778401212Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 19:45:25.778456 env[1135]: time="2024-02-09T19:45:25.778418344Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 19:45:25.778456 env[1135]: time="2024-02-09T19:45:25.778433904Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 19:45:25.778456 env[1135]: time="2024-02-09T19:45:25.778452038Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 19:45:25.778529 env[1135]: time="2024-02-09T19:45:25.778468569Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 19:45:25.778529 env[1135]: time="2024-02-09T19:45:25.778483827Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 19:45:25.778653 env[1135]: time="2024-02-09T19:45:25.778624080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 19:45:25.778767 env[1135]: time="2024-02-09T19:45:25.778739667Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 19:45:25.779052 env[1135]: time="2024-02-09T19:45:25.779025833Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 19:45:25.779091 env[1135]: time="2024-02-09T19:45:25.779063204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 19:45:25.779091 env[1135]: time="2024-02-09T19:45:25.779081498Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 19:45:25.779163 env[1135]: time="2024-02-09T19:45:25.779138374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 19:45:25.779201 env[1135]: time="2024-02-09T19:45:25.779162910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 19:45:25.779201 env[1135]: time="2024-02-09T19:45:25.779178610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 19:45:25.779201 env[1135]: time="2024-02-09T19:45:25.779192245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 19:45:25.779257 env[1135]: time="2024-02-09T19:45:25.779207674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 19:45:25.779257 env[1135]: time="2024-02-09T19:45:25.779223444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 19:45:25.779257 env[1135]: time="2024-02-09T19:45:25.779236879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 19:45:25.779312 env[1135]: time="2024-02-09T19:45:25.779257047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 19:45:25.779312 env[1135]: time="2024-02-09T19:45:25.779274570Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 19:45:25.779424 env[1135]: time="2024-02-09T19:45:25.779396839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 19:45:25.779424 env[1135]: time="2024-02-09T19:45:25.779423429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 19:45:25.779486 env[1135]: time="2024-02-09T19:45:25.779438167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 19:45:25.779486 env[1135]: time="2024-02-09T19:45:25.779451732Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 19:45:25.779486 env[1135]: time="2024-02-09T19:45:25.779468754Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 19:45:25.779486 env[1135]: time="2024-02-09T19:45:25.779482900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 19:45:25.779559 env[1135]: time="2024-02-09T19:45:25.779505062Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 19:45:25.779559 env[1135]: time="2024-02-09T19:45:25.779545117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Feb 9 19:45:25.779840 env[1135]: time="2024-02-09T19:45:25.779775138Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:45:25.780447 env[1135]: time="2024-02-09T19:45:25.779844769Z" level=info msg="Connect containerd service" Feb 9 19:45:25.780447 env[1135]: time="2024-02-09T19:45:25.779885906Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:45:25.780499 env[1135]: time="2024-02-09T19:45:25.780472286Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:45:25.780708 env[1135]: time="2024-02-09T19:45:25.780623339Z" level=info msg="Start subscribing containerd event" Feb 9 19:45:25.785198 env[1135]: time="2024-02-09T19:45:25.785164163Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:45:25.787384 env[1135]: time="2024-02-09T19:45:25.787365250Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:45:25.787610 systemd[1]: Started containerd.service. 
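The config dump above shows the CRI plugin using ContainerdEndpoint:/run/containerd/containerd.sock, and containerd then announces its ttrpc and grpc sockets before systemd marks the service started. A minimal sketch of talking to that socket with the containerd Go client, assuming the same endpoint; the CRI plugin keeps Kubernetes-managed resources under the "k8s.io" namespace:

    // version.go: connect to the containerd socket from the log above and
    // print the daemon version (this host should report 1.6.16).
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed containers live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        v, err := client.Version(ctx)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("containerd", v.Version, v.Revision)
    }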
Feb 9 19:45:25.787893 env[1135]: time="2024-02-09T19:45:25.787874044Z" level=info msg="containerd successfully booted in 0.080783s" Feb 9 19:45:25.796505 tar[1127]: ./vlan Feb 9 19:45:25.809521 env[1135]: time="2024-02-09T19:45:25.809484714Z" level=info msg="Start recovering state" Feb 9 19:45:25.809748 env[1135]: time="2024-02-09T19:45:25.809712321Z" level=info msg="Start event monitor" Feb 9 19:45:25.809835 env[1135]: time="2024-02-09T19:45:25.809815093Z" level=info msg="Start snapshots syncer" Feb 9 19:45:25.809921 env[1135]: time="2024-02-09T19:45:25.809899852Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:45:25.810009 env[1135]: time="2024-02-09T19:45:25.809989170Z" level=info msg="Start streaming server" Feb 9 19:45:25.835435 tar[1127]: ./portmap Feb 9 19:45:25.871116 tar[1127]: ./host-local Feb 9 19:45:25.903703 tar[1127]: ./vrf Feb 9 19:45:25.937300 tar[1127]: ./bridge Feb 9 19:45:25.977494 tar[1127]: ./tuning Feb 9 19:45:26.011236 tar[1127]: ./firewall Feb 9 19:45:26.054564 tar[1127]: ./host-device Feb 9 19:45:26.091973 tar[1127]: ./sbr Feb 9 19:45:26.104416 tar[1129]: linux-amd64/LICENSE Feb 9 19:45:26.104677 tar[1129]: linux-amd64/README.md Feb 9 19:45:26.110328 systemd[1]: Finished prepare-helm.service. Feb 9 19:45:26.126398 tar[1127]: ./loopback Feb 9 19:45:26.130876 sshd_keygen[1120]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:45:26.139271 systemd[1]: Finished prepare-critools.service. Feb 9 19:45:26.140063 systemd-networkd[1022]: eth0: Gained IPv6LL Feb 9 19:45:26.143951 locksmithd[1152]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:45:26.153837 systemd[1]: Finished sshd-keygen.service. Feb 9 19:45:26.155799 tar[1127]: ./dhcp Feb 9 19:45:26.156335 systemd[1]: Starting issuegen.service... Feb 9 19:45:26.161567 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:45:26.161714 systemd[1]: Finished issuegen.service. Feb 9 19:45:26.163695 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:45:26.170317 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:45:26.172224 systemd[1]: Started getty@tty1.service. Feb 9 19:45:26.173695 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:45:26.174495 systemd[1]: Reached target getty.target. Feb 9 19:45:26.225606 tar[1127]: ./ptp Feb 9 19:45:26.253830 tar[1127]: ./ipvlan Feb 9 19:45:26.280627 tar[1127]: ./bandwidth Feb 9 19:45:26.313686 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:45:26.314703 systemd[1]: Reached target multi-user.target. Feb 9 19:45:26.316513 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:45:26.321963 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:45:26.322085 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:45:26.322883 systemd[1]: Startup finished in 558ms (kernel) + 6.093s (initrd) + 5.049s (userspace) = 11.702s. Feb 9 19:45:30.558196 systemd[1]: Created slice system-sshd.slice. Feb 9 19:45:30.559468 systemd[1]: Started sshd@0-10.0.0.68:22-10.0.0.1:59650.service. Feb 9 19:45:30.593336 sshd[1187]: Accepted publickey for core from 10.0.0.1 port 59650 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:30.594616 sshd[1187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:30.602197 systemd-logind[1121]: New session 1 of user core. Feb 9 19:45:30.602990 systemd[1]: Created slice user-500.slice. 
Feb 9 19:45:30.603837 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:45:30.610340 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:45:30.611356 systemd[1]: Starting user@500.service... Feb 9 19:45:30.613317 (systemd)[1190]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:30.681236 systemd[1190]: Queued start job for default target default.target. Feb 9 19:45:30.681678 systemd[1190]: Reached target paths.target. Feb 9 19:45:30.681703 systemd[1190]: Reached target sockets.target. Feb 9 19:45:30.681719 systemd[1190]: Reached target timers.target. Feb 9 19:45:30.681749 systemd[1190]: Reached target basic.target. Feb 9 19:45:30.681792 systemd[1190]: Reached target default.target. Feb 9 19:45:30.681831 systemd[1190]: Startup finished in 64ms. Feb 9 19:45:30.681868 systemd[1]: Started user@500.service. Feb 9 19:45:30.682931 systemd[1]: Started session-1.scope. Feb 9 19:45:30.733271 systemd[1]: Started sshd@1-10.0.0.68:22-10.0.0.1:59652.service. Feb 9 19:45:30.767317 sshd[1199]: Accepted publickey for core from 10.0.0.1 port 59652 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:30.768607 sshd[1199]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:30.771762 systemd-logind[1121]: New session 2 of user core. Feb 9 19:45:30.772387 systemd[1]: Started session-2.scope. Feb 9 19:45:30.825786 sshd[1199]: pam_unix(sshd:session): session closed for user core Feb 9 19:45:30.828183 systemd[1]: sshd@1-10.0.0.68:22-10.0.0.1:59652.service: Deactivated successfully. Feb 9 19:45:30.828673 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:45:30.829113 systemd-logind[1121]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:45:30.830216 systemd[1]: Started sshd@2-10.0.0.68:22-10.0.0.1:59664.service. Feb 9 19:45:30.830748 systemd-logind[1121]: Removed session 2. Feb 9 19:45:30.862952 sshd[1205]: Accepted publickey for core from 10.0.0.1 port 59664 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:30.863864 sshd[1205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:30.866959 systemd-logind[1121]: New session 3 of user core. Feb 9 19:45:30.867888 systemd[1]: Started session-3.scope. Feb 9 19:45:30.915921 sshd[1205]: pam_unix(sshd:session): session closed for user core Feb 9 19:45:30.918204 systemd[1]: sshd@2-10.0.0.68:22-10.0.0.1:59664.service: Deactivated successfully. Feb 9 19:45:30.918659 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 19:45:30.919136 systemd-logind[1121]: Session 3 logged out. Waiting for processes to exit. Feb 9 19:45:30.919930 systemd[1]: Started sshd@3-10.0.0.68:22-10.0.0.1:59676.service. Feb 9 19:45:30.920473 systemd-logind[1121]: Removed session 3. Feb 9 19:45:30.952669 sshd[1211]: Accepted publickey for core from 10.0.0.1 port 59676 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:30.953625 sshd[1211]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:30.956406 systemd-logind[1121]: New session 4 of user core. Feb 9 19:45:30.957107 systemd[1]: Started session-4.scope. Feb 9 19:45:31.008904 sshd[1211]: pam_unix(sshd:session): session closed for user core Feb 9 19:45:31.011574 systemd[1]: sshd@3-10.0.0.68:22-10.0.0.1:59676.service: Deactivated successfully. Feb 9 19:45:31.012084 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:45:31.012568 systemd-logind[1121]: Session 4 logged out. 
Waiting for processes to exit. Feb 9 19:45:31.013475 systemd[1]: Started sshd@4-10.0.0.68:22-10.0.0.1:59678.service. Feb 9 19:45:31.014154 systemd-logind[1121]: Removed session 4. Feb 9 19:45:31.045321 sshd[1217]: Accepted publickey for core from 10.0.0.1 port 59678 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:45:31.046519 sshd[1217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:45:31.049783 systemd-logind[1121]: New session 5 of user core. Feb 9 19:45:31.050470 systemd[1]: Started session-5.scope. Feb 9 19:45:31.103527 sudo[1220]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:45:31.103680 sudo[1220]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:45:31.621812 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:45:31.626678 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:45:31.627011 systemd[1]: Reached target network-online.target. Feb 9 19:45:31.628088 systemd[1]: Starting docker.service... Feb 9 19:45:31.659172 env[1238]: time="2024-02-09T19:45:31.659118315Z" level=info msg="Starting up" Feb 9 19:45:31.660372 env[1238]: time="2024-02-09T19:45:31.660347400Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:45:31.660440 env[1238]: time="2024-02-09T19:45:31.660422621Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:45:31.660519 env[1238]: time="2024-02-09T19:45:31.660498163Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:45:31.660585 env[1238]: time="2024-02-09T19:45:31.660567523Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:45:31.662064 env[1238]: time="2024-02-09T19:45:31.662047287Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:45:31.662140 env[1238]: time="2024-02-09T19:45:31.662122789Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:45:31.662214 env[1238]: time="2024-02-09T19:45:31.662195014Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:45:31.662278 env[1238]: time="2024-02-09T19:45:31.662260888Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:45:31.666065 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4066623428-merged.mount: Deactivated successfully. Feb 9 19:45:32.411607 env[1238]: time="2024-02-09T19:45:32.411553341Z" level=info msg="Loading containers: start." Feb 9 19:45:32.499748 kernel: Initializing XFRM netlink socket Feb 9 19:45:32.524643 env[1238]: time="2024-02-09T19:45:32.524601807Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 19:45:32.568881 systemd-networkd[1022]: docker0: Link UP Feb 9 19:45:32.577963 env[1238]: time="2024-02-09T19:45:32.577935798Z" level=info msg="Loading containers: done." Feb 9 19:45:32.588154 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1225522760-merged.mount: Deactivated successfully. 
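With dockerd now initializing alongside containerd, the daemon will serve its HTTP API on a unix socket (the next entries show it announcing the listen socket). A minimal sketch of pinging that API with only the standard library, assuming Docker's default /var/run/docker.sock path:

    // ping.go: hit Docker's /_ping endpoint over the unix socket; the
    // daemon replies "OK" once initialization has completed.
    package main

    import (
        "context"
        "fmt"
        "io"
        "log"
        "net"
        "net/http"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                // Ignore the URL's host and dial the unix socket instead.
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
                },
            },
        }
        resp, err := client.Get("http://docker/_ping") // host part is a placeholder
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body))
    }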
Feb 9 19:45:32.591023 env[1238]: time="2024-02-09T19:45:32.590989463Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:45:32.591155 env[1238]: time="2024-02-09T19:45:32.591138333Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:45:32.591248 env[1238]: time="2024-02-09T19:45:32.591229484Z" level=info msg="Daemon has completed initialization" Feb 9 19:45:32.606283 systemd[1]: Started docker.service. Feb 9 19:45:32.609858 env[1238]: time="2024-02-09T19:45:32.609812266Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:45:32.624545 systemd[1]: Reloading. Feb 9 19:45:32.673799 /usr/lib/systemd/system-generators/torcx-generator[1382]: time="2024-02-09T19:45:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:45:32.673826 /usr/lib/systemd/system-generators/torcx-generator[1382]: time="2024-02-09T19:45:32Z" level=info msg="torcx already run" Feb 9 19:45:32.740840 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:45:32.740855 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:45:32.758854 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:45:32.824090 systemd[1]: Started kubelet.service. Feb 9 19:45:32.873641 kubelet[1423]: E0209 19:45:32.873574 1423 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:45:32.875549 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:45:32.875662 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:45:33.180453 env[1135]: time="2024-02-09T19:45:33.180404386Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 19:45:33.788991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1326765089.mount: Deactivated successfully. 
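The kubelet exit above is a startup validation failure, not a crash: this v1.26 kubelet refuses to run until --container-runtime-endpoint points at a CRI socket, which on this host would be the containerd socket already serving CRI per the config dump earlier. A minimal sketch of exercising that endpoint directly through the CRI API, assuming that same socket path:

    // criversion.go: dial the CRI endpoint the kubelet is missing and ask
    // the runtime service for its version, as a quick liveness check.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        resp, err := runtimeapi.NewRuntimeServiceClient(conn).
            Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(resp.RuntimeName, resp.RuntimeVersion) // e.g. containerd 1.6.16
    }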
Feb 9 19:45:36.223880 env[1135]: time="2024-02-09T19:45:36.223805222Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:36.407097 env[1135]: time="2024-02-09T19:45:36.407043243Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:36.510881 env[1135]: time="2024-02-09T19:45:36.510779950Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:36.522808 env[1135]: time="2024-02-09T19:45:36.522751296Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:36.523413 env[1135]: time="2024-02-09T19:45:36.523385876Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 19:45:36.531257 env[1135]: time="2024-02-09T19:45:36.531221207Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 19:45:38.784912 env[1135]: time="2024-02-09T19:45:38.784839651Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:38.787308 env[1135]: time="2024-02-09T19:45:38.787272563Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:38.789250 env[1135]: time="2024-02-09T19:45:38.789198594Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:38.791054 env[1135]: time="2024-02-09T19:45:38.791025680Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:38.792069 env[1135]: time="2024-02-09T19:45:38.792018351Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 9 19:45:38.801045 env[1135]: time="2024-02-09T19:45:38.801021512Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 19:45:40.512963 env[1135]: time="2024-02-09T19:45:40.512913719Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:40.514487 env[1135]: time="2024-02-09T19:45:40.514460369Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:40.516089 env[1135]: 
time="2024-02-09T19:45:40.516067903Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:40.517564 env[1135]: time="2024-02-09T19:45:40.517547558Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:40.518152 env[1135]: time="2024-02-09T19:45:40.518120953Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 9 19:45:40.528825 env[1135]: time="2024-02-09T19:45:40.528785890Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:45:41.898124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3878132235.mount: Deactivated successfully. Feb 9 19:45:42.333832 env[1135]: time="2024-02-09T19:45:42.333778140Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:42.335532 env[1135]: time="2024-02-09T19:45:42.335479981Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:42.337032 env[1135]: time="2024-02-09T19:45:42.337006593Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:42.338285 env[1135]: time="2024-02-09T19:45:42.338249113Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:42.338600 env[1135]: time="2024-02-09T19:45:42.338567189Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:45:42.346669 env[1135]: time="2024-02-09T19:45:42.346642731Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:45:42.854940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3497468702.mount: Deactivated successfully. Feb 9 19:45:42.995651 env[1135]: time="2024-02-09T19:45:42.995588704Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:43.038857 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:45:43.039027 systemd[1]: Stopped kubelet.service. Feb 9 19:45:43.040217 systemd[1]: Started kubelet.service. 
Feb 9 19:45:43.060782 env[1135]: time="2024-02-09T19:45:43.060709556Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:43.077717 kubelet[1471]: E0209 19:45:43.077671 1471 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:45:43.080749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:45:43.080864 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:45:43.086919 env[1135]: time="2024-02-09T19:45:43.086885866Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:43.132575 env[1135]: time="2024-02-09T19:45:43.132471339Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:43.132832 env[1135]: time="2024-02-09T19:45:43.132791389Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 19:45:43.142278 env[1135]: time="2024-02-09T19:45:43.142243051Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 19:45:43.995339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1640870302.mount: Deactivated successfully. Feb 9 19:45:49.221221 env[1135]: time="2024-02-09T19:45:49.221160620Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:49.222961 env[1135]: time="2024-02-09T19:45:49.222918346Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:49.224310 env[1135]: time="2024-02-09T19:45:49.224284347Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:49.225786 env[1135]: time="2024-02-09T19:45:49.225748753Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:49.226270 env[1135]: time="2024-02-09T19:45:49.226243711Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 9 19:45:49.234200 env[1135]: time="2024-02-09T19:45:49.234170343Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 19:45:49.819574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount406660805.mount: Deactivated successfully. 
Feb 9 19:45:50.810999 env[1135]: time="2024-02-09T19:45:50.810922824Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:50.818253 env[1135]: time="2024-02-09T19:45:50.818187395Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:50.821309 env[1135]: time="2024-02-09T19:45:50.821278310Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:50.824370 env[1135]: time="2024-02-09T19:45:50.824292271Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 19:45:50.825096 env[1135]: time="2024-02-09T19:45:50.825066082Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:53.288894 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 19:45:53.289110 systemd[1]: Stopped kubelet.service. Feb 9 19:45:53.290600 systemd[1]: Started kubelet.service. Feb 9 19:45:53.299921 systemd[1]: Stopping kubelet.service... Feb 9 19:45:53.300933 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:45:53.301071 systemd[1]: Stopped kubelet.service. Feb 9 19:45:53.311958 systemd[1]: Reloading. Feb 9 19:45:53.379841 /usr/lib/systemd/system-generators/torcx-generator[1586]: time="2024-02-09T19:45:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:45:53.379867 /usr/lib/systemd/system-generators/torcx-generator[1586]: time="2024-02-09T19:45:53Z" level=info msg="torcx already run" Feb 9 19:45:53.442797 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:45:53.442814 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:45:53.460642 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:45:53.531744 systemd[1]: Started kubelet.service. Feb 9 19:45:53.576998 kubelet[1628]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:45:53.577360 kubelet[1628]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 19:45:53.577613 kubelet[1628]: I0209 19:45:53.577523 1628 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:45:53.578787 kubelet[1628]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:45:53.578787 kubelet[1628]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:45:53.746304 kubelet[1628]: I0209 19:45:53.746256 1628 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:45:53.746304 kubelet[1628]: I0209 19:45:53.746290 1628 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:45:53.746502 kubelet[1628]: I0209 19:45:53.746492 1628 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:45:53.749037 kubelet[1628]: I0209 19:45:53.748994 1628 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:45:53.749835 kubelet[1628]: E0209 19:45:53.749811 1628 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.68:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.68:6443: connect: connection refused Feb 9 19:45:53.753104 kubelet[1628]: I0209 19:45:53.753088 1628 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:45:53.753279 kubelet[1628]: I0209 19:45:53.753266 1628 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:45:53.753343 kubelet[1628]: I0209 19:45:53.753333 1628 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:45:53.753420 kubelet[1628]: I0209 19:45:53.753351 1628 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:45:53.753420 
kubelet[1628]: I0209 19:45:53.753361 1628 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:45:53.753465 kubelet[1628]: I0209 19:45:53.753440 1628 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:45:53.757291 kubelet[1628]: I0209 19:45:53.757259 1628 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:45:53.757291 kubelet[1628]: I0209 19:45:53.757289 1628 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:45:53.757442 kubelet[1628]: I0209 19:45:53.757312 1628 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:45:53.757442 kubelet[1628]: I0209 19:45:53.757329 1628 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:45:53.757889 kubelet[1628]: I0209 19:45:53.757874 1628 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:45:53.757939 kubelet[1628]: W0209 19:45:53.757881 1628 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.68:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Feb 9 19:45:53.757964 kubelet[1628]: E0209 19:45:53.757925 1628 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.68:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Feb 9 19:45:53.758167 kubelet[1628]: W0209 19:45:53.758153 1628 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:45:53.758504 kubelet[1628]: I0209 19:45:53.758483 1628 server.go:1186] "Started kubelet" Feb 9 19:45:53.758619 kubelet[1628]: I0209 19:45:53.758601 1628 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:45:53.759242 kubelet[1628]: I0209 19:45:53.759226 1628 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:45:53.759274 kubelet[1628]: E0209 19:45:53.759191 1628 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2496feaed4fc9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 758457801, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 758457801, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.68:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.68:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:45:53.760174 kubelet[1628]: E0209 19:45:53.760158 1628 
cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:45:53.760215 kubelet[1628]: E0209 19:45:53.760179 1628 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:45:53.761502 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 19:45:53.761601 kubelet[1628]: I0209 19:45:53.761579 1628 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:45:53.761760 kubelet[1628]: I0209 19:45:53.761747 1628 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:45:53.762211 kubelet[1628]: I0209 19:45:53.762187 1628 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:45:53.762490 kubelet[1628]: W0209 19:45:53.762454 1628 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Feb 9 19:45:53.762542 kubelet[1628]: E0209 19:45:53.762500 1628 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Feb 9 19:45:53.762776 kubelet[1628]: E0209 19:45:53.762719 1628 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 19:45:53.763045 kubelet[1628]: E0209 19:45:53.763024 1628 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.68:6443: connect: connection refused Feb 9 19:45:53.763695 kubelet[1628]: W0209 19:45:53.763657 1628 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Feb 9 19:45:53.763774 kubelet[1628]: E0209 19:45:53.763708 1628 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Feb 9 19:45:53.779774 kubelet[1628]: I0209 19:45:53.779704 1628 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:45:53.779938 kubelet[1628]: I0209 19:45:53.779921 1628 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:45:53.779985 kubelet[1628]: I0209 19:45:53.779942 1628 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:45:53.787669 kubelet[1628]: I0209 19:45:53.787653 1628 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:45:53.807134 kubelet[1628]: I0209 19:45:53.807100 1628 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:45:53.807134 kubelet[1628]: I0209 19:45:53.807123 1628 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:45:53.807134 kubelet[1628]: I0209 19:45:53.807141 1628 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:45:53.807325 kubelet[1628]: E0209 19:45:53.807216 1628 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 19:45:53.807657 kubelet[1628]: W0209 19:45:53.807632 1628 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Feb 9 19:45:53.807749 kubelet[1628]: E0209 19:45:53.807665 1628 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Feb 9 19:45:53.815559 kubelet[1628]: I0209 19:45:53.815507 1628 policy_none.go:49] "None policy: Start" Feb 9 19:45:53.816217 kubelet[1628]: I0209 19:45:53.816193 1628 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:45:53.816217 kubelet[1628]: I0209 19:45:53.816220 1628 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:45:53.861513 systemd[1]: Created slice kubepods.slice. Feb 9 19:45:53.863769 kubelet[1628]: I0209 19:45:53.863744 1628 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:45:53.864157 kubelet[1628]: E0209 19:45:53.864130 1628 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" Feb 9 19:45:53.865330 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 19:45:53.867619 systemd[1]: Created slice kubepods-besteffort.slice. 
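Every "connection refused" above has the same cause: the kubelet is bootstrapping a control plane on this node, so nothing is listening on https://10.0.0.68:6443 yet, and every list/watch, lease, event, and node-registration call fails until the kube-apiserver static pod (admitted just below) comes up. A minimal sketch of the same reachability check, assuming anonymous access to /healthz and skipping certificate verification only because this sketch carries no CA bundle:

    // healthz.go: probe the API server endpoint the kubelet keeps retrying.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 3 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.0.0.68:6443/healthz")
        if err != nil {
            log.Fatal(err) // "connection refused" while the apiserver is still down
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body)) // "200 OK ok" once it is serving
    }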
Feb 9 19:45:53.872368 kubelet[1628]: I0209 19:45:53.872337 1628 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:45:53.872615 kubelet[1628]: I0209 19:45:53.872591 1628 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:45:53.872906 kubelet[1628]: E0209 19:45:53.872879 1628 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 19:45:53.907833 kubelet[1628]: I0209 19:45:53.907761 1628 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:53.908678 kubelet[1628]: I0209 19:45:53.908654 1628 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:53.909268 kubelet[1628]: I0209 19:45:53.909250 1628 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:53.910467 kubelet[1628]: I0209 19:45:53.910451 1628 status_manager.go:698] "Failed to get status for pod" podUID=446b65f9ed9897583538056d7f152bfb pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.68:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.68:6443: connect: connection refused" Feb 9 19:45:53.910661 kubelet[1628]: I0209 19:45:53.910646 1628 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.68:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.68:6443: connect: connection refused" Feb 9 19:45:53.910857 kubelet[1628]: I0209 19:45:53.910840 1628 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.68:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.68:6443: connect: connection refused" Feb 9 19:45:53.914612 systemd[1]: Created slice kubepods-burstable-pod446b65f9ed9897583538056d7f152bfb.slice. Feb 9 19:45:53.931380 systemd[1]: Created slice kubepods-burstable-pod550020dd9f101bcc23e1d3c651841c4d.slice. Feb 9 19:45:53.938213 systemd[1]: Created slice kubepods-burstable-pod72ae17a74a2eae76daac6d298477aff0.slice. 
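The three kubepods-burstable-pod<uid>.slice units above are the cgroups for the static control-plane pods the kubelet found under /etc/kubernetes/manifests (apiserver, controller-manager, scheduler; the container_manager_linux dump earlier shows CgroupDriver:systemd). A sketch of the naming rule, reconstructed from the slice names above rather than quoted from kubelet source; systemd treats "-" as a cgroup path separator, so dashes inside a pod UID have to be escaped:

    // slicename.go: derive a pod's systemd slice name from its QoS class
    // and UID, matching the kubepods-burstable-pod... slices in the log.
    package main

    import (
        "fmt"
        "strings"
    )

    func sliceName(qos, uid string) string {
        // Dashes in the UID become underscores so systemd does not read
        // them as path separators. (Reconstructed rule, not kubelet code.)
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("burstable", "446b65f9ed9897583538056d7f152bfb"))
        // Output: kubepods-burstable-pod446b65f9ed9897583538056d7f152bfb.slice
    }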
Feb 9 19:45:53.963876 kubelet[1628]: E0209 19:45:53.963841 1628 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.68:6443: connect: connection refused Feb 9 19:45:54.063362 kubelet[1628]: I0209 19:45:54.063300 1628 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 19:45:54.063362 kubelet[1628]: I0209 19:45:54.063357 1628 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/446b65f9ed9897583538056d7f152bfb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"446b65f9ed9897583538056d7f152bfb\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:45:54.063362 kubelet[1628]: I0209 19:45:54.063377 1628 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:45:54.063573 kubelet[1628]: I0209 19:45:54.063417 1628 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:45:54.063573 kubelet[1628]: I0209 19:45:54.063441 1628 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:45:54.063573 kubelet[1628]: I0209 19:45:54.063457 1628 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/446b65f9ed9897583538056d7f152bfb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"446b65f9ed9897583538056d7f152bfb\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:45:54.063573 kubelet[1628]: I0209 19:45:54.063502 1628 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/446b65f9ed9897583538056d7f152bfb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"446b65f9ed9897583538056d7f152bfb\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:45:54.063573 kubelet[1628]: I0209 19:45:54.063518 1628 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:45:54.063680 kubelet[1628]: I0209 19:45:54.063535 
1628 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:45:54.065117 kubelet[1628]: I0209 19:45:54.065091 1628 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:45:54.065485 kubelet[1628]: E0209 19:45:54.065468 1628 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" Feb 9 19:45:54.230028 kubelet[1628]: E0209 19:45:54.229919 1628 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:54.230579 env[1135]: time="2024-02-09T19:45:54.230522602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:446b65f9ed9897583538056d7f152bfb,Namespace:kube-system,Attempt:0,}" Feb 9 19:45:54.236884 kubelet[1628]: E0209 19:45:54.236861 1628 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:54.237241 env[1135]: time="2024-02-09T19:45:54.237203369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 9 19:45:54.240412 kubelet[1628]: E0209 19:45:54.240386 1628 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:54.240640 env[1135]: time="2024-02-09T19:45:54.240613894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 9 19:45:54.364717 kubelet[1628]: E0209 19:45:54.364680 1628 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.68:6443: connect: connection refused Feb 9 19:45:54.467061 kubelet[1628]: I0209 19:45:54.467025 1628 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:45:54.467314 kubelet[1628]: E0209 19:45:54.467290 1628 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" Feb 9 19:45:54.655813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount771260792.mount: Deactivated successfully. 
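The dns.go warnings above fire because glibc's resolver honours at most three nameserver lines in resolv.conf (MAXNS), so the kubelet trims the host's longer list down to 1.1.1.1, 1.0.0.1, 8.8.8.8 when building pod DNS config. A small sketch of the same check, assuming a standard /etc/resolv.conf layout:

    // nslimit.go: count nameserver entries and report which ones survive
    // the three-server limit the kubelet warns about above.
    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        const maxNS = 3 // glibc MAXNS
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }
        if len(servers) > maxNS {
            fmt.Printf("%d nameservers, only %v will be applied\n", len(servers), servers[:maxNS])
        } else {
            fmt.Println("nameservers:", servers)
        }
    }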
Feb 9 19:45:54.661607 env[1135]: time="2024-02-09T19:45:54.661551041Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:54.662370 env[1135]: time="2024-02-09T19:45:54.662344458Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:54.664057 env[1135]: time="2024-02-09T19:45:54.664004481Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:54.666183 env[1135]: time="2024-02-09T19:45:54.666150194Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:54.667305 env[1135]: time="2024-02-09T19:45:54.667259564Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:54.668414 env[1135]: time="2024-02-09T19:45:54.668380766Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:54.669757 env[1135]: time="2024-02-09T19:45:54.669709578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:54.671442 env[1135]: time="2024-02-09T19:45:54.671398965Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:54.673222 env[1135]: time="2024-02-09T19:45:54.673190164Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:54.675083 env[1135]: time="2024-02-09T19:45:54.675051554Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:54.675589 env[1135]: time="2024-02-09T19:45:54.675565117Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:54.676105 env[1135]: time="2024-02-09T19:45:54.676077688Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:54.697080 env[1135]: time="2024-02-09T19:45:54.696916922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:45:54.697080 env[1135]: time="2024-02-09T19:45:54.696954482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:45:54.697080 env[1135]: time="2024-02-09T19:45:54.696964341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:45:54.697841 env[1135]: time="2024-02-09T19:45:54.697137195Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee70d3b1da17c001d74ee6fe59f224e3f9f752f181c905fb6c76841f3dd3cb72 pid=1714 runtime=io.containerd.runc.v2
Feb 9 19:45:54.698160 env[1135]: time="2024-02-09T19:45:54.698012486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:45:54.698160 env[1135]: time="2024-02-09T19:45:54.698040879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:45:54.698160 env[1135]: time="2024-02-09T19:45:54.698049816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:45:54.698292 env[1135]: time="2024-02-09T19:45:54.698212701Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eb06e17da35c8f984a48f5d7bcdb25bc3568c8c0723c1f439cb6e537dea1b994 pid=1721 runtime=io.containerd.runc.v2
Feb 9 19:45:54.700822 env[1135]: time="2024-02-09T19:45:54.700759276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:45:54.700883 env[1135]: time="2024-02-09T19:45:54.700843634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:45:54.700910 env[1135]: time="2024-02-09T19:45:54.700872999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:45:54.701272 env[1135]: time="2024-02-09T19:45:54.701233645Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8c6a791832a861b20bffcd6d700345554e50b788d80a411ba65ce2870555d2e pid=1733 runtime=io.containerd.runc.v2
Feb 9 19:45:54.708053 systemd[1]: Started cri-containerd-ee70d3b1da17c001d74ee6fe59f224e3f9f752f181c905fb6c76841f3dd3cb72.scope.
Feb 9 19:45:54.716948 kubelet[1628]: W0209 19:45:54.715191 1628 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused
Feb 9 19:45:54.716948 kubelet[1628]: E0209 19:45:54.715226 1628 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused
Feb 9 19:45:54.716378 systemd[1]: Started cri-containerd-eb06e17da35c8f984a48f5d7bcdb25bc3568c8c0723c1f439cb6e537dea1b994.scope.
Feb 9 19:45:54.721842 systemd[1]: Started cri-containerd-c8c6a791832a861b20bffcd6d700345554e50b788d80a411ba65ce2870555d2e.scope.
Feb 9 19:45:54.744886 env[1135]: time="2024-02-09T19:45:54.744832543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee70d3b1da17c001d74ee6fe59f224e3f9f752f181c905fb6c76841f3dd3cb72\""
Feb 9 19:45:54.745743 kubelet[1628]: E0209 19:45:54.745703 1628 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:54.748396 env[1135]: time="2024-02-09T19:45:54.748313250Z" level=info msg="CreateContainer within sandbox \"ee70d3b1da17c001d74ee6fe59f224e3f9f752f181c905fb6c76841f3dd3cb72\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 9 19:45:54.756168 env[1135]: time="2024-02-09T19:45:54.756096383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:446b65f9ed9897583538056d7f152bfb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8c6a791832a861b20bffcd6d700345554e50b788d80a411ba65ce2870555d2e\""
Feb 9 19:45:54.756812 kubelet[1628]: E0209 19:45:54.756791 1628 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:54.758569 env[1135]: time="2024-02-09T19:45:54.758531419Z" level=info msg="CreateContainer within sandbox \"c8c6a791832a861b20bffcd6d700345554e50b788d80a411ba65ce2870555d2e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 9 19:45:54.760430 env[1135]: time="2024-02-09T19:45:54.760398510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb06e17da35c8f984a48f5d7bcdb25bc3568c8c0723c1f439cb6e537dea1b994\""
Feb 9 19:45:54.760818 kubelet[1628]: E0209 19:45:54.760803 1628 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:54.762284 env[1135]: time="2024-02-09T19:45:54.762205798Z" level=info msg="CreateContainer within sandbox \"eb06e17da35c8f984a48f5d7bcdb25bc3568c8c0723c1f439cb6e537dea1b994\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 9 19:45:54.764442 env[1135]: time="2024-02-09T19:45:54.764409570Z" level=info msg="CreateContainer within sandbox \"ee70d3b1da17c001d74ee6fe59f224e3f9f752f181c905fb6c76841f3dd3cb72\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9da5808bb582ca37b55e2730d1a1a6b3dc839feb6e00b50554260326990d4ae5\""
Feb 9 19:45:54.764930 env[1135]: time="2024-02-09T19:45:54.764899990Z" level=info msg="StartContainer for \"9da5808bb582ca37b55e2730d1a1a6b3dc839feb6e00b50554260326990d4ae5\""
Feb 9 19:45:54.776405 env[1135]: time="2024-02-09T19:45:54.776360869Z" level=info msg="CreateContainer within sandbox \"eb06e17da35c8f984a48f5d7bcdb25bc3568c8c0723c1f439cb6e537dea1b994\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5fe14d5e8f2c57dfd7ea853c10cc6fffece12fd8d39f75880ac9224405af1dad\""
Feb 9 19:45:54.776850 env[1135]: time="2024-02-09T19:45:54.776823587Z" level=info msg="StartContainer for \"5fe14d5e8f2c57dfd7ea853c10cc6fffece12fd8d39f75880ac9224405af1dad\""
Feb 9 19:45:54.779804 env[1135]: time="2024-02-09T19:45:54.779770652Z" level=info msg="CreateContainer within sandbox \"c8c6a791832a861b20bffcd6d700345554e50b788d80a411ba65ce2870555d2e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"94bdfd2964369505b28f2348e5b9fb6f3854f598ebbb5abe883b6aed6a7f0e65\""
Feb 9 19:45:54.779956 systemd[1]: Started cri-containerd-9da5808bb582ca37b55e2730d1a1a6b3dc839feb6e00b50554260326990d4ae5.scope.
Feb 9 19:45:54.783010 env[1135]: time="2024-02-09T19:45:54.782978597Z" level=info msg="StartContainer for \"94bdfd2964369505b28f2348e5b9fb6f3854f598ebbb5abe883b6aed6a7f0e65\""
Feb 9 19:45:54.791708 systemd[1]: Started cri-containerd-5fe14d5e8f2c57dfd7ea853c10cc6fffece12fd8d39f75880ac9224405af1dad.scope.
Feb 9 19:45:54.798321 kubelet[1628]: W0209 19:45:54.798266 1628 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused
Feb 9 19:45:54.798321 kubelet[1628]: E0209 19:45:54.798326 1628 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused
Feb 9 19:45:54.804238 systemd[1]: Started cri-containerd-94bdfd2964369505b28f2348e5b9fb6f3854f598ebbb5abe883b6aed6a7f0e65.scope.
Feb 9 19:45:54.817010 kubelet[1628]: W0209 19:45:54.816952 1628 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused
Feb 9 19:45:54.817010 kubelet[1628]: E0209 19:45:54.816998 1628 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused
Feb 9 19:45:54.828264 env[1135]: time="2024-02-09T19:45:54.828219253Z" level=info msg="StartContainer for \"9da5808bb582ca37b55e2730d1a1a6b3dc839feb6e00b50554260326990d4ae5\" returns successfully"
Feb 9 19:45:54.844072 env[1135]: time="2024-02-09T19:45:54.844028967Z" level=info msg="StartContainer for \"5fe14d5e8f2c57dfd7ea853c10cc6fffece12fd8d39f75880ac9224405af1dad\" returns successfully"
Feb 9 19:45:54.854901 env[1135]: time="2024-02-09T19:45:54.854854525Z" level=info msg="StartContainer for \"94bdfd2964369505b28f2348e5b9fb6f3854f598ebbb5abe883b6aed6a7f0e65\" returns successfully"
Feb 9 19:45:55.268700 kubelet[1628]: I0209 19:45:55.268663 1628 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 9 19:45:55.825398 kubelet[1628]: E0209 19:45:55.825363 1628 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:55.826899 kubelet[1628]: E0209 19:45:55.826867 1628 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:55.827881 kubelet[1628]: E0209 19:45:55.827861 1628 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:56.290116 kubelet[1628]: E0209 19:45:56.290088 1628 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb 9 19:45:56.385247 kubelet[1628]: I0209 19:45:56.385193 1628 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Feb 9 19:45:56.397271 kubelet[1628]: E0209 19:45:56.397240 1628 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 9 19:45:56.419566 kubelet[1628]: E0209 19:45:56.419458 1628 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2496feaed4fc9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 758457801, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 758457801, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:45:56.473304 kubelet[1628]: E0209 19:45:56.473173 1628 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2496feb077238", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 760170552, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 760170552, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:45:56.498383 kubelet[1628]: E0209 19:45:56.498337 1628 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 9 19:45:56.527012 kubelet[1628]: E0209 19:45:56.526903 1628 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2496fec2bfa7d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 779341949, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 779341949, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:45:56.580776 kubelet[1628]: E0209 19:45:56.580600 1628 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2496fec2c0ac5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 779346117, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 779346117, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:45:56.598786 kubelet[1628]: E0209 19:45:56.598739 1628 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 9 19:45:56.633889 kubelet[1628]: E0209 19:45:56.633783 1628 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2496fec2c1415", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 779348501, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 779348501, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:45:56.688517 kubelet[1628]: E0209 19:45:56.688426 1628 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2496fec2bfa7d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 779341949, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 863658472, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:45:56.699291 kubelet[1628]: E0209 19:45:56.699250 1628 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 9 19:45:56.742905 kubelet[1628]: E0209 19:45:56.742810 1628 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2496fec2c0ac5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 779346117, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 863673110, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:45:56.797034 kubelet[1628]: E0209 19:45:56.796933 1628 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2496fec2c1415", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 779348501, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 863677859, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:45:56.800053 kubelet[1628]: E0209 19:45:56.800032 1628 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 9 19:45:56.829364 kubelet[1628]: E0209 19:45:56.829334 1628 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:56.829364 kubelet[1628]: E0209 19:45:56.829371 1628 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:56.829685 kubelet[1628]: E0209 19:45:56.829586 1628 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:56.850096 kubelet[1628]: E0209 19:45:56.849970 1628 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2496ff1c4fe84", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 873256068, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 873256068, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:45:56.901107 kubelet[1628]: E0209 19:45:56.901069 1628 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 9 19:45:57.001777 kubelet[1628]: E0209 19:45:57.001731 1628 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 9 19:45:57.073744 kubelet[1628]: E0209 19:45:57.073643 1628 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2496fec2bfa7d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 779341949, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 908595649, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:45:57.101947 kubelet[1628]: E0209 19:45:57.101852 1628 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 9 19:45:57.202211 kubelet[1628]: E0209 19:45:57.202153 1628 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 9 19:45:57.302486 kubelet[1628]: E0209 19:45:57.302439 1628 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 9 19:45:57.403283 kubelet[1628]: E0209 19:45:57.403181 1628 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 9 19:45:57.472833 kubelet[1628]: E0209 19:45:57.472737 1628 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2496fec2c0ac5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 779346117, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 908603905, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:45:57.504043 kubelet[1628]: E0209 19:45:57.504007 1628 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 9 19:45:57.758941 kubelet[1628]: I0209 19:45:57.758833 1628 apiserver.go:52] "Watching apiserver"
Feb 9 19:45:57.763087 kubelet[1628]: I0209 19:45:57.763051 1628 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 9 19:45:57.784198 kubelet[1628]: I0209 19:45:57.784161 1628 reconciler.go:41] "Reconciler: start to sync state"
Feb 9 19:45:57.873992 kubelet[1628]: E0209 19:45:57.873884 1628 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2496fec2c1415", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 779348501, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 53, 908607341, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:45:57.965385 kubelet[1628]: E0209 19:45:57.965358 1628 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:58.830585 kubelet[1628]: E0209 19:45:58.830560 1628 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:59.099344 systemd[1]: Reloading.
Feb 9 19:45:59.151979 /usr/lib/systemd/system-generators/torcx-generator[1964]: time="2024-02-09T19:45:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:45:59.152006 /usr/lib/systemd/system-generators/torcx-generator[1964]: time="2024-02-09T19:45:59Z" level=info msg="torcx already run"
Feb 9 19:45:59.183309 kubelet[1628]: E0209 19:45:59.183272 1628 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:59.213883 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:45:59.213898 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:45:59.232796 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:45:59.313575 systemd[1]: Stopping kubelet.service...
Feb 9 19:45:59.333040 systemd[1]: kubelet.service: Deactivated successfully.
Feb 9 19:45:59.333197 systemd[1]: Stopped kubelet.service.
Feb 9 19:45:59.334618 systemd[1]: Started kubelet.service.
Feb 9 19:45:59.391368 kubelet[2005]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:45:59.391368 kubelet[2005]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:45:59.391368 kubelet[2005]: I0209 19:45:59.391333 2005 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 19:45:59.394716 kubelet[2005]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:45:59.394716 kubelet[2005]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:45:59.397441 kubelet[2005]: I0209 19:45:59.397411 2005 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 9 19:45:59.397441 kubelet[2005]: I0209 19:45:59.397438 2005 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 19:45:59.397687 kubelet[2005]: I0209 19:45:59.397668 2005 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 9 19:45:59.398855 kubelet[2005]: I0209 19:45:59.398840 2005 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 9 19:45:59.399417 kubelet[2005]: I0209 19:45:59.399394 2005 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:45:59.402920 kubelet[2005]: I0209 19:45:59.402901 2005 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 19:45:59.403081 kubelet[2005]: I0209 19:45:59.403068 2005 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 19:45:59.403128 kubelet[2005]: I0209 19:45:59.403123 2005 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 19:45:59.403202 kubelet[2005]: I0209 19:45:59.403138 2005 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 19:45:59.403202 kubelet[2005]: I0209 19:45:59.403147 2005 container_manager_linux.go:308] "Creating device plugin manager"
Feb 9 19:45:59.403202 kubelet[2005]: I0209 19:45:59.403178 2005 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:45:59.405627 kubelet[2005]: I0209 19:45:59.405614 2005 kubelet.go:398] "Attempting to sync node with API server"
Feb 9 19:45:59.405687 kubelet[2005]: I0209 19:45:59.405633 2005 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 19:45:59.405687 kubelet[2005]: I0209 19:45:59.405653 2005 kubelet.go:297] "Adding apiserver pod source"
Feb 9 19:45:59.405687 kubelet[2005]: I0209 19:45:59.405666 2005 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 19:45:59.406663 kubelet[2005]: I0209 19:45:59.406633 2005 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 19:45:59.407455 kubelet[2005]: I0209 19:45:59.407433 2005 server.go:1186] "Started kubelet"
Feb 9 19:45:59.409698 kubelet[2005]: I0209 19:45:59.409678 2005 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 19:45:59.410293 kubelet[2005]: E0209 19:45:59.410267 2005 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 19:45:59.410293 kubelet[2005]: E0209 19:45:59.410295 2005 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 19:45:59.414760 kubelet[2005]: I0209 19:45:59.412074 2005 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 9 19:45:59.414760 kubelet[2005]: I0209 19:45:59.412151 2005 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 19:45:59.414760 kubelet[2005]: I0209 19:45:59.412581 2005 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 19:45:59.420177 kubelet[2005]: I0209 19:45:59.420155 2005 server.go:451] "Adding debug handlers to kubelet server"
Feb 9 19:45:59.443820 kubelet[2005]: I0209 19:45:59.443792 2005 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 19:45:59.460094 kubelet[2005]: I0209 19:45:59.460068 2005 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 19:45:59.460094 kubelet[2005]: I0209 19:45:59.460088 2005 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 9 19:45:59.460162 kubelet[2005]: I0209 19:45:59.460104 2005 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 9 19:45:59.460162 kubelet[2005]: E0209 19:45:59.460153 2005 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 9 19:45:59.464082 kubelet[2005]: I0209 19:45:59.464063 2005 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 19:45:59.464082 kubelet[2005]: I0209 19:45:59.464082 2005 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 19:45:59.464156 kubelet[2005]: I0209 19:45:59.464098 2005 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:45:59.464284 kubelet[2005]: I0209 19:45:59.464262 2005 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 9 19:45:59.464284 kubelet[2005]: I0209 19:45:59.464284 2005 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 9 19:45:59.464328 kubelet[2005]: I0209 19:45:59.464292 2005 policy_none.go:49] "None policy: Start"
Feb 9 19:45:59.464867 kubelet[2005]: I0209 19:45:59.464851 2005 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 19:45:59.464867 kubelet[2005]: I0209 19:45:59.464868 2005 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 19:45:59.464973 kubelet[2005]: I0209 19:45:59.464958 2005 state_mem.go:75] "Updated machine memory state"
Feb 9 19:45:59.467937 kubelet[2005]: I0209 19:45:59.467917 2005 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 19:45:59.468229 kubelet[2005]: I0209 19:45:59.468192 2005 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 19:45:59.485412 sudo[2058]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 9 19:45:59.485579 sudo[2058]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 9 19:45:59.515998 kubelet[2005]: I0209 19:45:59.515976 2005 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 9 19:45:59.521300 kubelet[2005]: I0209 19:45:59.521273 2005 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Feb 9 19:45:59.521459 kubelet[2005]: I0209 19:45:59.521435 2005 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Feb 9 19:45:59.561251 kubelet[2005]: I0209 19:45:59.561215 2005 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:45:59.561330 kubelet[2005]: I0209 19:45:59.561292 2005 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:45:59.561330 kubelet[2005]: I0209 19:45:59.561316 2005 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:45:59.566287 kubelet[2005]: E0209 19:45:59.566242 2005 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Feb 9 19:45:59.613678 kubelet[2005]: I0209 19:45:59.613654 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 19:45:59.613678 kubelet[2005]: I0209 19:45:59.613687 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/446b65f9ed9897583538056d7f152bfb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"446b65f9ed9897583538056d7f152bfb\") " pod="kube-system/kube-apiserver-localhost"
Feb 9 19:45:59.613839 kubelet[2005]: I0209 19:45:59.613706 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 19:45:59.613839 kubelet[2005]: I0209 19:45:59.613732 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 19:45:59.613839 kubelet[2005]: I0209 19:45:59.613750 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 19:45:59.614075 kubelet[2005]: I0209 19:45:59.614062 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost"
Feb 9 19:45:59.614118 kubelet[2005]: I0209 19:45:59.614109 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/446b65f9ed9897583538056d7f152bfb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"446b65f9ed9897583538056d7f152bfb\") " pod="kube-system/kube-apiserver-localhost"
Feb 9 19:45:59.614248 kubelet[2005]: I0209 19:45:59.614235 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/446b65f9ed9897583538056d7f152bfb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"446b65f9ed9897583538056d7f152bfb\") " pod="kube-system/kube-apiserver-localhost"
Feb 9 19:45:59.614290 kubelet[2005]: I0209 19:45:59.614273 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 9 19:45:59.811431 kubelet[2005]: E0209 19:45:59.811387 2005 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 9 19:45:59.812277 kubelet[2005]: E0209 19:45:59.812249 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:59.867802 kubelet[2005]: E0209 19:45:59.867763 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:59.910699 kubelet[2005]: E0209 19:45:59.910669 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:59.937503 sudo[2058]: pam_unix(sudo:session): session closed for user root
Feb 9 19:46:00.406331 kubelet[2005]: I0209 19:46:00.406270 2005 apiserver.go:52] "Watching apiserver"
Feb 9 19:46:00.612795 kubelet[2005]: I0209 19:46:00.612757 2005 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 9 19:46:00.620969 kubelet[2005]: I0209 19:46:00.620932 2005 reconciler.go:41] "Reconciler: start to sync state"
Feb 9 19:46:01.011180 kubelet[2005]: E0209 19:46:01.011133 2005 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Feb 9 19:46:01.011437 kubelet[2005]: E0209 19:46:01.011411 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:46:01.209915 kubelet[2005]: E0209 19:46:01.209877 2005 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Feb 9 19:46:01.210585 kubelet[2005]: E0209 19:46:01.210560 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:46:01.382251 sudo[1220]: pam_unix(sudo:session): session closed for user root
Feb 9 19:46:01.383580 sshd[1217]: pam_unix(sshd:session): session closed for user core
Feb 9 19:46:01.385974 systemd[1]: sshd@4-10.0.0.68:22-10.0.0.1:59678.service: Deactivated successfully.
Feb 9 19:46:01.386654 systemd[1]: session-5.scope: Deactivated successfully.
Feb 9 19:46:01.386817 systemd[1]: session-5.scope: Consumed 4.367s CPU time.
Feb 9 19:46:01.387208 systemd-logind[1121]: Session 5 logged out. Waiting for processes to exit.
Feb 9 19:46:01.387914 systemd-logind[1121]: Removed session 5.
Feb 9 19:46:01.411376 kubelet[2005]: E0209 19:46:01.411331 2005 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 9 19:46:01.411876 kubelet[2005]: E0209 19:46:01.411863 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:46:01.468298 kubelet[2005]: E0209 19:46:01.468270 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:46:01.469009 kubelet[2005]: E0209 19:46:01.468985 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:46:01.469583 kubelet[2005]: E0209 19:46:01.469558 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:46:02.011326 kubelet[2005]: I0209 19:46:02.011285 2005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.011219349 pod.CreationTimestamp="2024-02-09 19:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:46:02.011106213 +0000 UTC m=+2.673256469" watchObservedRunningTime="2024-02-09 19:46:02.011219349 +0000 UTC m=+2.673369585"
Feb 9 19:46:02.011509 kubelet[2005]: I0209 19:46:02.011370 2005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.011356991 pod.CreationTimestamp="2024-02-09 19:45:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:46:01.61379596 +0000 UTC m=+2.275946196" watchObservedRunningTime="2024-02-09 19:46:02.011356991 +0000 UTC m=+2.673507227"
Feb 9 19:46:02.509629 kubelet[2005]: E0209 19:46:02.509592 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:46:02.813862 kubelet[2005]: I0209 19:46:02.813832 2005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.813611126 pod.CreationTimestamp="2024-02-09 19:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:46:02.410529712 +0000 UTC m=+3.072679978" watchObservedRunningTime="2024-02-09 19:46:02.813611126 +0000 UTC m=+3.475761362"
Feb 9 19:46:05.249824 kubelet[2005]: E0209 19:46:05.249786 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:46:05.472926 kubelet[2005]: E0209 19:46:05.472900 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:46:07.522662 kubelet[2005]: E0209 19:46:07.522626 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:46:08.475864 kubelet[2005]: E0209 19:46:08.475833 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:46:10.630875 update_engine[1122]: I0209 19:46:10.630818 1122 update_attempter.cc:509] Updating boot flags...
Feb 9 19:46:12.515311 kubelet[2005]: E0209 19:46:12.515261 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:46:13.481475 kubelet[2005]: I0209 19:46:13.481446 2005 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 9 19:46:13.481823 env[1135]: time="2024-02-09T19:46:13.481785997Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 9 19:46:13.482057 kubelet[2005]: I0209 19:46:13.481977 2005 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 9 19:46:14.340839 kubelet[2005]: I0209 19:46:14.340778 2005 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:46:14.345400 kubelet[2005]: I0209 19:46:14.345362 2005 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:46:14.346350 systemd[1]: Created slice kubepods-besteffort-podf8d6b0cc_d7fa_4994_86d6_671206be4bb3.slice.
Feb 9 19:46:14.358397 systemd[1]: Created slice kubepods-burstable-pod3357fc38_4056_48b1_b5a7_61df320fa741.slice.
Feb 9 19:46:14.412661 kubelet[2005]: I0209 19:46:14.412617 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3357fc38-4056-48b1-b5a7-61df320fa741-cilium-config-path\") pod \"cilium-x5f4j\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " pod="kube-system/cilium-x5f4j"
Feb 9 19:46:14.412661 kubelet[2005]: I0209 19:46:14.412666 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzvgh\" (UniqueName: \"kubernetes.io/projected/3357fc38-4056-48b1-b5a7-61df320fa741-kube-api-access-mzvgh\") pod \"cilium-x5f4j\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " pod="kube-system/cilium-x5f4j"
Feb 9 19:46:14.412903 kubelet[2005]: I0209 19:46:14.412699 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8d6b0cc-d7fa-4994-86d6-671206be4bb3-xtables-lock\") pod \"kube-proxy-x96tv\" (UID: \"f8d6b0cc-d7fa-4994-86d6-671206be4bb3\") " pod="kube-system/kube-proxy-x96tv"
Feb 9 19:46:14.412903 kubelet[2005]: I0209 19:46:14.412802 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8d6b0cc-d7fa-4994-86d6-671206be4bb3-lib-modules\") pod \"kube-proxy-x96tv\" (UID: \"f8d6b0cc-d7fa-4994-86d6-671206be4bb3\") " pod="kube-system/kube-proxy-x96tv"
Feb 9 19:46:14.412903 kubelet[2005]: I0209 19:46:14.412867 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-cilium-cgroup\") pod \"cilium-x5f4j\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " pod="kube-system/cilium-x5f4j"
Feb 9 19:46:14.412903 kubelet[2005]: I0209 19:46:14.412906 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3357fc38-4056-48b1-b5a7-61df320fa741-clustermesh-secrets\") pod \"cilium-x5f4j\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " pod="kube-system/cilium-x5f4j"
Feb 9 19:46:14.413026 kubelet[2005]: I0209 19:46:14.412970 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-hostproc\") pod \"cilium-x5f4j\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " pod="kube-system/cilium-x5f4j"
Feb 9 19:46:14.413026 kubelet[2005]: I0209 19:46:14.413017 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-cni-path\") pod \"cilium-x5f4j\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " pod="kube-system/cilium-x5f4j"
Feb 9 19:46:14.413093 kubelet[2005]: I0209 19:46:14.413059 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-lib-modules\") pod \"cilium-x5f4j\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " pod="kube-system/cilium-x5f4j"
Feb 9 19:46:14.413129 kubelet[2005]: I0209 19:46:14.413097 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-host-proc-sys-net\") pod \"cilium-x5f4j\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " pod="kube-system/cilium-x5f4j"
Feb 9 19:46:14.413163 kubelet[2005]: I0209 19:46:14.413147 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt8q5\" (UniqueName: \"kubernetes.io/projected/f8d6b0cc-d7fa-4994-86d6-671206be4bb3-kube-api-access-jt8q5\") pod \"kube-proxy-x96tv\" (UID: \"f8d6b0cc-d7fa-4994-86d6-671206be4bb3\") " pod="kube-system/kube-proxy-x96tv"
Feb 9 19:46:14.413197 kubelet[2005]: I0209 19:46:14.413189 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-host-proc-sys-kernel\") pod \"cilium-x5f4j\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " pod="kube-system/cilium-x5f4j"
Feb 9 19:46:14.413244 kubelet[2005]: I0209 19:46:14.413229 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-etc-cni-netd\") pod \"cilium-x5f4j\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " pod="kube-system/cilium-x5f4j"
Feb 9 19:46:14.413282 kubelet[2005]: I0209 19:46:14.413257 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-xtables-lock\") pod \"cilium-x5f4j\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " pod="kube-system/cilium-x5f4j"
Feb 9 19:46:14.413338 kubelet[2005]: I0209 19:46:14.413321 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-cilium-run\") pod \"cilium-x5f4j\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " pod="kube-system/cilium-x5f4j"
Feb 9 19:46:14.413375 kubelet[2005]: I0209 19:46:14.413349 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3357fc38-4056-48b1-b5a7-61df320fa741-hubble-tls\") pod \"cilium-x5f4j\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " pod="kube-system/cilium-x5f4j"
Feb 9 19:46:14.413375 kubelet[2005]: I0209 19:46:14.413370 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-bpf-maps\") pod \"cilium-x5f4j\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " pod="kube-system/cilium-x5f4j"
Feb 9 19:46:14.413461 kubelet[2005]: I0209 19:46:14.413390 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f8d6b0cc-d7fa-4994-86d6-671206be4bb3-kube-proxy\") pod \"kube-proxy-x96tv\" (UID: \"f8d6b0cc-d7fa-4994-86d6-671206be4bb3\") " pod="kube-system/kube-proxy-x96tv"
Feb 9 19:46:14.547260 kubelet[2005]: I0209 19:46:14.547225 2005 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:46:14.552153 systemd[1]: Created slice kubepods-besteffort-pod06e56adb_371b_4034_9656_69246ae66ba4.slice.
Feb 9 19:46:14.615227 kubelet[2005]: I0209 19:46:14.615144 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxkwk\" (UniqueName: \"kubernetes.io/projected/06e56adb-371b-4034-9656-69246ae66ba4-kube-api-access-nxkwk\") pod \"cilium-operator-f59cbd8c6-rrvts\" (UID: \"06e56adb-371b-4034-9656-69246ae66ba4\") " pod="kube-system/cilium-operator-f59cbd8c6-rrvts"
Feb 9 19:46:14.615227 kubelet[2005]: I0209 19:46:14.615197 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06e56adb-371b-4034-9656-69246ae66ba4-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-rrvts\" (UID: \"06e56adb-371b-4034-9656-69246ae66ba4\") " pod="kube-system/cilium-operator-f59cbd8c6-rrvts"
Feb 9 19:46:14.955947 kubelet[2005]: E0209 19:46:14.955855 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:46:14.956426 env[1135]: time="2024-02-09T19:46:14.956388728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x96tv,Uid:f8d6b0cc-d7fa-4994-86d6-671206be4bb3,Namespace:kube-system,Attempt:0,}"
Feb 9 19:46:14.961183 kubelet[2005]: E0209 19:46:14.961162 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:46:14.961514 env[1135]: time="2024-02-09T19:46:14.961490332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x5f4j,Uid:3357fc38-4056-48b1-b5a7-61df320fa741,Namespace:kube-system,Attempt:0,}"
Feb 9 19:46:15.049412 env[1135]: time="2024-02-09T19:46:15.049330938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:46:15.049412 env[1135]: time="2024-02-09T19:46:15.049395140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:46:15.049591 env[1135]: time="2024-02-09T19:46:15.049428773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:46:15.049715 env[1135]: time="2024-02-09T19:46:15.049661823Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/46052bc783784fbac9c200b7b13d75c376364f13307ad783b54a612f315c881f pid=2139 runtime=io.containerd.runc.v2
Feb 9 19:46:15.053541 env[1135]: time="2024-02-09T19:46:15.053472446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:46:15.053541 env[1135]: time="2024-02-09T19:46:15.053516891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:46:15.053541 env[1135]: time="2024-02-09T19:46:15.053530116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:46:15.053774 env[1135]: time="2024-02-09T19:46:15.053667084Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7 pid=2157 runtime=io.containerd.runc.v2
Feb 9 19:46:15.062417 systemd[1]: Started cri-containerd-46052bc783784fbac9c200b7b13d75c376364f13307ad783b54a612f315c881f.scope.
Feb 9 19:46:15.067564 systemd[1]: Started cri-containerd-ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7.scope.
Feb 9 19:46:15.091588 env[1135]: time="2024-02-09T19:46:15.091530091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x96tv,Uid:f8d6b0cc-d7fa-4994-86d6-671206be4bb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"46052bc783784fbac9c200b7b13d75c376364f13307ad783b54a612f315c881f\"" Feb 9 19:46:15.092179 kubelet[2005]: E0209 19:46:15.092151 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:15.095672 env[1135]: time="2024-02-09T19:46:15.095618629Z" level=info msg="CreateContainer within sandbox \"46052bc783784fbac9c200b7b13d75c376364f13307ad783b54a612f315c881f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:46:15.101108 env[1135]: time="2024-02-09T19:46:15.101060150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x5f4j,Uid:3357fc38-4056-48b1-b5a7-61df320fa741,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\"" Feb 9 19:46:15.101788 kubelet[2005]: E0209 19:46:15.101760 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:15.103067 env[1135]: time="2024-02-09T19:46:15.103027845Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:46:15.124478 env[1135]: time="2024-02-09T19:46:15.124437541Z" level=info msg="CreateContainer within sandbox \"46052bc783784fbac9c200b7b13d75c376364f13307ad783b54a612f315c881f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7bdbc1a5edc476df2db84804e1742f7df459783836416b164d70f29cc2e57c8f\"" Feb 9 19:46:15.124954 env[1135]: time="2024-02-09T19:46:15.124930762Z" level=info msg="StartContainer for \"7bdbc1a5edc476df2db84804e1742f7df459783836416b164d70f29cc2e57c8f\"" Feb 9 19:46:15.137764 systemd[1]: Started cri-containerd-7bdbc1a5edc476df2db84804e1742f7df459783836416b164d70f29cc2e57c8f.scope. Feb 9 19:46:15.154300 kubelet[2005]: E0209 19:46:15.154270 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:15.156130 env[1135]: time="2024-02-09T19:46:15.156092499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-rrvts,Uid:06e56adb-371b-4034-9656-69246ae66ba4,Namespace:kube-system,Attempt:0,}" Feb 9 19:46:15.165533 env[1135]: time="2024-02-09T19:46:15.165462306Z" level=info msg="StartContainer for \"7bdbc1a5edc476df2db84804e1742f7df459783836416b164d70f29cc2e57c8f\" returns successfully" Feb 9 19:46:15.174472 env[1135]: time="2024-02-09T19:46:15.174393786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:46:15.174696 env[1135]: time="2024-02-09T19:46:15.174670629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:46:15.174834 env[1135]: time="2024-02-09T19:46:15.174808530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:46:15.175235 env[1135]: time="2024-02-09T19:46:15.175190511Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29 pid=2250 runtime=io.containerd.runc.v2 Feb 9 19:46:15.186604 systemd[1]: Started cri-containerd-b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29.scope. Feb 9 19:46:15.233314 env[1135]: time="2024-02-09T19:46:15.233207673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-rrvts,Uid:06e56adb-371b-4034-9656-69246ae66ba4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29\"" Feb 9 19:46:15.234413 kubelet[2005]: E0209 19:46:15.233919 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:15.486461 kubelet[2005]: E0209 19:46:15.486361 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:15.609813 kubelet[2005]: I0209 19:46:15.609766 2005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-x96tv" podStartSLOduration=1.6097003779999999 pod.CreationTimestamp="2024-02-09 19:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:46:15.609337403 +0000 UTC m=+16.271487639" watchObservedRunningTime="2024-02-09 19:46:15.609700378 +0000 UTC m=+16.271850614" Feb 9 19:46:16.488027 kubelet[2005]: E0209 19:46:16.487997 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:24.785704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3684718580.mount: Deactivated successfully. 
Feb 9 19:46:29.674357 env[1135]: time="2024-02-09T19:46:29.674277621Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:29.675977 env[1135]: time="2024-02-09T19:46:29.675934205Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:29.677535 env[1135]: time="2024-02-09T19:46:29.677507102Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:29.678047 env[1135]: time="2024-02-09T19:46:29.678017773Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 19:46:29.679466 env[1135]: time="2024-02-09T19:46:29.678790866Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:46:29.679466 env[1135]: time="2024-02-09T19:46:29.679434867Z" level=info msg="CreateContainer within sandbox \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:46:29.690755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3218707455.mount: Deactivated successfully. Feb 9 19:46:29.692269 env[1135]: time="2024-02-09T19:46:29.692216740Z" level=info msg="CreateContainer within sandbox \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b\"" Feb 9 19:46:29.692811 env[1135]: time="2024-02-09T19:46:29.692761563Z" level=info msg="StartContainer for \"3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b\"" Feb 9 19:46:29.710123 systemd[1]: Started cri-containerd-3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b.scope. Feb 9 19:46:29.731485 env[1135]: time="2024-02-09T19:46:29.731446278Z" level=info msg="StartContainer for \"3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b\" returns successfully" Feb 9 19:46:29.739564 systemd[1]: cri-containerd-3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b.scope: Deactivated successfully. 
Feb 9 19:46:29.827539 env[1135]: time="2024-02-09T19:46:29.827487250Z" level=info msg="shim disconnected" id=3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b Feb 9 19:46:29.827539 env[1135]: time="2024-02-09T19:46:29.827534499Z" level=warning msg="cleaning up after shim disconnected" id=3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b namespace=k8s.io Feb 9 19:46:29.827539 env[1135]: time="2024-02-09T19:46:29.827543776Z" level=info msg="cleaning up dead shim" Feb 9 19:46:29.833693 env[1135]: time="2024-02-09T19:46:29.833666519Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:46:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2445 runtime=io.containerd.runc.v2\n" Feb 9 19:46:30.506851 kubelet[2005]: E0209 19:46:30.506824 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:30.509396 env[1135]: time="2024-02-09T19:46:30.509330916Z" level=info msg="CreateContainer within sandbox \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:46:30.689178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b-rootfs.mount: Deactivated successfully. Feb 9 19:46:30.698636 env[1135]: time="2024-02-09T19:46:30.698581658Z" level=info msg="CreateContainer within sandbox \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9\"" Feb 9 19:46:30.699181 env[1135]: time="2024-02-09T19:46:30.699140790Z" level=info msg="StartContainer for \"9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9\"" Feb 9 19:46:30.713430 systemd[1]: Started cri-containerd-9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9.scope. Feb 9 19:46:30.753115 env[1135]: time="2024-02-09T19:46:30.752194392Z" level=info msg="StartContainer for \"9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9\" returns successfully" Feb 9 19:46:30.755917 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:46:30.756271 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:46:30.756484 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:46:30.758194 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:46:30.759275 systemd[1]: cri-containerd-9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9.scope: Deactivated successfully. Feb 9 19:46:30.768037 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 19:46:30.779871 env[1135]: time="2024-02-09T19:46:30.779823981Z" level=info msg="shim disconnected" id=9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9 Feb 9 19:46:30.779871 env[1135]: time="2024-02-09T19:46:30.779869978Z" level=warning msg="cleaning up after shim disconnected" id=9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9 namespace=k8s.io Feb 9 19:46:30.780056 env[1135]: time="2024-02-09T19:46:30.779879877Z" level=info msg="cleaning up dead shim" Feb 9 19:46:30.785662 env[1135]: time="2024-02-09T19:46:30.785626789Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:46:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2508 runtime=io.containerd.runc.v2\n" Feb 9 19:46:31.509115 kubelet[2005]: E0209 19:46:31.509064 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:31.511445 env[1135]: time="2024-02-09T19:46:31.511414703Z" level=info msg="CreateContainer within sandbox \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:46:31.525626 env[1135]: time="2024-02-09T19:46:31.525573385Z" level=info msg="CreateContainer within sandbox \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031\"" Feb 9 19:46:31.526256 env[1135]: time="2024-02-09T19:46:31.526220351Z" level=info msg="StartContainer for \"ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031\"" Feb 9 19:46:31.540244 systemd[1]: Started cri-containerd-ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031.scope. Feb 9 19:46:31.568005 systemd[1]: cri-containerd-ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031.scope: Deactivated successfully. Feb 9 19:46:31.608615 env[1135]: time="2024-02-09T19:46:31.608520948Z" level=info msg="StartContainer for \"ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031\" returns successfully" Feb 9 19:46:31.688533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9-rootfs.mount: Deactivated successfully. 
Feb 9 19:46:32.013369 env[1135]: time="2024-02-09T19:46:32.013297218Z" level=info msg="shim disconnected" id=ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031 Feb 9 19:46:32.013369 env[1135]: time="2024-02-09T19:46:32.013351651Z" level=warning msg="cleaning up after shim disconnected" id=ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031 namespace=k8s.io Feb 9 19:46:32.013369 env[1135]: time="2024-02-09T19:46:32.013363323Z" level=info msg="cleaning up dead shim" Feb 9 19:46:32.020114 env[1135]: time="2024-02-09T19:46:32.020065948Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:46:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2562 runtime=io.containerd.runc.v2\n" Feb 9 19:46:32.042000 env[1135]: time="2024-02-09T19:46:32.041950907Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:32.044451 env[1135]: time="2024-02-09T19:46:32.044396362Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:32.046438 env[1135]: time="2024-02-09T19:46:32.046396301Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:32.046950 env[1135]: time="2024-02-09T19:46:32.046913983Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 19:46:32.048391 env[1135]: time="2024-02-09T19:46:32.048363768Z" level=info msg="CreateContainer within sandbox \"b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:46:32.059854 env[1135]: time="2024-02-09T19:46:32.059820778Z" level=info msg="CreateContainer within sandbox \"b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81\"" Feb 9 19:46:32.060203 env[1135]: time="2024-02-09T19:46:32.060114961Z" level=info msg="StartContainer for \"0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81\"" Feb 9 19:46:32.078370 systemd[1]: Started cri-containerd-0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81.scope. 
Feb 9 19:46:32.100289 env[1135]: time="2024-02-09T19:46:32.100244324Z" level=info msg="StartContainer for \"0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81\" returns successfully" Feb 9 19:46:32.512102 kubelet[2005]: E0209 19:46:32.512066 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:32.513902 kubelet[2005]: E0209 19:46:32.513878 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:32.515196 env[1135]: time="2024-02-09T19:46:32.515142034Z" level=info msg="CreateContainer within sandbox \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:46:32.533281 kubelet[2005]: I0209 19:46:32.533242 2005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-rrvts" podStartSLOduration=-9.223372018321566e+09 pod.CreationTimestamp="2024-02-09 19:46:14 +0000 UTC" firstStartedPulling="2024-02-09 19:46:15.234648913 +0000 UTC m=+15.896799149" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:46:32.532511806 +0000 UTC m=+33.194662032" watchObservedRunningTime="2024-02-09 19:46:32.533209467 +0000 UTC m=+33.195359703" Feb 9 19:46:32.536022 env[1135]: time="2024-02-09T19:46:32.535975826Z" level=info msg="CreateContainer within sandbox \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612\"" Feb 9 19:46:32.536511 env[1135]: time="2024-02-09T19:46:32.536480314Z" level=info msg="StartContainer for \"a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612\"" Feb 9 19:46:32.564838 systemd[1]: Started cri-containerd-a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612.scope. Feb 9 19:46:32.589361 systemd[1]: cri-containerd-a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612.scope: Deactivated successfully. 
Feb 9 19:46:32.595077 env[1135]: time="2024-02-09T19:46:32.595033761Z" level=info msg="StartContainer for \"a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612\" returns successfully" Feb 9 19:46:32.727243 env[1135]: time="2024-02-09T19:46:32.727189326Z" level=info msg="shim disconnected" id=a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612 Feb 9 19:46:32.727243 env[1135]: time="2024-02-09T19:46:32.727232166Z" level=warning msg="cleaning up after shim disconnected" id=a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612 namespace=k8s.io Feb 9 19:46:32.727243 env[1135]: time="2024-02-09T19:46:32.727240382Z" level=info msg="cleaning up dead shim" Feb 9 19:46:32.743594 env[1135]: time="2024-02-09T19:46:32.743536527Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:46:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2655 runtime=io.containerd.runc.v2\n" Feb 9 19:46:33.517508 kubelet[2005]: E0209 19:46:33.517479 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:33.517886 kubelet[2005]: E0209 19:46:33.517682 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:33.519807 env[1135]: time="2024-02-09T19:46:33.519765804Z" level=info msg="CreateContainer within sandbox \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:46:33.535565 env[1135]: time="2024-02-09T19:46:33.535502914Z" level=info msg="CreateContainer within sandbox \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66\"" Feb 9 19:46:33.536109 env[1135]: time="2024-02-09T19:46:33.536065270Z" level=info msg="StartContainer for \"493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66\"" Feb 9 19:46:33.553783 systemd[1]: Started cri-containerd-493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66.scope. Feb 9 19:46:33.584691 env[1135]: time="2024-02-09T19:46:33.584618140Z" level=info msg="StartContainer for \"493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66\" returns successfully" Feb 9 19:46:33.682346 kubelet[2005]: I0209 19:46:33.682309 2005 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:46:33.861760 kubelet[2005]: I0209 19:46:33.861699 2005 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:46:33.866183 systemd[1]: Created slice kubepods-burstable-pod8b4a7805_58e8_44f7_b6a8_57bc85c7f8d6.slice. 
Feb 9 19:46:33.869740 kubelet[2005]: W0209 19:46:33.869701 2005 reflector.go:424] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 9 19:46:33.869867 kubelet[2005]: E0209 19:46:33.869751 2005 reflector.go:140] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 9 19:46:33.870170 kubelet[2005]: I0209 19:46:33.870153 2005 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:46:33.877208 systemd[1]: Created slice kubepods-burstable-pod61d0901f_b294_4e27_a3e9_79d1c3c014b4.slice. Feb 9 19:46:33.946297 kubelet[2005]: I0209 19:46:33.946261 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61d0901f-b294-4e27-a3e9-79d1c3c014b4-config-volume\") pod \"coredns-787d4945fb-rx4c7\" (UID: \"61d0901f-b294-4e27-a3e9-79d1c3c014b4\") " pod="kube-system/coredns-787d4945fb-rx4c7" Feb 9 19:46:33.946297 kubelet[2005]: I0209 19:46:33.946298 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b4a7805-58e8-44f7-b6a8-57bc85c7f8d6-config-volume\") pod \"coredns-787d4945fb-dn6zs\" (UID: \"8b4a7805-58e8-44f7-b6a8-57bc85c7f8d6\") " pod="kube-system/coredns-787d4945fb-dn6zs" Feb 9 19:46:33.946297 kubelet[2005]: I0209 19:46:33.946319 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5xpm\" (UniqueName: \"kubernetes.io/projected/61d0901f-b294-4e27-a3e9-79d1c3c014b4-kube-api-access-v5xpm\") pod \"coredns-787d4945fb-rx4c7\" (UID: \"61d0901f-b294-4e27-a3e9-79d1c3c014b4\") " pod="kube-system/coredns-787d4945fb-rx4c7" Feb 9 19:46:33.946481 kubelet[2005]: I0209 19:46:33.946336 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8nb7\" (UniqueName: \"kubernetes.io/projected/8b4a7805-58e8-44f7-b6a8-57bc85c7f8d6-kube-api-access-h8nb7\") pod \"coredns-787d4945fb-dn6zs\" (UID: \"8b4a7805-58e8-44f7-b6a8-57bc85c7f8d6\") " pod="kube-system/coredns-787d4945fb-dn6zs" Feb 9 19:46:34.521747 kubelet[2005]: E0209 19:46:34.521703 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:34.532510 kubelet[2005]: I0209 19:46:34.532486 2005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-x5f4j" podStartSLOduration=-9.223372016322317e+09 pod.CreationTimestamp="2024-02-09 19:46:14 +0000 UTC" firstStartedPulling="2024-02-09 19:46:15.102462017 +0000 UTC m=+15.764612253" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:46:34.532074041 +0000 UTC m=+35.194224267" watchObservedRunningTime="2024-02-09 19:46:34.532458012 +0000 UTC m=+35.194608238" Feb 9 19:46:34.770586 kubelet[2005]: E0209 19:46:34.770560 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:34.771188 env[1135]: time="2024-02-09T19:46:34.771154822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-dn6zs,Uid:8b4a7805-58e8-44f7-b6a8-57bc85c7f8d6,Namespace:kube-system,Attempt:0,}" Feb 9 19:46:34.779350 kubelet[2005]: E0209 19:46:34.779275 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:34.779617 env[1135]: time="2024-02-09T19:46:34.779577575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-rx4c7,Uid:61d0901f-b294-4e27-a3e9-79d1c3c014b4,Namespace:kube-system,Attempt:0,}" Feb 9 19:46:35.473758 systemd-networkd[1022]: cilium_host: Link UP Feb 9 19:46:35.474218 systemd-networkd[1022]: cilium_net: Link UP Feb 9 19:46:35.477806 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 19:46:35.477868 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:46:35.475055 systemd-networkd[1022]: cilium_net: Gained carrier Feb 9 19:46:35.475853 systemd-networkd[1022]: cilium_host: Gained carrier Feb 9 19:46:35.489817 systemd-networkd[1022]: cilium_net: Gained IPv6LL Feb 9 19:46:35.523018 kubelet[2005]: E0209 19:46:35.522981 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:35.548662 systemd-networkd[1022]: cilium_vxlan: Link UP Feb 9 19:46:35.548674 systemd-networkd[1022]: cilium_vxlan: Gained carrier Feb 9 19:46:35.723862 systemd-networkd[1022]: cilium_host: Gained IPv6LL Feb 9 19:46:35.743752 kernel: NET: Registered PF_ALG protocol family Feb 9 19:46:36.239254 systemd-networkd[1022]: lxc_health: Link UP Feb 9 19:46:36.250151 systemd-networkd[1022]: lxc_health: Gained carrier Feb 9 19:46:36.250776 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:46:36.524074 kubelet[2005]: E0209 19:46:36.523826 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:36.814876 systemd-networkd[1022]: lxcf3514104f6a4: Link UP Feb 9 19:46:36.829824 kernel: eth0: renamed from tmpb3900 Feb 9 19:46:36.838789 kernel: eth0: renamed from tmpe54eb Feb 9 19:46:36.839162 systemd-networkd[1022]: lxc021a42cef561: Link UP Feb 9 19:46:36.843780 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc021a42cef561: link becomes ready Feb 9 19:46:36.843856 systemd-networkd[1022]: lxc021a42cef561: Gained carrier Feb 9 19:46:36.846273 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:46:36.846312 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf3514104f6a4: link becomes ready Feb 9 19:46:36.846657 systemd-networkd[1022]: lxcf3514104f6a4: Gained carrier Feb 9 19:46:37.371844 systemd-networkd[1022]: cilium_vxlan: Gained IPv6LL Feb 9 19:46:37.525722 kubelet[2005]: E0209 19:46:37.525688 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:38.139988 systemd-networkd[1022]: lxcf3514104f6a4: Gained IPv6LL Feb 9 19:46:38.267844 systemd-networkd[1022]: lxc_health: Gained IPv6LL Feb 9 19:46:38.523863 systemd-networkd[1022]: lxc021a42cef561: Gained IPv6LL Feb 9 19:46:38.951895 kubelet[2005]: I0209 
19:46:38.951856 2005 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 19:46:38.952896 kubelet[2005]: E0209 19:46:38.952883 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:39.527612 kubelet[2005]: E0209 19:46:39.527571 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:40.109794 env[1135]: time="2024-02-09T19:46:40.109701964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:46:40.109794 env[1135]: time="2024-02-09T19:46:40.109760785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:46:40.109794 env[1135]: time="2024-02-09T19:46:40.109771355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:46:40.110174 env[1135]: time="2024-02-09T19:46:40.109922899Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e54ebfbcccc73b1fcae37e48c961571ad51b0ef91ef25e6b211645ac58200c95 pid=3225 runtime=io.containerd.runc.v2 Feb 9 19:46:40.121445 systemd[1]: Started cri-containerd-e54ebfbcccc73b1fcae37e48c961571ad51b0ef91ef25e6b211645ac58200c95.scope. Feb 9 19:46:40.131398 systemd-resolved[1071]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:46:40.135039 env[1135]: time="2024-02-09T19:46:40.134982400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:46:40.135127 env[1135]: time="2024-02-09T19:46:40.135052962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:46:40.135127 env[1135]: time="2024-02-09T19:46:40.135078810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:46:40.135599 env[1135]: time="2024-02-09T19:46:40.135462330Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b390050eacf32dda0bbeff90eea289bdd91c473c1c59477cd0f014b074fc09a3 pid=3257 runtime=io.containerd.runc.v2 Feb 9 19:46:40.148623 systemd[1]: Started cri-containerd-b390050eacf32dda0bbeff90eea289bdd91c473c1c59477cd0f014b074fc09a3.scope. 
Feb 9 19:46:40.159526 env[1135]: time="2024-02-09T19:46:40.159045088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-rx4c7,Uid:61d0901f-b294-4e27-a3e9-79d1c3c014b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e54ebfbcccc73b1fcae37e48c961571ad51b0ef91ef25e6b211645ac58200c95\"" Feb 9 19:46:40.159614 kubelet[2005]: E0209 19:46:40.159596 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:40.160452 systemd-resolved[1071]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:46:40.162760 env[1135]: time="2024-02-09T19:46:40.162366013Z" level=info msg="CreateContainer within sandbox \"e54ebfbcccc73b1fcae37e48c961571ad51b0ef91ef25e6b211645ac58200c95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:46:40.181410 env[1135]: time="2024-02-09T19:46:40.181345949Z" level=info msg="CreateContainer within sandbox \"e54ebfbcccc73b1fcae37e48c961571ad51b0ef91ef25e6b211645ac58200c95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"49fac75b919349263c16efc7b4a24e9e6dc52f552a6ce3707b65388dea8d820f\"" Feb 9 19:46:40.181810 env[1135]: time="2024-02-09T19:46:40.181762581Z" level=info msg="StartContainer for \"49fac75b919349263c16efc7b4a24e9e6dc52f552a6ce3707b65388dea8d820f\"" Feb 9 19:46:40.187258 env[1135]: time="2024-02-09T19:46:40.187216902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-dn6zs,Uid:8b4a7805-58e8-44f7-b6a8-57bc85c7f8d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b390050eacf32dda0bbeff90eea289bdd91c473c1c59477cd0f014b074fc09a3\"" Feb 9 19:46:40.187921 kubelet[2005]: E0209 19:46:40.187899 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:40.190738 env[1135]: time="2024-02-09T19:46:40.190694461Z" level=info msg="CreateContainer within sandbox \"b390050eacf32dda0bbeff90eea289bdd91c473c1c59477cd0f014b074fc09a3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:46:40.199783 systemd[1]: Started cri-containerd-49fac75b919349263c16efc7b4a24e9e6dc52f552a6ce3707b65388dea8d820f.scope. Feb 9 19:46:40.207320 env[1135]: time="2024-02-09T19:46:40.207278438Z" level=info msg="CreateContainer within sandbox \"b390050eacf32dda0bbeff90eea289bdd91c473c1c59477cd0f014b074fc09a3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c7601e34c5745db58efeec6281b586e2fbd24b96cb17346c44cbdcd9337b575\"" Feb 9 19:46:40.209013 env[1135]: time="2024-02-09T19:46:40.208972418Z" level=info msg="StartContainer for \"6c7601e34c5745db58efeec6281b586e2fbd24b96cb17346c44cbdcd9337b575\"" Feb 9 19:46:40.230503 systemd[1]: Started sshd@5-10.0.0.68:22-10.0.0.1:36858.service. Feb 9 19:46:40.233636 env[1135]: time="2024-02-09T19:46:40.233327646Z" level=info msg="StartContainer for \"49fac75b919349263c16efc7b4a24e9e6dc52f552a6ce3707b65388dea8d820f\" returns successfully" Feb 9 19:46:40.235302 systemd[1]: Started cri-containerd-6c7601e34c5745db58efeec6281b586e2fbd24b96cb17346c44cbdcd9337b575.scope. 
Feb 9 19:46:40.264934 env[1135]: time="2024-02-09T19:46:40.263163134Z" level=info msg="StartContainer for \"6c7601e34c5745db58efeec6281b586e2fbd24b96cb17346c44cbdcd9337b575\" returns successfully" Feb 9 19:46:40.273435 sshd[3342]: Accepted publickey for core from 10.0.0.1 port 36858 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:40.274899 sshd[3342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:40.279644 systemd[1]: Started session-6.scope. Feb 9 19:46:40.279961 systemd-logind[1121]: New session 6 of user core. Feb 9 19:46:40.407944 sshd[3342]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:40.410924 systemd[1]: sshd@5-10.0.0.68:22-10.0.0.1:36858.service: Deactivated successfully. Feb 9 19:46:40.411592 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:46:40.412093 systemd-logind[1121]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:46:40.412709 systemd-logind[1121]: Removed session 6. Feb 9 19:46:40.531301 kubelet[2005]: E0209 19:46:40.531069 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:40.532678 kubelet[2005]: E0209 19:46:40.532662 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:40.540017 kubelet[2005]: I0209 19:46:40.539977 2005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-rx4c7" podStartSLOduration=26.539936768 pod.CreationTimestamp="2024-02-09 19:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:46:40.53921374 +0000 UTC m=+41.201363966" watchObservedRunningTime="2024-02-09 19:46:40.539936768 +0000 UTC m=+41.202087004" Feb 9 19:46:40.670818 kubelet[2005]: I0209 19:46:40.668623 2005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-dn6zs" podStartSLOduration=26.668588176 pod.CreationTimestamp="2024-02-09 19:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:46:40.66813252 +0000 UTC m=+41.330282756" watchObservedRunningTime="2024-02-09 19:46:40.668588176 +0000 UTC m=+41.330738422" Feb 9 19:46:41.113376 systemd[1]: run-containerd-runc-k8s.io-b390050eacf32dda0bbeff90eea289bdd91c473c1c59477cd0f014b074fc09a3-runc.RRtTPO.mount: Deactivated successfully. 
Feb 9 19:46:41.534630 kubelet[2005]: E0209 19:46:41.534540 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:41.535043 kubelet[2005]: E0209 19:46:41.534833 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:42.536167 kubelet[2005]: E0209 19:46:42.536141 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:42.536556 kubelet[2005]: E0209 19:46:42.536314 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:45.412483 systemd[1]: Started sshd@6-10.0.0.68:22-10.0.0.1:36862.service. Feb 9 19:46:45.452168 sshd[3470]: Accepted publickey for core from 10.0.0.1 port 36862 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:45.453305 sshd[3470]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:45.456813 systemd-logind[1121]: New session 7 of user core. Feb 9 19:46:45.457674 systemd[1]: Started session-7.scope. Feb 9 19:46:45.557073 sshd[3470]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:45.559325 systemd[1]: sshd@6-10.0.0.68:22-10.0.0.1:36862.service: Deactivated successfully. Feb 9 19:46:45.560008 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:46:45.560590 systemd-logind[1121]: Session 7 logged out. Waiting for processes to exit. Feb 9 19:46:45.561168 systemd-logind[1121]: Removed session 7. Feb 9 19:46:50.561759 systemd[1]: Started sshd@7-10.0.0.68:22-10.0.0.1:38650.service. Feb 9 19:46:50.594110 sshd[3484]: Accepted publickey for core from 10.0.0.1 port 38650 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:50.595332 sshd[3484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:50.598930 systemd-logind[1121]: New session 8 of user core. Feb 9 19:46:50.599830 systemd[1]: Started session-8.scope. Feb 9 19:46:50.699857 sshd[3484]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:50.701915 systemd[1]: sshd@7-10.0.0.68:22-10.0.0.1:38650.service: Deactivated successfully. Feb 9 19:46:50.702604 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 19:46:50.703146 systemd-logind[1121]: Session 8 logged out. Waiting for processes to exit. Feb 9 19:46:50.703939 systemd-logind[1121]: Removed session 8. Feb 9 19:46:55.705428 systemd[1]: Started sshd@8-10.0.0.68:22-10.0.0.1:38658.service. Feb 9 19:46:55.738885 sshd[3498]: Accepted publickey for core from 10.0.0.1 port 38658 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:55.740093 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:55.743692 systemd-logind[1121]: New session 9 of user core. Feb 9 19:46:55.744745 systemd[1]: Started session-9.scope. Feb 9 19:46:55.847186 sshd[3498]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:55.849019 systemd[1]: sshd@8-10.0.0.68:22-10.0.0.1:38658.service: Deactivated successfully. Feb 9 19:46:55.849672 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:46:55.850179 systemd-logind[1121]: Session 9 logged out. 
Waiting for processes to exit. Feb 9 19:46:55.850823 systemd-logind[1121]: Removed session 9. Feb 9 19:47:00.851835 systemd[1]: Started sshd@9-10.0.0.68:22-10.0.0.1:38146.service. Feb 9 19:47:00.936744 sshd[3517]: Accepted publickey for core from 10.0.0.1 port 38146 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:00.937643 sshd[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:00.940633 systemd-logind[1121]: New session 10 of user core. Feb 9 19:47:00.941435 systemd[1]: Started session-10.scope. Feb 9 19:47:01.038650 sshd[3517]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:01.040791 systemd[1]: sshd@9-10.0.0.68:22-10.0.0.1:38146.service: Deactivated successfully. Feb 9 19:47:01.041463 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 19:47:01.042160 systemd-logind[1121]: Session 10 logged out. Waiting for processes to exit. Feb 9 19:47:01.042776 systemd-logind[1121]: Removed session 10. Feb 9 19:47:06.043542 systemd[1]: Started sshd@10-10.0.0.68:22-10.0.0.1:38154.service. Feb 9 19:47:06.074530 sshd[3532]: Accepted publickey for core from 10.0.0.1 port 38154 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:06.075590 sshd[3532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:06.078944 systemd-logind[1121]: New session 11 of user core. Feb 9 19:47:06.079713 systemd[1]: Started session-11.scope. Feb 9 19:47:06.182539 sshd[3532]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:06.185114 systemd[1]: sshd@10-10.0.0.68:22-10.0.0.1:38154.service: Deactivated successfully. Feb 9 19:47:06.185799 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 19:47:06.186309 systemd-logind[1121]: Session 11 logged out. Waiting for processes to exit. Feb 9 19:47:06.187026 systemd-logind[1121]: Removed session 11. Feb 9 19:47:10.461772 kubelet[2005]: E0209 19:47:10.461739 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:11.181942 systemd[1]: Started sshd@11-10.0.0.68:22-10.0.0.1:51006.service. Feb 9 19:47:11.214823 sshd[3547]: Accepted publickey for core from 10.0.0.1 port 51006 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:11.215858 sshd[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:11.219182 systemd-logind[1121]: New session 12 of user core. Feb 9 19:47:11.220098 systemd[1]: Started session-12.scope. Feb 9 19:47:11.321901 sshd[3547]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:11.325198 systemd[1]: sshd@11-10.0.0.68:22-10.0.0.1:51006.service: Deactivated successfully. Feb 9 19:47:11.325936 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 19:47:11.326571 systemd-logind[1121]: Session 12 logged out. Waiting for processes to exit. Feb 9 19:47:11.327904 systemd[1]: Started sshd@12-10.0.0.68:22-10.0.0.1:51012.service. Feb 9 19:47:11.328810 systemd-logind[1121]: Removed session 12. Feb 9 19:47:11.360000 sshd[3561]: Accepted publickey for core from 10.0.0.1 port 51012 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:11.361322 sshd[3561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:11.365253 systemd-logind[1121]: New session 13 of user core. Feb 9 19:47:11.366490 systemd[1]: Started session-13.scope. 
Feb 9 19:47:12.212569 sshd[3561]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:12.217803 systemd[1]: Started sshd@13-10.0.0.68:22-10.0.0.1:51024.service. Feb 9 19:47:12.218365 systemd[1]: sshd@12-10.0.0.68:22-10.0.0.1:51012.service: Deactivated successfully. Feb 9 19:47:12.219222 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 19:47:12.220118 systemd-logind[1121]: Session 13 logged out. Waiting for processes to exit. Feb 9 19:47:12.222659 systemd-logind[1121]: Removed session 13. Feb 9 19:47:12.254185 sshd[3571]: Accepted publickey for core from 10.0.0.1 port 51024 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:12.255279 sshd[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:12.258236 systemd-logind[1121]: New session 14 of user core. Feb 9 19:47:12.258967 systemd[1]: Started session-14.scope. Feb 9 19:47:12.363656 sshd[3571]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:12.365744 systemd[1]: sshd@13-10.0.0.68:22-10.0.0.1:51024.service: Deactivated successfully. Feb 9 19:47:12.366454 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 19:47:12.366983 systemd-logind[1121]: Session 14 logged out. Waiting for processes to exit. Feb 9 19:47:12.367667 systemd-logind[1121]: Removed session 14. Feb 9 19:47:15.461483 kubelet[2005]: E0209 19:47:15.461453 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:17.368208 systemd[1]: Started sshd@14-10.0.0.68:22-10.0.0.1:51034.service. Feb 9 19:47:17.401001 sshd[3587]: Accepted publickey for core from 10.0.0.1 port 51034 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:17.402177 sshd[3587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:17.405548 systemd-logind[1121]: New session 15 of user core. Feb 9 19:47:17.406491 systemd[1]: Started session-15.scope. Feb 9 19:47:17.460863 kubelet[2005]: E0209 19:47:17.460828 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:17.509167 sshd[3587]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:17.511434 systemd[1]: sshd@14-10.0.0.68:22-10.0.0.1:51034.service: Deactivated successfully. Feb 9 19:47:17.511953 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 19:47:17.513286 systemd[1]: Started sshd@15-10.0.0.68:22-10.0.0.1:51040.service. Feb 9 19:47:17.513775 systemd-logind[1121]: Session 15 logged out. Waiting for processes to exit. Feb 9 19:47:17.514439 systemd-logind[1121]: Removed session 15. Feb 9 19:47:17.544283 sshd[3600]: Accepted publickey for core from 10.0.0.1 port 51040 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:17.545264 sshd[3600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:17.548194 systemd-logind[1121]: New session 16 of user core. Feb 9 19:47:17.549031 systemd[1]: Started session-16.scope. Feb 9 19:47:17.764388 sshd[3600]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:17.767232 systemd[1]: Started sshd@16-10.0.0.68:22-10.0.0.1:51052.service. Feb 9 19:47:17.767634 systemd[1]: sshd@15-10.0.0.68:22-10.0.0.1:51040.service: Deactivated successfully. Feb 9 19:47:17.768206 systemd[1]: session-16.scope: Deactivated successfully. 
Feb 9 19:47:17.768912 systemd-logind[1121]: Session 16 logged out. Waiting for processes to exit. Feb 9 19:47:17.769680 systemd-logind[1121]: Removed session 16. Feb 9 19:47:17.802634 sshd[3611]: Accepted publickey for core from 10.0.0.1 port 51052 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:17.803609 sshd[3611]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:17.806598 systemd-logind[1121]: New session 17 of user core. Feb 9 19:47:17.807447 systemd[1]: Started session-17.scope. Feb 9 19:47:18.646333 sshd[3611]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:18.649352 systemd[1]: sshd@16-10.0.0.68:22-10.0.0.1:51052.service: Deactivated successfully. Feb 9 19:47:18.649897 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:47:18.652133 systemd[1]: Started sshd@17-10.0.0.68:22-10.0.0.1:56672.service. Feb 9 19:47:18.653100 systemd-logind[1121]: Session 17 logged out. Waiting for processes to exit. Feb 9 19:47:18.654582 systemd-logind[1121]: Removed session 17. Feb 9 19:47:18.685267 sshd[3639]: Accepted publickey for core from 10.0.0.1 port 56672 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:18.686323 sshd[3639]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:18.690792 systemd-logind[1121]: New session 18 of user core. Feb 9 19:47:18.691015 systemd[1]: Started session-18.scope. Feb 9 19:47:18.887572 sshd[3639]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:18.891183 systemd[1]: Started sshd@18-10.0.0.68:22-10.0.0.1:56678.service. Feb 9 19:47:18.891623 systemd[1]: sshd@17-10.0.0.68:22-10.0.0.1:56672.service: Deactivated successfully. Feb 9 19:47:18.892358 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 19:47:18.893825 systemd-logind[1121]: Session 18 logged out. Waiting for processes to exit. Feb 9 19:47:18.894766 systemd-logind[1121]: Removed session 18. Feb 9 19:47:18.924974 sshd[3689]: Accepted publickey for core from 10.0.0.1 port 56678 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:18.925895 sshd[3689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:18.929092 systemd-logind[1121]: New session 19 of user core. Feb 9 19:47:18.929867 systemd[1]: Started session-19.scope. Feb 9 19:47:19.029808 sshd[3689]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:19.032035 systemd[1]: sshd@18-10.0.0.68:22-10.0.0.1:56678.service: Deactivated successfully. Feb 9 19:47:19.032688 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:47:19.033175 systemd-logind[1121]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:47:19.033799 systemd-logind[1121]: Removed session 19. Feb 9 19:47:23.461607 kubelet[2005]: E0209 19:47:23.461576 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:24.033471 systemd[1]: Started sshd@19-10.0.0.68:22-10.0.0.1:56686.service. Feb 9 19:47:24.064409 sshd[3703]: Accepted publickey for core from 10.0.0.1 port 56686 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:24.065355 sshd[3703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:24.068538 systemd-logind[1121]: New session 20 of user core. Feb 9 19:47:24.069491 systemd[1]: Started session-20.scope. 
Feb 9 19:47:24.170643 sshd[3703]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:24.172701 systemd[1]: sshd@19-10.0.0.68:22-10.0.0.1:56686.service: Deactivated successfully. Feb 9 19:47:24.173389 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 19:47:24.173891 systemd-logind[1121]: Session 20 logged out. Waiting for processes to exit. Feb 9 19:47:24.174496 systemd-logind[1121]: Removed session 20. Feb 9 19:47:29.175032 systemd[1]: Started sshd@20-10.0.0.68:22-10.0.0.1:40136.service. Feb 9 19:47:29.206223 sshd[3743]: Accepted publickey for core from 10.0.0.1 port 40136 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:29.207399 sshd[3743]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:29.210529 systemd-logind[1121]: New session 21 of user core. Feb 9 19:47:29.211522 systemd[1]: Started session-21.scope. Feb 9 19:47:29.308024 sshd[3743]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:29.310428 systemd[1]: sshd@20-10.0.0.68:22-10.0.0.1:40136.service: Deactivated successfully. Feb 9 19:47:29.311198 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 19:47:29.311786 systemd-logind[1121]: Session 21 logged out. Waiting for processes to exit. Feb 9 19:47:29.312665 systemd-logind[1121]: Removed session 21. Feb 9 19:47:34.312096 systemd[1]: Started sshd@21-10.0.0.68:22-10.0.0.1:40140.service. Feb 9 19:47:34.342488 sshd[3756]: Accepted publickey for core from 10.0.0.1 port 40140 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:34.343318 sshd[3756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:34.346172 systemd-logind[1121]: New session 22 of user core. Feb 9 19:47:34.346913 systemd[1]: Started session-22.scope. Feb 9 19:47:34.438264 sshd[3756]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:34.439919 systemd[1]: sshd@21-10.0.0.68:22-10.0.0.1:40140.service: Deactivated successfully. Feb 9 19:47:34.440506 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 19:47:34.441042 systemd-logind[1121]: Session 22 logged out. Waiting for processes to exit. Feb 9 19:47:34.441625 systemd-logind[1121]: Removed session 22. Feb 9 19:47:37.461399 kubelet[2005]: E0209 19:47:37.461358 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:39.442205 systemd[1]: Started sshd@22-10.0.0.68:22-10.0.0.1:51712.service. Feb 9 19:47:39.473222 sshd[3769]: Accepted publickey for core from 10.0.0.1 port 51712 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:39.474415 sshd[3769]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:39.477562 systemd-logind[1121]: New session 23 of user core. Feb 9 19:47:39.478359 systemd[1]: Started session-23.scope. Feb 9 19:47:39.576223 sshd[3769]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:39.579207 systemd[1]: sshd@22-10.0.0.68:22-10.0.0.1:51712.service: Deactivated successfully. Feb 9 19:47:39.579749 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 19:47:39.580258 systemd-logind[1121]: Session 23 logged out. Waiting for processes to exit. Feb 9 19:47:39.581385 systemd[1]: Started sshd@23-10.0.0.68:22-10.0.0.1:51728.service. Feb 9 19:47:39.582007 systemd-logind[1121]: Removed session 23. 
Feb 9 19:47:39.612291 sshd[3782]: Accepted publickey for core from 10.0.0.1 port 51728 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:39.613419 sshd[3782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:39.616567 systemd-logind[1121]: New session 24 of user core. Feb 9 19:47:39.617330 systemd[1]: Started session-24.scope. Feb 9 19:47:40.933482 env[1135]: time="2024-02-09T19:47:40.933426246Z" level=info msg="StopContainer for \"0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81\" with timeout 30 (s)" Feb 9 19:47:40.933888 env[1135]: time="2024-02-09T19:47:40.933816095Z" level=info msg="Stop container \"0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81\" with signal terminated" Feb 9 19:47:40.942462 systemd[1]: run-containerd-runc-k8s.io-493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66-runc.jEbCjg.mount: Deactivated successfully. Feb 9 19:47:40.946191 systemd[1]: cri-containerd-0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81.scope: Deactivated successfully. Feb 9 19:47:40.960440 env[1135]: time="2024-02-09T19:47:40.960363499Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:47:40.963887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81-rootfs.mount: Deactivated successfully. Feb 9 19:47:40.966263 env[1135]: time="2024-02-09T19:47:40.966230617Z" level=info msg="StopContainer for \"493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66\" with timeout 1 (s)" Feb 9 19:47:40.966659 env[1135]: time="2024-02-09T19:47:40.966625806Z" level=info msg="Stop container \"493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66\" with signal terminated" Feb 9 19:47:40.972301 env[1135]: time="2024-02-09T19:47:40.972234273Z" level=info msg="shim disconnected" id=0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81 Feb 9 19:47:40.972301 env[1135]: time="2024-02-09T19:47:40.972285540Z" level=warning msg="cleaning up after shim disconnected" id=0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81 namespace=k8s.io Feb 9 19:47:40.972301 env[1135]: time="2024-02-09T19:47:40.972300770Z" level=info msg="cleaning up dead shim" Feb 9 19:47:40.973717 systemd-networkd[1022]: lxc_health: Link DOWN Feb 9 19:47:40.973736 systemd-networkd[1022]: lxc_health: Lost carrier Feb 9 19:47:40.979848 env[1135]: time="2024-02-09T19:47:40.979796543Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:47:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3834 runtime=io.containerd.runc.v2\n" Feb 9 19:47:40.982719 env[1135]: time="2024-02-09T19:47:40.982681721Z" level=info msg="StopContainer for \"0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81\" returns successfully" Feb 9 19:47:40.983268 env[1135]: time="2024-02-09T19:47:40.983235450Z" level=info msg="StopPodSandbox for \"b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29\"" Feb 9 19:47:40.983336 env[1135]: time="2024-02-09T19:47:40.983313187Z" level=info msg="Container to stop \"0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:47:40.984792 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29-shm.mount: Deactivated successfully. Feb 9 19:47:40.990335 systemd[1]: cri-containerd-b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29.scope: Deactivated successfully. Feb 9 19:47:41.008519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29-rootfs.mount: Deactivated successfully. Feb 9 19:47:41.009197 systemd[1]: cri-containerd-493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66.scope: Deactivated successfully. Feb 9 19:47:41.009491 systemd[1]: cri-containerd-493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66.scope: Consumed 5.984s CPU time. Feb 9 19:47:41.132799 env[1135]: time="2024-02-09T19:47:41.132750700Z" level=info msg="shim disconnected" id=493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66 Feb 9 19:47:41.133062 env[1135]: time="2024-02-09T19:47:41.133044807Z" level=warning msg="cleaning up after shim disconnected" id=493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66 namespace=k8s.io Feb 9 19:47:41.133153 env[1135]: time="2024-02-09T19:47:41.133136220Z" level=info msg="cleaning up dead shim" Feb 9 19:47:41.133381 env[1135]: time="2024-02-09T19:47:41.132935931Z" level=info msg="shim disconnected" id=b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29 Feb 9 19:47:41.133381 env[1135]: time="2024-02-09T19:47:41.133379311Z" level=warning msg="cleaning up after shim disconnected" id=b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29 namespace=k8s.io Feb 9 19:47:41.133460 env[1135]: time="2024-02-09T19:47:41.133387176Z" level=info msg="cleaning up dead shim" Feb 9 19:47:41.139117 env[1135]: time="2024-02-09T19:47:41.139080672Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:47:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3880 runtime=io.containerd.runc.v2\n" Feb 9 19:47:41.139916 env[1135]: time="2024-02-09T19:47:41.139534702Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:47:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3881 runtime=io.containerd.runc.v2\n" Feb 9 19:47:41.140195 env[1135]: time="2024-02-09T19:47:41.140168563Z" level=info msg="TearDown network for sandbox \"b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29\" successfully" Feb 9 19:47:41.140195 env[1135]: time="2024-02-09T19:47:41.140190975Z" level=info msg="StopPodSandbox for \"b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29\" returns successfully" Feb 9 19:47:41.205570 env[1135]: time="2024-02-09T19:47:41.205437322Z" level=info msg="StopContainer for \"493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66\" returns successfully" Feb 9 19:47:41.206524 env[1135]: time="2024-02-09T19:47:41.206491800Z" level=info msg="StopPodSandbox for \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\"" Feb 9 19:47:41.206590 env[1135]: time="2024-02-09T19:47:41.206545462Z" level=info msg="Container to stop \"9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:47:41.206590 env[1135]: time="2024-02-09T19:47:41.206560019Z" level=info msg="Container to stop \"a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:47:41.206590 env[1135]: 
time="2024-02-09T19:47:41.206568786Z" level=info msg="Container to stop \"493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:47:41.206590 env[1135]: time="2024-02-09T19:47:41.206577944Z" level=info msg="Container to stop \"3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:47:41.206590 env[1135]: time="2024-02-09T19:47:41.206586440Z" level=info msg="Container to stop \"ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:47:41.211471 systemd[1]: cri-containerd-ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7.scope: Deactivated successfully. Feb 9 19:47:41.230066 env[1135]: time="2024-02-09T19:47:41.229998954Z" level=info msg="shim disconnected" id=ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7 Feb 9 19:47:41.230066 env[1135]: time="2024-02-09T19:47:41.230045161Z" level=warning msg="cleaning up after shim disconnected" id=ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7 namespace=k8s.io Feb 9 19:47:41.230066 env[1135]: time="2024-02-09T19:47:41.230053527Z" level=info msg="cleaning up dead shim" Feb 9 19:47:41.235975 env[1135]: time="2024-02-09T19:47:41.235935701Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:47:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3922 runtime=io.containerd.runc.v2\n" Feb 9 19:47:41.236232 env[1135]: time="2024-02-09T19:47:41.236203368Z" level=info msg="TearDown network for sandbox \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\" successfully" Feb 9 19:47:41.236232 env[1135]: time="2024-02-09T19:47:41.236225350Z" level=info msg="StopPodSandbox for \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\" returns successfully" Feb 9 19:47:41.318056 kubelet[2005]: I0209 19:47:41.318030 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-host-proc-sys-net\") pod \"3357fc38-4056-48b1-b5a7-61df320fa741\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " Feb 9 19:47:41.318344 kubelet[2005]: I0209 19:47:41.318064 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-cni-path\") pod \"3357fc38-4056-48b1-b5a7-61df320fa741\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " Feb 9 19:47:41.318344 kubelet[2005]: I0209 19:47:41.318081 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-cilium-run\") pod \"3357fc38-4056-48b1-b5a7-61df320fa741\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " Feb 9 19:47:41.318344 kubelet[2005]: I0209 19:47:41.318119 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzvgh\" (UniqueName: \"kubernetes.io/projected/3357fc38-4056-48b1-b5a7-61df320fa741-kube-api-access-mzvgh\") pod \"3357fc38-4056-48b1-b5a7-61df320fa741\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " Feb 9 19:47:41.318344 kubelet[2005]: I0209 19:47:41.318140 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-lib-modules\") pod \"3357fc38-4056-48b1-b5a7-61df320fa741\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " Feb 9 19:47:41.318344 kubelet[2005]: I0209 19:47:41.318162 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3357fc38-4056-48b1-b5a7-61df320fa741-cilium-config-path\") pod \"3357fc38-4056-48b1-b5a7-61df320fa741\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " Feb 9 19:47:41.318344 kubelet[2005]: I0209 19:47:41.318182 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06e56adb-371b-4034-9656-69246ae66ba4-cilium-config-path\") pod \"06e56adb-371b-4034-9656-69246ae66ba4\" (UID: \"06e56adb-371b-4034-9656-69246ae66ba4\") " Feb 9 19:47:41.318483 kubelet[2005]: I0209 19:47:41.318173 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3357fc38-4056-48b1-b5a7-61df320fa741" (UID: "3357fc38-4056-48b1-b5a7-61df320fa741"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:41.318483 kubelet[2005]: I0209 19:47:41.318169 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3357fc38-4056-48b1-b5a7-61df320fa741" (UID: "3357fc38-4056-48b1-b5a7-61df320fa741"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:41.318483 kubelet[2005]: I0209 19:47:41.318203 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-host-proc-sys-kernel\") pod \"3357fc38-4056-48b1-b5a7-61df320fa741\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " Feb 9 19:47:41.318483 kubelet[2005]: I0209 19:47:41.318212 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3357fc38-4056-48b1-b5a7-61df320fa741" (UID: "3357fc38-4056-48b1-b5a7-61df320fa741"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:41.318483 kubelet[2005]: I0209 19:47:41.318221 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-xtables-lock\") pod \"3357fc38-4056-48b1-b5a7-61df320fa741\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " Feb 9 19:47:41.318769 kubelet[2005]: I0209 19:47:41.318240 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3357fc38-4056-48b1-b5a7-61df320fa741-hubble-tls\") pod \"3357fc38-4056-48b1-b5a7-61df320fa741\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " Feb 9 19:47:41.318769 kubelet[2005]: I0209 19:47:41.318258 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-bpf-maps\") pod \"3357fc38-4056-48b1-b5a7-61df320fa741\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " Feb 9 19:47:41.318769 kubelet[2005]: I0209 19:47:41.318282 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3357fc38-4056-48b1-b5a7-61df320fa741-clustermesh-secrets\") pod \"3357fc38-4056-48b1-b5a7-61df320fa741\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " Feb 9 19:47:41.318769 kubelet[2005]: I0209 19:47:41.318296 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-hostproc\") pod \"3357fc38-4056-48b1-b5a7-61df320fa741\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " Feb 9 19:47:41.318769 kubelet[2005]: I0209 19:47:41.318311 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-etc-cni-netd\") pod \"3357fc38-4056-48b1-b5a7-61df320fa741\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " Feb 9 19:47:41.318769 kubelet[2005]: I0209 19:47:41.318326 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-cilium-cgroup\") pod \"3357fc38-4056-48b1-b5a7-61df320fa741\" (UID: \"3357fc38-4056-48b1-b5a7-61df320fa741\") " Feb 9 19:47:41.318899 kubelet[2005]: I0209 19:47:41.318345 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxkwk\" (UniqueName: \"kubernetes.io/projected/06e56adb-371b-4034-9656-69246ae66ba4-kube-api-access-nxkwk\") pod \"06e56adb-371b-4034-9656-69246ae66ba4\" (UID: \"06e56adb-371b-4034-9656-69246ae66ba4\") " Feb 9 19:47:41.318899 kubelet[2005]: I0209 19:47:41.318372 2005 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:41.318899 kubelet[2005]: I0209 19:47:41.318381 2005 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:41.318899 kubelet[2005]: I0209 19:47:41.318390 2005 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:41.318899 kubelet[2005]: W0209 19:47:41.318385 2005 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/06e56adb-371b-4034-9656-69246ae66ba4/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:47:41.318899 kubelet[2005]: I0209 19:47:41.318536 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3357fc38-4056-48b1-b5a7-61df320fa741" (UID: "3357fc38-4056-48b1-b5a7-61df320fa741"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:41.319028 kubelet[2005]: I0209 19:47:41.318553 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3357fc38-4056-48b1-b5a7-61df320fa741" (UID: "3357fc38-4056-48b1-b5a7-61df320fa741"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:41.319028 kubelet[2005]: I0209 19:47:41.318681 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3357fc38-4056-48b1-b5a7-61df320fa741" (UID: "3357fc38-4056-48b1-b5a7-61df320fa741"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:41.319028 kubelet[2005]: W0209 19:47:41.318790 2005 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/3357fc38-4056-48b1-b5a7-61df320fa741/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:47:41.319028 kubelet[2005]: I0209 19:47:41.318861 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-hostproc" (OuterVolumeSpecName: "hostproc") pod "3357fc38-4056-48b1-b5a7-61df320fa741" (UID: "3357fc38-4056-48b1-b5a7-61df320fa741"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:41.319028 kubelet[2005]: I0209 19:47:41.318882 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3357fc38-4056-48b1-b5a7-61df320fa741" (UID: "3357fc38-4056-48b1-b5a7-61df320fa741"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:41.319136 kubelet[2005]: I0209 19:47:41.318893 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3357fc38-4056-48b1-b5a7-61df320fa741" (UID: "3357fc38-4056-48b1-b5a7-61df320fa741"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:41.319870 kubelet[2005]: I0209 19:47:41.319843 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-cni-path" (OuterVolumeSpecName: "cni-path") pod "3357fc38-4056-48b1-b5a7-61df320fa741" (UID: "3357fc38-4056-48b1-b5a7-61df320fa741"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:41.320683 kubelet[2005]: I0209 19:47:41.320659 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e56adb-371b-4034-9656-69246ae66ba4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "06e56adb-371b-4034-9656-69246ae66ba4" (UID: "06e56adb-371b-4034-9656-69246ae66ba4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:47:41.322432 kubelet[2005]: I0209 19:47:41.322406 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3357fc38-4056-48b1-b5a7-61df320fa741-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3357fc38-4056-48b1-b5a7-61df320fa741" (UID: "3357fc38-4056-48b1-b5a7-61df320fa741"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:47:41.322532 kubelet[2005]: I0209 19:47:41.322491 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3357fc38-4056-48b1-b5a7-61df320fa741-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3357fc38-4056-48b1-b5a7-61df320fa741" (UID: "3357fc38-4056-48b1-b5a7-61df320fa741"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:47:41.322589 kubelet[2005]: I0209 19:47:41.322535 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3357fc38-4056-48b1-b5a7-61df320fa741-kube-api-access-mzvgh" (OuterVolumeSpecName: "kube-api-access-mzvgh") pod "3357fc38-4056-48b1-b5a7-61df320fa741" (UID: "3357fc38-4056-48b1-b5a7-61df320fa741"). InnerVolumeSpecName "kube-api-access-mzvgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:47:41.322917 kubelet[2005]: I0209 19:47:41.322898 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3357fc38-4056-48b1-b5a7-61df320fa741-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3357fc38-4056-48b1-b5a7-61df320fa741" (UID: "3357fc38-4056-48b1-b5a7-61df320fa741"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:47:41.322917 kubelet[2005]: I0209 19:47:41.322909 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e56adb-371b-4034-9656-69246ae66ba4-kube-api-access-nxkwk" (OuterVolumeSpecName: "kube-api-access-nxkwk") pod "06e56adb-371b-4034-9656-69246ae66ba4" (UID: "06e56adb-371b-4034-9656-69246ae66ba4"). InnerVolumeSpecName "kube-api-access-nxkwk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:47:41.419097 kubelet[2005]: I0209 19:47:41.419060 2005 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:41.419097 kubelet[2005]: I0209 19:47:41.419081 2005 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-mzvgh\" (UniqueName: \"kubernetes.io/projected/3357fc38-4056-48b1-b5a7-61df320fa741-kube-api-access-mzvgh\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:41.419097 kubelet[2005]: I0209 19:47:41.419091 2005 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3357fc38-4056-48b1-b5a7-61df320fa741-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:41.419345 kubelet[2005]: I0209 19:47:41.419113 2005 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3357fc38-4056-48b1-b5a7-61df320fa741-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:41.419345 kubelet[2005]: I0209 19:47:41.419122 2005 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:41.419345 kubelet[2005]: I0209 19:47:41.419131 2005 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06e56adb-371b-4034-9656-69246ae66ba4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:41.419345 kubelet[2005]: I0209 19:47:41.419140 2005 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:41.419345 kubelet[2005]: I0209 19:47:41.419149 2005 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:41.419345 kubelet[2005]: I0209 19:47:41.419157 2005 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:41.419345 kubelet[2005]: I0209 19:47:41.419165 2005 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3357fc38-4056-48b1-b5a7-61df320fa741-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:41.419345 kubelet[2005]: I0209 19:47:41.419173 2005 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:41.419510 kubelet[2005]: I0209 19:47:41.419182 2005 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3357fc38-4056-48b1-b5a7-61df320fa741-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:41.419510 kubelet[2005]: I0209 19:47:41.419191 2005 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-nxkwk\" (UniqueName: \"kubernetes.io/projected/06e56adb-371b-4034-9656-69246ae66ba4-kube-api-access-nxkwk\") on node 
\"localhost\" DevicePath \"\"" Feb 9 19:47:41.466595 systemd[1]: Removed slice kubepods-besteffort-pod06e56adb_371b_4034_9656_69246ae66ba4.slice. Feb 9 19:47:41.467437 systemd[1]: Removed slice kubepods-burstable-pod3357fc38_4056_48b1_b5a7_61df320fa741.slice. Feb 9 19:47:41.467512 systemd[1]: kubepods-burstable-pod3357fc38_4056_48b1_b5a7_61df320fa741.slice: Consumed 6.066s CPU time. Feb 9 19:47:41.638390 kubelet[2005]: I0209 19:47:41.638364 2005 scope.go:115] "RemoveContainer" containerID="0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81" Feb 9 19:47:41.640932 env[1135]: time="2024-02-09T19:47:41.640343631Z" level=info msg="RemoveContainer for \"0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81\"" Feb 9 19:47:41.647278 env[1135]: time="2024-02-09T19:47:41.647230648Z" level=info msg="RemoveContainer for \"0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81\" returns successfully" Feb 9 19:47:41.647737 kubelet[2005]: I0209 19:47:41.647686 2005 scope.go:115] "RemoveContainer" containerID="0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81" Feb 9 19:47:41.648039 env[1135]: time="2024-02-09T19:47:41.647974678Z" level=error msg="ContainerStatus for \"0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81\": not found" Feb 9 19:47:41.648210 kubelet[2005]: E0209 19:47:41.648194 2005 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81\": not found" containerID="0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81" Feb 9 19:47:41.648317 kubelet[2005]: I0209 19:47:41.648290 2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81} err="failed to get container status \"0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e6e0f72ee9efd6158714013dce7cf576a66dbe9d3270248e55bf4a9a7343b81\": not found" Feb 9 19:47:41.648317 kubelet[2005]: I0209 19:47:41.648311 2005 scope.go:115] "RemoveContainer" containerID="493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66" Feb 9 19:47:41.649089 env[1135]: time="2024-02-09T19:47:41.649041569Z" level=info msg="RemoveContainer for \"493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66\"" Feb 9 19:47:41.653190 env[1135]: time="2024-02-09T19:47:41.653143380Z" level=info msg="RemoveContainer for \"493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66\" returns successfully" Feb 9 19:47:41.654087 kubelet[2005]: I0209 19:47:41.654066 2005 scope.go:115] "RemoveContainer" containerID="a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612" Feb 9 19:47:41.655980 env[1135]: time="2024-02-09T19:47:41.655949766Z" level=info msg="RemoveContainer for \"a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612\"" Feb 9 19:47:41.664294 env[1135]: time="2024-02-09T19:47:41.664224813Z" level=info msg="RemoveContainer for \"a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612\" returns successfully" Feb 9 19:47:41.664591 kubelet[2005]: I0209 19:47:41.664535 2005 scope.go:115] "RemoveContainer" 
containerID="ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031" Feb 9 19:47:41.665830 env[1135]: time="2024-02-09T19:47:41.665804436Z" level=info msg="RemoveContainer for \"ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031\"" Feb 9 19:47:41.700985 env[1135]: time="2024-02-09T19:47:41.700923838Z" level=info msg="RemoveContainer for \"ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031\" returns successfully" Feb 9 19:47:41.701199 kubelet[2005]: I0209 19:47:41.701142 2005 scope.go:115] "RemoveContainer" containerID="9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9" Feb 9 19:47:41.703039 env[1135]: time="2024-02-09T19:47:41.703010040Z" level=info msg="RemoveContainer for \"9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9\"" Feb 9 19:47:41.734091 env[1135]: time="2024-02-09T19:47:41.733959072Z" level=info msg="RemoveContainer for \"9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9\" returns successfully" Feb 9 19:47:41.734245 kubelet[2005]: I0209 19:47:41.734141 2005 scope.go:115] "RemoveContainer" containerID="3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b" Feb 9 19:47:41.735416 env[1135]: time="2024-02-09T19:47:41.735378832Z" level=info msg="RemoveContainer for \"3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b\"" Feb 9 19:47:41.749990 env[1135]: time="2024-02-09T19:47:41.749936281Z" level=info msg="RemoveContainer for \"3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b\" returns successfully" Feb 9 19:47:41.750203 kubelet[2005]: I0209 19:47:41.750181 2005 scope.go:115] "RemoveContainer" containerID="493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66" Feb 9 19:47:41.750451 env[1135]: time="2024-02-09T19:47:41.750396723Z" level=error msg="ContainerStatus for \"493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66\": not found" Feb 9 19:47:41.750563 kubelet[2005]: E0209 19:47:41.750547 2005 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66\": not found" containerID="493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66" Feb 9 19:47:41.750606 kubelet[2005]: I0209 19:47:41.750580 2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66} err="failed to get container status \"493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66\": rpc error: code = NotFound desc = an error occurred when try to find container \"493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66\": not found" Feb 9 19:47:41.750606 kubelet[2005]: I0209 19:47:41.750591 2005 scope.go:115] "RemoveContainer" containerID="a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612" Feb 9 19:47:41.750899 env[1135]: time="2024-02-09T19:47:41.750829613Z" level=error msg="ContainerStatus for \"a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612\": not found" Feb 9 19:47:41.750998 kubelet[2005]: E0209 19:47:41.750985 2005 
remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612\": not found" containerID="a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612" Feb 9 19:47:41.751046 kubelet[2005]: I0209 19:47:41.751010 2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612} err="failed to get container status \"a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612\": rpc error: code = NotFound desc = an error occurred when try to find container \"a0c099d2afa846e43659af9584041dc77331b463b3941268de1b4be6770e9612\": not found" Feb 9 19:47:41.751046 kubelet[2005]: I0209 19:47:41.751023 2005 scope.go:115] "RemoveContainer" containerID="ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031" Feb 9 19:47:41.751317 env[1135]: time="2024-02-09T19:47:41.751237245Z" level=error msg="ContainerStatus for \"ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031\": not found" Feb 9 19:47:41.751497 kubelet[2005]: E0209 19:47:41.751394 2005 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031\": not found" containerID="ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031" Feb 9 19:47:41.751497 kubelet[2005]: I0209 19:47:41.751417 2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031} err="failed to get container status \"ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef1db5674323939a7272d258a7200db815dd57e27daac6b4193e6daaad0aa031\": not found" Feb 9 19:47:41.751497 kubelet[2005]: I0209 19:47:41.751425 2005 scope.go:115] "RemoveContainer" containerID="9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9" Feb 9 19:47:41.751610 env[1135]: time="2024-02-09T19:47:41.751541812Z" level=error msg="ContainerStatus for \"9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9\": not found" Feb 9 19:47:41.751663 kubelet[2005]: E0209 19:47:41.751649 2005 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9\": not found" containerID="9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9" Feb 9 19:47:41.751749 kubelet[2005]: I0209 19:47:41.751669 2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9} err="failed to get container status \"9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"9af290ea14ea4815f246c97a68295de382fe9610a253c8551ee9731e6e90e0f9\": not found" Feb 9 19:47:41.751749 kubelet[2005]: I0209 19:47:41.751678 2005 scope.go:115] "RemoveContainer" containerID="3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b" Feb 9 19:47:41.751849 env[1135]: time="2024-02-09T19:47:41.751807976Z" level=error msg="ContainerStatus for \"3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b\": not found" Feb 9 19:47:41.751969 kubelet[2005]: E0209 19:47:41.751954 2005 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b\": not found" containerID="3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b" Feb 9 19:47:41.752001 kubelet[2005]: I0209 19:47:41.751989 2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b} err="failed to get container status \"3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e3be0af426693119526da27487bb269d722544d684d98ebb08398c2f703215b\": not found" Feb 9 19:47:41.938910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-493a822e9800a86c39b217351e11214f6889ccc6f3da8d3b7b2a0163386a3c66-rootfs.mount: Deactivated successfully. Feb 9 19:47:41.939042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7-rootfs.mount: Deactivated successfully. Feb 9 19:47:41.939092 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7-shm.mount: Deactivated successfully. Feb 9 19:47:41.939148 systemd[1]: var-lib-kubelet-pods-06e56adb\x2d371b\x2d4034\x2d9656\x2d69246ae66ba4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnxkwk.mount: Deactivated successfully. Feb 9 19:47:41.939202 systemd[1]: var-lib-kubelet-pods-3357fc38\x2d4056\x2d48b1\x2db5a7\x2d61df320fa741-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmzvgh.mount: Deactivated successfully. Feb 9 19:47:41.939256 systemd[1]: var-lib-kubelet-pods-3357fc38\x2d4056\x2d48b1\x2db5a7\x2d61df320fa741-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:47:41.939335 systemd[1]: var-lib-kubelet-pods-3357fc38\x2d4056\x2d48b1\x2db5a7\x2d61df320fa741-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:47:42.907071 sshd[3782]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:42.910205 systemd[1]: Started sshd@24-10.0.0.68:22-10.0.0.1:51740.service. Feb 9 19:47:42.910609 systemd[1]: sshd@23-10.0.0.68:22-10.0.0.1:51728.service: Deactivated successfully. Feb 9 19:47:42.911354 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 19:47:42.912009 systemd-logind[1121]: Session 24 logged out. Waiting for processes to exit. Feb 9 19:47:42.913069 systemd-logind[1121]: Removed session 24. 
Feb 9 19:47:42.944876 sshd[3942]: Accepted publickey for core from 10.0.0.1 port 51740 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:42.946017 sshd[3942]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:42.949301 systemd-logind[1121]: New session 25 of user core. Feb 9 19:47:42.950240 systemd[1]: Started session-25.scope. Feb 9 19:47:43.462471 kubelet[2005]: I0209 19:47:43.462438 2005 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=06e56adb-371b-4034-9656-69246ae66ba4 path="/var/lib/kubelet/pods/06e56adb-371b-4034-9656-69246ae66ba4/volumes" Feb 9 19:47:43.462850 kubelet[2005]: I0209 19:47:43.462789 2005 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=3357fc38-4056-48b1-b5a7-61df320fa741 path="/var/lib/kubelet/pods/3357fc38-4056-48b1-b5a7-61df320fa741/volumes" Feb 9 19:47:43.761166 sshd[3942]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:43.764397 systemd[1]: Started sshd@25-10.0.0.68:22-10.0.0.1:51742.service. Feb 9 19:47:43.772464 systemd[1]: sshd@24-10.0.0.68:22-10.0.0.1:51740.service: Deactivated successfully. Feb 9 19:47:43.773190 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 19:47:43.773927 kubelet[2005]: I0209 19:47:43.773656 2005 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:47:43.773927 kubelet[2005]: E0209 19:47:43.773752 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3357fc38-4056-48b1-b5a7-61df320fa741" containerName="apply-sysctl-overwrites" Feb 9 19:47:43.773927 kubelet[2005]: E0209 19:47:43.773769 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="06e56adb-371b-4034-9656-69246ae66ba4" containerName="cilium-operator" Feb 9 19:47:43.773927 kubelet[2005]: E0209 19:47:43.773777 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3357fc38-4056-48b1-b5a7-61df320fa741" containerName="clean-cilium-state" Feb 9 19:47:43.773927 kubelet[2005]: E0209 19:47:43.773785 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3357fc38-4056-48b1-b5a7-61df320fa741" containerName="cilium-agent" Feb 9 19:47:43.773927 kubelet[2005]: E0209 19:47:43.773793 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3357fc38-4056-48b1-b5a7-61df320fa741" containerName="mount-cgroup" Feb 9 19:47:43.773927 kubelet[2005]: E0209 19:47:43.773801 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3357fc38-4056-48b1-b5a7-61df320fa741" containerName="mount-bpf-fs" Feb 9 19:47:43.773927 kubelet[2005]: I0209 19:47:43.773835 2005 memory_manager.go:346] "RemoveStaleState removing state" podUID="3357fc38-4056-48b1-b5a7-61df320fa741" containerName="cilium-agent" Feb 9 19:47:43.773927 kubelet[2005]: I0209 19:47:43.773848 2005 memory_manager.go:346] "RemoveStaleState removing state" podUID="06e56adb-371b-4034-9656-69246ae66ba4" containerName="cilium-operator" Feb 9 19:47:43.774860 systemd-logind[1121]: Session 25 logged out. Waiting for processes to exit. Feb 9 19:47:43.776345 systemd-logind[1121]: Removed session 25. Feb 9 19:47:43.778713 systemd[1]: Created slice kubepods-burstable-pod3c9507fb_ad81_4828_a482_ba8626b668a2.slice. Feb 9 19:47:43.806135 sshd[3954]: Accepted publickey for core from 10.0.0.1 port 51742 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:43.807217 sshd[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:43.811268 systemd[1]: Started session-26.scope. 
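The Removed slice and Created slice lines show how kubelet names pod cgroups under the systemd driver: kubepods-<qos>-pod<uid>.slice, with the pod UID's dashes flattened to underscores because "-" is systemd's slice-hierarchy separator. The RemoveStaleState entries directly above are the cpu and memory managers dropping per-container accounting for the deleted pod before the replacement cilium pod is admitted. A tiny sketch of the naming, derived from the journal entries rather than from kubelet source:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName rebuilds the systemd slice a pod cgroup lands in:
    // QoS tier plus pod UID, with "-" flattened to "_" since "-" is
    // the slice-hierarchy separator.
    func podSliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSliceName("burstable", "3c9507fb-ad81-4828-a482-ba8626b668a2"))
        // kubepods-burstable-pod3c9507fb_ad81_4828_a482_ba8626b668a2.slice
    }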
Feb 9 19:47:43.812357 systemd-logind[1121]: New session 26 of user core. Feb 9 19:47:43.831754 kubelet[2005]: I0209 19:47:43.831712 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-etc-cni-netd\") pod \"cilium-k8s2t\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " pod="kube-system/cilium-k8s2t" Feb 9 19:47:43.831868 kubelet[2005]: I0209 19:47:43.831764 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-lib-modules\") pod \"cilium-k8s2t\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " pod="kube-system/cilium-k8s2t" Feb 9 19:47:43.831868 kubelet[2005]: I0209 19:47:43.831815 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-host-proc-sys-net\") pod \"cilium-k8s2t\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " pod="kube-system/cilium-k8s2t" Feb 9 19:47:43.831868 kubelet[2005]: I0209 19:47:43.831833 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq78c\" (UniqueName: \"kubernetes.io/projected/3c9507fb-ad81-4828-a482-ba8626b668a2-kube-api-access-tq78c\") pod \"cilium-k8s2t\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " pod="kube-system/cilium-k8s2t" Feb 9 19:47:43.831993 kubelet[2005]: I0209 19:47:43.831903 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-cilium-run\") pod \"cilium-k8s2t\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " pod="kube-system/cilium-k8s2t" Feb 9 19:47:43.831993 kubelet[2005]: I0209 19:47:43.831936 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-cilium-cgroup\") pod \"cilium-k8s2t\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " pod="kube-system/cilium-k8s2t" Feb 9 19:47:43.831993 kubelet[2005]: I0209 19:47:43.831965 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c9507fb-ad81-4828-a482-ba8626b668a2-cilium-config-path\") pod \"cilium-k8s2t\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " pod="kube-system/cilium-k8s2t" Feb 9 19:47:43.832086 kubelet[2005]: I0209 19:47:43.831996 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-host-proc-sys-kernel\") pod \"cilium-k8s2t\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " pod="kube-system/cilium-k8s2t" Feb 9 19:47:43.832086 kubelet[2005]: I0209 19:47:43.832031 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-hostproc\") pod \"cilium-k8s2t\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " pod="kube-system/cilium-k8s2t" Feb 9 19:47:43.832086 kubelet[2005]: I0209 19:47:43.832048 2005 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-xtables-lock\") pod \"cilium-k8s2t\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " pod="kube-system/cilium-k8s2t" Feb 9 19:47:43.832086 kubelet[2005]: I0209 19:47:43.832066 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c9507fb-ad81-4828-a482-ba8626b668a2-cilium-ipsec-secrets\") pod \"cilium-k8s2t\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " pod="kube-system/cilium-k8s2t" Feb 9 19:47:43.832205 kubelet[2005]: I0209 19:47:43.832097 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-cni-path\") pod \"cilium-k8s2t\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " pod="kube-system/cilium-k8s2t" Feb 9 19:47:43.832205 kubelet[2005]: I0209 19:47:43.832127 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c9507fb-ad81-4828-a482-ba8626b668a2-hubble-tls\") pod \"cilium-k8s2t\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " pod="kube-system/cilium-k8s2t" Feb 9 19:47:43.832205 kubelet[2005]: I0209 19:47:43.832153 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-bpf-maps\") pod \"cilium-k8s2t\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " pod="kube-system/cilium-k8s2t" Feb 9 19:47:43.832205 kubelet[2005]: I0209 19:47:43.832192 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c9507fb-ad81-4828-a482-ba8626b668a2-clustermesh-secrets\") pod \"cilium-k8s2t\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " pod="kube-system/cilium-k8s2t" Feb 9 19:47:43.920360 sshd[3954]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:43.924146 systemd[1]: Started sshd@26-10.0.0.68:22-10.0.0.1:51756.service. Feb 9 19:47:43.927152 systemd[1]: sshd@25-10.0.0.68:22-10.0.0.1:51742.service: Deactivated successfully. Feb 9 19:47:43.927869 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 19:47:43.929772 systemd-logind[1121]: Session 26 logged out. Waiting for processes to exit. Feb 9 19:47:43.930796 systemd-logind[1121]: Removed session 26. Feb 9 19:47:43.964116 sshd[3967]: Accepted publickey for core from 10.0.0.1 port 51756 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:43.965204 sshd[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:43.968748 systemd-logind[1121]: New session 27 of user core. Feb 9 19:47:43.969911 systemd[1]: Started session-27.scope. 
Feb 9 19:47:44.083073 kubelet[2005]: E0209 19:47:44.083033 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:44.083610 env[1135]: time="2024-02-09T19:47:44.083569659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k8s2t,Uid:3c9507fb-ad81-4828-a482-ba8626b668a2,Namespace:kube-system,Attempt:0,}" Feb 9 19:47:44.098455 env[1135]: time="2024-02-09T19:47:44.098389366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:47:44.098455 env[1135]: time="2024-02-09T19:47:44.098426246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:47:44.098455 env[1135]: time="2024-02-09T19:47:44.098438770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:47:44.099031 env[1135]: time="2024-02-09T19:47:44.098745230Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14 pid=3988 runtime=io.containerd.runc.v2 Feb 9 19:47:44.111238 systemd[1]: Started cri-containerd-874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14.scope. Feb 9 19:47:44.132185 env[1135]: time="2024-02-09T19:47:44.132124565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k8s2t,Uid:3c9507fb-ad81-4828-a482-ba8626b668a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14\"" Feb 9 19:47:44.132754 kubelet[2005]: E0209 19:47:44.132733 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:44.134569 env[1135]: time="2024-02-09T19:47:44.134542773Z" level=info msg="CreateContainer within sandbox \"874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:47:44.145827 env[1135]: time="2024-02-09T19:47:44.145763516Z" level=info msg="CreateContainer within sandbox \"874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2842e7116bf233bd43a8b5dfeb606396d65bd44d25b5c865b61525f8e5007211\"" Feb 9 19:47:44.146646 env[1135]: time="2024-02-09T19:47:44.146591163Z" level=info msg="StartContainer for \"2842e7116bf233bd43a8b5dfeb606396d65bd44d25b5c865b61525f8e5007211\"" Feb 9 19:47:44.164044 systemd[1]: Started cri-containerd-2842e7116bf233bd43a8b5dfeb606396d65bd44d25b5c865b61525f8e5007211.scope. Feb 9 19:47:44.172353 systemd[1]: cri-containerd-2842e7116bf233bd43a8b5dfeb606396d65bd44d25b5c865b61525f8e5007211.scope: Deactivated successfully. Feb 9 19:47:44.172576 systemd[1]: Stopped cri-containerd-2842e7116bf233bd43a8b5dfeb606396d65bd44d25b5c865b61525f8e5007211.scope. 
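The entries above trace the standard CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer registers the mount-cgroup init container inside it, and StartContainer hands it to runc, whose scope is created and then deactivated almost immediately because the container dies at init; the failure detail follows below. A minimal Go model of that ordering and its error path, using an illustrative interface rather than the real k8s.io/cri-api types:

    package main

    import "fmt"

    // runtimeService models the three CRI calls seen in the journal;
    // the real API carries much richer request and response types.
    type runtimeService interface {
        RunPodSandbox(pod string) (sandboxID string, err error)
        CreateContainer(sandboxID, name string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    // startInitContainer mirrors the flow above: sandbox, create, start.
    // A start failure surfaces as RunContainerError and the pod worker
    // retries with backoff instead of tearing the sandbox down here.
    func startInitContainer(rs runtimeService, pod, ctr string) error {
        sb, err := rs.RunPodSandbox(pod)
        if err != nil {
            return fmt.Errorf("RunPodSandbox: %w", err)
        }
        id, err := rs.CreateContainer(sb, ctr)
        if err != nil {
            return fmt.Errorf("CreateContainer: %w", err)
        }
        if err := rs.StartContainer(id); err != nil {
            return fmt.Errorf("StartContainer (RunContainerError): %w", err)
        }
        return nil
    }

    // fakeRS reproduces the failure recorded in this journal.
    type fakeRS struct{}

    func (fakeRS) RunPodSandbox(string) (string, error)           { return "sb-1", nil }
    func (fakeRS) CreateContainer(string, string) (string, error) { return "ctr-1", nil }
    func (fakeRS) StartContainer(string) error {
        return fmt.Errorf("write /proc/self/attr/keycreate: invalid argument")
    }

    func main() {
        fmt.Println(startInitContainer(fakeRS{}, "cilium-k8s2t", "mount-cgroup"))
    }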
Feb 9 19:47:44.186077 env[1135]: time="2024-02-09T19:47:44.186016485Z" level=info msg="shim disconnected" id=2842e7116bf233bd43a8b5dfeb606396d65bd44d25b5c865b61525f8e5007211 Feb 9 19:47:44.186077 env[1135]: time="2024-02-09T19:47:44.186075236Z" level=warning msg="cleaning up after shim disconnected" id=2842e7116bf233bd43a8b5dfeb606396d65bd44d25b5c865b61525f8e5007211 namespace=k8s.io Feb 9 19:47:44.186352 env[1135]: time="2024-02-09T19:47:44.186087309Z" level=info msg="cleaning up dead shim" Feb 9 19:47:44.193153 env[1135]: time="2024-02-09T19:47:44.193089766Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:47:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4047 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:47:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2842e7116bf233bd43a8b5dfeb606396d65bd44d25b5c865b61525f8e5007211/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:47:44.193414 env[1135]: time="2024-02-09T19:47:44.193313720Z" level=error msg="copy shim log" error="read /proc/self/fd/40: file already closed" Feb 9 19:47:44.194437 env[1135]: time="2024-02-09T19:47:44.194385971Z" level=error msg="Failed to pipe stdout of container \"2842e7116bf233bd43a8b5dfeb606396d65bd44d25b5c865b61525f8e5007211\"" error="reading from a closed fifo" Feb 9 19:47:44.194514 env[1135]: time="2024-02-09T19:47:44.194478786Z" level=error msg="Failed to pipe stderr of container \"2842e7116bf233bd43a8b5dfeb606396d65bd44d25b5c865b61525f8e5007211\"" error="reading from a closed fifo" Feb 9 19:47:44.196863 env[1135]: time="2024-02-09T19:47:44.196805030Z" level=error msg="StartContainer for \"2842e7116bf233bd43a8b5dfeb606396d65bd44d25b5c865b61525f8e5007211\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:47:44.197085 kubelet[2005]: E0209 19:47:44.197055 2005 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2842e7116bf233bd43a8b5dfeb606396d65bd44d25b5c865b61525f8e5007211" Feb 9 19:47:44.197202 kubelet[2005]: E0209 19:47:44.197176 2005 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:47:44.197202 kubelet[2005]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:47:44.197202 kubelet[2005]: rm /hostbin/cilium-mount Feb 9 19:47:44.197202 kubelet[2005]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tq78c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-k8s2t_kube-system(3c9507fb-ad81-4828-a482-ba8626b668a2): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:47:44.197409 kubelet[2005]: E0209 19:47:44.197214 2005 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-k8s2t" podUID=3c9507fb-ad81-4828-a482-ba8626b668a2 Feb 9 19:47:44.487874 kubelet[2005]: E0209 19:47:44.487779 2005 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:47:44.648958 env[1135]: time="2024-02-09T19:47:44.648918648Z" level=info msg="StopPodSandbox for \"874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14\"" Feb 9 19:47:44.649121 env[1135]: time="2024-02-09T19:47:44.648968653Z" level=info msg="Container to stop \"2842e7116bf233bd43a8b5dfeb606396d65bd44d25b5c865b61525f8e5007211\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:47:44.654377 systemd[1]: cri-containerd-874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14.scope: Deactivated successfully. 
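The likely root cause is visible in the container spec above: it carries SELinuxOptions with Type spc_t, so when runc sets up the process it writes an SELinux label to /proc/self/attr/keycreate. On a kernel running without SELinux that write fails with EINVAL, which is exactly the "write /proc/self/attr/keycreate: invalid argument" that StartContainer reports; this is a known failure mode for Cilium's mount-cgroup init container on non-SELinux hosts. The usual remedies are removing the SELinuxOptions from the manifest or running a kernel and runtime with SELinux enabled. A stdlib probe for whether the host has SELinux at all, a rough proxy for the check container runtimes perform before labeling:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // selinuxEnabled reports whether the kernel knows selinuxfs and the
    // filesystem is actually mounted at the conventional location.
    func selinuxEnabled() bool {
        data, err := os.ReadFile("/proc/filesystems")
        if err != nil || !strings.Contains(string(data), "selinuxfs") {
            return false
        }
        _, err = os.Stat("/sys/fs/selinux/enforce")
        return err == nil
    }

    func main() {
        fmt.Println("selinux enabled:", selinuxEnabled())
    }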
Feb 9 19:47:44.672563 env[1135]: time="2024-02-09T19:47:44.672520281Z" level=info msg="shim disconnected" id=874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14 Feb 9 19:47:44.672779 env[1135]: time="2024-02-09T19:47:44.672748784Z" level=warning msg="cleaning up after shim disconnected" id=874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14 namespace=k8s.io Feb 9 19:47:44.672779 env[1135]: time="2024-02-09T19:47:44.672766518Z" level=info msg="cleaning up dead shim" Feb 9 19:47:44.678467 env[1135]: time="2024-02-09T19:47:44.678430300Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:47:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4077 runtime=io.containerd.runc.v2\n" Feb 9 19:47:44.678738 env[1135]: time="2024-02-09T19:47:44.678698688Z" level=info msg="TearDown network for sandbox \"874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14\" successfully" Feb 9 19:47:44.678772 env[1135]: time="2024-02-09T19:47:44.678721972Z" level=info msg="StopPodSandbox for \"874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14\" returns successfully" Feb 9 19:47:44.737343 kubelet[2005]: I0209 19:47:44.737307 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-cilium-cgroup\") pod \"3c9507fb-ad81-4828-a482-ba8626b668a2\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " Feb 9 19:47:44.737343 kubelet[2005]: I0209 19:47:44.737340 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-hostproc\") pod \"3c9507fb-ad81-4828-a482-ba8626b668a2\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " Feb 9 19:47:44.737514 kubelet[2005]: I0209 19:47:44.737360 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-lib-modules\") pod \"3c9507fb-ad81-4828-a482-ba8626b668a2\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " Feb 9 19:47:44.737514 kubelet[2005]: I0209 19:47:44.737377 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-cni-path\") pod \"3c9507fb-ad81-4828-a482-ba8626b668a2\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " Feb 9 19:47:44.737514 kubelet[2005]: I0209 19:47:44.737376 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3c9507fb-ad81-4828-a482-ba8626b668a2" (UID: "3c9507fb-ad81-4828-a482-ba8626b668a2"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:44.737514 kubelet[2005]: I0209 19:47:44.737397 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c9507fb-ad81-4828-a482-ba8626b668a2-hubble-tls\") pod \"3c9507fb-ad81-4828-a482-ba8626b668a2\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " Feb 9 19:47:44.737514 kubelet[2005]: I0209 19:47:44.737409 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3c9507fb-ad81-4828-a482-ba8626b668a2" (UID: "3c9507fb-ad81-4828-a482-ba8626b668a2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:44.737514 kubelet[2005]: I0209 19:47:44.737415 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-host-proc-sys-net\") pod \"3c9507fb-ad81-4828-a482-ba8626b668a2\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " Feb 9 19:47:44.737847 kubelet[2005]: I0209 19:47:44.737423 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-hostproc" (OuterVolumeSpecName: "hostproc") pod "3c9507fb-ad81-4828-a482-ba8626b668a2" (UID: "3c9507fb-ad81-4828-a482-ba8626b668a2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:44.737847 kubelet[2005]: I0209 19:47:44.737431 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-xtables-lock\") pod \"3c9507fb-ad81-4828-a482-ba8626b668a2\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " Feb 9 19:47:44.737847 kubelet[2005]: I0209 19:47:44.737437 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-cni-path" (OuterVolumeSpecName: "cni-path") pod "3c9507fb-ad81-4828-a482-ba8626b668a2" (UID: "3c9507fb-ad81-4828-a482-ba8626b668a2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:44.737847 kubelet[2005]: I0209 19:47:44.737453 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c9507fb-ad81-4828-a482-ba8626b668a2-clustermesh-secrets\") pod \"3c9507fb-ad81-4828-a482-ba8626b668a2\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " Feb 9 19:47:44.737847 kubelet[2005]: I0209 19:47:44.737469 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-etc-cni-netd\") pod \"3c9507fb-ad81-4828-a482-ba8626b668a2\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " Feb 9 19:47:44.738778 kubelet[2005]: I0209 19:47:44.737469 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3c9507fb-ad81-4828-a482-ba8626b668a2" (UID: "3c9507fb-ad81-4828-a482-ba8626b668a2"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:44.738778 kubelet[2005]: I0209 19:47:44.737485 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-cilium-run\") pod \"3c9507fb-ad81-4828-a482-ba8626b668a2\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " Feb 9 19:47:44.738778 kubelet[2005]: I0209 19:47:44.737504 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c9507fb-ad81-4828-a482-ba8626b668a2-cilium-config-path\") pod \"3c9507fb-ad81-4828-a482-ba8626b668a2\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " Feb 9 19:47:44.738778 kubelet[2005]: I0209 19:47:44.737523 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq78c\" (UniqueName: \"kubernetes.io/projected/3c9507fb-ad81-4828-a482-ba8626b668a2-kube-api-access-tq78c\") pod \"3c9507fb-ad81-4828-a482-ba8626b668a2\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " Feb 9 19:47:44.738778 kubelet[2005]: I0209 19:47:44.737539 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-host-proc-sys-kernel\") pod \"3c9507fb-ad81-4828-a482-ba8626b668a2\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " Feb 9 19:47:44.738778 kubelet[2005]: I0209 19:47:44.737558 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c9507fb-ad81-4828-a482-ba8626b668a2-cilium-ipsec-secrets\") pod \"3c9507fb-ad81-4828-a482-ba8626b668a2\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " Feb 9 19:47:44.739003 kubelet[2005]: I0209 19:47:44.737574 2005 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-bpf-maps\") pod \"3c9507fb-ad81-4828-a482-ba8626b668a2\" (UID: \"3c9507fb-ad81-4828-a482-ba8626b668a2\") " Feb 9 19:47:44.739003 kubelet[2005]: I0209 19:47:44.737603 2005 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:44.739003 kubelet[2005]: I0209 19:47:44.737612 2005 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:44.739003 kubelet[2005]: I0209 19:47:44.737620 2005 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:44.739003 kubelet[2005]: I0209 19:47:44.737629 2005 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:44.739003 kubelet[2005]: I0209 19:47:44.737638 2005 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:44.739003 kubelet[2005]: I0209 
19:47:44.737658 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3c9507fb-ad81-4828-a482-ba8626b668a2" (UID: "3c9507fb-ad81-4828-a482-ba8626b668a2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:44.739325 kubelet[2005]: I0209 19:47:44.737679 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3c9507fb-ad81-4828-a482-ba8626b668a2" (UID: "3c9507fb-ad81-4828-a482-ba8626b668a2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:44.739325 kubelet[2005]: I0209 19:47:44.737700 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3c9507fb-ad81-4828-a482-ba8626b668a2" (UID: "3c9507fb-ad81-4828-a482-ba8626b668a2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:44.739325 kubelet[2005]: I0209 19:47:44.737707 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3c9507fb-ad81-4828-a482-ba8626b668a2" (UID: "3c9507fb-ad81-4828-a482-ba8626b668a2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:44.739325 kubelet[2005]: I0209 19:47:44.737755 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3c9507fb-ad81-4828-a482-ba8626b668a2" (UID: "3c9507fb-ad81-4828-a482-ba8626b668a2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:47:44.739325 kubelet[2005]: W0209 19:47:44.737836 2005 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/3c9507fb-ad81-4828-a482-ba8626b668a2/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:47:44.739582 kubelet[2005]: I0209 19:47:44.739554 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c9507fb-ad81-4828-a482-ba8626b668a2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3c9507fb-ad81-4828-a482-ba8626b668a2" (UID: "3c9507fb-ad81-4828-a482-ba8626b668a2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:47:44.740231 kubelet[2005]: I0209 19:47:44.739677 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c9507fb-ad81-4828-a482-ba8626b668a2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3c9507fb-ad81-4828-a482-ba8626b668a2" (UID: "3c9507fb-ad81-4828-a482-ba8626b668a2"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:47:44.740529 kubelet[2005]: I0209 19:47:44.740502 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c9507fb-ad81-4828-a482-ba8626b668a2-kube-api-access-tq78c" (OuterVolumeSpecName: "kube-api-access-tq78c") pod "3c9507fb-ad81-4828-a482-ba8626b668a2" (UID: "3c9507fb-ad81-4828-a482-ba8626b668a2"). InnerVolumeSpecName "kube-api-access-tq78c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:47:44.740933 kubelet[2005]: I0209 19:47:44.740911 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c9507fb-ad81-4828-a482-ba8626b668a2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3c9507fb-ad81-4828-a482-ba8626b668a2" (UID: "3c9507fb-ad81-4828-a482-ba8626b668a2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:47:44.741539 kubelet[2005]: I0209 19:47:44.741505 2005 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c9507fb-ad81-4828-a482-ba8626b668a2-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "3c9507fb-ad81-4828-a482-ba8626b668a2" (UID: "3c9507fb-ad81-4828-a482-ba8626b668a2"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:47:44.838162 kubelet[2005]: I0209 19:47:44.838130 2005 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c9507fb-ad81-4828-a482-ba8626b668a2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:44.838162 kubelet[2005]: I0209 19:47:44.838154 2005 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:44.838162 kubelet[2005]: I0209 19:47:44.838166 2005 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c9507fb-ad81-4828-a482-ba8626b668a2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:44.838302 kubelet[2005]: I0209 19:47:44.838174 2005 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:44.838302 kubelet[2005]: I0209 19:47:44.838184 2005 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:44.838302 kubelet[2005]: I0209 19:47:44.838194 2005 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c9507fb-ad81-4828-a482-ba8626b668a2-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:44.838302 kubelet[2005]: I0209 19:47:44.838202 2005 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:44.838302 kubelet[2005]: I0209 19:47:44.838210 2005 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-tq78c\" (UniqueName: 
\"kubernetes.io/projected/3c9507fb-ad81-4828-a482-ba8626b668a2-kube-api-access-tq78c\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:44.838302 kubelet[2005]: I0209 19:47:44.838219 2005 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c9507fb-ad81-4828-a482-ba8626b668a2-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:44.838302 kubelet[2005]: I0209 19:47:44.838227 2005 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c9507fb-ad81-4828-a482-ba8626b668a2-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 19:47:44.937872 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14-shm.mount: Deactivated successfully. Feb 9 19:47:44.937976 systemd[1]: var-lib-kubelet-pods-3c9507fb\x2dad81\x2d4828\x2da482\x2dba8626b668a2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:47:44.938033 systemd[1]: var-lib-kubelet-pods-3c9507fb\x2dad81\x2d4828\x2da482\x2dba8626b668a2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtq78c.mount: Deactivated successfully. Feb 9 19:47:44.938081 systemd[1]: var-lib-kubelet-pods-3c9507fb\x2dad81\x2d4828\x2da482\x2dba8626b668a2-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 19:47:44.938124 systemd[1]: var-lib-kubelet-pods-3c9507fb\x2dad81\x2d4828\x2da482\x2dba8626b668a2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:47:45.466437 systemd[1]: Removed slice kubepods-burstable-pod3c9507fb_ad81_4828_a482_ba8626b668a2.slice. Feb 9 19:47:45.651000 kubelet[2005]: I0209 19:47:45.650972 2005 scope.go:115] "RemoveContainer" containerID="2842e7116bf233bd43a8b5dfeb606396d65bd44d25b5c865b61525f8e5007211" Feb 9 19:47:45.653085 env[1135]: time="2024-02-09T19:47:45.651691688Z" level=info msg="RemoveContainer for \"2842e7116bf233bd43a8b5dfeb606396d65bd44d25b5c865b61525f8e5007211\"" Feb 9 19:47:45.655589 env[1135]: time="2024-02-09T19:47:45.655551925Z" level=info msg="RemoveContainer for \"2842e7116bf233bd43a8b5dfeb606396d65bd44d25b5c865b61525f8e5007211\" returns successfully" Feb 9 19:47:45.668487 kubelet[2005]: I0209 19:47:45.668368 2005 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:47:45.668487 kubelet[2005]: E0209 19:47:45.668415 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c9507fb-ad81-4828-a482-ba8626b668a2" containerName="mount-cgroup" Feb 9 19:47:45.668487 kubelet[2005]: I0209 19:47:45.668436 2005 memory_manager.go:346] "RemoveStaleState removing state" podUID="3c9507fb-ad81-4828-a482-ba8626b668a2" containerName="mount-cgroup" Feb 9 19:47:45.673669 systemd[1]: Created slice kubepods-burstable-podb0343f49_d64c_40b8_9f8d_e964452ed7d6.slice. 
Feb 9 19:47:45.743670 kubelet[2005]: I0209 19:47:45.743571 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0343f49-d64c-40b8-9f8d-e964452ed7d6-cilium-run\") pod \"cilium-n22rf\" (UID: \"b0343f49-d64c-40b8-9f8d-e964452ed7d6\") " pod="kube-system/cilium-n22rf" Feb 9 19:47:45.743670 kubelet[2005]: I0209 19:47:45.743619 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0343f49-d64c-40b8-9f8d-e964452ed7d6-cilium-cgroup\") pod \"cilium-n22rf\" (UID: \"b0343f49-d64c-40b8-9f8d-e964452ed7d6\") " pod="kube-system/cilium-n22rf" Feb 9 19:47:45.743670 kubelet[2005]: I0209 19:47:45.743662 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0343f49-d64c-40b8-9f8d-e964452ed7d6-host-proc-sys-net\") pod \"cilium-n22rf\" (UID: \"b0343f49-d64c-40b8-9f8d-e964452ed7d6\") " pod="kube-system/cilium-n22rf" Feb 9 19:47:45.743860 kubelet[2005]: I0209 19:47:45.743701 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0343f49-d64c-40b8-9f8d-e964452ed7d6-host-proc-sys-kernel\") pod \"cilium-n22rf\" (UID: \"b0343f49-d64c-40b8-9f8d-e964452ed7d6\") " pod="kube-system/cilium-n22rf" Feb 9 19:47:45.743860 kubelet[2005]: I0209 19:47:45.743736 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0343f49-d64c-40b8-9f8d-e964452ed7d6-hostproc\") pod \"cilium-n22rf\" (UID: \"b0343f49-d64c-40b8-9f8d-e964452ed7d6\") " pod="kube-system/cilium-n22rf" Feb 9 19:47:45.743860 kubelet[2005]: I0209 19:47:45.743763 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0343f49-d64c-40b8-9f8d-e964452ed7d6-xtables-lock\") pod \"cilium-n22rf\" (UID: \"b0343f49-d64c-40b8-9f8d-e964452ed7d6\") " pod="kube-system/cilium-n22rf" Feb 9 19:47:45.743860 kubelet[2005]: I0209 19:47:45.743783 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0343f49-d64c-40b8-9f8d-e964452ed7d6-clustermesh-secrets\") pod \"cilium-n22rf\" (UID: \"b0343f49-d64c-40b8-9f8d-e964452ed7d6\") " pod="kube-system/cilium-n22rf" Feb 9 19:47:45.743860 kubelet[2005]: I0209 19:47:45.743809 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0343f49-d64c-40b8-9f8d-e964452ed7d6-hubble-tls\") pod \"cilium-n22rf\" (UID: \"b0343f49-d64c-40b8-9f8d-e964452ed7d6\") " pod="kube-system/cilium-n22rf" Feb 9 19:47:45.743860 kubelet[2005]: I0209 19:47:45.743827 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0343f49-d64c-40b8-9f8d-e964452ed7d6-cni-path\") pod \"cilium-n22rf\" (UID: \"b0343f49-d64c-40b8-9f8d-e964452ed7d6\") " pod="kube-system/cilium-n22rf" Feb 9 19:47:45.743993 kubelet[2005]: I0209 19:47:45.743853 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/b0343f49-d64c-40b8-9f8d-e964452ed7d6-etc-cni-netd\") pod \"cilium-n22rf\" (UID: \"b0343f49-d64c-40b8-9f8d-e964452ed7d6\") " pod="kube-system/cilium-n22rf" Feb 9 19:47:45.743993 kubelet[2005]: I0209 19:47:45.743874 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cmn2\" (UniqueName: \"kubernetes.io/projected/b0343f49-d64c-40b8-9f8d-e964452ed7d6-kube-api-access-2cmn2\") pod \"cilium-n22rf\" (UID: \"b0343f49-d64c-40b8-9f8d-e964452ed7d6\") " pod="kube-system/cilium-n22rf" Feb 9 19:47:45.743993 kubelet[2005]: I0209 19:47:45.743889 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0343f49-d64c-40b8-9f8d-e964452ed7d6-bpf-maps\") pod \"cilium-n22rf\" (UID: \"b0343f49-d64c-40b8-9f8d-e964452ed7d6\") " pod="kube-system/cilium-n22rf" Feb 9 19:47:45.743993 kubelet[2005]: I0209 19:47:45.743905 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0343f49-d64c-40b8-9f8d-e964452ed7d6-cilium-config-path\") pod \"cilium-n22rf\" (UID: \"b0343f49-d64c-40b8-9f8d-e964452ed7d6\") " pod="kube-system/cilium-n22rf" Feb 9 19:47:45.743993 kubelet[2005]: I0209 19:47:45.743954 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0343f49-d64c-40b8-9f8d-e964452ed7d6-lib-modules\") pod \"cilium-n22rf\" (UID: \"b0343f49-d64c-40b8-9f8d-e964452ed7d6\") " pod="kube-system/cilium-n22rf" Feb 9 19:47:45.744101 kubelet[2005]: I0209 19:47:45.743997 2005 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b0343f49-d64c-40b8-9f8d-e964452ed7d6-cilium-ipsec-secrets\") pod \"cilium-n22rf\" (UID: \"b0343f49-d64c-40b8-9f8d-e964452ed7d6\") " pod="kube-system/cilium-n22rf" Feb 9 19:47:45.977085 kubelet[2005]: E0209 19:47:45.977033 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:45.977635 env[1135]: time="2024-02-09T19:47:45.977575045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n22rf,Uid:b0343f49-d64c-40b8-9f8d-e964452ed7d6,Namespace:kube-system,Attempt:0,}" Feb 9 19:47:45.990859 env[1135]: time="2024-02-09T19:47:45.990804016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:47:45.990859 env[1135]: time="2024-02-09T19:47:45.990836518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:47:45.990859 env[1135]: time="2024-02-09T19:47:45.990845846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:47:45.991029 env[1135]: time="2024-02-09T19:47:45.990962106Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a35c431d548934343747b727dd55fc1cced21060abca39202de99ffb41479aa pid=4105 runtime=io.containerd.runc.v2 Feb 9 19:47:46.004647 systemd[1]: Started cri-containerd-5a35c431d548934343747b727dd55fc1cced21060abca39202de99ffb41479aa.scope. 
Feb 9 19:47:46.021217 env[1135]: time="2024-02-09T19:47:46.021166089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n22rf,Uid:b0343f49-d64c-40b8-9f8d-e964452ed7d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a35c431d548934343747b727dd55fc1cced21060abca39202de99ffb41479aa\"" Feb 9 19:47:46.021815 kubelet[2005]: E0209 19:47:46.021790 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:46.023344 env[1135]: time="2024-02-09T19:47:46.023305619Z" level=info msg="CreateContainer within sandbox \"5a35c431d548934343747b727dd55fc1cced21060abca39202de99ffb41479aa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:47:46.038699 env[1135]: time="2024-02-09T19:47:46.038640579Z" level=info msg="CreateContainer within sandbox \"5a35c431d548934343747b727dd55fc1cced21060abca39202de99ffb41479aa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1cb49f6d866f74033f0c5477d307f03fcab55d9b1822aa6533ea0aa107952ee0\"" Feb 9 19:47:46.039210 env[1135]: time="2024-02-09T19:47:46.039174240Z" level=info msg="StartContainer for \"1cb49f6d866f74033f0c5477d307f03fcab55d9b1822aa6533ea0aa107952ee0\"" Feb 9 19:47:46.051465 systemd[1]: Started cri-containerd-1cb49f6d866f74033f0c5477d307f03fcab55d9b1822aa6533ea0aa107952ee0.scope. Feb 9 19:47:46.077914 env[1135]: time="2024-02-09T19:47:46.077867786Z" level=info msg="StartContainer for \"1cb49f6d866f74033f0c5477d307f03fcab55d9b1822aa6533ea0aa107952ee0\" returns successfully" Feb 9 19:47:46.081369 systemd[1]: cri-containerd-1cb49f6d866f74033f0c5477d307f03fcab55d9b1822aa6533ea0aa107952ee0.scope: Deactivated successfully. Feb 9 19:47:46.108701 env[1135]: time="2024-02-09T19:47:46.108651523Z" level=info msg="shim disconnected" id=1cb49f6d866f74033f0c5477d307f03fcab55d9b1822aa6533ea0aa107952ee0 Feb 9 19:47:46.108701 env[1135]: time="2024-02-09T19:47:46.108701769Z" level=warning msg="cleaning up after shim disconnected" id=1cb49f6d866f74033f0c5477d307f03fcab55d9b1822aa6533ea0aa107952ee0 namespace=k8s.io Feb 9 19:47:46.108883 env[1135]: time="2024-02-09T19:47:46.108709764Z" level=info msg="cleaning up dead shim" Feb 9 19:47:46.114236 env[1135]: time="2024-02-09T19:47:46.114203049Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:47:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4186 runtime=io.containerd.runc.v2\n" Feb 9 19:47:46.653214 kubelet[2005]: E0209 19:47:46.653180 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:46.655314 env[1135]: time="2024-02-09T19:47:46.655238416Z" level=info msg="CreateContainer within sandbox \"5a35c431d548934343747b727dd55fc1cced21060abca39202de99ffb41479aa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:47:46.667589 env[1135]: time="2024-02-09T19:47:46.667534546Z" level=info msg="CreateContainer within sandbox \"5a35c431d548934343747b727dd55fc1cced21060abca39202de99ffb41479aa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5192038c44a8d1e0a3a225e12aec1c23a4c57ec42925266421013032f7d7a409\"" Feb 9 19:47:46.668006 env[1135]: time="2024-02-09T19:47:46.667978616Z" level=info msg="StartContainer for \"5192038c44a8d1e0a3a225e12aec1c23a4c57ec42925266421013032f7d7a409\"" Feb 9 19:47:46.681200 
systemd[1]: Started cri-containerd-5192038c44a8d1e0a3a225e12aec1c23a4c57ec42925266421013032f7d7a409.scope. Feb 9 19:47:46.704054 env[1135]: time="2024-02-09T19:47:46.703980739Z" level=info msg="StartContainer for \"5192038c44a8d1e0a3a225e12aec1c23a4c57ec42925266421013032f7d7a409\" returns successfully" Feb 9 19:47:46.709300 systemd[1]: cri-containerd-5192038c44a8d1e0a3a225e12aec1c23a4c57ec42925266421013032f7d7a409.scope: Deactivated successfully. Feb 9 19:47:46.728059 env[1135]: time="2024-02-09T19:47:46.728001358Z" level=info msg="shim disconnected" id=5192038c44a8d1e0a3a225e12aec1c23a4c57ec42925266421013032f7d7a409 Feb 9 19:47:46.728059 env[1135]: time="2024-02-09T19:47:46.728055240Z" level=warning msg="cleaning up after shim disconnected" id=5192038c44a8d1e0a3a225e12aec1c23a4c57ec42925266421013032f7d7a409 namespace=k8s.io Feb 9 19:47:46.728238 env[1135]: time="2024-02-09T19:47:46.728067052Z" level=info msg="cleaning up dead shim" Feb 9 19:47:46.734822 env[1135]: time="2024-02-09T19:47:46.734773674Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:47:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4249 runtime=io.containerd.runc.v2\n" Feb 9 19:47:47.289842 kubelet[2005]: W0209 19:47:47.289789 2005 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c9507fb_ad81_4828_a482_ba8626b668a2.slice/cri-containerd-2842e7116bf233bd43a8b5dfeb606396d65bd44d25b5c865b61525f8e5007211.scope WatchSource:0}: container "2842e7116bf233bd43a8b5dfeb606396d65bd44d25b5c865b61525f8e5007211" in namespace "k8s.io": not found Feb 9 19:47:47.463433 kubelet[2005]: I0209 19:47:47.463406 2005 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=3c9507fb-ad81-4828-a482-ba8626b668a2 path="/var/lib/kubelet/pods/3c9507fb-ad81-4828-a482-ba8626b668a2/volumes" Feb 9 19:47:47.656770 kubelet[2005]: E0209 19:47:47.656639 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:47.658295 env[1135]: time="2024-02-09T19:47:47.658268577Z" level=info msg="CreateContainer within sandbox \"5a35c431d548934343747b727dd55fc1cced21060abca39202de99ffb41479aa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:47:47.669487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2785178096.mount: Deactivated successfully. Feb 9 19:47:47.671311 env[1135]: time="2024-02-09T19:47:47.671267071Z" level=info msg="CreateContainer within sandbox \"5a35c431d548934343747b727dd55fc1cced21060abca39202de99ffb41479aa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8a74588c41ab5e4bd0c64fce8b18160a110334a1404a771dc14294f01243c7d3\"" Feb 9 19:47:47.671932 env[1135]: time="2024-02-09T19:47:47.671895721Z" level=info msg="StartContainer for \"8a74588c41ab5e4bd0c64fce8b18160a110334a1404a771dc14294f01243c7d3\"" Feb 9 19:47:47.690055 systemd[1]: Started cri-containerd-8a74588c41ab5e4bd0c64fce8b18160a110334a1404a771dc14294f01243c7d3.scope. Feb 9 19:47:47.713292 systemd[1]: cri-containerd-8a74588c41ab5e4bd0c64fce8b18160a110334a1404a771dc14294f01243c7d3.scope: Deactivated successfully. 
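The cAdvisor warning "Failed to process watch event ... not found" appears once per init container in this section: each one exits within milliseconds, so by the time the kubelet handles the inotify event for the container's cgroup, containerd has already deleted that cgroup. A sketch of the race from the consumer's side, checking the cgroup path named in the log (the path assumes the unified cgroup hierarchy and follows the kubepods slice layout above; this is an illustration, not kubelet code):

    // watch_race.go: the cgroup named in a watch event may already be gone.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	scope := "cri-containerd-5192038c44a8d1e0a3a225e12aec1c23a4c57ec42925266421013032f7d7a409.scope"
    	dir := filepath.Join("/sys/fs/cgroup",
    		"kubepods.slice/kubepods-burstable.slice",
    		"kubepods-burstable-podb0343f49_d64c_40b8_9f8d_e964452ed7d6.slice",
    		scope)

    	if _, err := os.Stat(dir); os.IsNotExist(err) {
    		// The short-lived init container exited and containerd removed
    		// its cgroup before the watch event could be processed.
    		fmt.Printf("task %s not found: not found\n", scope)
    	}
    }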
Feb 9 19:47:47.714799 env[1135]: time="2024-02-09T19:47:47.714711283Z" level=info msg="StartContainer for \"8a74588c41ab5e4bd0c64fce8b18160a110334a1404a771dc14294f01243c7d3\" returns successfully" Feb 9 19:47:47.734976 env[1135]: time="2024-02-09T19:47:47.734925501Z" level=info msg="shim disconnected" id=8a74588c41ab5e4bd0c64fce8b18160a110334a1404a771dc14294f01243c7d3 Feb 9 19:47:47.734976 env[1135]: time="2024-02-09T19:47:47.734978762Z" level=warning msg="cleaning up after shim disconnected" id=8a74588c41ab5e4bd0c64fce8b18160a110334a1404a771dc14294f01243c7d3 namespace=k8s.io Feb 9 19:47:47.735185 env[1135]: time="2024-02-09T19:47:47.734988119Z" level=info msg="cleaning up dead shim" Feb 9 19:47:47.741954 env[1135]: time="2024-02-09T19:47:47.741905527Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:47:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4307 runtime=io.containerd.runc.v2\n" Feb 9 19:47:47.985997 systemd[1]: run-containerd-runc-k8s.io-8a74588c41ab5e4bd0c64fce8b18160a110334a1404a771dc14294f01243c7d3-runc.dYMM7F.mount: Deactivated successfully. Feb 9 19:47:47.986106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a74588c41ab5e4bd0c64fce8b18160a110334a1404a771dc14294f01243c7d3-rootfs.mount: Deactivated successfully. Feb 9 19:47:48.658978 kubelet[2005]: E0209 19:47:48.658954 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:48.661423 env[1135]: time="2024-02-09T19:47:48.661385004Z" level=info msg="CreateContainer within sandbox \"5a35c431d548934343747b727dd55fc1cced21060abca39202de99ffb41479aa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:47:48.670555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3854248873.mount: Deactivated successfully. Feb 9 19:47:48.675628 env[1135]: time="2024-02-09T19:47:48.675577093Z" level=info msg="CreateContainer within sandbox \"5a35c431d548934343747b727dd55fc1cced21060abca39202de99ffb41479aa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"29a455a7d7e12b48fee3ff718e5f992a588785e06e452fec3ec48eac8d51b16c\"" Feb 9 19:47:48.676027 env[1135]: time="2024-02-09T19:47:48.676004100Z" level=info msg="StartContainer for \"29a455a7d7e12b48fee3ff718e5f992a588785e06e452fec3ec48eac8d51b16c\"" Feb 9 19:47:48.688161 systemd[1]: Started cri-containerd-29a455a7d7e12b48fee3ff718e5f992a588785e06e452fec3ec48eac8d51b16c.scope. Feb 9 19:47:48.706397 systemd[1]: cri-containerd-29a455a7d7e12b48fee3ff718e5f992a588785e06e452fec3ec48eac8d51b16c.scope: Deactivated successfully. 
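The mount-bpf-fs init container that just ran is the Cilium step that mounts the BPF filesystem on the host so the agent can pin its maps there (the bpf-maps host-path volume from the pod spec). A stand-alone equivalent of that step, assuming Linux and root; a real implementation would first check /proc/mounts for an existing bpffs mount, which is omitted here:

    // mount_bpffs.go: mount the BPF filesystem, as the init step does.
    package main

    import (
    	"fmt"
    	"os"
    	"syscall"
    )

    func main() {
    	const target = "/sys/fs/bpf"
    	if err := syscall.Mount("bpffs", target, "bpf", 0, ""); err != nil {
    		fmt.Fprintf(os.Stderr, "mount bpffs on %s: %v\n", target, err)
    		os.Exit(1)
    	}
    	fmt.Println("bpffs mounted at", target)
    }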
Feb 9 19:47:48.707935 env[1135]: time="2024-02-09T19:47:48.707897995Z" level=info msg="StartContainer for \"29a455a7d7e12b48fee3ff718e5f992a588785e06e452fec3ec48eac8d51b16c\" returns successfully" Feb 9 19:47:48.723613 env[1135]: time="2024-02-09T19:47:48.723564634Z" level=info msg="shim disconnected" id=29a455a7d7e12b48fee3ff718e5f992a588785e06e452fec3ec48eac8d51b16c Feb 9 19:47:48.723613 env[1135]: time="2024-02-09T19:47:48.723604278Z" level=warning msg="cleaning up after shim disconnected" id=29a455a7d7e12b48fee3ff718e5f992a588785e06e452fec3ec48eac8d51b16c namespace=k8s.io Feb 9 19:47:48.723613 env[1135]: time="2024-02-09T19:47:48.723611782Z" level=info msg="cleaning up dead shim" Feb 9 19:47:48.729107 env[1135]: time="2024-02-09T19:47:48.729088342Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:47:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4361 runtime=io.containerd.runc.v2\n" Feb 9 19:47:48.986016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29a455a7d7e12b48fee3ff718e5f992a588785e06e452fec3ec48eac8d51b16c-rootfs.mount: Deactivated successfully. Feb 9 19:47:49.488146 kubelet[2005]: E0209 19:47:49.488121 2005 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:47:49.661818 kubelet[2005]: E0209 19:47:49.661783 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:49.663556 env[1135]: time="2024-02-09T19:47:49.663488770Z" level=info msg="CreateContainer within sandbox \"5a35c431d548934343747b727dd55fc1cced21060abca39202de99ffb41479aa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:47:49.681664 env[1135]: time="2024-02-09T19:47:49.681612570Z" level=info msg="CreateContainer within sandbox \"5a35c431d548934343747b727dd55fc1cced21060abca39202de99ffb41479aa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8f5060777aa916584384203d612e26845c5912a0116bb0661afbb155ecc8939d\"" Feb 9 19:47:49.682140 env[1135]: time="2024-02-09T19:47:49.682113728Z" level=info msg="StartContainer for \"8f5060777aa916584384203d612e26845c5912a0116bb0661afbb155ecc8939d\"" Feb 9 19:47:49.697451 systemd[1]: Started cri-containerd-8f5060777aa916584384203d612e26845c5912a0116bb0661afbb155ecc8939d.scope. Feb 9 19:47:49.720313 env[1135]: time="2024-02-09T19:47:49.720217989Z" level=info msg="StartContainer for \"8f5060777aa916584384203d612e26845c5912a0116bb0661afbb155ecc8939d\" returns successfully" Feb 9 19:47:49.940752 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 19:47:49.986253 systemd[1]: run-containerd-runc-k8s.io-8f5060777aa916584384203d612e26845c5912a0116bb0661afbb155ecc8939d-runc.6KkrOc.mount: Deactivated successfully. 
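The single kernel line in this stretch, "alg: No test for seqiv(rfc4106(gcm(aes)))", is the crypto layer instantiating the AES-GCM AEAD used by IPsec ESP, consistent with the cilium-ipsec-secrets volume mounted into the pod; the kernel is merely noting it has no self-test registered for that composite transform. For illustration only, a userspace analogue of rfc4106 keying (a 16-byte AES key plus a 4-byte salt folded into the 12-byte GCM nonce), a sketch rather than anything the kernel or Cilium executes:

    // rfc4106_sketch.go: AES-GCM sealed the way rfc4106 keys it.
    package main

    import (
    	"crypto/aes"
    	"crypto/cipher"
    	"crypto/rand"
    	"fmt"
    )

    func main() {
    	keyMaterial := make([]byte, 20) // rfc4106: 16-byte key + 4-byte salt
    	if _, err := rand.Read(keyMaterial); err != nil {
    		panic(err)
    	}
    	block, err := aes.NewCipher(keyMaterial[:16])
    	if err != nil {
    		panic(err)
    	}
    	aead, err := cipher.NewGCM(block) // 12-byte nonce
    	if err != nil {
    		panic(err)
    	}
    	// Nonce = 4-byte salt || 8-byte per-packet IV (zero here; the
    	// kernel's seqiv generator derives it from the ESP sequence number).
    	nonce := make([]byte, aead.NonceSize())
    	copy(nonce[:4], keyMaterial[16:])
    	sealed := aead.Seal(nil, nonce, []byte("ESP payload"), nil)
    	fmt.Printf("ciphertext+tag: %d bytes\n", len(sealed))
    }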
Feb 9 19:47:50.397825 kubelet[2005]: W0209 19:47:50.397784 2005 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0343f49_d64c_40b8_9f8d_e964452ed7d6.slice/cri-containerd-1cb49f6d866f74033f0c5477d307f03fcab55d9b1822aa6533ea0aa107952ee0.scope WatchSource:0}: task 1cb49f6d866f74033f0c5477d307f03fcab55d9b1822aa6533ea0aa107952ee0 not found: not found Feb 9 19:47:50.666696 kubelet[2005]: E0209 19:47:50.666345 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:51.607086 kubelet[2005]: I0209 19:47:51.607057 2005 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:47:51.607006076 +0000 UTC m=+112.269156312 LastTransitionTime:2024-02-09 19:47:51.607006076 +0000 UTC m=+112.269156312 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 19:47:51.667846 kubelet[2005]: E0209 19:47:51.667815 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:52.332606 systemd-networkd[1022]: lxc_health: Link UP Feb 9 19:47:52.341050 systemd-networkd[1022]: lxc_health: Gained carrier Feb 9 19:47:52.341749 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:47:52.669466 kubelet[2005]: E0209 19:47:52.669371 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:53.503940 kubelet[2005]: W0209 19:47:53.503897 2005 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0343f49_d64c_40b8_9f8d_e964452ed7d6.slice/cri-containerd-5192038c44a8d1e0a3a225e12aec1c23a4c57ec42925266421013032f7d7a409.scope WatchSource:0}: task 5192038c44a8d1e0a3a225e12aec1c23a4c57ec42925266421013032f7d7a409 not found: not found Feb 9 19:47:53.979307 kubelet[2005]: E0209 19:47:53.979277 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:53.992026 kubelet[2005]: I0209 19:47:53.991989 2005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-n22rf" podStartSLOduration=8.991950206 pod.CreationTimestamp="2024-02-09 19:47:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:47:50.676297704 +0000 UTC m=+111.338447940" watchObservedRunningTime="2024-02-09 19:47:53.991950206 +0000 UTC m=+114.654100442" Feb 9 19:47:54.238337 systemd-networkd[1022]: lxc_health: Gained IPv6LL Feb 9 19:47:54.673766 kubelet[2005]: E0209 19:47:54.673721 2005 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:56.612678 kubelet[2005]: W0209 19:47:56.612634 2005 manager.go:1174] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0343f49_d64c_40b8_9f8d_e964452ed7d6.slice/cri-containerd-8a74588c41ab5e4bd0c64fce8b18160a110334a1404a771dc14294f01243c7d3.scope WatchSource:0}: task 8a74588c41ab5e4bd0c64fce8b18160a110334a1404a771dc14294f01243c7d3 not found: not found Feb 9 19:47:58.468733 systemd[1]: run-containerd-runc-k8s.io-8f5060777aa916584384203d612e26845c5912a0116bb0661afbb155ecc8939d-runc.6NviaZ.mount: Deactivated successfully. Feb 9 19:47:58.510393 sshd[3967]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:58.512513 systemd[1]: sshd@26-10.0.0.68:22-10.0.0.1:51756.service: Deactivated successfully. Feb 9 19:47:58.513295 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 19:47:58.513955 systemd-logind[1121]: Session 27 logged out. Waiting for processes to exit. Feb 9 19:47:58.514655 systemd-logind[1121]: Removed session 27. Feb 9 19:47:59.417282 env[1135]: time="2024-02-09T19:47:59.417227848Z" level=info msg="StopPodSandbox for \"b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29\"" Feb 9 19:47:59.417634 env[1135]: time="2024-02-09T19:47:59.417336072Z" level=info msg="TearDown network for sandbox \"b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29\" successfully" Feb 9 19:47:59.417634 env[1135]: time="2024-02-09T19:47:59.417367061Z" level=info msg="StopPodSandbox for \"b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29\" returns successfully" Feb 9 19:47:59.417721 env[1135]: time="2024-02-09T19:47:59.417697805Z" level=info msg="RemovePodSandbox for \"b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29\"" Feb 9 19:47:59.417767 env[1135]: time="2024-02-09T19:47:59.417741098Z" level=info msg="Forcibly stopping sandbox \"b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29\"" Feb 9 19:47:59.417807 env[1135]: time="2024-02-09T19:47:59.417793126Z" level=info msg="TearDown network for sandbox \"b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29\" successfully" Feb 9 19:47:59.485926 env[1135]: time="2024-02-09T19:47:59.485871846Z" level=info msg="RemovePodSandbox \"b6724d1291f65176db2186041c0bbbd261740c6dd915248ce5b020acd4874a29\" returns successfully" Feb 9 19:47:59.486217 env[1135]: time="2024-02-09T19:47:59.486187973Z" level=info msg="StopPodSandbox for \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\"" Feb 9 19:47:59.486365 env[1135]: time="2024-02-09T19:47:59.486252845Z" level=info msg="TearDown network for sandbox \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\" successfully" Feb 9 19:47:59.486365 env[1135]: time="2024-02-09T19:47:59.486284174Z" level=info msg="StopPodSandbox for \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\" returns successfully" Feb 9 19:47:59.486664 env[1135]: time="2024-02-09T19:47:59.486624728Z" level=info msg="RemovePodSandbox for \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\"" Feb 9 19:47:59.486804 env[1135]: time="2024-02-09T19:47:59.486666968Z" level=info msg="Forcibly stopping sandbox \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\"" Feb 9 19:47:59.486804 env[1135]: time="2024-02-09T19:47:59.486766335Z" level=info msg="TearDown network for sandbox \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\" successfully" Feb 9 19:47:59.620148 env[1135]: time="2024-02-09T19:47:59.620082408Z" level=info msg="RemovePodSandbox \"ef497fcf867abf178da4488269bd2b9c0f75041f45e6aadd73d59313218df0b7\" returns 
successfully" Feb 9 19:47:59.620610 env[1135]: time="2024-02-09T19:47:59.620586611Z" level=info msg="StopPodSandbox for \"874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14\"" Feb 9 19:47:59.620759 env[1135]: time="2024-02-09T19:47:59.620690637Z" level=info msg="TearDown network for sandbox \"874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14\" successfully" Feb 9 19:47:59.620810 env[1135]: time="2024-02-09T19:47:59.620759757Z" level=info msg="StopPodSandbox for \"874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14\" returns successfully" Feb 9 19:47:59.621115 env[1135]: time="2024-02-09T19:47:59.621081255Z" level=info msg="RemovePodSandbox for \"874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14\"" Feb 9 19:47:59.621178 env[1135]: time="2024-02-09T19:47:59.621122032Z" level=info msg="Forcibly stopping sandbox \"874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14\"" Feb 9 19:47:59.621229 env[1135]: time="2024-02-09T19:47:59.621213013Z" level=info msg="TearDown network for sandbox \"874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14\" successfully" Feb 9 19:47:59.692583 env[1135]: time="2024-02-09T19:47:59.692460709Z" level=info msg="RemovePodSandbox \"874712be06cb26c9eac9dbaf75994259f9f42a20c01f5a1f7f3db22e0a4b6c14\" returns successfully" Feb 9 19:47:59.722391 kubelet[2005]: W0209 19:47:59.722340 2005 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0343f49_d64c_40b8_9f8d_e964452ed7d6.slice/cri-containerd-29a455a7d7e12b48fee3ff718e5f992a588785e06e452fec3ec48eac8d51b16c.scope WatchSource:0}: task 29a455a7d7e12b48fee3ff718e5f992a588785e06e452fec3ec48eac8d51b16c not found: not found