Feb 9 19:50:48.799567 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:50:48.799585 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:50:48.799594 kernel: BIOS-provided physical RAM map:
Feb 9 19:50:48.799600 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 19:50:48.799605 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 9 19:50:48.799610 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 9 19:50:48.799617 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 9 19:50:48.799622 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 9 19:50:48.799628 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 9 19:50:48.799634 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 9 19:50:48.799640 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Feb 9 19:50:48.799645 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb 9 19:50:48.799651 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 9 19:50:48.799657 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 9 19:50:48.799663 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 9 19:50:48.799671 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 9 19:50:48.799676 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 9 19:50:48.799682 kernel: NX (Execute Disable) protection: active
Feb 9 19:50:48.799688 kernel: e820: update [mem 0x9b3fa018-0x9b403c57] usable ==> usable
Feb 9 19:50:48.799694 kernel: e820: update [mem 0x9b3fa018-0x9b403c57] usable ==> usable
Feb 9 19:50:48.799700 kernel: e820: update [mem 0x9b3bd018-0x9b3f9e57] usable ==> usable
Feb 9 19:50:48.799705 kernel: e820: update [mem 0x9b3bd018-0x9b3f9e57] usable ==> usable
Feb 9 19:50:48.799711 kernel: extended physical RAM map:
Feb 9 19:50:48.799716 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 19:50:48.799722 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 9 19:50:48.799729 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 9 19:50:48.799735 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 9 19:50:48.799741 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 9 19:50:48.799747 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 9 19:50:48.799760 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 9 19:50:48.799766 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b3bd017] usable
Feb 9 19:50:48.799771 kernel: reserve setup_data: [mem 0x000000009b3bd018-0x000000009b3f9e57] usable
Feb 9 19:50:48.799777 kernel: reserve setup_data: [mem 0x000000009b3f9e58-0x000000009b3fa017] usable
Feb 9 19:50:48.799783 kernel: reserve setup_data: [mem 0x000000009b3fa018-0x000000009b403c57] usable
Feb 9 19:50:48.799788 kernel: reserve setup_data: [mem 0x000000009b403c58-0x000000009c8eefff] usable
Feb 9 19:50:48.799794 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb 9 19:50:48.799801 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 9 19:50:48.799807 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 9 19:50:48.799813 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 9 19:50:48.799819 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 9 19:50:48.799828 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 9 19:50:48.799834 kernel: efi: EFI v2.70 by EDK II
Feb 9 19:50:48.799840 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018
Feb 9 19:50:48.799848 kernel: random: crng init done
Feb 9 19:50:48.799854 kernel: SMBIOS 2.8 present.
Feb 9 19:50:48.799860 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Feb 9 19:50:48.799867 kernel: Hypervisor detected: KVM
Feb 9 19:50:48.799873 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 9 19:50:48.799879 kernel: kvm-clock: cpu 0, msr 66faa001, primary cpu clock
Feb 9 19:50:48.799886 kernel: kvm-clock: using sched offset of 3911317094 cycles
Feb 9 19:50:48.799893 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 9 19:50:48.799899 kernel: tsc: Detected 2794.750 MHz processor
Feb 9 19:50:48.799908 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:50:48.799914 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:50:48.799921 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Feb 9 19:50:48.799935 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:50:48.799942 kernel: Using GB pages for direct mapping
Feb 9 19:50:48.799949 kernel: Secure boot disabled
Feb 9 19:50:48.799967 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:50:48.799973 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 9 19:50:48.799980 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Feb 9 19:50:48.799988 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:50:48.799995 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:50:48.800002 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 9 19:50:48.800008 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:50:48.800015 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:50:48.800021 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:50:48.800028 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 9 19:50:48.800034 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Feb 9 19:50:48.800040 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Feb 9 19:50:48.800057 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 9 19:50:48.800064 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Feb 9 19:50:48.800070 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Feb 9 19:50:48.800077 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Feb 9 19:50:48.800083 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Feb 9 19:50:48.800089 kernel: No NUMA configuration found
Feb 9 19:50:48.800096 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Feb 9 19:50:48.800102 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Feb 9 19:50:48.800109 kernel: Zone ranges:
Feb 9 19:50:48.800117 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:50:48.800124 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Feb 9 19:50:48.800130 kernel: Normal empty
Feb 9 19:50:48.800136 kernel: Movable zone start for each node
Feb 9 19:50:48.800143 kernel: Early memory node ranges
Feb 9 19:50:48.800149 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 9 19:50:48.800155 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 9 19:50:48.800162 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 9 19:50:48.800168 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Feb 9 19:50:48.800176 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Feb 9 19:50:48.800183 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Feb 9 19:50:48.800189 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Feb 9 19:50:48.800195 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:50:48.800202 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 9 19:50:48.800208 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 9 19:50:48.800214 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:50:48.800221 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Feb 9 19:50:48.800227 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Feb 9 19:50:48.800234 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Feb 9 19:50:48.800241 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 9 19:50:48.800248 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 9 19:50:48.800254 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:50:48.800261 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 19:50:48.800267 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 9 19:50:48.800274 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 19:50:48.800280 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 9 19:50:48.800286 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 9 19:50:48.800293 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:50:48.800300 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 19:50:48.800307 kernel: TSC deadline timer available
Feb 9 19:50:48.800313 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 9 19:50:48.800319 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 9 19:50:48.800326 kernel: kvm-guest: setup PV sched yield
Feb 9 19:50:48.800332 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Feb 9 19:50:48.800339 kernel: Booting paravirtualized kernel on KVM
Feb 9 19:50:48.800345 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:50:48.800352 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb 9 19:50:48.800359 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb 9 19:50:48.800366 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb 9 19:50:48.800376 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 9 19:50:48.800384 kernel: kvm-guest: setup async PF for cpu 0
Feb 9 19:50:48.800391 kernel: kvm-guest: stealtime: cpu 0, msr 9b01c0c0
Feb 9 19:50:48.800397 kernel: kvm-guest: PV spinlocks enabled
Feb 9 19:50:48.800404 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 19:50:48.800411 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Feb 9 19:50:48.800417 kernel: Policy zone: DMA32
Feb 9 19:50:48.800425 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:50:48.800432 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:50:48.800441 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 19:50:48.800447 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 19:50:48.800454 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:50:48.800462 kernel: Memory: 2400512K/2567000K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 166228K reserved, 0K cma-reserved)
Feb 9 19:50:48.800469 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 19:50:48.800477 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:50:48.800484 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:50:48.800491 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:50:48.800498 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:50:48.800505 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 19:50:48.800512 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:50:48.800519 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:50:48.800525 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:50:48.800532 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 19:50:48.800540 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 9 19:50:48.800547 kernel: Console: colour dummy device 80x25
Feb 9 19:50:48.800553 kernel: printk: console [ttyS0] enabled
Feb 9 19:50:48.800560 kernel: ACPI: Core revision 20210730
Feb 9 19:50:48.800567 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 9 19:50:48.800574 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:50:48.800580 kernel: x2apic enabled
Feb 9 19:50:48.800587 kernel: Switched APIC routing to physical x2apic.
Feb 9 19:50:48.800594 kernel: kvm-guest: setup PV IPIs
Feb 9 19:50:48.800602 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 19:50:48.800609 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 9 19:50:48.800615 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 9 19:50:48.800622 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 9 19:50:48.800629 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 9 19:50:48.800636 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 9 19:50:48.800642 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:50:48.800649 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 19:50:48.800656 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:50:48.800665 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 19:50:48.800671 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 9 19:50:48.800678 kernel: RETBleed: Mitigation: untrained return thunk
Feb 9 19:50:48.800685 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 19:50:48.800692 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 19:50:48.800698 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 19:50:48.800705 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 19:50:48.800712 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 19:50:48.800719 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 19:50:48.800727 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 9 19:50:48.800734 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:50:48.800740 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:50:48.800747 kernel: LSM: Security Framework initializing
Feb 9 19:50:48.800761 kernel: SELinux: Initializing.
Feb 9 19:50:48.800768 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 19:50:48.800776 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 19:50:48.800783 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 9 19:50:48.800790 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 9 19:50:48.800798 kernel: ... version: 0
Feb 9 19:50:48.800805 kernel: ... bit width: 48
Feb 9 19:50:48.800811 kernel: ... generic registers: 6
Feb 9 19:50:48.800818 kernel: ... value mask: 0000ffffffffffff
Feb 9 19:50:48.800825 kernel: ... max period: 00007fffffffffff
Feb 9 19:50:48.800832 kernel: ... fixed-purpose events: 0
Feb 9 19:50:48.800838 kernel: ... event mask: 000000000000003f
Feb 9 19:50:48.800845 kernel: signal: max sigframe size: 1776
Feb 9 19:50:48.800852 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:50:48.800860 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:50:48.800866 kernel: x86: Booting SMP configuration:
Feb 9 19:50:48.800873 kernel: .... node #0, CPUs: #1
Feb 9 19:50:48.800880 kernel: kvm-clock: cpu 1, msr 66faa041, secondary cpu clock
Feb 9 19:50:48.801493 kernel: kvm-guest: setup async PF for cpu 1
Feb 9 19:50:48.801502 kernel: kvm-guest: stealtime: cpu 1, msr 9b09c0c0
Feb 9 19:50:48.801509 kernel: #2
Feb 9 19:50:48.803622 kernel: kvm-clock: cpu 2, msr 66faa081, secondary cpu clock
Feb 9 19:50:48.803630 kernel: kvm-guest: setup async PF for cpu 2
Feb 9 19:50:48.803640 kernel: kvm-guest: stealtime: cpu 2, msr 9b11c0c0
Feb 9 19:50:48.803646 kernel: #3
Feb 9 19:50:48.803653 kernel: kvm-clock: cpu 3, msr 66faa0c1, secondary cpu clock
Feb 9 19:50:48.803660 kernel: kvm-guest: setup async PF for cpu 3
Feb 9 19:50:48.803666 kernel: kvm-guest: stealtime: cpu 3, msr 9b19c0c0
Feb 9 19:50:48.803673 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 19:50:48.803680 kernel: smpboot: Max logical packages: 1
Feb 9 19:50:48.803687 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 9 19:50:48.803693 kernel: devtmpfs: initialized
Feb 9 19:50:48.803700 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:50:48.803709 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 9 19:50:48.803716 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 9 19:50:48.803723 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Feb 9 19:50:48.803730 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 9 19:50:48.803737 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 9 19:50:48.803744 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:50:48.803751 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 19:50:48.803764 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:50:48.803772 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:50:48.803779 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:50:48.803786 kernel: audit: type=2000 audit(1707508247.612:1): state=initialized audit_enabled=0 res=1
Feb 9 19:50:48.803793 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:50:48.803800 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:50:48.803806 kernel: cpuidle: using governor menu
Feb 9 19:50:48.803813 kernel: ACPI: bus type PCI registered
Feb 9 19:50:48.803820 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:50:48.803827 kernel: dca service started, version 1.12.1
Feb 9 19:50:48.803835 kernel: PCI: Using configuration type 1 for base access
Feb 9 19:50:48.803841 kernel: PCI: Using configuration type 1 for extended access
Feb 9 19:50:48.803848 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:50:48.803855 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:50:48.803862 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:50:48.803869 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:50:48.803875 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:50:48.803882 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:50:48.803889 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:50:48.803897 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:50:48.803904 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:50:48.803911 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:50:48.803918 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:50:48.803925 kernel: ACPI: Interpreter enabled
Feb 9 19:50:48.803939 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 9 19:50:48.803946 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:50:48.803953 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:50:48.803960 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 9 19:50:48.803967 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 19:50:48.804087 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 19:50:48.804099 kernel: acpiphp: Slot [3] registered
Feb 9 19:50:48.804106 kernel: acpiphp: Slot [4] registered
Feb 9 19:50:48.804113 kernel: acpiphp: Slot [5] registered
Feb 9 19:50:48.804120 kernel: acpiphp: Slot [6] registered
Feb 9 19:50:48.804127 kernel: acpiphp: Slot [7] registered
Feb 9 19:50:48.804133 kernel: acpiphp: Slot [8] registered
Feb 9 19:50:48.804140 kernel: acpiphp: Slot [9] registered
Feb 9 19:50:48.804148 kernel: acpiphp: Slot [10] registered
Feb 9 19:50:48.804155 kernel: acpiphp: Slot [11] registered
Feb 9 19:50:48.804162 kernel: acpiphp: Slot [12] registered
Feb 9 19:50:48.804168 kernel: acpiphp: Slot [13] registered
Feb 9 19:50:48.804175 kernel: acpiphp: Slot [14] registered
Feb 9 19:50:48.804182 kernel: acpiphp: Slot [15] registered
Feb 9 19:50:48.804188 kernel: acpiphp: Slot [16] registered
Feb 9 19:50:48.804195 kernel: acpiphp: Slot [17] registered
Feb 9 19:50:48.804201 kernel: acpiphp: Slot [18] registered
Feb 9 19:50:48.804209 kernel: acpiphp: Slot [19] registered
Feb 9 19:50:48.804216 kernel: acpiphp: Slot [20] registered
Feb 9 19:50:48.804223 kernel: acpiphp: Slot [21] registered
Feb 9 19:50:48.804229 kernel: acpiphp: Slot [22] registered
Feb 9 19:50:48.804236 kernel: acpiphp: Slot [23] registered
Feb 9 19:50:48.804243 kernel: acpiphp: Slot [24] registered
Feb 9 19:50:48.804249 kernel: acpiphp: Slot [25] registered
Feb 9 19:50:48.804256 kernel: acpiphp: Slot [26] registered
Feb 9 19:50:48.804263 kernel: acpiphp: Slot [27] registered
Feb 9 19:50:48.804269 kernel: acpiphp: Slot [28] registered
Feb 9 19:50:48.804916 kernel: acpiphp: Slot [29] registered
Feb 9 19:50:48.804924 kernel: acpiphp: Slot [30] registered
Feb 9 19:50:48.804940 kernel: acpiphp: Slot [31] registered
Feb 9 19:50:48.804947 kernel: PCI host bridge to bus 0000:00
Feb 9 19:50:48.805031 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 19:50:48.805095 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 19:50:48.805155 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 19:50:48.805214 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb 9 19:50:48.805278 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Feb 9 19:50:48.805339 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 19:50:48.805422 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 9 19:50:48.805501 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 9 19:50:48.805579 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 9 19:50:48.805650 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Feb 9 19:50:48.805722 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 9 19:50:48.805804 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 9 19:50:48.805876 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 9 19:50:48.805957 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 9 19:50:48.806039 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 9 19:50:48.806110 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 9 19:50:48.806192 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 9 19:50:48.806269 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb 9 19:50:48.806339 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 9 19:50:48.806408 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Feb 9 19:50:48.806475 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 9 19:50:48.806543 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Feb 9 19:50:48.806609 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 19:50:48.806689 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 19:50:48.806766 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf]
Feb 9 19:50:48.806845 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 9 19:50:48.806913 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Feb 9 19:50:48.807041 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 9 19:50:48.807152 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 9 19:50:48.807317 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 9 19:50:48.807414 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Feb 9 19:50:48.807494 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb 9 19:50:48.807581 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 9 19:50:48.807649 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Feb 9 19:50:48.807718 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Feb 9 19:50:48.807799 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 9 19:50:48.807809 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 9 19:50:48.807820 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 9 19:50:48.807828 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 9 19:50:48.807837 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 9 19:50:48.807843 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 9 19:50:48.807850 kernel: iommu: Default domain type: Translated
Feb 9 19:50:48.807857 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:50:48.807936 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 9 19:50:48.808007 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 19:50:48.808072 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 9 19:50:48.808085 kernel: vgaarb: loaded
Feb 9 19:50:48.808092 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:50:48.808099 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 19:50:48.808105 kernel: PTP clock support registered
Feb 9 19:50:48.808112 kernel: Registered efivars operations
Feb 9 19:50:48.808119 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:50:48.808126 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 19:50:48.808133 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 9 19:50:48.808149 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Feb 9 19:50:48.808158 kernel: e820: reserve RAM buffer [mem 0x9b3bd018-0x9bffffff]
Feb 9 19:50:48.808164 kernel: e820: reserve RAM buffer [mem 0x9b3fa018-0x9bffffff]
Feb 9 19:50:48.808176 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Feb 9 19:50:48.808182 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Feb 9 19:50:48.808189 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 9 19:50:48.808196 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 9 19:50:48.808203 kernel: clocksource: Switched to clocksource kvm-clock
Feb 9 19:50:48.808210 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:50:48.808217 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:50:48.808766 kernel: pnp: PnP ACPI init
Feb 9 19:50:48.808935 kernel: pnp 00:02: [dma 2]
Feb 9 19:50:48.808948 kernel: pnp: PnP ACPI: found 6 devices
Feb 9 19:50:48.808955 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:50:48.808962 kernel: NET: Registered PF_INET protocol family
Feb 9 19:50:48.808969 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 19:50:48.808977 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 19:50:48.808995 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:50:48.809005 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 19:50:48.809012 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 19:50:48.809019 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 19:50:48.809026 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 19:50:48.809033 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 19:50:48.809047 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:50:48.809057 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:50:48.809145 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 9 19:50:48.809244 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 9 19:50:48.809329 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 19:50:48.809401 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 19:50:48.809476 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 19:50:48.809553 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb 9 19:50:48.809626 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Feb 9 19:50:48.809704 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 9 19:50:48.809848 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 9 19:50:48.809967 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 9 19:50:48.809979 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:50:48.809987 kernel: Initialise system trusted keyrings
Feb 9 19:50:48.809994 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 19:50:48.810001 kernel: Key type asymmetric registered
Feb 9 19:50:48.810020 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:50:48.810027 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:50:48.810035 kernel: io scheduler mq-deadline registered
Feb 9 19:50:48.810042 kernel: io scheduler kyber registered
Feb 9 19:50:48.810051 kernel: io scheduler bfq registered
Feb 9 19:50:48.810066 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 19:50:48.810077 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 9 19:50:48.810084 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 9 19:50:48.810091 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 9 19:50:48.810099 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:50:48.810106 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 19:50:48.810124 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 9 19:50:48.810131 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 9 19:50:48.810140 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 9 19:50:48.810241 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb 9 19:50:48.810267 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 9 19:50:48.810344 kernel: rtc_cmos 00:05: registered as rtc0
Feb 9 19:50:48.810422 kernel: rtc_cmos 00:05: setting system clock to 2024-02-09T19:50:48 UTC (1707508248)
Feb 9 19:50:48.810507 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 9 19:50:48.810522 kernel: efifb: probing for efifb
Feb 9 19:50:48.810529 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 9 19:50:48.810537 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 9 19:50:48.810544 kernel: efifb: scrolling: redraw
Feb 9 19:50:48.810551 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 19:50:48.810569 kernel: Console: switching to colour frame buffer device 160x50
Feb 9 19:50:48.810576 kernel: fb0: EFI VGA frame buffer device
Feb 9 19:50:48.810586 kernel: pstore: Registered efi as persistent store backend
Feb 9 19:50:48.810593 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:50:48.810600 kernel: Segment Routing with IPv6
Feb 9 19:50:48.810614 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:50:48.810625 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:50:48.810632 kernel: Key type dns_resolver registered
Feb 9 19:50:48.810639 kernel: IPI shorthand broadcast: enabled
Feb 9 19:50:48.810646 kernel: sched_clock: Marking stable (362183449, 89037245)->(473100784, -21880090)
Feb 9 19:50:48.810660 kernel: registered taskstats version 1
Feb 9 19:50:48.810672 kernel: Loading compiled-in X.509 certificates
Feb 9 19:50:48.810681 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 19:50:48.810688 kernel: Key type .fscrypt registered
Feb 9 19:50:48.810695 kernel: Key type fscrypt-provisioning registered
Feb 9 19:50:48.810702 kernel: pstore: Using crash dump compression: deflate
Feb 9 19:50:48.810720 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:50:48.810728 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:50:48.810735 kernel: ima: No architecture policies found
Feb 9 19:50:48.810742 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 19:50:48.810760 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 19:50:48.810767 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 19:50:48.810775 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 19:50:48.810782 kernel: Run /init as init process
Feb 9 19:50:48.810789 kernel: with arguments:
Feb 9 19:50:48.810798 kernel: /init
Feb 9 19:50:48.810806 kernel: with environment:
Feb 9 19:50:48.810813 kernel: HOME=/
Feb 9 19:50:48.810819 kernel: TERM=linux
Feb 9 19:50:48.810826 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:50:48.810837 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:50:48.810847 systemd[1]: Detected virtualization kvm.
Feb 9 19:50:48.810855 systemd[1]: Detected architecture x86-64.
Feb 9 19:50:48.810862 systemd[1]: Running in initrd.
Feb 9 19:50:48.810871 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:50:48.810878 systemd[1]: Hostname set to .
Feb 9 19:50:48.810886 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 19:50:48.810895 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:50:48.810903 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:50:48.810910 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:50:48.810918 systemd[1]: Reached target paths.target.
Feb 9 19:50:48.810935 systemd[1]: Reached target slices.target.
Feb 9 19:50:48.810942 systemd[1]: Reached target swap.target.
Feb 9 19:50:48.810957 systemd[1]: Reached target timers.target. Feb 9 19:50:48.810968 systemd[1]: Listening on iscsid.socket. Feb 9 19:50:48.810976 systemd[1]: Listening on iscsiuio.socket. Feb 9 19:50:48.810983 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:50:48.810991 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:50:48.810999 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:50:48.811006 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:50:48.811014 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:50:48.811022 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:50:48.811029 systemd[1]: Reached target sockets.target. Feb 9 19:50:48.811038 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:50:48.811046 systemd[1]: Finished network-cleanup.service. Feb 9 19:50:48.811053 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 19:50:48.811061 systemd[1]: Starting systemd-journald.service... Feb 9 19:50:48.811069 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:50:48.811076 systemd[1]: Starting systemd-resolved.service... Feb 9 19:50:48.811084 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 19:50:48.811091 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:50:48.811099 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 19:50:48.811108 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:50:48.811116 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 19:50:48.811124 kernel: audit: type=1130 audit(1707508248.801:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:48.811131 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Feb 9 19:50:48.811139 kernel: audit: type=1130 audit(1707508248.804:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:48.811147 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 19:50:48.811159 systemd-journald[198]: Journal started Feb 9 19:50:48.811199 systemd-journald[198]: Runtime Journal (/run/log/journal/4777902f983d4db9a1f9eb82c56041e6) is 6.0M, max 48.4M, 42.4M free. Feb 9 19:50:48.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:48.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:48.807390 systemd-modules-load[199]: Inserted module 'overlay' Feb 9 19:50:48.817727 systemd[1]: Started systemd-journald.service. Feb 9 19:50:48.818699 systemd-resolved[200]: Positive Trust Anchors: Feb 9 19:50:48.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:48.818720 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:50:48.822507 kernel: audit: type=1130 audit(1707508248.818:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:48.825733 kernel: audit: type=1130 audit(1707508248.821:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:50:48.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:48.818749 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:50:48.820888 systemd-resolved[200]: Defaulting to hostname 'linux'. Feb 9 19:50:48.821605 systemd[1]: Started systemd-resolved.service. Feb 9 19:50:48.833034 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 19:50:48.825105 systemd[1]: Reached target nss-lookup.target. Feb 9 19:50:48.833828 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:50:48.837051 kernel: audit: type=1130 audit(1707508248.833:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:48.837064 kernel: Bridge firewalling registered Feb 9 19:50:48.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:48.837197 systemd[1]: Starting dracut-cmdline.service... 
Feb 9 19:50:48.837518 systemd-modules-load[199]: Inserted module 'br_netfilter' Feb 9 19:50:48.845451 dracut-cmdline[214]: dracut-dracut-053 Feb 9 19:50:48.847296 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:50:48.854947 kernel: SCSI subsystem initialized Feb 9 19:50:48.868685 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 19:50:48.868718 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:50:48.868736 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:50:48.871526 systemd-modules-load[199]: Inserted module 'dm_multipath' Feb 9 19:50:48.872202 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:50:48.876622 kernel: audit: type=1130 audit(1707508248.872:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:48.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:48.873573 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:50:48.881701 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:50:48.885041 kernel: audit: type=1130 audit(1707508248.882:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:50:48.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:48.902953 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:50:48.912952 kernel: iscsi: registered transport (tcp) Feb 9 19:50:48.931957 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:50:48.931996 kernel: QLogic iSCSI HBA Driver Feb 9 19:50:48.959057 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:50:48.962905 kernel: audit: type=1130 audit(1707508248.958:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:48.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:48.960670 systemd[1]: Starting dracut-pre-udev.service... Feb 9 19:50:49.005958 kernel: raid6: avx2x4 gen() 31073 MB/s Feb 9 19:50:49.022951 kernel: raid6: avx2x4 xor() 6610 MB/s Feb 9 19:50:49.039956 kernel: raid6: avx2x2 gen() 21360 MB/s Feb 9 19:50:49.056950 kernel: raid6: avx2x2 xor() 16335 MB/s Feb 9 19:50:49.073949 kernel: raid6: avx2x1 gen() 26478 MB/s Feb 9 19:50:49.090945 kernel: raid6: avx2x1 xor() 15301 MB/s Feb 9 19:50:49.107950 kernel: raid6: sse2x4 gen() 13913 MB/s Feb 9 19:50:49.124956 kernel: raid6: sse2x4 xor() 7306 MB/s Feb 9 19:50:49.141954 kernel: raid6: sse2x2 gen() 16275 MB/s Feb 9 19:50:49.172947 kernel: raid6: sse2x2 xor() 9445 MB/s Feb 9 19:50:49.189944 kernel: raid6: sse2x1 gen() 12364 MB/s Feb 9 19:50:49.207003 kernel: raid6: sse2x1 xor() 7100 MB/s Feb 9 19:50:49.207022 kernel: raid6: using algorithm avx2x4 gen() 31073 MB/s Feb 9 19:50:49.207040 kernel: raid6: .... 
xor() 6610 MB/s, rmw enabled Feb 9 19:50:49.208012 kernel: raid6: using avx2x2 recovery algorithm Feb 9 19:50:49.218945 kernel: xor: automatically using best checksumming function avx Feb 9 19:50:49.305947 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:50:49.314650 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:50:49.317949 kernel: audit: type=1130 audit(1707508249.315:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:49.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:49.316000 audit: BPF prog-id=7 op=LOAD Feb 9 19:50:49.317000 audit: BPF prog-id=8 op=LOAD Feb 9 19:50:49.318328 systemd[1]: Starting systemd-udevd.service... Feb 9 19:50:49.330104 systemd-udevd[399]: Using default interface naming scheme 'v252'. Feb 9 19:50:49.334984 systemd[1]: Started systemd-udevd.service. Feb 9 19:50:49.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:49.336868 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:50:49.346670 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Feb 9 19:50:49.369720 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:50:49.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:49.370784 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:50:49.402281 systemd[1]: Finished systemd-udev-trigger.service. 
Feb 9 19:50:49.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:49.429950 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 9 19:50:49.437946 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:50:49.441954 kernel: libata version 3.00 loaded. Feb 9 19:50:49.441988 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 19:50:49.442004 kernel: GPT:9289727 != 19775487 Feb 9 19:50:49.443259 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 19:50:49.443276 kernel: GPT:9289727 != 19775487 Feb 9 19:50:49.443284 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 19:50:49.444301 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:50:49.462309 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 19:50:49.462370 kernel: AES CTR mode by8 optimization enabled Feb 9 19:50:49.467956 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 9 19:50:49.473003 kernel: scsi host0: ata_piix Feb 9 19:50:49.473189 kernel: scsi host1: ata_piix Feb 9 19:50:49.473294 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 9 19:50:49.474389 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 9 19:50:49.479943 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (435) Feb 9 19:50:49.489017 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:50:49.492913 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:50:49.496192 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:50:49.496454 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:50:49.499971 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
Feb 9 19:50:49.501122 systemd[1]: Starting disk-uuid.service... Feb 9 19:50:49.507493 disk-uuid[511]: Primary Header is updated. Feb 9 19:50:49.507493 disk-uuid[511]: Secondary Entries is updated. Feb 9 19:50:49.507493 disk-uuid[511]: Secondary Header is updated. Feb 9 19:50:49.510943 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:50:49.513955 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:50:49.633952 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 9 19:50:49.634020 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 9 19:50:49.664954 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 9 19:50:49.665104 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 19:50:49.681952 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 9 19:50:50.514962 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:50:50.515078 disk-uuid[512]: The operation has completed successfully. Feb 9 19:50:50.534938 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:50:50.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:50.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:50.535020 systemd[1]: Finished disk-uuid.service. Feb 9 19:50:50.542675 systemd[1]: Starting verity-setup.service... Feb 9 19:50:50.552948 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 9 19:50:50.567815 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:50:50.570009 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:50:50.571703 systemd[1]: Finished verity-setup.service. 
Feb 9 19:50:50.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:50.624946 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:50:50.625428 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:50:50.626456 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:50:50.627446 systemd[1]: Starting ignition-setup.service... Feb 9 19:50:50.628245 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:50:50.636279 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:50:50.636312 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:50:50.636325 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:50:50.643175 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:50:50.650697 systemd[1]: Finished ignition-setup.service. Feb 9 19:50:50.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:50.651692 systemd[1]: Starting ignition-fetch-offline.service... 
Feb 9 19:50:50.680452 ignition[637]: Ignition 2.14.0 Feb 9 19:50:50.680494 ignition[637]: Stage: fetch-offline Feb 9 19:50:50.680536 ignition[637]: no configs at "/usr/lib/ignition/base.d" Feb 9 19:50:50.680543 ignition[637]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:50:50.680637 ignition[637]: parsed url from cmdline: "" Feb 9 19:50:50.680639 ignition[637]: no config URL provided Feb 9 19:50:50.680644 ignition[637]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:50:50.680650 ignition[637]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:50:50.680664 ignition[637]: op(1): [started] loading QEMU firmware config module Feb 9 19:50:50.680668 ignition[637]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 9 19:50:50.684094 ignition[637]: op(1): [finished] loading QEMU firmware config module Feb 9 19:50:50.684108 ignition[637]: QEMU firmware config was not found. Ignoring... Feb 9 19:50:50.691509 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:50:50.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:50.692000 audit: BPF prog-id=9 op=LOAD Feb 9 19:50:50.693367 systemd[1]: Starting systemd-networkd.service... Feb 9 19:50:50.698582 ignition[637]: parsing config with SHA512: f8bd84422f75ad066f9eabb3aa07abc9b4ef4378df3f17f189e533326977121c6e09f68b16c5766f4b3114bb309503a6dfed3bddf78f605f0d8be5e08c8dbdd8 Feb 9 19:50:50.713570 unknown[637]: fetched base config from "system" Feb 9 19:50:50.713585 unknown[637]: fetched user config from "qemu" Feb 9 19:50:50.714267 ignition[637]: fetch-offline: fetch-offline passed Feb 9 19:50:50.714342 ignition[637]: Ignition finished successfully Feb 9 19:50:50.715391 systemd[1]: Finished ignition-fetch-offline.service. 
Feb 9 19:50:50.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:50.716426 systemd-networkd[705]: lo: Link UP Feb 9 19:50:50.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:50.716430 systemd-networkd[705]: lo: Gained carrier Feb 9 19:50:50.716806 systemd-networkd[705]: Enumeration completed Feb 9 19:50:50.717064 systemd-networkd[705]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:50:50.717085 systemd[1]: Started systemd-networkd.service. Feb 9 19:50:50.717706 systemd-networkd[705]: eth0: Link UP Feb 9 19:50:50.717716 systemd-networkd[705]: eth0: Gained carrier Feb 9 19:50:50.717956 systemd[1]: Reached target network.target. Feb 9 19:50:50.718542 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 19:50:50.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:50.719216 systemd[1]: Starting ignition-kargs.service... Feb 9 19:50:50.720530 systemd[1]: Starting iscsiuio.service... Feb 9 19:50:50.724046 systemd[1]: Started iscsiuio.service. Feb 9 19:50:50.728026 ignition[708]: Ignition 2.14.0 Feb 9 19:50:50.725606 systemd[1]: Starting iscsid.service... Feb 9 19:50:50.728031 ignition[708]: Stage: kargs Feb 9 19:50:50.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:50.730304 systemd[1]: Finished ignition-kargs.service. 
Feb 9 19:50:50.728108 ignition[708]: no configs at "/usr/lib/ignition/base.d" Feb 9 19:50:50.731535 systemd[1]: Starting ignition-disks.service... Feb 9 19:50:50.733738 iscsid[718]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:50:50.733738 iscsid[718]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 19:50:50.733738 iscsid[718]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:50:50.733738 iscsid[718]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:50:50.733738 iscsid[718]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:50:50.733738 iscsid[718]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:50:50.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:50.728116 ignition[708]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:50:50.733425 systemd[1]: Started iscsid.service. Feb 9 19:50:50.728972 ignition[708]: kargs: kargs passed Feb 9 19:50:50.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:50.734685 systemd[1]: Starting dracut-initqueue.service... 
Feb 9 19:50:50.729002 ignition[708]: Ignition finished successfully Feb 9 19:50:50.735206 systemd-networkd[705]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 19:50:50.743044 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:50:50.743452 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:50:50.743641 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:50:50.743865 systemd[1]: Reached target remote-fs.target. Feb 9 19:50:50.744718 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:50:50.752029 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:50:50.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:50.757250 ignition[719]: Ignition 2.14.0 Feb 9 19:50:50.757260 ignition[719]: Stage: disks Feb 9 19:50:50.757360 ignition[719]: no configs at "/usr/lib/ignition/base.d" Feb 9 19:50:50.757370 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:50:50.758516 ignition[719]: disks: disks passed Feb 9 19:50:50.758552 ignition[719]: Ignition finished successfully Feb 9 19:50:50.761155 systemd[1]: Finished ignition-disks.service. Feb 9 19:50:50.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:50.761543 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:50:50.762425 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:50:50.762633 systemd[1]: Reached target local-fs.target. Feb 9 19:50:50.762857 systemd[1]: Reached target sysinit.target. Feb 9 19:50:50.763297 systemd[1]: Reached target basic.target. Feb 9 19:50:50.766755 systemd[1]: Starting systemd-fsck-root.service... 
Feb 9 19:50:50.776586 systemd-fsck[740]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 19:50:50.780896 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:50:50.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:50.782162 systemd[1]: Mounting sysroot.mount... Feb 9 19:50:50.788499 systemd[1]: Mounted sysroot.mount. Feb 9 19:50:50.789996 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:50:50.789060 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:50:50.790606 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:50:50.791178 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 19:50:50.791207 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:50:50.791223 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:50:50.793158 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:50:50.795226 systemd[1]: Starting initrd-setup-root.service... Feb 9 19:50:50.799205 initrd-setup-root[750]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:50:50.801902 initrd-setup-root[758]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:50:50.805271 initrd-setup-root[766]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:50:50.807815 initrd-setup-root[774]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:50:50.831523 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:50:50.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:50.833339 systemd[1]: Starting ignition-mount.service... 
Feb 9 19:50:50.834842 systemd[1]: Starting sysroot-boot.service... Feb 9 19:50:50.837494 bash[791]: umount: /sysroot/usr/share/oem: not mounted. Feb 9 19:50:50.844956 ignition[792]: INFO : Ignition 2.14.0 Feb 9 19:50:50.844956 ignition[792]: INFO : Stage: mount Feb 9 19:50:50.846162 ignition[792]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 19:50:50.846162 ignition[792]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:50:50.846162 ignition[792]: INFO : mount: mount passed Feb 9 19:50:50.846162 ignition[792]: INFO : Ignition finished successfully Feb 9 19:50:50.849795 systemd[1]: Finished ignition-mount.service. Feb 9 19:50:50.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:50.851892 systemd[1]: Finished sysroot-boot.service. Feb 9 19:50:50.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:51.577552 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:50:51.583138 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801) Feb 9 19:50:51.583160 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:50:51.583169 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:50:51.584192 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:50:51.587009 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:50:51.588849 systemd[1]: Starting ignition-files.service... 
Feb 9 19:50:51.600578 ignition[821]: INFO : Ignition 2.14.0
Feb 9 19:50:51.600578 ignition[821]: INFO : Stage: files
Feb 9 19:50:51.601716 ignition[821]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 19:50:51.601716 ignition[821]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 19:50:51.601716 ignition[821]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 19:50:51.604432 ignition[821]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 19:50:51.604432 ignition[821]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 19:50:51.607453 ignition[821]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 19:50:51.608450 ignition[821]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 19:50:51.609376 ignition[821]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 19:50:51.609003 unknown[821]: wrote ssh authorized keys file for user: core
Feb 9 19:50:51.611079 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 19:50:51.611079 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 9 19:50:51.864147 systemd-networkd[705]: eth0: Gained IPv6LL
Feb 9 19:50:52.001900 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 19:50:52.145100 ignition[821]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 9 19:50:52.145100 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 19:50:52.148441 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 19:50:52.148441 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 9 19:50:52.468905 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 19:50:52.544843 ignition[821]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 9 19:50:52.546951 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 19:50:52.546951 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:50:52.546951 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1
Feb 9 19:50:52.626697 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 19:50:52.811266 ignition[821]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836
Feb 9 19:50:52.811266 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:50:52.814387 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:50:52.814387 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1
Feb 9 19:50:52.862261 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 19:50:53.426475 ignition[821]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560
Feb 9 19:50:53.429464 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:50:53.429464 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 19:50:53.429464 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 19:50:53.429464 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:50:53.429464 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:50:53.429464 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:50:53.429464 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:50:53.429464 ignition[821]: INFO : files: op(a): [started] processing unit "prepare-cni-plugins.service"
Feb 9 19:50:53.429464 ignition[821]: INFO : files: op(a): op(b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:50:53.429464 ignition[821]: INFO : files: op(a): op(b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:50:53.429464 ignition[821]: INFO : files: op(a): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 19:50:53.429464 ignition[821]: INFO : files: op(c): [started] processing unit "prepare-critools.service"
Feb 9 19:50:53.429464 ignition[821]: INFO : files: op(c): op(d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:50:53.429464 ignition[821]: INFO : files: op(c): op(d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:50:53.429464 ignition[821]: INFO : files: op(c): [finished] processing unit "prepare-critools.service"
Feb 9 19:50:53.429464 ignition[821]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 9 19:50:53.429464 ignition[821]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 19:50:53.459233 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 9 19:50:53.459257 kernel: audit: type=1130 audit(1707508253.449:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.459269 kernel: audit: type=1130 audit(1707508253.459:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.459344 ignition[821]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 19:50:53.459344 ignition[821]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 9 19:50:53.459344 ignition[821]: INFO : files: op(10): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:50:53.459344 ignition[821]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:50:53.459344 ignition[821]: INFO : files: op(11): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 19:50:53.459344 ignition[821]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 19:50:53.459344 ignition[821]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Feb 9 19:50:53.459344 ignition[821]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 19:50:53.459344 ignition[821]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 19:50:53.459344 ignition[821]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 9 19:50:53.459344 ignition[821]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:50:53.459344 ignition[821]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:50:53.459344 ignition[821]: INFO : files: files passed
Feb 9 19:50:53.459344 ignition[821]: INFO : Ignition finished successfully
Feb 9 19:50:53.487337 kernel: audit: type=1130 audit(1707508253.463:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.487365 kernel: audit: type=1131 audit(1707508253.463:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.487376 kernel: audit: type=1130 audit(1707508253.478:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.487387 kernel: audit: type=1131 audit(1707508253.478:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.448090 systemd[1]: Finished ignition-files.service.
Feb 9 19:50:53.450205 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 19:50:53.489475 initrd-setup-root-after-ignition[846]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 9 19:50:53.454450 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 19:50:53.492137 initrd-setup-root-after-ignition[849]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 19:50:53.455043 systemd[1]: Starting ignition-quench.service...
Feb 9 19:50:53.457164 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 19:50:53.459383 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 19:50:53.459443 systemd[1]: Finished ignition-quench.service.
Feb 9 19:50:53.463577 systemd[1]: Reached target ignition-complete.target.
Feb 9 19:50:53.468603 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 19:50:53.478799 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 19:50:53.478871 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 19:50:53.479708 systemd[1]: Reached target initrd-fs.target.
Feb 9 19:50:53.485347 systemd[1]: Reached target initrd.target.
Feb 9 19:50:53.486494 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 19:50:53.487149 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 19:50:53.504663 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 19:50:53.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.507823 systemd[1]: Starting initrd-cleanup.service...
Feb 9 19:50:53.508145 kernel: audit: type=1130 audit(1707508253.504:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.515913 systemd[1]: Stopped target network.target.
Feb 9 19:50:53.516333 systemd[1]: Stopped target nss-lookup.target.
Feb 9 19:50:53.517392 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 19:50:53.518500 systemd[1]: Stopped target timers.target.
Feb 9 19:50:53.519701 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 19:50:53.523622 kernel: audit: type=1131 audit(1707508253.519:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.519778 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 19:50:53.520769 systemd[1]: Stopped target initrd.target.
Feb 9 19:50:53.524018 systemd[1]: Stopped target basic.target.
Feb 9 19:50:53.525200 systemd[1]: Stopped target ignition-complete.target.
Feb 9 19:50:53.526092 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 19:50:53.527478 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 19:50:53.527883 systemd[1]: Stopped target remote-fs.target.
Feb 9 19:50:53.529578 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 19:50:53.529827 systemd[1]: Stopped target sysinit.target.
Feb 9 19:50:53.532025 systemd[1]: Stopped target local-fs.target.
Feb 9 19:50:53.533155 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 19:50:53.538685 kernel: audit: type=1131 audit(1707508253.534:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.534327 systemd[1]: Stopped target swap.target.
Feb 9 19:50:53.535016 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 19:50:53.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.535133 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 19:50:53.544528 kernel: audit: type=1131 audit(1707508253.539:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.535517 systemd[1]: Stopped target cryptsetup.target.
Feb 9 19:50:53.539154 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 19:50:53.539264 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 19:50:53.540341 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 19:50:53.540454 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 19:50:53.543815 systemd[1]: Stopped target paths.target.
Feb 9 19:50:53.544953 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 19:50:53.548979 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 19:50:53.549432 systemd[1]: Stopped target slices.target.
Feb 9 19:50:53.549668 systemd[1]: Stopped target sockets.target.
Feb 9 19:50:53.549911 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 19:50:53.549976 systemd[1]: Closed iscsid.socket.
Feb 9 19:50:53.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.552916 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 19:50:53.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.552984 systemd[1]: Closed iscsiuio.socket.
Feb 9 19:50:53.554100 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 19:50:53.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.554179 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 19:50:53.554507 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 19:50:53.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.564561 ignition[863]: INFO : Ignition 2.14.0
Feb 9 19:50:53.564561 ignition[863]: INFO : Stage: umount
Feb 9 19:50:53.564561 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 19:50:53.564561 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 19:50:53.554575 systemd[1]: Stopped ignition-files.service.
Feb 9 19:50:53.569030 ignition[863]: INFO : umount: umount passed
Feb 9 19:50:53.569030 ignition[863]: INFO : Ignition finished successfully
Feb 9 19:50:53.556794 systemd[1]: Stopping ignition-mount.service...
Feb 9 19:50:53.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.557129 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 19:50:53.557214 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 19:50:53.558702 systemd[1]: Stopping sysroot-boot.service...
Feb 9 19:50:53.559736 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:50:53.561306 systemd[1]: Stopping systemd-resolved.service...
Feb 9 19:50:53.562259 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 19:50:53.562407 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 19:50:53.563194 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 19:50:53.563324 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 19:50:53.569058 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 19:50:53.570326 systemd[1]: Stopped systemd-resolved.service.
Feb 9 19:50:53.576003 systemd-networkd[705]: eth0: DHCPv6 lease lost
Feb 9 19:50:53.580456 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 19:50:53.581552 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:50:53.582337 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:50:53.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.584182 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 19:50:53.584250 systemd[1]: Stopped ignition-mount.service.
Feb 9 19:50:53.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.586321 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 19:50:53.587050 systemd[1]: Stopped sysroot-boot.service.
Feb 9 19:50:53.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.587000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 19:50:53.587000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 19:50:53.588513 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 19:50:53.589264 systemd[1]: Closed systemd-networkd.socket.
Feb 9 19:50:53.590406 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 19:50:53.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.590437 systemd[1]: Stopped ignition-disks.service.
Feb 9 19:50:53.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.591752 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 19:50:53.591782 systemd[1]: Stopped ignition-kargs.service.
Feb 9 19:50:53.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.592958 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 19:50:53.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.592987 systemd[1]: Stopped ignition-setup.service.
Feb 9 19:50:53.594308 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 19:50:53.594861 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 19:50:53.598669 systemd[1]: Stopping network-cleanup.service...
Feb 9 19:50:53.599256 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 19:50:53.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.599915 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 19:50:53.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.601355 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:50:53.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.601386 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:50:53.602753 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 19:50:53.602782 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 19:50:53.604241 systemd[1]: Stopping systemd-udevd.service...
Feb 9 19:50:53.607873 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 19:50:53.609186 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 19:50:53.609960 systemd[1]: Finished initrd-cleanup.service.
Feb 9 19:50:53.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.611373 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 19:50:53.612178 systemd[1]: Stopped systemd-udevd.service.
Feb 9 19:50:53.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.614785 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 19:50:53.614820 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 19:50:53.616290 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 19:50:53.616314 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 19:50:53.618209 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 19:50:53.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.618764 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 19:50:53.620073 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 19:50:53.620102 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 19:50:53.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.622261 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 19:50:53.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.622291 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 19:50:53.624781 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 19:50:53.626053 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 19:50:53.626090 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 19:50:53.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.628182 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 19:50:53.628970 systemd[1]: Stopped network-cleanup.service.
Feb 9 19:50:53.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.630273 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 19:50:53.631117 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 19:50:53.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:50:53.632550 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 19:50:53.634413 systemd[1]: Starting initrd-switch-root.service...
Feb 9 19:50:53.649634 systemd[1]: Switching root.
Feb 9 19:50:53.668230 iscsid[718]: iscsid shutting down.
Feb 9 19:50:53.668768 systemd-journald[198]: Journal stopped
Feb 9 19:50:56.442239 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
Feb 9 19:50:56.442305 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 19:50:56.442317 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 19:50:56.442330 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 19:50:56.442339 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 19:50:56.442348 kernel: SELinux: policy capability open_perms=1
Feb 9 19:50:56.442359 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 19:50:56.442373 kernel: SELinux: policy capability always_check_network=0
Feb 9 19:50:56.442382 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 19:50:56.442392 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 19:50:56.442401 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 19:50:56.442410 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 19:50:56.442423 systemd[1]: Successfully loaded SELinux policy in 36.005ms.
Feb 9 19:50:56.442442 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.755ms.
Feb 9 19:50:56.442453 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:50:56.442465 systemd[1]: Detected virtualization kvm.
Feb 9 19:50:56.442475 systemd[1]: Detected architecture x86-64.
Feb 9 19:50:56.442490 systemd[1]: Detected first boot.
Feb 9 19:50:56.442499 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 19:50:56.442509 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 19:50:56.442520 systemd[1]: Populated /etc with preset unit settings.
Feb 9 19:50:56.442531 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:50:56.442542 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:50:56.442553 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:50:56.442564 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 19:50:56.442581 systemd[1]: Stopped iscsiuio.service.
Feb 9 19:50:56.442591 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 19:50:56.442603 systemd[1]: Stopped iscsid.service.
Feb 9 19:50:56.442613 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 19:50:56.442624 systemd[1]: Stopped initrd-switch-root.service.
Feb 9 19:50:56.442633 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 19:50:56.442644 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 19:50:56.442653 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 19:50:56.442664 systemd[1]: Created slice system-getty.slice.
Feb 9 19:50:56.442674 systemd[1]: Created slice system-modprobe.slice.
Feb 9 19:50:56.442684 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 19:50:56.442696 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 19:50:56.442706 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 19:50:56.442716 systemd[1]: Created slice user.slice.
Feb 9 19:50:56.442726 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:50:56.442736 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 19:50:56.442746 systemd[1]: Set up automount boot.automount.
Feb 9 19:50:56.442759 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 19:50:56.442769 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 19:50:56.442780 systemd[1]: Stopped target initrd-fs.target.
Feb 9 19:50:56.442790 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 19:50:56.442800 systemd[1]: Reached target integritysetup.target.
Feb 9 19:50:56.442810 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:50:56.442820 systemd[1]: Reached target remote-fs.target.
Feb 9 19:50:56.442830 systemd[1]: Reached target slices.target.
Feb 9 19:50:56.442840 systemd[1]: Reached target swap.target.
Feb 9 19:50:56.442849 systemd[1]: Reached target torcx.target.
Feb 9 19:50:56.442859 systemd[1]: Reached target veritysetup.target.
Feb 9 19:50:56.442870 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 19:50:56.442882 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 19:50:56.442891 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:50:56.442902 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:50:56.442912 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:50:56.442922 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 19:50:56.442950 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 19:50:56.442961 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 19:50:56.442971 systemd[1]: Mounting media.mount...
Feb 9 19:50:56.442981 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:50:56.442993 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 19:50:56.443003 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 19:50:56.443012 systemd[1]: Mounting tmp.mount...
Feb 9 19:50:56.443032 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 19:50:56.443043 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 19:50:56.443052 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:50:56.443062 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:50:56.443072 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:50:56.443082 systemd[1]: Starting modprobe@drm.service... Feb 9 19:50:56.443095 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:50:56.443105 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:50:56.443115 systemd[1]: Starting modprobe@loop.service... Feb 9 19:50:56.443126 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:50:56.443136 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 19:50:56.443146 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 19:50:56.443156 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 19:50:56.443167 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 19:50:56.443176 kernel: loop: module loaded Feb 9 19:50:56.443187 systemd[1]: Stopped systemd-journald.service. Feb 9 19:50:56.443197 systemd[1]: Starting systemd-journald.service... Feb 9 19:50:56.443207 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:50:56.443217 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:50:56.443227 kernel: fuse: init (API version 7.34) Feb 9 19:50:56.443236 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:50:56.443246 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:50:56.443256 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 19:50:56.443266 systemd[1]: Stopped verity-setup.service. Feb 9 19:50:56.443277 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:50:56.443287 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:50:56.443297 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:50:56.443306 systemd[1]: Mounted media.mount. 
Feb 9 19:50:56.443316 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:50:56.443326 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:50:56.443336 systemd[1]: Mounted tmp.mount. Feb 9 19:50:56.443349 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:50:56.443358 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:50:56.443368 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:50:56.443378 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:50:56.443388 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:50:56.443404 systemd-journald[974]: Journal started Feb 9 19:50:56.443458 systemd-journald[974]: Runtime Journal (/run/log/journal/4777902f983d4db9a1f9eb82c56041e6) is 6.0M, max 48.4M, 42.4M free. Feb 9 19:50:53.722000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 19:50:54.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:50:54.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:50:54.287000 audit: BPF prog-id=10 op=LOAD Feb 9 19:50:54.287000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:50:54.287000 audit: BPF prog-id=11 op=LOAD Feb 9 19:50:54.287000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:50:54.319000 audit[897]: AVC avc: denied { associate } for pid=897 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:50:54.319000 audit[897]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001858e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=880 pid=897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:50:54.319000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:50:54.321000 audit[897]: AVC avc: denied { associate } for pid=897 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 19:50:54.321000 audit[897]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001859b9 a2=1ed a3=0 items=2 ppid=880 pid=897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:50:54.321000 audit: CWD cwd="/" Feb 9 19:50:54.321000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:54.321000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:54.321000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:50:56.326000 audit: BPF prog-id=12 op=LOAD Feb 9 19:50:56.326000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:50:56.326000 audit: 
BPF prog-id=13 op=LOAD Feb 9 19:50:56.326000 audit: BPF prog-id=14 op=LOAD Feb 9 19:50:56.326000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:50:56.326000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:50:56.327000 audit: BPF prog-id=15 op=LOAD Feb 9 19:50:56.327000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:50:56.327000 audit: BPF prog-id=16 op=LOAD Feb 9 19:50:56.327000 audit: BPF prog-id=17 op=LOAD Feb 9 19:50:56.327000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:50:56.327000 audit: BPF prog-id=14 op=UNLOAD Feb 9 19:50:56.328000 audit: BPF prog-id=18 op=LOAD Feb 9 19:50:56.328000 audit: BPF prog-id=15 op=UNLOAD Feb 9 19:50:56.328000 audit: BPF prog-id=19 op=LOAD Feb 9 19:50:56.328000 audit: BPF prog-id=20 op=LOAD Feb 9 19:50:56.328000 audit: BPF prog-id=16 op=UNLOAD Feb 9 19:50:56.328000 audit: BPF prog-id=17 op=UNLOAD Feb 9 19:50:56.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:50:56.343000 audit: BPF prog-id=18 op=UNLOAD Feb 9 19:50:56.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.414000 audit: BPF prog-id=21 op=LOAD Feb 9 19:50:56.414000 audit: BPF prog-id=22 op=LOAD Feb 9 19:50:56.414000 audit: BPF prog-id=23 op=LOAD Feb 9 19:50:56.414000 audit: BPF prog-id=19 op=UNLOAD Feb 9 19:50:56.414000 audit: BPF prog-id=20 op=UNLOAD Feb 9 19:50:56.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:50:56.437000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:50:56.437000 audit[974]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdf310eec0 a2=4000 a3=7ffdf310ef5c items=0 ppid=1 pid=974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:50:56.437000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:50:56.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:50:54.319716 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:50:56.325846 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:50:54.319905 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:54Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:50:56.325855 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 19:50:54.319920 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:54Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:50:56.329803 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 9 19:50:54.319962 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:54Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 19:50:54.319971 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:54Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 19:50:54.319998 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:54Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 19:50:54.320009 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:54Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 19:50:54.320178 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:54Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 19:50:54.320208 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:54Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:50:54.320219 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:54Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:50:54.320472 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:54Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 19:50:54.320503 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:54Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 19:50:54.320517 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:54Z" 
level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 19:50:54.320530 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:54Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 19:50:54.320543 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:54Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 19:50:54.320554 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:54Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 19:50:56.445956 systemd[1]: Started systemd-journald.service. Feb 9 19:50:56.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:50:56.081361 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:56Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:50:56.081610 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:56Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:50:56.081700 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:56Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:50:56.081837 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:56Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:50:56.081880 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:56Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 19:50:56.081941 /usr/lib/systemd/system-generators/torcx-generator[897]: time="2024-02-09T19:50:56Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 19:50:56.446509 systemd[1]: Finished flatcar-tmpfiles.service. 
Feb 9 19:50:56.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.447279 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:50:56.447434 systemd[1]: Finished modprobe@drm.service. Feb 9 19:50:56.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.448186 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:50:56.448375 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:50:56.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.449175 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:50:56.449335 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:50:56.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:50:56.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.450259 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:50:56.450431 systemd[1]: Finished modprobe@loop.service. Feb 9 19:50:56.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.451306 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:50:56.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.452208 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:50:56.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.453288 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:50:56.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.454421 systemd[1]: Reached target network-pre.target. Feb 9 19:50:56.456009 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Feb 9 19:50:56.457614 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:50:56.458219 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:50:56.460503 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:50:56.462195 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:50:56.462863 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:50:56.468526 systemd-journald[974]: Time spent on flushing to /var/log/journal/4777902f983d4db9a1f9eb82c56041e6 is 16.502ms for 1169 entries. Feb 9 19:50:56.468526 systemd-journald[974]: System Journal (/var/log/journal/4777902f983d4db9a1f9eb82c56041e6) is 8.0M, max 195.6M, 187.6M free. Feb 9 19:50:56.497586 systemd-journald[974]: Received client request to flush runtime journal. Feb 9 19:50:56.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.463884 systemd[1]: Starting systemd-random-seed.service... 
Feb 9 19:50:56.464503 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:50:56.465435 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:50:56.467296 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:50:56.498263 udevadm[1000]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 19:50:56.470202 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:50:56.471044 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:50:56.471764 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:50:56.473259 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:50:56.474010 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:50:56.474739 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:50:56.480520 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:50:56.483498 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:50:56.498225 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:50:56.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.892011 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:50:56.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.892000 audit: BPF prog-id=24 op=LOAD Feb 9 19:50:56.892000 audit: BPF prog-id=25 op=LOAD Feb 9 19:50:56.892000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:50:56.892000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:50:56.894081 systemd[1]: Starting systemd-udevd.service... Feb 9 19:50:56.913404 systemd-udevd[1003]: Using default interface naming scheme 'v252'. 
Feb 9 19:50:56.926090 systemd[1]: Started systemd-udevd.service. Feb 9 19:50:56.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.929000 audit: BPF prog-id=26 op=LOAD Feb 9 19:50:56.931150 systemd[1]: Starting systemd-networkd.service... Feb 9 19:50:56.934000 audit: BPF prog-id=27 op=LOAD Feb 9 19:50:56.934000 audit: BPF prog-id=28 op=LOAD Feb 9 19:50:56.934000 audit: BPF prog-id=29 op=LOAD Feb 9 19:50:56.936170 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:50:56.954543 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 9 19:50:56.965513 systemd[1]: Started systemd-userdbd.service. Feb 9 19:50:56.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:56.975437 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:50:57.003875 systemd-networkd[1015]: lo: Link UP Feb 9 19:50:57.003884 systemd-networkd[1015]: lo: Gained carrier Feb 9 19:50:57.004317 systemd-networkd[1015]: Enumeration completed Feb 9 19:50:57.004405 systemd[1]: Started systemd-networkd.service. Feb 9 19:50:57.004919 systemd-networkd[1015]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:50:57.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:50:57.005681 systemd-networkd[1015]: eth0: Link UP Feb 9 19:50:57.005689 systemd-networkd[1015]: eth0: Gained carrier Feb 9 19:50:57.012964 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 19:50:57.006000 audit[1006]: AVC avc: denied { confidentiality } for pid=1006 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:50:57.016946 kernel: ACPI: button: Power Button [PWRF] Feb 9 19:50:57.020047 systemd-networkd[1015]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 19:50:57.006000 audit[1006]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5641b8a38160 a1=32194 a2=7f6582d49bc5 a3=5 items=108 ppid=1003 pid=1006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:50:57.006000 audit: CWD cwd="/" Feb 9 19:50:57.006000 audit: PATH item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=1 name=(null) inode=13788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=2 name=(null) inode=13788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=3 name=(null) inode=13789 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=4 name=(null) inode=13788 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=5 name=(null) inode=13790 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=6 name=(null) inode=13788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=7 name=(null) inode=13791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=8 name=(null) inode=13791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=9 name=(null) inode=13792 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=10 name=(null) inode=13791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=11 name=(null) inode=13793 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=12 name=(null) inode=13791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=13 name=(null) inode=13794 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=14 name=(null) inode=13791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=15 name=(null) inode=13795 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=16 name=(null) inode=13791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=17 name=(null) inode=13796 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=18 name=(null) inode=13788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=19 name=(null) inode=13797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=20 name=(null) inode=13797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=21 name=(null) inode=13798 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=22 name=(null) inode=13797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=23 name=(null) inode=13799 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=24 name=(null) inode=13797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=25 name=(null) inode=13800 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=26 name=(null) inode=13797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=27 name=(null) inode=13801 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=28 name=(null) inode=13797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=29 name=(null) inode=13802 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=30 name=(null) inode=13788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=31 name=(null) inode=13803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 9 19:50:57.006000 audit: PATH item=32 name=(null) inode=13803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=33 name=(null) inode=13804 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=34 name=(null) inode=13803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=35 name=(null) inode=13805 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=36 name=(null) inode=13803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=37 name=(null) inode=13806 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=38 name=(null) inode=13803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=39 name=(null) inode=13807 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=40 name=(null) inode=13803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=41 
name=(null) inode=13808 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=42 name=(null) inode=13788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=43 name=(null) inode=13809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=44 name=(null) inode=13809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=45 name=(null) inode=13810 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=46 name=(null) inode=13809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=47 name=(null) inode=13811 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=48 name=(null) inode=13809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=49 name=(null) inode=13812 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=50 name=(null) inode=13809 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=51 name=(null) inode=13813 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=52 name=(null) inode=13809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=53 name=(null) inode=13814 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=54 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=55 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=56 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=57 name=(null) inode=13816 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=58 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=59 name=(null) inode=13817 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=60 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=61 name=(null) inode=13818 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=62 name=(null) inode=13818 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=63 name=(null) inode=13819 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=64 name=(null) inode=13818 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=65 name=(null) inode=13820 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=66 name=(null) inode=13818 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=67 name=(null) inode=13821 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=68 name=(null) inode=13818 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=69 name=(null) inode=13822 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=70 name=(null) inode=13818 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=71 name=(null) inode=13823 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=72 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=73 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=74 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=75 name=(null) inode=13825 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=76 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=77 name=(null) inode=13826 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 9 19:50:57.006000 audit: PATH item=78 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=79 name=(null) inode=13827 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=80 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=81 name=(null) inode=13828 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=82 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=83 name=(null) inode=13829 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=84 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=85 name=(null) inode=13830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.054950 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 9 19:50:57.006000 audit: PATH item=86 name=(null) inode=13830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=87 name=(null) inode=13831 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=88 name=(null) inode=13830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=89 name=(null) inode=13832 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=90 name=(null) inode=13830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=91 name=(null) inode=13833 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=92 name=(null) inode=13830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=93 name=(null) inode=13834 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=94 name=(null) inode=13830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=95 name=(null) inode=13835 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=96 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=97 name=(null) inode=13836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=98 name=(null) inode=13836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=99 name=(null) inode=13837 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=100 name=(null) inode=13836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=101 name=(null) inode=13838 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=102 name=(null) inode=13836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=103 name=(null) inode=13839 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=104 name=(null) inode=13836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 
audit: PATH item=105 name=(null) inode=13840 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=106 name=(null) inode=13836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PATH item=107 name=(null) inode=13841 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:50:57.006000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:50:57.060944 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Feb 9 19:50:57.061147 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:50:57.098153 kernel: kvm: Nested Virtualization enabled Feb 9 19:50:57.098235 kernel: SVM: kvm: Nested Paging enabled Feb 9 19:50:57.098249 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 9 19:50:57.098262 kernel: SVM: Virtual GIF supported Feb 9 19:50:57.111002 kernel: EDAC MC: Ver: 3.0.0 Feb 9 19:50:57.129383 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:50:57.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:57.131516 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:50:57.137705 lvm[1039]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:50:57.166692 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:50:57.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:50:57.167525 systemd[1]: Reached target cryptsetup.target. Feb 9 19:50:57.169100 systemd[1]: Starting lvm2-activation.service... Feb 9 19:50:57.172005 lvm[1040]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:50:57.196119 systemd[1]: Finished lvm2-activation.service. Feb 9 19:50:57.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:57.196902 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:50:57.197514 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:50:57.197535 systemd[1]: Reached target local-fs.target. Feb 9 19:50:57.198147 systemd[1]: Reached target machines.target. Feb 9 19:50:57.199638 systemd[1]: Starting ldconfig.service... Feb 9 19:50:57.200306 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:50:57.200343 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:50:57.201192 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:50:57.202672 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:50:57.204481 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:50:57.205389 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:50:57.205439 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:50:57.206460 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Feb 9 19:50:57.207660 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1042 (bootctl) Feb 9 19:50:57.208762 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:50:57.218569 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:50:57.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:57.222732 systemd-tmpfiles[1045]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:50:57.224887 systemd-tmpfiles[1045]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:50:57.228414 systemd-tmpfiles[1045]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:50:57.245090 systemd-fsck[1050]: fsck.fat 4.2 (2021-01-31) Feb 9 19:50:57.245090 systemd-fsck[1050]: /dev/vda1: 790 files, 115362/258078 clusters Feb 9 19:50:57.248384 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:50:57.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:57.252433 systemd[1]: Mounting boot.mount... Feb 9 19:50:57.409410 systemd[1]: Mounted boot.mount. Feb 9 19:50:57.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:57.415466 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:50:57.420169 systemd[1]: Finished systemd-boot-update.service. 
Feb 9 19:50:57.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:57.431801 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:50:57.468466 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:50:57.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:57.470472 systemd[1]: Starting audit-rules.service... Feb 9 19:50:57.471151 ldconfig[1041]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:50:57.471858 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:50:57.473190 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:50:57.473000 audit: BPF prog-id=30 op=LOAD Feb 9 19:50:57.475244 systemd[1]: Starting systemd-resolved.service... Feb 9 19:50:57.475000 audit: BPF prog-id=31 op=LOAD Feb 9 19:50:57.476912 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:50:57.478205 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:50:57.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:57.479228 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:50:57.480597 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:50:57.484634 systemd[1]: Finished ldconfig.service. 
Feb 9 19:50:57.484000 audit[1061]: SYSTEM_BOOT pid=1061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:50:57.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:57.488064 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:50:57.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:57.492460 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:50:57.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:50:57.494239 systemd[1]: Starting systemd-update-done.service... Feb 9 19:50:57.499856 systemd[1]: Finished systemd-update-done.service. Feb 9 19:50:57.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:50:57.502000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:50:57.502000 audit[1075]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd7db48d30 a2=420 a3=0 items=0 ppid=1053 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:50:57.502000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:50:57.504067 augenrules[1075]: No rules Feb 9 19:50:57.504700 systemd[1]: Finished audit-rules.service. Feb 9 19:50:57.521161 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:50:57.521987 systemd[1]: Reached target time-set.target. Feb 9 19:50:57.016983 systemd-timesyncd[1058]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 19:50:57.033606 systemd-journald[974]: Time jumped backwards, rotating. Feb 9 19:50:57.017022 systemd-timesyncd[1058]: Initial clock synchronization to Fri 2024-02-09 19:50:57.016911 UTC. Feb 9 19:50:57.020369 systemd-resolved[1057]: Positive Trust Anchors: Feb 9 19:50:57.020376 systemd-resolved[1057]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:50:57.020403 systemd-resolved[1057]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:50:57.028453 systemd-resolved[1057]: Defaulting to hostname 'linux'. 
Feb 9 19:50:57.029744 systemd[1]: Started systemd-resolved.service. Feb 9 19:50:57.030539 systemd[1]: Reached target network.target. Feb 9 19:50:57.031220 systemd[1]: Reached target nss-lookup.target. Feb 9 19:50:57.031917 systemd[1]: Reached target sysinit.target. Feb 9 19:50:57.032754 systemd[1]: Started motdgen.path. Feb 9 19:50:57.033525 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:50:57.034523 systemd[1]: Started logrotate.timer. Feb 9 19:50:57.035272 systemd[1]: Started mdadm.timer. Feb 9 19:50:57.035795 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:50:57.036458 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:50:57.036480 systemd[1]: Reached target paths.target. Feb 9 19:50:57.037075 systemd[1]: Reached target timers.target. Feb 9 19:50:57.038024 systemd[1]: Listening on dbus.socket. Feb 9 19:50:57.039830 systemd[1]: Starting docker.socket... Feb 9 19:50:57.042399 systemd[1]: Listening on sshd.socket. Feb 9 19:50:57.043228 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:50:57.043635 systemd[1]: Listening on docker.socket. Feb 9 19:50:57.044423 systemd[1]: Reached target sockets.target. Feb 9 19:50:57.045138 systemd[1]: Reached target basic.target. Feb 9 19:50:57.045894 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:50:57.045922 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:50:57.046728 systemd[1]: Starting containerd.service... Feb 9 19:50:57.048244 systemd[1]: Starting dbus.service... Feb 9 19:50:57.049830 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:50:57.051577 systemd[1]: Starting extend-filesystems.service... 
Feb 9 19:50:57.052419 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:50:57.053453 systemd[1]: Starting motdgen.service... Feb 9 19:50:57.054829 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:50:57.056480 systemd[1]: Starting prepare-critools.service... Feb 9 19:50:57.057219 jq[1086]: false Feb 9 19:50:57.059714 dbus-daemon[1085]: [system] SELinux support is enabled Feb 9 19:50:57.059872 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:50:57.061355 systemd[1]: Starting sshd-keygen.service... Feb 9 19:50:57.063761 systemd[1]: Starting systemd-logind.service... Feb 9 19:50:57.064353 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:50:57.064394 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:50:57.064702 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 19:50:57.065278 systemd[1]: Starting update-engine.service... Feb 9 19:50:57.066657 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
Feb 9 19:50:57.066764 extend-filesystems[1087]: Found sr0 Feb 9 19:50:57.066764 extend-filesystems[1087]: Found vda Feb 9 19:50:57.066764 extend-filesystems[1087]: Found vda1 Feb 9 19:50:57.066764 extend-filesystems[1087]: Found vda2 Feb 9 19:50:57.076142 extend-filesystems[1087]: Found vda3 Feb 9 19:50:57.076142 extend-filesystems[1087]: Found usr Feb 9 19:50:57.076142 extend-filesystems[1087]: Found vda4 Feb 9 19:50:57.076142 extend-filesystems[1087]: Found vda6 Feb 9 19:50:57.076142 extend-filesystems[1087]: Found vda7 Feb 9 19:50:57.076142 extend-filesystems[1087]: Found vda9 Feb 9 19:50:57.076142 extend-filesystems[1087]: Checking size of /dev/vda9 Feb 9 19:50:57.067973 systemd[1]: Started dbus.service. Feb 9 19:50:57.071127 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:50:57.085701 jq[1105]: true Feb 9 19:50:57.071282 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:50:57.072410 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:50:57.086016 tar[1110]: ./ Feb 9 19:50:57.086016 tar[1110]: ./loopback Feb 9 19:50:57.072537 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:50:57.086259 tar[1111]: crictl Feb 9 19:50:57.075933 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:50:57.075954 systemd[1]: Reached target system-config.target. Feb 9 19:50:57.076464 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:50:57.076475 systemd[1]: Reached target user-config.target. Feb 9 19:50:57.087479 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:50:57.087612 systemd[1]: Finished motdgen.service. 
Feb 9 19:50:57.091367 jq[1113]: true Feb 9 19:50:57.099191 update_engine[1102]: I0209 19:50:57.099001 1102 main.cc:92] Flatcar Update Engine starting Feb 9 19:50:57.100724 systemd[1]: Started update-engine.service. Feb 9 19:50:57.100934 update_engine[1102]: I0209 19:50:57.100804 1102 update_check_scheduler.cc:74] Next update check in 8m50s Feb 9 19:50:57.102907 systemd[1]: Started locksmithd.service. Feb 9 19:50:57.107836 extend-filesystems[1087]: Resized partition /dev/vda9 Feb 9 19:50:57.109627 extend-filesystems[1136]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:50:57.117220 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 19:50:57.119326 env[1114]: time="2024-02-09T19:50:57.118236205Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:50:57.137355 systemd-logind[1101]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 19:50:57.137382 systemd-logind[1101]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:50:57.142197 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 19:50:57.142742 systemd-logind[1101]: New seat seat0. Feb 9 19:50:57.145733 systemd[1]: Started systemd-logind.service. Feb 9 19:50:57.156115 env[1114]: time="2024-02-09T19:50:57.155966640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:50:57.159271 tar[1110]: ./bandwidth Feb 9 19:50:57.160609 extend-filesystems[1136]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 19:50:57.160609 extend-filesystems[1136]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 19:50:57.160609 extend-filesystems[1136]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Feb 9 19:50:57.164110 extend-filesystems[1087]: Resized filesystem in /dev/vda9 Feb 9 19:50:57.164951 bash[1140]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:50:57.165057 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:50:57.165263 systemd[1]: Finished extend-filesystems.service. Feb 9 19:50:57.165507 env[1114]: time="2024-02-09T19:50:57.165324586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:50:57.166415 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:50:57.171838 env[1114]: time="2024-02-09T19:50:57.171039211Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:50:57.171838 env[1114]: time="2024-02-09T19:50:57.171067735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:50:57.171838 env[1114]: time="2024-02-09T19:50:57.171276376Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:50:57.171838 env[1114]: time="2024-02-09T19:50:57.171297465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:50:57.171838 env[1114]: time="2024-02-09T19:50:57.171309869Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:50:57.171838 env[1114]: time="2024-02-09T19:50:57.171318795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Feb 9 19:50:57.171838 env[1114]: time="2024-02-09T19:50:57.171381242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:50:57.171838 env[1114]: time="2024-02-09T19:50:57.171553335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:50:57.171838 env[1114]: time="2024-02-09T19:50:57.171662620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:50:57.171838 env[1114]: time="2024-02-09T19:50:57.171675534Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:50:57.172267 env[1114]: time="2024-02-09T19:50:57.171712774Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:50:57.172267 env[1114]: time="2024-02-09T19:50:57.171722592Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:50:57.177199 env[1114]: time="2024-02-09T19:50:57.176584859Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:50:57.177199 env[1114]: time="2024-02-09T19:50:57.176610908Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:50:57.177199 env[1114]: time="2024-02-09T19:50:57.176622690Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:50:57.177199 env[1114]: time="2024-02-09T19:50:57.176649300Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 9 19:50:57.177199 env[1114]: time="2024-02-09T19:50:57.176662274Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:50:57.177199 env[1114]: time="2024-02-09T19:50:57.176674277Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:50:57.177199 env[1114]: time="2024-02-09T19:50:57.176685057Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:50:57.177199 env[1114]: time="2024-02-09T19:50:57.176697019Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:50:57.177199 env[1114]: time="2024-02-09T19:50:57.176708521Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:50:57.177199 env[1114]: time="2024-02-09T19:50:57.176720233Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:50:57.177199 env[1114]: time="2024-02-09T19:50:57.176730803Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:50:57.177199 env[1114]: time="2024-02-09T19:50:57.176740671Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:50:57.177199 env[1114]: time="2024-02-09T19:50:57.176825681Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:50:57.177199 env[1114]: time="2024-02-09T19:50:57.176892616Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:50:57.177450 env[1114]: time="2024-02-09T19:50:57.177091950Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 9 19:50:57.177450 env[1114]: time="2024-02-09T19:50:57.177112288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:50:57.177450 env[1114]: time="2024-02-09T19:50:57.177124541Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:50:57.177450 env[1114]: time="2024-02-09T19:50:57.177162853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:50:57.179505 env[1114]: time="2024-02-09T19:50:57.177174124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:50:57.179505 env[1114]: time="2024-02-09T19:50:57.177555028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:50:57.179505 env[1114]: time="2024-02-09T19:50:57.177567772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:50:57.179505 env[1114]: time="2024-02-09T19:50:57.177578883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:50:57.179505 env[1114]: time="2024-02-09T19:50:57.177589513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:50:57.179505 env[1114]: time="2024-02-09T19:50:57.177608258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:50:57.179505 env[1114]: time="2024-02-09T19:50:57.177627494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:50:57.179505 env[1114]: time="2024-02-09T19:50:57.177640589Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 9 19:50:57.179505 env[1114]: time="2024-02-09T19:50:57.177735787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:50:57.179505 env[1114]: time="2024-02-09T19:50:57.177748301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:50:57.179505 env[1114]: time="2024-02-09T19:50:57.177758379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:50:57.179505 env[1114]: time="2024-02-09T19:50:57.177768108Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:50:57.179505 env[1114]: time="2024-02-09T19:50:57.177780721Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:50:57.179505 env[1114]: time="2024-02-09T19:50:57.177790780Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:50:57.178875 systemd[1]: Started containerd.service. Feb 9 19:50:57.179822 env[1114]: time="2024-02-09T19:50:57.177813212Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:50:57.179822 env[1114]: time="2024-02-09T19:50:57.177843980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 19:50:57.179863 env[1114]: time="2024-02-09T19:50:57.178009190Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:50:57.179863 env[1114]: time="2024-02-09T19:50:57.178053322Z" level=info msg="Connect containerd service" Feb 9 19:50:57.179863 env[1114]: time="2024-02-09T19:50:57.178081014Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:50:57.179863 env[1114]: time="2024-02-09T19:50:57.178512233Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:50:57.179863 env[1114]: time="2024-02-09T19:50:57.178743617Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:50:57.179863 env[1114]: time="2024-02-09T19:50:57.178780947Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 19:50:57.179863 env[1114]: time="2024-02-09T19:50:57.178823156Z" level=info msg="containerd successfully booted in 0.063690s" Feb 9 19:50:57.181970 env[1114]: time="2024-02-09T19:50:57.181941843Z" level=info msg="Start subscribing containerd event" Feb 9 19:50:57.183934 env[1114]: time="2024-02-09T19:50:57.183916596Z" level=info msg="Start recovering state" Feb 9 19:50:57.184071 env[1114]: time="2024-02-09T19:50:57.184047822Z" level=info msg="Start event monitor" Feb 9 19:50:57.187395 env[1114]: time="2024-02-09T19:50:57.187379519Z" level=info msg="Start snapshots syncer" Feb 9 19:50:57.187561 env[1114]: time="2024-02-09T19:50:57.187537766Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:50:57.187632 env[1114]: time="2024-02-09T19:50:57.187615792Z" level=info msg="Start streaming server" Feb 9 19:50:57.190477 tar[1110]: ./ptp Feb 9 19:50:57.212487 locksmithd[1130]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:50:57.225554 tar[1110]: ./vlan Feb 9 19:50:57.258399 tar[1110]: ./host-device Feb 9 19:50:57.290552 tar[1110]: ./tuning Feb 9 19:50:57.319025 tar[1110]: ./vrf Feb 9 19:50:57.348982 tar[1110]: ./sbr Feb 9 19:50:57.379665 tar[1110]: ./tap Feb 9 19:50:57.413343 tar[1110]: ./dhcp Feb 9 19:50:57.497637 tar[1110]: ./static Feb 9 19:50:57.521545 tar[1110]: ./firewall Feb 9 19:50:57.555191 systemd[1]: Finished prepare-critools.service. Feb 9 19:50:57.558025 tar[1110]: ./macvlan Feb 9 19:50:57.587361 tar[1110]: ./dummy Feb 9 19:50:57.616134 tar[1110]: ./bridge Feb 9 19:50:57.647945 tar[1110]: ./ipvlan Feb 9 19:50:57.677239 tar[1110]: ./portmap Feb 9 19:50:57.705175 tar[1110]: ./host-local Feb 9 19:50:57.737964 systemd[1]: Finished prepare-cni-plugins.service. 
Feb 9 19:50:57.758362 systemd-networkd[1015]: eth0: Gained IPv6LL Feb 9 19:50:58.189419 sshd_keygen[1108]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:50:58.207010 systemd[1]: Finished sshd-keygen.service. Feb 9 19:50:58.208939 systemd[1]: Starting issuegen.service... Feb 9 19:50:58.213187 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:50:58.213302 systemd[1]: Finished issuegen.service. Feb 9 19:50:58.214905 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:50:58.219974 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:50:58.221636 systemd[1]: Started getty@tty1.service. Feb 9 19:50:58.223069 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:50:58.223847 systemd[1]: Reached target getty.target. Feb 9 19:50:58.224470 systemd[1]: Reached target multi-user.target. Feb 9 19:50:58.225906 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:50:58.231890 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:50:58.231999 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:50:58.232792 systemd[1]: Startup finished in 521ms (kernel) + 5.011s (initrd) + 5.052s (userspace) = 10.585s. Feb 9 19:51:06.686565 systemd[1]: Created slice system-sshd.slice. Feb 9 19:51:06.687453 systemd[1]: Started sshd@0-10.0.0.131:22-10.0.0.1:53678.service. Feb 9 19:51:06.725230 sshd[1173]: Accepted publickey for core from 10.0.0.1 port 53678 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:51:06.726424 sshd[1173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:51:06.733934 systemd-logind[1101]: New session 1 of user core. Feb 9 19:51:06.734766 systemd[1]: Created slice user-500.slice. Feb 9 19:51:06.735864 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:51:06.742350 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:51:06.743553 systemd[1]: Starting user@500.service... 
Feb 9 19:51:06.745637 (systemd)[1176]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:51:06.809057 systemd[1176]: Queued start job for default target default.target. Feb 9 19:51:06.809449 systemd[1176]: Reached target paths.target. Feb 9 19:51:06.809467 systemd[1176]: Reached target sockets.target. Feb 9 19:51:06.809478 systemd[1176]: Reached target timers.target. Feb 9 19:51:06.809488 systemd[1176]: Reached target basic.target. Feb 9 19:51:06.809518 systemd[1176]: Reached target default.target. Feb 9 19:51:06.809538 systemd[1176]: Startup finished in 59ms. Feb 9 19:51:06.809693 systemd[1]: Started user@500.service. Feb 9 19:51:06.810711 systemd[1]: Started session-1.scope. Feb 9 19:51:06.859351 systemd[1]: Started sshd@1-10.0.0.131:22-10.0.0.1:53684.service. Feb 9 19:51:06.894255 sshd[1185]: Accepted publickey for core from 10.0.0.1 port 53684 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:51:06.895460 sshd[1185]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:51:06.899310 systemd-logind[1101]: New session 2 of user core. Feb 9 19:51:06.899967 systemd[1]: Started session-2.scope. Feb 9 19:51:06.951628 sshd[1185]: pam_unix(sshd:session): session closed for user core Feb 9 19:51:06.954141 systemd[1]: sshd@1-10.0.0.131:22-10.0.0.1:53684.service: Deactivated successfully. Feb 9 19:51:06.954698 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:51:06.955153 systemd-logind[1101]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:51:06.956252 systemd[1]: Started sshd@2-10.0.0.131:22-10.0.0.1:53696.service. Feb 9 19:51:06.956990 systemd-logind[1101]: Removed session 2. 
Feb 9 19:51:06.990435 sshd[1191]: Accepted publickey for core from 10.0.0.1 port 53696 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:51:06.991376 sshd[1191]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:51:06.994320 systemd-logind[1101]: New session 3 of user core. Feb 9 19:51:06.995004 systemd[1]: Started session-3.scope. Feb 9 19:51:07.044032 sshd[1191]: pam_unix(sshd:session): session closed for user core Feb 9 19:51:07.046430 systemd[1]: sshd@2-10.0.0.131:22-10.0.0.1:53696.service: Deactivated successfully. Feb 9 19:51:07.046878 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 19:51:07.047287 systemd-logind[1101]: Session 3 logged out. Waiting for processes to exit. Feb 9 19:51:07.048095 systemd[1]: Started sshd@3-10.0.0.131:22-10.0.0.1:53712.service. Feb 9 19:51:07.048697 systemd-logind[1101]: Removed session 3. Feb 9 19:51:07.083269 sshd[1197]: Accepted publickey for core from 10.0.0.1 port 53712 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:51:07.084256 sshd[1197]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:51:07.086809 systemd-logind[1101]: New session 4 of user core. Feb 9 19:51:07.087410 systemd[1]: Started session-4.scope. Feb 9 19:51:07.141005 sshd[1197]: pam_unix(sshd:session): session closed for user core Feb 9 19:51:07.144077 systemd[1]: Started sshd@4-10.0.0.131:22-10.0.0.1:53728.service. Feb 9 19:51:07.144512 systemd[1]: sshd@3-10.0.0.131:22-10.0.0.1:53712.service: Deactivated successfully. Feb 9 19:51:07.145001 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:51:07.145452 systemd-logind[1101]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:51:07.146120 systemd-logind[1101]: Removed session 4. 
Feb 9 19:51:07.179557 sshd[1202]: Accepted publickey for core from 10.0.0.1 port 53728 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:51:07.180559 sshd[1202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:51:07.183612 systemd-logind[1101]: New session 5 of user core. Feb 9 19:51:07.184390 systemd[1]: Started session-5.scope. Feb 9 19:51:07.237459 sudo[1206]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:51:07.237618 sudo[1206]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:51:07.743146 systemd[1]: Reloading. Feb 9 19:51:07.800674 /usr/lib/systemd/system-generators/torcx-generator[1235]: time="2024-02-09T19:51:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:51:07.800963 /usr/lib/systemd/system-generators/torcx-generator[1235]: time="2024-02-09T19:51:07Z" level=info msg="torcx already run" Feb 9 19:51:08.066666 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:51:08.066679 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:51:08.083003 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:51:08.150122 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:51:08.154483 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:51:08.154975 systemd[1]: Reached target network-online.target. 
Feb 9 19:51:08.156382 systemd[1]: Started kubelet.service. Feb 9 19:51:08.165126 systemd[1]: Starting coreos-metadata.service... Feb 9 19:51:08.172300 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 9 19:51:08.172475 systemd[1]: Finished coreos-metadata.service. Feb 9 19:51:08.200752 kubelet[1277]: E0209 19:51:08.200695 1277 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 19:51:08.203011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:51:08.203125 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:51:08.335383 systemd[1]: Stopped kubelet.service. Feb 9 19:51:08.349111 systemd[1]: Reloading. Feb 9 19:51:08.416023 /usr/lib/systemd/system-generators/torcx-generator[1344]: time="2024-02-09T19:51:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:51:08.416051 /usr/lib/systemd/system-generators/torcx-generator[1344]: time="2024-02-09T19:51:08Z" level=info msg="torcx already run" Feb 9 19:51:08.472471 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:51:08.472486 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 19:51:08.488561 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:51:08.559104 systemd[1]: Started kubelet.service. Feb 9 19:51:08.599073 kubelet[1385]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:51:08.599073 kubelet[1385]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 19:51:08.599073 kubelet[1385]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:51:08.599073 kubelet[1385]: I0209 19:51:08.599040 1385 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:51:08.721947 kubelet[1385]: I0209 19:51:08.721916 1385 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 19:51:08.721947 kubelet[1385]: I0209 19:51:08.721944 1385 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:51:08.722166 kubelet[1385]: I0209 19:51:08.722151 1385 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 19:51:08.724190 kubelet[1385]: I0209 19:51:08.724127 1385 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:51:08.728125 kubelet[1385]: I0209 19:51:08.728106 1385 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:51:08.728324 kubelet[1385]: I0209 19:51:08.728303 1385 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:51:08.728395 kubelet[1385]: I0209 19:51:08.728375 1385 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:51:08.728470 kubelet[1385]: I0209 19:51:08.728404 1385 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:51:08.728470 kubelet[1385]: I0209 19:51:08.728415 1385 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 19:51:08.728532 kubelet[1385]: I0209 19:51:08.728492 1385 state_mem.go:36] "Initialized new in-memory state store" Feb 9 
19:51:08.732589 kubelet[1385]: I0209 19:51:08.732573 1385 kubelet.go:405] "Attempting to sync node with API server" Feb 9 19:51:08.732589 kubelet[1385]: I0209 19:51:08.732591 1385 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:51:08.732678 kubelet[1385]: I0209 19:51:08.732609 1385 kubelet.go:309] "Adding apiserver pod source" Feb 9 19:51:08.732678 kubelet[1385]: I0209 19:51:08.732624 1385 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:51:08.732738 kubelet[1385]: E0209 19:51:08.732718 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:08.732761 kubelet[1385]: E0209 19:51:08.732753 1385 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:08.733020 kubelet[1385]: I0209 19:51:08.732998 1385 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:51:08.733283 kubelet[1385]: W0209 19:51:08.733272 1385 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:51:08.733668 kubelet[1385]: I0209 19:51:08.733651 1385 server.go:1168] "Started kubelet" Feb 9 19:51:08.733819 kubelet[1385]: I0209 19:51:08.733804 1385 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 19:51:08.733947 kubelet[1385]: I0209 19:51:08.733925 1385 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:51:08.734412 kubelet[1385]: E0209 19:51:08.734398 1385 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:51:08.734486 kubelet[1385]: E0209 19:51:08.734472 1385 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:51:08.734640 kubelet[1385]: I0209 19:51:08.734608 1385 server.go:461] "Adding debug handlers to kubelet server" Feb 9 19:51:08.735872 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 19:51:08.735981 kubelet[1385]: I0209 19:51:08.735949 1385 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:51:08.736034 kubelet[1385]: I0209 19:51:08.736015 1385 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 19:51:08.736490 kubelet[1385]: E0209 19:51:08.736475 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:08.737394 kubelet[1385]: I0209 19:51:08.737374 1385 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 19:51:08.743769 kubelet[1385]: W0209 19:51:08.743745 1385 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:51:08.743817 kubelet[1385]: E0209 19:51:08.743777 1385 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:51:08.743953 kubelet[1385]: W0209 19:51:08.743931 1385 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:51:08.743994 kubelet[1385]: E0209 19:51:08.743967 1385 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list 
*v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:51:08.744056 kubelet[1385]: W0209 19:51:08.744035 1385 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.131" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:51:08.744056 kubelet[1385]: E0209 19:51:08.744052 1385 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.131" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:51:08.744198 kubelet[1385]: E0209 19:51:08.744073 1385 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131.17b249b940e90957", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.131", UID:"10.0.0.131", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.131"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 733630807, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 733630807, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: 
User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:51:08.744372 kubelet[1385]: E0209 19:51:08.744352 1385 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.131\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 9 19:51:08.745346 kubelet[1385]: E0209 19:51:08.745298 1385 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131.17b249b940f5b578", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.131", UID:"10.0.0.131", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.131"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 734461304, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 734461304, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:51:08.756624 kubelet[1385]: I0209 19:51:08.756606 1385 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:51:08.756624 kubelet[1385]: I0209 19:51:08.756620 1385 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:51:08.756697 kubelet[1385]: I0209 19:51:08.756631 1385 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:51:08.759274 kubelet[1385]: I0209 19:51:08.759253 1385 policy_none.go:49] "None policy: Start" Feb 9 19:51:08.759414 kubelet[1385]: E0209 19:51:08.759338 1385 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131.17b249b9423fdfdb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.131", UID:"10.0.0.131", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.131 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.131"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 756099035, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 756099035, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:51:08.759670 kubelet[1385]: I0209 19:51:08.759656 1385 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:51:08.759670 kubelet[1385]: I0209 19:51:08.759672 1385 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:51:08.761287 kubelet[1385]: E0209 19:51:08.761224 1385 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131.17b249b9423feec4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.131", UID:"10.0.0.131", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.131 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.131"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 756102852, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 756102852, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:51:08.761892 kubelet[1385]: E0209 19:51:08.761856 1385 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131.17b249b9423ff634", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.131", UID:"10.0.0.131", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.131 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.131"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 756104756, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 756104756, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:51:08.763826 systemd[1]: Created slice kubepods.slice. Feb 9 19:51:08.766605 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 19:51:08.768656 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 9 19:51:08.775979 kubelet[1385]: I0209 19:51:08.775906 1385 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:51:08.776264 kubelet[1385]: I0209 19:51:08.776172 1385 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:51:08.777230 kubelet[1385]: E0209 19:51:08.776656 1385 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.131\" not found" Feb 9 19:51:08.777573 kubelet[1385]: E0209 19:51:08.777483 1385 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131.17b249b9437c4488", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.131", UID:"10.0.0.131", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.131"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 776834184, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 776834184, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:51:08.801618 kubelet[1385]: I0209 19:51:08.801592 1385 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:51:08.802303 kubelet[1385]: I0209 19:51:08.802279 1385 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 19:51:08.802303 kubelet[1385]: I0209 19:51:08.802307 1385 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 19:51:08.802442 kubelet[1385]: I0209 19:51:08.802329 1385 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 19:51:08.802442 kubelet[1385]: E0209 19:51:08.802366 1385 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:51:08.803583 kubelet[1385]: W0209 19:51:08.803566 1385 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:51:08.803672 kubelet[1385]: E0209 19:51:08.803659 1385 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:51:08.837317 kubelet[1385]: I0209 19:51:08.837291 1385 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.131" Feb 9 19:51:08.839813 kubelet[1385]: E0209 19:51:08.839790 1385 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.131" Feb 9 19:51:08.840584 kubelet[1385]: E0209 19:51:08.840534 1385 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131.17b249b9423fdfdb", 
GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.131", UID:"10.0.0.131", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.131 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.131"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 756099035, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 837252277, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.131.17b249b9423fdfdb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:51:08.841472 kubelet[1385]: E0209 19:51:08.841428 1385 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131.17b249b9423feec4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.131", UID:"10.0.0.131", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.131 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.131"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 756102852, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 837262306, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.131.17b249b9423feec4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:51:08.842147 kubelet[1385]: E0209 19:51:08.842083 1385 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131.17b249b9423ff634", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.131", UID:"10.0.0.131", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.131 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.131"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 756104756, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 837264741, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.131.17b249b9423ff634" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:51:08.945671 kubelet[1385]: E0209 19:51:08.945568 1385 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.131\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 9 19:51:09.040780 kubelet[1385]: I0209 19:51:09.040751 1385 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.131" Feb 9 19:51:09.041669 kubelet[1385]: E0209 19:51:09.041603 1385 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131.17b249b9423fdfdb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.131", UID:"10.0.0.131", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.131 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.131"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 756099035, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 51, 9, 40711073, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.131.17b249b9423fdfdb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:51:09.041980 kubelet[1385]: E0209 19:51:09.041962 1385 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.131" Feb 9 19:51:09.042630 kubelet[1385]: E0209 19:51:09.042585 1385 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131.17b249b9423feec4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.131", UID:"10.0.0.131", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.131 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.131"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 756102852, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 51, 9, 40716813, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.131.17b249b9423feec4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:51:09.043256 kubelet[1385]: E0209 19:51:09.043196 1385 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131.17b249b9423ff634", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.131", UID:"10.0.0.131", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.131 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.131"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 756104756, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 51, 9, 40722033, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.131.17b249b9423ff634" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:51:09.347321 kubelet[1385]: E0209 19:51:09.347196 1385 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.131\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 9 19:51:09.443299 kubelet[1385]: I0209 19:51:09.443261 1385 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.131" Feb 9 19:51:09.444461 kubelet[1385]: E0209 19:51:09.444426 1385 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.131" Feb 9 19:51:09.444602 kubelet[1385]: E0209 19:51:09.444430 1385 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131.17b249b9423fdfdb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.131", UID:"10.0.0.131", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.131 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.131"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 756099035, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 51, 9, 443220538, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.131.17b249b9423fdfdb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:51:09.445277 kubelet[1385]: E0209 19:51:09.445206 1385 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131.17b249b9423feec4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.131", UID:"10.0.0.131", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.131 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.131"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 756102852, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 51, 9, 443232280, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.131.17b249b9423feec4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:51:09.446041 kubelet[1385]: E0209 19:51:09.445995 1385 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131.17b249b9423ff634", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.131", UID:"10.0.0.131", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.131 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.131"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 51, 8, 756104756, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 51, 9, 443234815, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.131.17b249b9423ff634" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:51:09.594909 kubelet[1385]: W0209 19:51:09.594877 1385 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.131" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:51:09.594909 kubelet[1385]: E0209 19:51:09.594905 1385 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.131" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:51:09.647557 kubelet[1385]: W0209 19:51:09.647478 1385 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:51:09.647557 kubelet[1385]: E0209 19:51:09.647499 1385 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:51:09.708596 kubelet[1385]: W0209 19:51:09.708581 1385 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:51:09.708596 kubelet[1385]: E0209 19:51:09.708597 1385 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:51:09.723783 kubelet[1385]: I0209 19:51:09.723749 1385 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new 
credentials" Feb 9 19:51:09.732952 kubelet[1385]: E0209 19:51:09.732932 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:10.086125 kubelet[1385]: E0209 19:51:10.086024 1385 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.131" not found Feb 9 19:51:10.150088 kubelet[1385]: E0209 19:51:10.150051 1385 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.131\" not found" node="10.0.0.131" Feb 9 19:51:10.245929 kubelet[1385]: I0209 19:51:10.245896 1385 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.131" Feb 9 19:51:10.248395 kubelet[1385]: I0209 19:51:10.248379 1385 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.131" Feb 9 19:51:10.255949 kubelet[1385]: E0209 19:51:10.255915 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:10.356897 kubelet[1385]: E0209 19:51:10.356793 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:10.457428 kubelet[1385]: E0209 19:51:10.457384 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:10.557926 kubelet[1385]: E0209 19:51:10.557881 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:10.574805 sudo[1206]: pam_unix(sudo:session): session closed for user root Feb 9 19:51:10.576220 sshd[1202]: pam_unix(sshd:session): session closed for user core Feb 9 19:51:10.578315 systemd[1]: sshd@4-10.0.0.131:22-10.0.0.1:53728.service: Deactivated successfully. Feb 9 19:51:10.579139 systemd[1]: session-5.scope: Deactivated successfully. 
Feb 9 19:51:10.579809 systemd-logind[1101]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:51:10.580562 systemd-logind[1101]: Removed session 5. Feb 9 19:51:10.658497 kubelet[1385]: E0209 19:51:10.658363 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:10.733934 kubelet[1385]: E0209 19:51:10.733862 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:10.759344 kubelet[1385]: E0209 19:51:10.759299 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:10.860397 kubelet[1385]: E0209 19:51:10.860343 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:10.961148 kubelet[1385]: E0209 19:51:10.961021 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:11.061733 kubelet[1385]: E0209 19:51:11.061672 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:11.162295 kubelet[1385]: E0209 19:51:11.162237 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:11.263047 kubelet[1385]: E0209 19:51:11.262929 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:11.363564 kubelet[1385]: E0209 19:51:11.363508 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:11.463882 kubelet[1385]: E0209 19:51:11.463828 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:11.564603 kubelet[1385]: E0209 19:51:11.564482 1385 kubelet_node_status.go:458] "Error 
getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:11.665109 kubelet[1385]: E0209 19:51:11.665071 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:11.734552 kubelet[1385]: E0209 19:51:11.734510 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:11.765958 kubelet[1385]: E0209 19:51:11.765930 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:11.866710 kubelet[1385]: E0209 19:51:11.866608 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:11.967207 kubelet[1385]: E0209 19:51:11.967168 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:12.067682 kubelet[1385]: E0209 19:51:12.067650 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:12.168329 kubelet[1385]: E0209 19:51:12.168216 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:12.268980 kubelet[1385]: E0209 19:51:12.268920 1385 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 9 19:51:12.369743 kubelet[1385]: I0209 19:51:12.369714 1385 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 19:51:12.369991 env[1114]: time="2024-02-09T19:51:12.369948151Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 19:51:12.370310 kubelet[1385]: I0209 19:51:12.370106 1385 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 19:51:12.735332 kubelet[1385]: E0209 19:51:12.735297 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:12.735690 kubelet[1385]: I0209 19:51:12.735656 1385 apiserver.go:52] "Watching apiserver" Feb 9 19:51:12.737661 kubelet[1385]: I0209 19:51:12.737633 1385 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:51:12.737748 kubelet[1385]: I0209 19:51:12.737734 1385 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:51:12.742718 systemd[1]: Created slice kubepods-besteffort-podac2102bf_0aa7_41dd_8006_4a364be540c0.slice. Feb 9 19:51:12.750111 systemd[1]: Created slice kubepods-burstable-pod466b2df2_3e4e_4f14_9d6e_1ce61ce73459.slice. Feb 9 19:51:12.838723 kubelet[1385]: I0209 19:51:12.838677 1385 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 19:51:12.860484 kubelet[1385]: I0209 19:51:12.860442 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w22g\" (UniqueName: \"kubernetes.io/projected/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-kube-api-access-7w22g\") pod \"cilium-5pwrl\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") " pod="kube-system/cilium-5pwrl" Feb 9 19:51:12.860484 kubelet[1385]: I0209 19:51:12.860494 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkr8s\" (UniqueName: \"kubernetes.io/projected/ac2102bf-0aa7-41dd-8006-4a364be540c0-kube-api-access-mkr8s\") pod \"kube-proxy-rwgsg\" (UID: \"ac2102bf-0aa7-41dd-8006-4a364be540c0\") " pod="kube-system/kube-proxy-rwgsg" Feb 9 19:51:12.860710 kubelet[1385]: I0209 19:51:12.860516 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-etc-cni-netd\") pod \"cilium-5pwrl\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") " pod="kube-system/cilium-5pwrl" Feb 9 19:51:12.860710 kubelet[1385]: I0209 19:51:12.860611 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-cilium-config-path\") pod \"cilium-5pwrl\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") " pod="kube-system/cilium-5pwrl" Feb 9 19:51:12.860710 kubelet[1385]: I0209 19:51:12.860645 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-host-proc-sys-net\") pod \"cilium-5pwrl\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") " pod="kube-system/cilium-5pwrl" Feb 9 19:51:12.860825 kubelet[1385]: I0209 19:51:12.860725 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-host-proc-sys-kernel\") pod \"cilium-5pwrl\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") " pod="kube-system/cilium-5pwrl" Feb 9 19:51:12.860825 kubelet[1385]: I0209 19:51:12.860774 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-hubble-tls\") pod \"cilium-5pwrl\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") " pod="kube-system/cilium-5pwrl" Feb 9 19:51:12.860825 kubelet[1385]: I0209 19:51:12.860802 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-cilium-run\") pod 
\"cilium-5pwrl\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") " pod="kube-system/cilium-5pwrl" Feb 9 19:51:12.860825 kubelet[1385]: I0209 19:51:12.860826 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-hostproc\") pod \"cilium-5pwrl\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") " pod="kube-system/cilium-5pwrl" Feb 9 19:51:12.860938 kubelet[1385]: I0209 19:51:12.860848 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-cni-path\") pod \"cilium-5pwrl\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") " pod="kube-system/cilium-5pwrl" Feb 9 19:51:12.860938 kubelet[1385]: I0209 19:51:12.860911 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ac2102bf-0aa7-41dd-8006-4a364be540c0-kube-proxy\") pod \"kube-proxy-rwgsg\" (UID: \"ac2102bf-0aa7-41dd-8006-4a364be540c0\") " pod="kube-system/kube-proxy-rwgsg" Feb 9 19:51:12.861007 kubelet[1385]: I0209 19:51:12.860953 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-clustermesh-secrets\") pod \"cilium-5pwrl\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") " pod="kube-system/cilium-5pwrl" Feb 9 19:51:12.861007 kubelet[1385]: I0209 19:51:12.860983 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-cilium-cgroup\") pod \"cilium-5pwrl\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") " pod="kube-system/cilium-5pwrl" Feb 9 19:51:12.861007 kubelet[1385]: I0209 
19:51:12.861007 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-lib-modules\") pod \"cilium-5pwrl\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") " pod="kube-system/cilium-5pwrl" Feb 9 19:51:12.861090 kubelet[1385]: I0209 19:51:12.861030 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-xtables-lock\") pod \"cilium-5pwrl\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") " pod="kube-system/cilium-5pwrl" Feb 9 19:51:12.861090 kubelet[1385]: I0209 19:51:12.861056 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-bpf-maps\") pod \"cilium-5pwrl\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") " pod="kube-system/cilium-5pwrl" Feb 9 19:51:12.861152 kubelet[1385]: I0209 19:51:12.861094 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac2102bf-0aa7-41dd-8006-4a364be540c0-xtables-lock\") pod \"kube-proxy-rwgsg\" (UID: \"ac2102bf-0aa7-41dd-8006-4a364be540c0\") " pod="kube-system/kube-proxy-rwgsg" Feb 9 19:51:12.861152 kubelet[1385]: I0209 19:51:12.861123 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac2102bf-0aa7-41dd-8006-4a364be540c0-lib-modules\") pod \"kube-proxy-rwgsg\" (UID: \"ac2102bf-0aa7-41dd-8006-4a364be540c0\") " pod="kube-system/kube-proxy-rwgsg" Feb 9 19:51:12.861235 kubelet[1385]: I0209 19:51:12.861165 1385 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:51:13.050490 kubelet[1385]: E0209 19:51:13.049541 1385 
dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:51:13.050602 env[1114]: time="2024-02-09T19:51:13.050315380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rwgsg,Uid:ac2102bf-0aa7-41dd-8006-4a364be540c0,Namespace:kube-system,Attempt:0,}" Feb 9 19:51:13.056100 kubelet[1385]: E0209 19:51:13.056062 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:51:13.056523 env[1114]: time="2024-02-09T19:51:13.056475921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5pwrl,Uid:466b2df2-3e4e-4f14-9d6e-1ce61ce73459,Namespace:kube-system,Attempt:0,}" Feb 9 19:51:13.565063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount654490587.mount: Deactivated successfully. Feb 9 19:51:13.571951 env[1114]: time="2024-02-09T19:51:13.571901914Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:13.573109 env[1114]: time="2024-02-09T19:51:13.573075976Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:13.575857 env[1114]: time="2024-02-09T19:51:13.575821473Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:13.577011 env[1114]: time="2024-02-09T19:51:13.576981268Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 
9 19:51:13.578172 env[1114]: time="2024-02-09T19:51:13.578144539Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:13.579455 env[1114]: time="2024-02-09T19:51:13.579432364Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:13.580813 env[1114]: time="2024-02-09T19:51:13.580786924Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:13.582243 env[1114]: time="2024-02-09T19:51:13.582209952Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:13.600362 env[1114]: time="2024-02-09T19:51:13.600287287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:51:13.600362 env[1114]: time="2024-02-09T19:51:13.600329105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:51:13.600362 env[1114]: time="2024-02-09T19:51:13.600341488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:51:13.600573 env[1114]: time="2024-02-09T19:51:13.600534130Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce57dc48611f9641a9834cfb967acf329e890a4150a2ea7c00d7b0d0751d19c0 pid=1443 runtime=io.containerd.runc.v2 Feb 9 19:51:13.601396 env[1114]: time="2024-02-09T19:51:13.601350440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:51:13.601396 env[1114]: time="2024-02-09T19:51:13.601379505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:51:13.601396 env[1114]: time="2024-02-09T19:51:13.601388522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:51:13.601542 env[1114]: time="2024-02-09T19:51:13.601503167Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4 pid=1450 runtime=io.containerd.runc.v2 Feb 9 19:51:13.611807 systemd[1]: Started cri-containerd-c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4.scope. Feb 9 19:51:13.613283 systemd[1]: Started cri-containerd-ce57dc48611f9641a9834cfb967acf329e890a4150a2ea7c00d7b0d0751d19c0.scope. 
Feb 9 19:51:13.634774 env[1114]: time="2024-02-09T19:51:13.634725138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rwgsg,Uid:ac2102bf-0aa7-41dd-8006-4a364be540c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce57dc48611f9641a9834cfb967acf329e890a4150a2ea7c00d7b0d0751d19c0\"" Feb 9 19:51:13.635665 kubelet[1385]: E0209 19:51:13.635642 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:51:13.636789 env[1114]: time="2024-02-09T19:51:13.636758430Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 9 19:51:13.636888 env[1114]: time="2024-02-09T19:51:13.636748682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5pwrl,Uid:466b2df2-3e4e-4f14-9d6e-1ce61ce73459,Namespace:kube-system,Attempt:0,} returns sandbox id \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\"" Feb 9 19:51:13.637287 kubelet[1385]: E0209 19:51:13.637271 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:51:13.736202 kubelet[1385]: E0209 19:51:13.736149 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:14.624985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2797885975.mount: Deactivated successfully. 
Feb 9 19:51:14.737290 kubelet[1385]: E0209 19:51:14.737255 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:15.150077 env[1114]: time="2024-02-09T19:51:15.150016368Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:15.151868 env[1114]: time="2024-02-09T19:51:15.151827554Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:15.153516 env[1114]: time="2024-02-09T19:51:15.153489380Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:15.158520 env[1114]: time="2024-02-09T19:51:15.158432849Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:15.159282 env[1114]: time="2024-02-09T19:51:15.159251795Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 9 19:51:15.159954 env[1114]: time="2024-02-09T19:51:15.159917162Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:51:15.161254 env[1114]: time="2024-02-09T19:51:15.161222159Z" level=info msg="CreateContainer within sandbox \"ce57dc48611f9641a9834cfb967acf329e890a4150a2ea7c00d7b0d0751d19c0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:51:15.179239 env[1114]: 
time="2024-02-09T19:51:15.179200970Z" level=info msg="CreateContainer within sandbox \"ce57dc48611f9641a9834cfb967acf329e890a4150a2ea7c00d7b0d0751d19c0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f7b2ce9c029b778a5a0c02d0fb0b5b6e39a801aa7dd7547e9f66d25eaecdef83\"" Feb 9 19:51:15.180484 env[1114]: time="2024-02-09T19:51:15.180458838Z" level=info msg="StartContainer for \"f7b2ce9c029b778a5a0c02d0fb0b5b6e39a801aa7dd7547e9f66d25eaecdef83\"" Feb 9 19:51:15.197866 systemd[1]: Started cri-containerd-f7b2ce9c029b778a5a0c02d0fb0b5b6e39a801aa7dd7547e9f66d25eaecdef83.scope. Feb 9 19:51:15.221026 env[1114]: time="2024-02-09T19:51:15.220972451Z" level=info msg="StartContainer for \"f7b2ce9c029b778a5a0c02d0fb0b5b6e39a801aa7dd7547e9f66d25eaecdef83\" returns successfully" Feb 9 19:51:15.738344 kubelet[1385]: E0209 19:51:15.738312 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:15.814655 kubelet[1385]: E0209 19:51:15.814628 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:51:16.738694 kubelet[1385]: E0209 19:51:16.738658 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:16.816043 kubelet[1385]: E0209 19:51:16.816010 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:51:17.739387 kubelet[1385]: E0209 19:51:17.739333 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:18.740055 kubelet[1385]: E0209 19:51:18.740013 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 
19:51:19.542335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167439661.mount: Deactivated successfully. Feb 9 19:51:19.740789 kubelet[1385]: E0209 19:51:19.740748 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:20.741186 kubelet[1385]: E0209 19:51:20.741127 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:21.742023 kubelet[1385]: E0209 19:51:21.741979 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:22.742980 kubelet[1385]: E0209 19:51:22.742921 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:23.743845 kubelet[1385]: E0209 19:51:23.743806 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:23.811805 env[1114]: time="2024-02-09T19:51:23.811751780Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:23.814104 env[1114]: time="2024-02-09T19:51:23.814051773Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:23.816391 env[1114]: time="2024-02-09T19:51:23.816358788Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:23.817086 env[1114]: time="2024-02-09T19:51:23.817015750Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 19:51:23.819357 env[1114]: time="2024-02-09T19:51:23.818881979Z" level=info msg="CreateContainer within sandbox \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:51:23.832658 env[1114]: time="2024-02-09T19:51:23.832606282Z" level=info msg="CreateContainer within sandbox \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5\"" Feb 9 19:51:23.833115 env[1114]: time="2024-02-09T19:51:23.833088206Z" level=info msg="StartContainer for \"60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5\"" Feb 9 19:51:23.847075 systemd[1]: Started cri-containerd-60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5.scope. Feb 9 19:51:23.881481 systemd[1]: cri-containerd-60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5.scope: Deactivated successfully. 
Feb 9 19:51:23.947983 env[1114]: time="2024-02-09T19:51:23.947880010Z" level=info msg="StartContainer for \"60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5\" returns successfully" Feb 9 19:51:24.525100 env[1114]: time="2024-02-09T19:51:24.525038973Z" level=info msg="shim disconnected" id=60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5 Feb 9 19:51:24.525100 env[1114]: time="2024-02-09T19:51:24.525083266Z" level=warning msg="cleaning up after shim disconnected" id=60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5 namespace=k8s.io Feb 9 19:51:24.525100 env[1114]: time="2024-02-09T19:51:24.525093195Z" level=info msg="cleaning up dead shim" Feb 9 19:51:24.530982 env[1114]: time="2024-02-09T19:51:24.530945969Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:51:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1731 runtime=io.containerd.runc.v2\n" Feb 9 19:51:24.744931 kubelet[1385]: E0209 19:51:24.744879 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:24.826552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5-rootfs.mount: Deactivated successfully. 
Feb 9 19:51:24.827853 kubelet[1385]: E0209 19:51:24.827825 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:51:24.829313 env[1114]: time="2024-02-09T19:51:24.829276659Z" level=info msg="CreateContainer within sandbox \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:51:24.911756 kubelet[1385]: I0209 19:51:24.911705 1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rwgsg" podStartSLOduration=13.388342688 podCreationTimestamp="2024-02-09 19:51:10 +0000 UTC" firstStartedPulling="2024-02-09 19:51:13.636348832 +0000 UTC m=+5.074506599" lastFinishedPulling="2024-02-09 19:51:15.159673706 +0000 UTC m=+6.597831473" observedRunningTime="2024-02-09 19:51:15.820951722 +0000 UTC m=+7.259109489" watchObservedRunningTime="2024-02-09 19:51:24.911667562 +0000 UTC m=+16.349825329" Feb 9 19:51:25.102184 env[1114]: time="2024-02-09T19:51:25.102054307Z" level=info msg="CreateContainer within sandbox \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5\"" Feb 9 19:51:25.102737 env[1114]: time="2024-02-09T19:51:25.102679569Z" level=info msg="StartContainer for \"e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5\"" Feb 9 19:51:25.115777 systemd[1]: Started cri-containerd-e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5.scope. Feb 9 19:51:25.135648 env[1114]: time="2024-02-09T19:51:25.135599935Z" level=info msg="StartContainer for \"e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5\" returns successfully" Feb 9 19:51:25.142332 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Feb 9 19:51:25.142523 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:51:25.142754 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:51:25.144275 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:51:25.144559 systemd[1]: cri-containerd-e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5.scope: Deactivated successfully. Feb 9 19:51:25.149992 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:51:25.164254 env[1114]: time="2024-02-09T19:51:25.164199389Z" level=info msg="shim disconnected" id=e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5 Feb 9 19:51:25.164254 env[1114]: time="2024-02-09T19:51:25.164242169Z" level=warning msg="cleaning up after shim disconnected" id=e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5 namespace=k8s.io Feb 9 19:51:25.164254 env[1114]: time="2024-02-09T19:51:25.164250555Z" level=info msg="cleaning up dead shim" Feb 9 19:51:25.170672 env[1114]: time="2024-02-09T19:51:25.170635246Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:51:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1794 runtime=io.containerd.runc.v2\n" Feb 9 19:51:25.745119 kubelet[1385]: E0209 19:51:25.745079 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:25.826658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5-rootfs.mount: Deactivated successfully. 
Feb 9 19:51:25.830560 kubelet[1385]: E0209 19:51:25.830539 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:51:25.832203 env[1114]: time="2024-02-09T19:51:25.832154645Z" level=info msg="CreateContainer within sandbox \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:51:25.847435 env[1114]: time="2024-02-09T19:51:25.847371687Z" level=info msg="CreateContainer within sandbox \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce\"" Feb 9 19:51:25.847921 env[1114]: time="2024-02-09T19:51:25.847892734Z" level=info msg="StartContainer for \"7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce\"" Feb 9 19:51:25.862506 systemd[1]: Started cri-containerd-7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce.scope. Feb 9 19:51:25.887105 systemd[1]: cri-containerd-7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce.scope: Deactivated successfully. 
Feb 9 19:51:25.887302 env[1114]: time="2024-02-09T19:51:25.887160571Z" level=info msg="StartContainer for \"7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce\" returns successfully" Feb 9 19:51:25.906351 env[1114]: time="2024-02-09T19:51:25.906293245Z" level=info msg="shim disconnected" id=7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce Feb 9 19:51:25.906351 env[1114]: time="2024-02-09T19:51:25.906347977Z" level=warning msg="cleaning up after shim disconnected" id=7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce namespace=k8s.io Feb 9 19:51:25.906542 env[1114]: time="2024-02-09T19:51:25.906360741Z" level=info msg="cleaning up dead shim" Feb 9 19:51:25.911951 env[1114]: time="2024-02-09T19:51:25.911899557Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:51:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1850 runtime=io.containerd.runc.v2\n" Feb 9 19:51:26.745482 kubelet[1385]: E0209 19:51:26.745436 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:26.826172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce-rootfs.mount: Deactivated successfully. 
Feb 9 19:51:26.832927 kubelet[1385]: E0209 19:51:26.832910 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:51:26.834389 env[1114]: time="2024-02-09T19:51:26.834347997Z" level=info msg="CreateContainer within sandbox \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:51:26.848803 env[1114]: time="2024-02-09T19:51:26.848767213Z" level=info msg="CreateContainer within sandbox \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d\"" Feb 9 19:51:26.849199 env[1114]: time="2024-02-09T19:51:26.849156233Z" level=info msg="StartContainer for \"b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d\"" Feb 9 19:51:26.862010 systemd[1]: Started cri-containerd-b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d.scope. Feb 9 19:51:26.881666 systemd[1]: cri-containerd-b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d.scope: Deactivated successfully. 
Feb 9 19:51:26.883382 env[1114]: time="2024-02-09T19:51:26.883348914Z" level=info msg="StartContainer for \"b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d\" returns successfully" Feb 9 19:51:26.901739 env[1114]: time="2024-02-09T19:51:26.901692248Z" level=info msg="shim disconnected" id=b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d Feb 9 19:51:26.901739 env[1114]: time="2024-02-09T19:51:26.901733064Z" level=warning msg="cleaning up after shim disconnected" id=b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d namespace=k8s.io Feb 9 19:51:26.901739 env[1114]: time="2024-02-09T19:51:26.901741941Z" level=info msg="cleaning up dead shim" Feb 9 19:51:26.907633 env[1114]: time="2024-02-09T19:51:26.907598061Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:51:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1905 runtime=io.containerd.runc.v2\n" Feb 9 19:51:27.746500 kubelet[1385]: E0209 19:51:27.746447 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:27.826674 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d-rootfs.mount: Deactivated successfully. 
Feb 9 19:51:27.836006 kubelet[1385]: E0209 19:51:27.835988 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:51:27.838052 env[1114]: time="2024-02-09T19:51:27.838011466Z" level=info msg="CreateContainer within sandbox \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:51:27.852505 env[1114]: time="2024-02-09T19:51:27.852460007Z" level=info msg="CreateContainer within sandbox \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81\"" Feb 9 19:51:27.852849 env[1114]: time="2024-02-09T19:51:27.852817447Z" level=info msg="StartContainer for \"a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81\"" Feb 9 19:51:27.866816 systemd[1]: Started cri-containerd-a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81.scope. 
Feb 9 19:51:27.890668 env[1114]: time="2024-02-09T19:51:27.890623373Z" level=info msg="StartContainer for \"a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81\" returns successfully" Feb 9 19:51:28.012582 kubelet[1385]: I0209 19:51:28.012475 1385 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:51:28.161205 kernel: Initializing XFRM netlink socket Feb 9 19:51:28.733135 kubelet[1385]: E0209 19:51:28.733083 1385 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:28.747476 kubelet[1385]: E0209 19:51:28.747439 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:28.841548 kubelet[1385]: E0209 19:51:28.841526 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:51:28.852649 kubelet[1385]: I0209 19:51:28.852635 1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5pwrl" podStartSLOduration=8.672944447999999 podCreationTimestamp="2024-02-09 19:51:10 +0000 UTC" firstStartedPulling="2024-02-09 19:51:13.637631257 +0000 UTC m=+5.075789024" lastFinishedPulling="2024-02-09 19:51:23.817294252 +0000 UTC m=+15.255452019" observedRunningTime="2024-02-09 19:51:28.852128555 +0000 UTC m=+20.290286322" watchObservedRunningTime="2024-02-09 19:51:28.852607443 +0000 UTC m=+20.290765210" Feb 9 19:51:29.748316 kubelet[1385]: E0209 19:51:29.748268 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:29.801206 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 19:51:29.801306 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:51:29.799810 systemd-networkd[1015]: cilium_host: Link UP 
Feb 9 19:51:29.799990 systemd-networkd[1015]: cilium_net: Link UP Feb 9 19:51:29.800219 systemd-networkd[1015]: cilium_net: Gained carrier Feb 9 19:51:29.800467 systemd-networkd[1015]: cilium_host: Gained carrier Feb 9 19:51:29.844016 kubelet[1385]: E0209 19:51:29.843649 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:51:29.860867 systemd-networkd[1015]: cilium_vxlan: Link UP Feb 9 19:51:29.860877 systemd-networkd[1015]: cilium_vxlan: Gained carrier Feb 9 19:51:30.030208 kernel: NET: Registered PF_ALG protocol family Feb 9 19:51:30.270312 systemd-networkd[1015]: cilium_host: Gained IPv6LL Feb 9 19:51:30.503444 systemd-networkd[1015]: lxc_health: Link UP Feb 9 19:51:30.504832 systemd-networkd[1015]: lxc_health: Gained carrier Feb 9 19:51:30.505202 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:51:30.749234 kubelet[1385]: E0209 19:51:30.749131 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:30.783355 systemd-networkd[1015]: cilium_net: Gained IPv6LL Feb 9 19:51:30.844510 kubelet[1385]: E0209 19:51:30.844480 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:51:31.750270 kubelet[1385]: E0209 19:51:31.750213 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:31.846657 kubelet[1385]: E0209 19:51:31.846622 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:51:31.936404 systemd-networkd[1015]: cilium_vxlan: Gained IPv6LL Feb 9 19:51:31.998352 systemd-networkd[1015]: 
lxc_health: Gained IPv6LL Feb 9 19:51:32.109799 kubelet[1385]: I0209 19:51:32.109646 1385 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:51:32.113940 systemd[1]: Created slice kubepods-besteffort-podce9a5881_32e3_4b80_8ce8_0ee8c66d53d2.slice. Feb 9 19:51:32.156720 kubelet[1385]: I0209 19:51:32.156688 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdc4r\" (UniqueName: \"kubernetes.io/projected/ce9a5881-32e3-4b80-8ce8-0ee8c66d53d2-kube-api-access-hdc4r\") pod \"nginx-deployment-845c78c8b9-82nl9\" (UID: \"ce9a5881-32e3-4b80-8ce8-0ee8c66d53d2\") " pod="default/nginx-deployment-845c78c8b9-82nl9" Feb 9 19:51:32.417015 env[1114]: time="2024-02-09T19:51:32.416901716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-82nl9,Uid:ce9a5881-32e3-4b80-8ce8-0ee8c66d53d2,Namespace:default,Attempt:0,}" Feb 9 19:51:32.450098 systemd-networkd[1015]: lxc9f4f53bc7842: Link UP Feb 9 19:51:32.459202 kernel: eth0: renamed from tmp49bc6 Feb 9 19:51:32.463846 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:51:32.463898 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9f4f53bc7842: link becomes ready Feb 9 19:51:32.463979 systemd-networkd[1015]: lxc9f4f53bc7842: Gained carrier Feb 9 19:51:32.751461 kubelet[1385]: E0209 19:51:32.751323 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:32.848253 kubelet[1385]: E0209 19:51:32.848220 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:51:33.599602 systemd-networkd[1015]: lxc9f4f53bc7842: Gained IPv6LL Feb 9 19:51:33.751721 kubelet[1385]: E0209 19:51:33.751664 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 
19:51:33.849811 kubelet[1385]: E0209 19:51:33.849776 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:51:33.922389 env[1114]: time="2024-02-09T19:51:33.922257815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:51:33.922389 env[1114]: time="2024-02-09T19:51:33.922307580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:51:33.922389 env[1114]: time="2024-02-09T19:51:33.922323029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:51:33.922725 env[1114]: time="2024-02-09T19:51:33.922475932Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/49bc6e47fb4872645ebe95fffd21acaed4e3368c4bf9469462a7b8b312beb1ea pid=2458 runtime=io.containerd.runc.v2 Feb 9 19:51:33.933969 systemd[1]: run-containerd-runc-k8s.io-49bc6e47fb4872645ebe95fffd21acaed4e3368c4bf9469462a7b8b312beb1ea-runc.EQxNMX.mount: Deactivated successfully. Feb 9 19:51:33.938462 systemd[1]: Started cri-containerd-49bc6e47fb4872645ebe95fffd21acaed4e3368c4bf9469462a7b8b312beb1ea.scope. 
Feb 9 19:51:33.949398 systemd-resolved[1057]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:51:33.970914 env[1114]: time="2024-02-09T19:51:33.970868983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-82nl9,Uid:ce9a5881-32e3-4b80-8ce8-0ee8c66d53d2,Namespace:default,Attempt:0,} returns sandbox id \"49bc6e47fb4872645ebe95fffd21acaed4e3368c4bf9469462a7b8b312beb1ea\"" Feb 9 19:51:33.972464 env[1114]: time="2024-02-09T19:51:33.972443062Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:51:34.752687 kubelet[1385]: E0209 19:51:34.752650 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:35.753373 kubelet[1385]: E0209 19:51:35.753329 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:36.632716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2601667885.mount: Deactivated successfully. 
Feb 9 19:51:36.754522 kubelet[1385]: E0209 19:51:36.754475 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:37.507207 env[1114]: time="2024-02-09T19:51:37.507152587Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:37.508715 env[1114]: time="2024-02-09T19:51:37.508689503Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:37.510084 env[1114]: time="2024-02-09T19:51:37.510055414Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:37.511501 env[1114]: time="2024-02-09T19:51:37.511481618Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:37.511984 env[1114]: time="2024-02-09T19:51:37.511958336Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:51:37.513133 env[1114]: time="2024-02-09T19:51:37.513111691Z" level=info msg="CreateContainer within sandbox \"49bc6e47fb4872645ebe95fffd21acaed4e3368c4bf9469462a7b8b312beb1ea\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 19:51:37.523814 env[1114]: time="2024-02-09T19:51:37.523773788Z" level=info msg="CreateContainer within sandbox \"49bc6e47fb4872645ebe95fffd21acaed4e3368c4bf9469462a7b8b312beb1ea\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id 
\"1909ff09e5c26b145c4514cf22e90f6a75489560376b722e33a58e6be42bccad\"" Feb 9 19:51:37.524045 env[1114]: time="2024-02-09T19:51:37.524027631Z" level=info msg="StartContainer for \"1909ff09e5c26b145c4514cf22e90f6a75489560376b722e33a58e6be42bccad\"" Feb 9 19:51:37.536488 systemd[1]: Started cri-containerd-1909ff09e5c26b145c4514cf22e90f6a75489560376b722e33a58e6be42bccad.scope. Feb 9 19:51:37.556994 env[1114]: time="2024-02-09T19:51:37.556849242Z" level=info msg="StartContainer for \"1909ff09e5c26b145c4514cf22e90f6a75489560376b722e33a58e6be42bccad\" returns successfully" Feb 9 19:51:37.754785 kubelet[1385]: E0209 19:51:37.754750 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:37.863153 kubelet[1385]: I0209 19:51:37.863071 1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-845c78c8b9-82nl9" podStartSLOduration=2.322867569 podCreationTimestamp="2024-02-09 19:51:32 +0000 UTC" firstStartedPulling="2024-02-09 19:51:33.971990507 +0000 UTC m=+25.410148264" lastFinishedPulling="2024-02-09 19:51:37.512157385 +0000 UTC m=+28.950315152" observedRunningTime="2024-02-09 19:51:37.862898619 +0000 UTC m=+29.301056386" watchObservedRunningTime="2024-02-09 19:51:37.863034457 +0000 UTC m=+29.301192225" Feb 9 19:51:38.755883 kubelet[1385]: E0209 19:51:38.755849 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:39.756348 kubelet[1385]: E0209 19:51:39.756310 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:40.756523 kubelet[1385]: E0209 19:51:40.756488 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:41.757341 kubelet[1385]: E0209 19:51:41.757284 1385 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:42.652907 update_engine[1102]: I0209 19:51:42.652851 1102 update_attempter.cc:509] Updating boot flags... Feb 9 19:51:42.757934 kubelet[1385]: E0209 19:51:42.757897 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:43.622885 kubelet[1385]: I0209 19:51:43.622853 1385 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:51:43.626525 systemd[1]: Created slice kubepods-besteffort-pod2d99128f_e301_4c3c_87f5_9abf387df8aa.slice. Feb 9 19:51:43.711606 kubelet[1385]: I0209 19:51:43.711577 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/2d99128f-e301-4c3c-87f5-9abf387df8aa-data\") pod \"nfs-server-provisioner-0\" (UID: \"2d99128f-e301-4c3c-87f5-9abf387df8aa\") " pod="default/nfs-server-provisioner-0" Feb 9 19:51:43.711731 kubelet[1385]: I0209 19:51:43.711639 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52484\" (UniqueName: \"kubernetes.io/projected/2d99128f-e301-4c3c-87f5-9abf387df8aa-kube-api-access-52484\") pod \"nfs-server-provisioner-0\" (UID: \"2d99128f-e301-4c3c-87f5-9abf387df8aa\") " pod="default/nfs-server-provisioner-0" Feb 9 19:51:43.758871 kubelet[1385]: E0209 19:51:43.758845 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:43.929193 env[1114]: time="2024-02-09T19:51:43.929085216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:2d99128f-e301-4c3c-87f5-9abf387df8aa,Namespace:default,Attempt:0,}" Feb 9 19:51:43.954438 systemd-networkd[1015]: lxc8ecf8cb94bd9: Link UP Feb 9 19:51:43.960204 kernel: eth0: renamed from tmpe4e65 Feb 9 19:51:43.966390 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 
19:51:43.966460 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8ecf8cb94bd9: link becomes ready Feb 9 19:51:43.966582 systemd-networkd[1015]: lxc8ecf8cb94bd9: Gained carrier Feb 9 19:51:44.182496 env[1114]: time="2024-02-09T19:51:44.182363347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:51:44.182496 env[1114]: time="2024-02-09T19:51:44.182402882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:51:44.182496 env[1114]: time="2024-02-09T19:51:44.182415596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:51:44.182745 env[1114]: time="2024-02-09T19:51:44.182624241Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e4e65fb8bb853cb820e304800eb73cf8e8ca50f558f50c719aacc16993cc4d5b pid=2596 runtime=io.containerd.runc.v2 Feb 9 19:51:44.195454 systemd[1]: Started cri-containerd-e4e65fb8bb853cb820e304800eb73cf8e8ca50f558f50c719aacc16993cc4d5b.scope. 
Feb 9 19:51:44.204308 systemd-resolved[1057]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:51:44.224100 env[1114]: time="2024-02-09T19:51:44.224065719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:2d99128f-e301-4c3c-87f5-9abf387df8aa,Namespace:default,Attempt:0,} returns sandbox id \"e4e65fb8bb853cb820e304800eb73cf8e8ca50f558f50c719aacc16993cc4d5b\"" Feb 9 19:51:44.225298 env[1114]: time="2024-02-09T19:51:44.225251263Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 9 19:51:44.759856 kubelet[1385]: E0209 19:51:44.759818 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:45.246341 systemd-networkd[1015]: lxc8ecf8cb94bd9: Gained IPv6LL Feb 9 19:51:45.760473 kubelet[1385]: E0209 19:51:45.760428 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:46.505471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount373615493.mount: Deactivated successfully. 
Feb 9 19:51:46.761291 kubelet[1385]: E0209 19:51:46.761168 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:47.761617 kubelet[1385]: E0209 19:51:47.761572 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:48.733679 kubelet[1385]: E0209 19:51:48.733633 1385 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:48.761944 kubelet[1385]: E0209 19:51:48.761895 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:48.832873 env[1114]: time="2024-02-09T19:51:48.832831861Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:48.834966 env[1114]: time="2024-02-09T19:51:48.834908465Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:48.836774 env[1114]: time="2024-02-09T19:51:48.836738812Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:48.838529 env[1114]: time="2024-02-09T19:51:48.838495791Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:51:48.839198 env[1114]: time="2024-02-09T19:51:48.839162411Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image 
reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 9 19:51:48.840747 env[1114]: time="2024-02-09T19:51:48.840713330Z" level=info msg="CreateContainer within sandbox \"e4e65fb8bb853cb820e304800eb73cf8e8ca50f558f50c719aacc16993cc4d5b\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 9 19:51:48.853118 env[1114]: time="2024-02-09T19:51:48.853078429Z" level=info msg="CreateContainer within sandbox \"e4e65fb8bb853cb820e304800eb73cf8e8ca50f558f50c719aacc16993cc4d5b\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"5b79713906d8b12f8b35e8b52a96bbcfc507006244b8bbf6a8a1a0dcb737d5e4\"" Feb 9 19:51:48.853470 env[1114]: time="2024-02-09T19:51:48.853442497Z" level=info msg="StartContainer for \"5b79713906d8b12f8b35e8b52a96bbcfc507006244b8bbf6a8a1a0dcb737d5e4\"" Feb 9 19:51:48.868476 systemd[1]: run-containerd-runc-k8s.io-5b79713906d8b12f8b35e8b52a96bbcfc507006244b8bbf6a8a1a0dcb737d5e4-runc.XKPFXU.mount: Deactivated successfully. Feb 9 19:51:48.869632 systemd[1]: Started cri-containerd-5b79713906d8b12f8b35e8b52a96bbcfc507006244b8bbf6a8a1a0dcb737d5e4.scope. 
Feb 9 19:51:48.893361 env[1114]: time="2024-02-09T19:51:48.893316613Z" level=info msg="StartContainer for \"5b79713906d8b12f8b35e8b52a96bbcfc507006244b8bbf6a8a1a0dcb737d5e4\" returns successfully" Feb 9 19:51:49.763067 kubelet[1385]: E0209 19:51:49.763026 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:49.883133 kubelet[1385]: I0209 19:51:49.883087 1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.268633743 podCreationTimestamp="2024-02-09 19:51:43 +0000 UTC" firstStartedPulling="2024-02-09 19:51:44.224992123 +0000 UTC m=+35.663149890" lastFinishedPulling="2024-02-09 19:51:48.839412323 +0000 UTC m=+40.277570100" observedRunningTime="2024-02-09 19:51:49.882690438 +0000 UTC m=+41.320848205" watchObservedRunningTime="2024-02-09 19:51:49.883053953 +0000 UTC m=+41.321211720" Feb 9 19:51:50.764192 kubelet[1385]: E0209 19:51:50.764145 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:51.764593 kubelet[1385]: E0209 19:51:51.764537 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:52.764914 kubelet[1385]: E0209 19:51:52.764836 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:53.765524 kubelet[1385]: E0209 19:51:53.765469 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:54.765856 kubelet[1385]: E0209 19:51:54.765823 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:55.766893 kubelet[1385]: E0209 19:51:55.766842 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 19:51:56.767886 kubelet[1385]: E0209 19:51:56.767845 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:57.768810 kubelet[1385]: E0209 19:51:57.768754 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:58.769834 kubelet[1385]: E0209 19:51:58.769775 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:51:59.094425 kubelet[1385]: I0209 19:51:59.094323 1385 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:51:59.098444 systemd[1]: Created slice kubepods-besteffort-podd9ad7af9_95ee_44cf_9766_aafddbaf610e.slice. Feb 9 19:51:59.282429 kubelet[1385]: I0209 19:51:59.282401 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj6ht\" (UniqueName: \"kubernetes.io/projected/d9ad7af9-95ee-44cf-9766-aafddbaf610e-kube-api-access-vj6ht\") pod \"test-pod-1\" (UID: \"d9ad7af9-95ee-44cf-9766-aafddbaf610e\") " pod="default/test-pod-1" Feb 9 19:51:59.282527 kubelet[1385]: I0209 19:51:59.282444 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-93789823-b285-4b0a-be0a-aead66f63323\" (UniqueName: \"kubernetes.io/nfs/d9ad7af9-95ee-44cf-9766-aafddbaf610e-pvc-93789823-b285-4b0a-be0a-aead66f63323\") pod \"test-pod-1\" (UID: \"d9ad7af9-95ee-44cf-9766-aafddbaf610e\") " pod="default/test-pod-1" Feb 9 19:51:59.432210 kernel: FS-Cache: Loaded Feb 9 19:51:59.506444 kernel: RPC: Registered named UNIX socket transport module. Feb 9 19:51:59.506577 kernel: RPC: Registered udp transport module. Feb 9 19:51:59.506597 kernel: RPC: Registered tcp transport module. Feb 9 19:51:59.507534 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 9 19:51:59.545232 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 19:51:59.714603 kernel: NFS: Registering the id_resolver key type Feb 9 19:51:59.714760 kernel: Key type id_resolver registered Feb 9 19:51:59.714796 kernel: Key type id_legacy registered Feb 9 19:51:59.733555 nfsidmap[2716]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 9 19:51:59.736096 nfsidmap[2719]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 9 19:51:59.770264 kubelet[1385]: E0209 19:51:59.770171 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:52:00.000632 env[1114]: time="2024-02-09T19:52:00.000591251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d9ad7af9-95ee-44cf-9766-aafddbaf610e,Namespace:default,Attempt:0,}" Feb 9 19:52:00.632035 systemd-networkd[1015]: lxc1435d565e173: Link UP Feb 9 19:52:00.640212 kernel: eth0: renamed from tmpc2d8b Feb 9 19:52:00.646454 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:52:00.646487 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1435d565e173: link becomes ready Feb 9 19:52:00.646245 systemd-networkd[1015]: lxc1435d565e173: Gained carrier Feb 9 19:52:00.771065 kubelet[1385]: E0209 19:52:00.771011 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:52:00.894237 env[1114]: time="2024-02-09T19:52:00.894113484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:52:00.894385 env[1114]: time="2024-02-09T19:52:00.894361491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:52:00.894489 env[1114]: time="2024-02-09T19:52:00.894457150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:52:00.894742 env[1114]: time="2024-02-09T19:52:00.894705839Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2d8b59a5c7aa9ae1b9b2716eefed66530ac799fa45db44aab98bb266cbd3eb3 pid=2751 runtime=io.containerd.runc.v2 Feb 9 19:52:00.907074 systemd[1]: Started cri-containerd-c2d8b59a5c7aa9ae1b9b2716eefed66530ac799fa45db44aab98bb266cbd3eb3.scope. Feb 9 19:52:00.916287 systemd-resolved[1057]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:52:00.941464 env[1114]: time="2024-02-09T19:52:00.941405294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d9ad7af9-95ee-44cf-9766-aafddbaf610e,Namespace:default,Attempt:0,} returns sandbox id \"c2d8b59a5c7aa9ae1b9b2716eefed66530ac799fa45db44aab98bb266cbd3eb3\"" Feb 9 19:52:00.942776 env[1114]: time="2024-02-09T19:52:00.942747579Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:52:01.495196 env[1114]: time="2024-02-09T19:52:01.495127228Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:01.497412 env[1114]: time="2024-02-09T19:52:01.497376910Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:01.504534 env[1114]: time="2024-02-09T19:52:01.504472307Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Feb 9 19:52:01.506803 env[1114]: time="2024-02-09T19:52:01.506772483Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:52:01.507425 env[1114]: time="2024-02-09T19:52:01.507392750Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:52:01.509064 env[1114]: time="2024-02-09T19:52:01.509020673Z" level=info msg="CreateContainer within sandbox \"c2d8b59a5c7aa9ae1b9b2716eefed66530ac799fa45db44aab98bb266cbd3eb3\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 19:52:01.576467 env[1114]: time="2024-02-09T19:52:01.576409116Z" level=info msg="CreateContainer within sandbox \"c2d8b59a5c7aa9ae1b9b2716eefed66530ac799fa45db44aab98bb266cbd3eb3\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"fe052cdf26b92f07b16a8afd60e18f6d39ab9c1a3da7c27c1999019867561ea7\"" Feb 9 19:52:01.576963 env[1114]: time="2024-02-09T19:52:01.576927762Z" level=info msg="StartContainer for \"fe052cdf26b92f07b16a8afd60e18f6d39ab9c1a3da7c27c1999019867561ea7\"" Feb 9 19:52:01.593533 systemd[1]: Started cri-containerd-fe052cdf26b92f07b16a8afd60e18f6d39ab9c1a3da7c27c1999019867561ea7.scope. 
Feb 9 19:52:01.624694 env[1114]: time="2024-02-09T19:52:01.624642036Z" level=info msg="StartContainer for \"fe052cdf26b92f07b16a8afd60e18f6d39ab9c1a3da7c27c1999019867561ea7\" returns successfully"
Feb 9 19:52:01.771285 kubelet[1385]: E0209 19:52:01.771150 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:52:01.899263 kubelet[1385]: I0209 19:52:01.899223 1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.334041456 podCreationTimestamp="2024-02-09 19:51:43 +0000 UTC" firstStartedPulling="2024-02-09 19:52:00.942483843 +0000 UTC m=+52.380641610" lastFinishedPulling="2024-02-09 19:52:01.507636268 +0000 UTC m=+52.945794035" observedRunningTime="2024-02-09 19:52:01.898893756 +0000 UTC m=+53.337051523" watchObservedRunningTime="2024-02-09 19:52:01.899193881 +0000 UTC m=+53.337351648"
Feb 9 19:52:02.334338 systemd-networkd[1015]: lxc1435d565e173: Gained IPv6LL
Feb 9 19:52:02.772293 kubelet[1385]: E0209 19:52:02.772259 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:52:03.772638 kubelet[1385]: E0209 19:52:03.772593 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:52:04.773194 kubelet[1385]: E0209 19:52:04.773140 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:52:05.774126 kubelet[1385]: E0209 19:52:05.774085 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:52:06.775101 kubelet[1385]: E0209 19:52:06.775045 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:52:06.908927 env[1114]: time="2024-02-09T19:52:06.908864742Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 19:52:06.913558 env[1114]: time="2024-02-09T19:52:06.913523949Z" level=info msg="StopContainer for \"a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81\" with timeout 1 (s)"
Feb 9 19:52:06.913705 env[1114]: time="2024-02-09T19:52:06.913690120Z" level=info msg="Stop container \"a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81\" with signal terminated"
Feb 9 19:52:06.918915 systemd-networkd[1015]: lxc_health: Link DOWN
Feb 9 19:52:06.918925 systemd-networkd[1015]: lxc_health: Lost carrier
Feb 9 19:52:06.947646 systemd[1]: cri-containerd-a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81.scope: Deactivated successfully.
Feb 9 19:52:06.947889 systemd[1]: cri-containerd-a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81.scope: Consumed 5.836s CPU time.
Feb 9 19:52:06.960490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81-rootfs.mount: Deactivated successfully.
Feb 9 19:52:07.496803 env[1114]: time="2024-02-09T19:52:07.496742988Z" level=info msg="shim disconnected" id=a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81
Feb 9 19:52:07.496803 env[1114]: time="2024-02-09T19:52:07.496785978Z" level=warning msg="cleaning up after shim disconnected" id=a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81 namespace=k8s.io
Feb 9 19:52:07.496803 env[1114]: time="2024-02-09T19:52:07.496794835Z" level=info msg="cleaning up dead shim"
Feb 9 19:52:07.503077 env[1114]: time="2024-02-09T19:52:07.503016166Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:52:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2884 runtime=io.containerd.runc.v2\n"
Feb 9 19:52:07.621598 env[1114]: time="2024-02-09T19:52:07.621539706Z" level=info msg="StopContainer for \"a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81\" returns successfully"
Feb 9 19:52:07.622219 env[1114]: time="2024-02-09T19:52:07.622194276Z" level=info msg="StopPodSandbox for \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\""
Feb 9 19:52:07.622273 env[1114]: time="2024-02-09T19:52:07.622248177Z" level=info msg="Container to stop \"e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:52:07.622273 env[1114]: time="2024-02-09T19:52:07.622260320Z" level=info msg="Container to stop \"7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:52:07.622273 env[1114]: time="2024-02-09T19:52:07.622269296Z" level=info msg="Container to stop \"b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:52:07.622458 env[1114]: time="2024-02-09T19:52:07.622280798Z" level=info msg="Container to stop \"a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:52:07.622458 env[1114]: time="2024-02-09T19:52:07.622290316Z" level=info msg="Container to stop \"60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:52:07.623821 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4-shm.mount: Deactivated successfully.
Feb 9 19:52:07.627607 systemd[1]: cri-containerd-c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4.scope: Deactivated successfully.
Feb 9 19:52:07.646029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4-rootfs.mount: Deactivated successfully.
Feb 9 19:52:07.703888 env[1114]: time="2024-02-09T19:52:07.703834960Z" level=info msg="shim disconnected" id=c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4
Feb 9 19:52:07.704100 env[1114]: time="2024-02-09T19:52:07.704068960Z" level=warning msg="cleaning up after shim disconnected" id=c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4 namespace=k8s.io
Feb 9 19:52:07.704100 env[1114]: time="2024-02-09T19:52:07.704090741Z" level=info msg="cleaning up dead shim"
Feb 9 19:52:07.710663 env[1114]: time="2024-02-09T19:52:07.710610142Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:52:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2913 runtime=io.containerd.runc.v2\n"
Feb 9 19:52:07.710980 env[1114]: time="2024-02-09T19:52:07.710935864Z" level=info msg="TearDown network for sandbox \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\" successfully"
Feb 9 19:52:07.710980 env[1114]: time="2024-02-09T19:52:07.710960790Z" level=info msg="StopPodSandbox for \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\" returns successfully"
Feb 9 19:52:07.776098 kubelet[1385]: E0209 19:52:07.775991 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:52:07.820677 kubelet[1385]: I0209 19:52:07.820637 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-cilium-config-path\") pod \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") "
Feb 9 19:52:07.820677 kubelet[1385]: I0209 19:52:07.820671 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-bpf-maps\") pod \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") "
Feb 9 19:52:07.820677 kubelet[1385]: I0209 19:52:07.820690 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-hostproc\") pod \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") "
Feb 9 19:52:07.820929 kubelet[1385]: I0209 19:52:07.820705 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-cilium-cgroup\") pod \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") "
Feb 9 19:52:07.820929 kubelet[1385]: I0209 19:52:07.820722 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-lib-modules\") pod \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") "
Feb 9 19:52:07.820929 kubelet[1385]: I0209 19:52:07.820738 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-etc-cni-netd\") pod \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") "
Feb 9 19:52:07.820929 kubelet[1385]: I0209 19:52:07.820755 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w22g\" (UniqueName: \"kubernetes.io/projected/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-kube-api-access-7w22g\") pod \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") "
Feb 9 19:52:07.820929 kubelet[1385]: I0209 19:52:07.820750 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "466b2df2-3e4e-4f14-9d6e-1ce61ce73459" (UID: "466b2df2-3e4e-4f14-9d6e-1ce61ce73459"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:52:07.820929 kubelet[1385]: I0209 19:52:07.820770 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-hubble-tls\") pod \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") "
Feb 9 19:52:07.821066 kubelet[1385]: I0209 19:52:07.820865 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-clustermesh-secrets\") pod \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") "
Feb 9 19:52:07.821066 kubelet[1385]: I0209 19:52:07.820885 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-xtables-lock\") pod \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") "
Feb 9 19:52:07.821066 kubelet[1385]: I0209 19:52:07.820902 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-host-proc-sys-net\") pod \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") "
Feb 9 19:52:07.821066 kubelet[1385]: I0209 19:52:07.820919 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-host-proc-sys-kernel\") pod \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") "
Feb 9 19:52:07.821066 kubelet[1385]: I0209 19:52:07.820937 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-cilium-run\") pod \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") "
Feb 9 19:52:07.821066 kubelet[1385]: I0209 19:52:07.820953 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-cni-path\") pod \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\" (UID: \"466b2df2-3e4e-4f14-9d6e-1ce61ce73459\") "
Feb 9 19:52:07.821222 kubelet[1385]: W0209 19:52:07.820924 1385 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/466b2df2-3e4e-4f14-9d6e-1ce61ce73459/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 19:52:07.821222 kubelet[1385]: I0209 19:52:07.820986 1385 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-bpf-maps\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:07.821222 kubelet[1385]: I0209 19:52:07.821004 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-cni-path" (OuterVolumeSpecName: "cni-path") pod "466b2df2-3e4e-4f14-9d6e-1ce61ce73459" (UID: "466b2df2-3e4e-4f14-9d6e-1ce61ce73459"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:52:07.823739 kubelet[1385]: I0209 19:52:07.821351 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "466b2df2-3e4e-4f14-9d6e-1ce61ce73459" (UID: "466b2df2-3e4e-4f14-9d6e-1ce61ce73459"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:52:07.823739 kubelet[1385]: I0209 19:52:07.821410 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "466b2df2-3e4e-4f14-9d6e-1ce61ce73459" (UID: "466b2df2-3e4e-4f14-9d6e-1ce61ce73459"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:52:07.823739 kubelet[1385]: I0209 19:52:07.821427 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "466b2df2-3e4e-4f14-9d6e-1ce61ce73459" (UID: "466b2df2-3e4e-4f14-9d6e-1ce61ce73459"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:52:07.823739 kubelet[1385]: I0209 19:52:07.821442 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-hostproc" (OuterVolumeSpecName: "hostproc") pod "466b2df2-3e4e-4f14-9d6e-1ce61ce73459" (UID: "466b2df2-3e4e-4f14-9d6e-1ce61ce73459"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:52:07.823739 kubelet[1385]: I0209 19:52:07.821454 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "466b2df2-3e4e-4f14-9d6e-1ce61ce73459" (UID: "466b2df2-3e4e-4f14-9d6e-1ce61ce73459"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:52:07.823929 kubelet[1385]: I0209 19:52:07.821466 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "466b2df2-3e4e-4f14-9d6e-1ce61ce73459" (UID: "466b2df2-3e4e-4f14-9d6e-1ce61ce73459"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:52:07.823929 kubelet[1385]: I0209 19:52:07.822037 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "466b2df2-3e4e-4f14-9d6e-1ce61ce73459" (UID: "466b2df2-3e4e-4f14-9d6e-1ce61ce73459"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:52:07.823929 kubelet[1385]: I0209 19:52:07.822063 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "466b2df2-3e4e-4f14-9d6e-1ce61ce73459" (UID: "466b2df2-3e4e-4f14-9d6e-1ce61ce73459"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:52:07.823929 kubelet[1385]: I0209 19:52:07.822590 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "466b2df2-3e4e-4f14-9d6e-1ce61ce73459" (UID: "466b2df2-3e4e-4f14-9d6e-1ce61ce73459"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 19:52:07.824054 systemd[1]: var-lib-kubelet-pods-466b2df2\x2d3e4e\x2d4f14\x2d9d6e\x2d1ce61ce73459-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 19:52:07.824474 kubelet[1385]: I0209 19:52:07.824445 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "466b2df2-3e4e-4f14-9d6e-1ce61ce73459" (UID: "466b2df2-3e4e-4f14-9d6e-1ce61ce73459"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 19:52:07.824566 kubelet[1385]: I0209 19:52:07.824545 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "466b2df2-3e4e-4f14-9d6e-1ce61ce73459" (UID: "466b2df2-3e4e-4f14-9d6e-1ce61ce73459"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:52:07.824616 kubelet[1385]: I0209 19:52:07.824600 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-kube-api-access-7w22g" (OuterVolumeSpecName: "kube-api-access-7w22g") pod "466b2df2-3e4e-4f14-9d6e-1ce61ce73459" (UID: "466b2df2-3e4e-4f14-9d6e-1ce61ce73459"). InnerVolumeSpecName "kube-api-access-7w22g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:52:07.898477 systemd[1]: var-lib-kubelet-pods-466b2df2\x2d3e4e\x2d4f14\x2d9d6e\x2d1ce61ce73459-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7w22g.mount: Deactivated successfully.
Feb 9 19:52:07.898589 systemd[1]: var-lib-kubelet-pods-466b2df2\x2d3e4e\x2d4f14\x2d9d6e\x2d1ce61ce73459-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 19:52:07.903781 kubelet[1385]: I0209 19:52:07.903761 1385 scope.go:115] "RemoveContainer" containerID="a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81"
Feb 9 19:52:07.904817 env[1114]: time="2024-02-09T19:52:07.904774381Z" level=info msg="RemoveContainer for \"a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81\""
Feb 9 19:52:07.907021 systemd[1]: Removed slice kubepods-burstable-pod466b2df2_3e4e_4f14_9d6e_1ce61ce73459.slice.
Feb 9 19:52:07.907096 systemd[1]: kubepods-burstable-pod466b2df2_3e4e_4f14_9d6e_1ce61ce73459.slice: Consumed 5.921s CPU time.
Feb 9 19:52:07.907713 env[1114]: time="2024-02-09T19:52:07.907688908Z" level=info msg="RemoveContainer for \"a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81\" returns successfully"
Feb 9 19:52:07.907928 kubelet[1385]: I0209 19:52:07.907899 1385 scope.go:115] "RemoveContainer" containerID="b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d"
Feb 9 19:52:07.908896 env[1114]: time="2024-02-09T19:52:07.908855831Z" level=info msg="RemoveContainer for \"b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d\""
Feb 9 19:52:07.911665 env[1114]: time="2024-02-09T19:52:07.911640213Z" level=info msg="RemoveContainer for \"b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d\" returns successfully"
Feb 9 19:52:07.911966 kubelet[1385]: I0209 19:52:07.911790 1385 scope.go:115] "RemoveContainer" containerID="7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce"
Feb 9 19:52:07.912736 env[1114]: time="2024-02-09T19:52:07.912698221Z" level=info msg="RemoveContainer for \"7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce\""
Feb 9 19:52:07.915893 env[1114]: time="2024-02-09T19:52:07.915860283Z" level=info msg="RemoveContainer for \"7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce\" returns successfully"
Feb 9 19:52:07.916076 kubelet[1385]: I0209 19:52:07.916048 1385 scope.go:115] "RemoveContainer" containerID="e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5"
Feb 9 19:52:07.916844 env[1114]: time="2024-02-09T19:52:07.916797885Z" level=info msg="RemoveContainer for \"e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5\""
Feb 9 19:52:07.922549 kubelet[1385]: I0209 19:52:07.922508 1385 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-host-proc-sys-net\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:07.922549 kubelet[1385]: I0209 19:52:07.922528 1385 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-host-proc-sys-kernel\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:07.922549 kubelet[1385]: I0209 19:52:07.922537 1385 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-cilium-run\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:07.922549 kubelet[1385]: I0209 19:52:07.922546 1385 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-cni-path\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:07.922549 kubelet[1385]: I0209 19:52:07.922554 1385 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-cilium-config-path\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:07.922549 kubelet[1385]: I0209 19:52:07.922561 1385 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-hostproc\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:07.922841 kubelet[1385]: I0209 19:52:07.922569 1385 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-cilium-cgroup\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:07.922841 kubelet[1385]: I0209 19:52:07.922579 1385 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-lib-modules\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:07.922841 kubelet[1385]: I0209 19:52:07.922587 1385 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-xtables-lock\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:07.922841 kubelet[1385]: I0209 19:52:07.922595 1385 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-etc-cni-netd\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:07.922841 kubelet[1385]: I0209 19:52:07.922604 1385 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7w22g\" (UniqueName: \"kubernetes.io/projected/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-kube-api-access-7w22g\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:07.922841 kubelet[1385]: I0209 19:52:07.922612 1385 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-hubble-tls\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:07.922841 kubelet[1385]: I0209 19:52:07.922621 1385 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/466b2df2-3e4e-4f14-9d6e-1ce61ce73459-clustermesh-secrets\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:07.927445 env[1114]: time="2024-02-09T19:52:07.927394428Z" level=info msg="RemoveContainer for \"e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5\" returns successfully"
Feb 9 19:52:07.927592 kubelet[1385]: I0209 19:52:07.927564 1385 scope.go:115] "RemoveContainer" containerID="60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5"
Feb 9 19:52:07.928590 env[1114]: time="2024-02-09T19:52:07.928566270Z" level=info msg="RemoveContainer for \"60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5\""
Feb 9 19:52:08.065311 env[1114]: time="2024-02-09T19:52:08.064468994Z" level=info msg="RemoveContainer for \"60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5\" returns successfully"
Feb 9 19:52:08.065311 env[1114]: time="2024-02-09T19:52:08.064914602Z" level=error msg="ContainerStatus for \"a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81\": not found"
Feb 9 19:52:08.065464 kubelet[1385]: I0209 19:52:08.064688 1385 scope.go:115] "RemoveContainer" containerID="a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81"
Feb 9 19:52:08.065464 kubelet[1385]: E0209 19:52:08.065150 1385 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81\": not found" containerID="a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81"
Feb 9 19:52:08.065464 kubelet[1385]: I0209 19:52:08.065200 1385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81} err="failed to get container status \"a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81\": rpc error: code = NotFound desc = an error occurred when try to find container \"a2360f03f56d0f364936af2353cdfeced2f150edc93db0ee74feff9345a02c81\": not found"
Feb 9 19:52:08.065464 kubelet[1385]: I0209 19:52:08.065215 1385 scope.go:115] "RemoveContainer" containerID="b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d"
Feb 9 19:52:08.065576 env[1114]: time="2024-02-09T19:52:08.065406787Z" level=error msg="ContainerStatus for \"b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d\": not found"
Feb 9 19:52:08.065601 kubelet[1385]: E0209 19:52:08.065541 1385 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d\": not found" containerID="b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d"
Feb 9 19:52:08.065601 kubelet[1385]: I0209 19:52:08.065568 1385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d} err="failed to get container status \"b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d\": rpc error: code = NotFound desc = an error occurred when try to find container \"b521e7c81c7c17b76446eea50185374c4fd526737e355380e3c1452042a9996d\": not found"
Feb 9 19:52:08.065601 kubelet[1385]: I0209 19:52:08.065575 1385 scope.go:115] "RemoveContainer" containerID="7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce"
Feb 9 19:52:08.065723 env[1114]: time="2024-02-09T19:52:08.065689077Z" level=error msg="ContainerStatus for \"7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce\": not found"
Feb 9 19:52:08.065839 kubelet[1385]: E0209 19:52:08.065803 1385 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce\": not found" containerID="7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce"
Feb 9 19:52:08.065839 kubelet[1385]: I0209 19:52:08.065828 1385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce} err="failed to get container status \"7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce\": rpc error: code = NotFound desc = an error occurred when try to find container \"7304c259c965a6548424a3adb65d0fe9b53dad3a2ea33c24fde4686da426ddce\": not found"
Feb 9 19:52:08.065839 kubelet[1385]: I0209 19:52:08.065835 1385 scope.go:115] "RemoveContainer" containerID="e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5"
Feb 9 19:52:08.066056 env[1114]: time="2024-02-09T19:52:08.066002486Z" level=error msg="ContainerStatus for \"e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5\": not found"
Feb 9 19:52:08.066192 kubelet[1385]: E0209 19:52:08.066153 1385 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5\": not found" containerID="e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5"
Feb 9 19:52:08.066248 kubelet[1385]: I0209 19:52:08.066195 1385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5} err="failed to get container status \"e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0b016a37bb91cf034965010a85ff437e8804b94f19516c2dd3c164903ac00b5\": not found"
Feb 9 19:52:08.066248 kubelet[1385]: I0209 19:52:08.066216 1385 scope.go:115] "RemoveContainer" containerID="60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5"
Feb 9 19:52:08.066448 env[1114]: time="2024-02-09T19:52:08.066406825Z" level=error msg="ContainerStatus for \"60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5\": not found"
Feb 9 19:52:08.066619 kubelet[1385]: E0209 19:52:08.066602 1385 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5\": not found" containerID="60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5"
Feb 9 19:52:08.066667 kubelet[1385]: I0209 19:52:08.066636 1385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5} err="failed to get container status \"60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"60b5e5a3b65761ead52b5ed06e4d53d2af89988a661a80070b56f1ef708d17a5\": not found"
Feb 9 19:52:08.733756 kubelet[1385]: E0209 19:52:08.733701 1385 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:52:08.738553 env[1114]: time="2024-02-09T19:52:08.738518156Z" level=info msg="StopPodSandbox for \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\""
Feb 9 19:52:08.738841 env[1114]: time="2024-02-09T19:52:08.738780810Z" level=info msg="TearDown network for sandbox \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\" successfully"
Feb 9 19:52:08.738888 env[1114]: time="2024-02-09T19:52:08.738836424Z" level=info msg="StopPodSandbox for \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\" returns successfully"
Feb 9 19:52:08.741430 env[1114]: time="2024-02-09T19:52:08.741403337Z" level=info msg="RemovePodSandbox for \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\""
Feb 9 19:52:08.741488 env[1114]: time="2024-02-09T19:52:08.741429586Z" level=info msg="Forcibly stopping sandbox \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\""
Feb 9 19:52:08.741512 env[1114]: time="2024-02-09T19:52:08.741483317Z" level=info msg="TearDown network for sandbox \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\" successfully"
Feb 9 19:52:08.744202 env[1114]: time="2024-02-09T19:52:08.744171118Z" level=info msg="RemovePodSandbox \"c38b56d35692731a814ddbeb47a14c8bdc1b82ef7c1b9426319f56d76b5e11b4\" returns successfully"
Feb 9 19:52:08.777189 kubelet[1385]: E0209 19:52:08.777131 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:52:08.786120 kubelet[1385]: E0209 19:52:08.786087 1385 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 19:52:08.805063 kubelet[1385]: I0209 19:52:08.805027 1385 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=466b2df2-3e4e-4f14-9d6e-1ce61ce73459 path="/var/lib/kubelet/pods/466b2df2-3e4e-4f14-9d6e-1ce61ce73459/volumes"
Feb 9 19:52:08.812964 kubelet[1385]: I0209 19:52:08.812934 1385 topology_manager.go:212] "Topology Admit Handler"
Feb 9 19:52:08.813093 kubelet[1385]: E0209 19:52:08.812989 1385 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="466b2df2-3e4e-4f14-9d6e-1ce61ce73459" containerName="mount-cgroup"
Feb 9 19:52:08.813093 kubelet[1385]: E0209 19:52:08.813005 1385 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="466b2df2-3e4e-4f14-9d6e-1ce61ce73459" containerName="apply-sysctl-overwrites"
Feb 9 19:52:08.813093 kubelet[1385]: E0209 19:52:08.813012 1385 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="466b2df2-3e4e-4f14-9d6e-1ce61ce73459" containerName="mount-bpf-fs"
Feb 9 19:52:08.813093 kubelet[1385]: E0209 19:52:08.813019 1385 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="466b2df2-3e4e-4f14-9d6e-1ce61ce73459" containerName="clean-cilium-state"
Feb 9 19:52:08.813093 kubelet[1385]: E0209 19:52:08.813026 1385 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="466b2df2-3e4e-4f14-9d6e-1ce61ce73459" containerName="cilium-agent"
Feb 9 19:52:08.813093 kubelet[1385]: I0209 19:52:08.813045 1385 memory_manager.go:346] "RemoveStaleState removing state" podUID="466b2df2-3e4e-4f14-9d6e-1ce61ce73459" containerName="cilium-agent"
Feb 9 19:52:08.817434 systemd[1]: Created slice kubepods-besteffort-pod3abe9505_9d2c_48c0_9726_9b5053959fec.slice.
Feb 9 19:52:08.838199 kubelet[1385]: I0209 19:52:08.838151 1385 topology_manager.go:212] "Topology Admit Handler"
Feb 9 19:52:08.843008 systemd[1]: Created slice kubepods-burstable-podd5c098c6_625e_4f3f_a248_bad959f23572.slice.
Feb 9 19:52:08.927946 kubelet[1385]: I0209 19:52:08.927911 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3abe9505-9d2c-48c0-9726-9b5053959fec-cilium-config-path\") pod \"cilium-operator-574c4bb98d-b7bdn\" (UID: \"3abe9505-9d2c-48c0-9726-9b5053959fec\") " pod="kube-system/cilium-operator-574c4bb98d-b7bdn" Feb 9 19:52:08.927946 kubelet[1385]: I0209 19:52:08.927949 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvk8v\" (UniqueName: \"kubernetes.io/projected/3abe9505-9d2c-48c0-9726-9b5053959fec-kube-api-access-wvk8v\") pod \"cilium-operator-574c4bb98d-b7bdn\" (UID: \"3abe9505-9d2c-48c0-9726-9b5053959fec\") " pod="kube-system/cilium-operator-574c4bb98d-b7bdn" Feb 9 19:52:09.028725 kubelet[1385]: I0209 19:52:09.028616 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-cilium-cgroup\") pod \"cilium-pl4v8\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " pod="kube-system/cilium-pl4v8" Feb 9 19:52:09.028725 kubelet[1385]: I0209 19:52:09.028683 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-cni-path\") pod \"cilium-pl4v8\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " pod="kube-system/cilium-pl4v8" Feb 9 19:52:09.028918 kubelet[1385]: I0209 19:52:09.028791 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-xtables-lock\") pod \"cilium-pl4v8\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " pod="kube-system/cilium-pl4v8" Feb 9 19:52:09.028918 kubelet[1385]: I0209 
19:52:09.028842 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5c098c6-625e-4f3f-a248-bad959f23572-cilium-config-path\") pod \"cilium-pl4v8\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " pod="kube-system/cilium-pl4v8" Feb 9 19:52:09.028918 kubelet[1385]: I0209 19:52:09.028874 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpcps\" (UniqueName: \"kubernetes.io/projected/d5c098c6-625e-4f3f-a248-bad959f23572-kube-api-access-vpcps\") pod \"cilium-pl4v8\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " pod="kube-system/cilium-pl4v8" Feb 9 19:52:09.028918 kubelet[1385]: I0209 19:52:09.028889 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-cilium-run\") pod \"cilium-pl4v8\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " pod="kube-system/cilium-pl4v8" Feb 9 19:52:09.028918 kubelet[1385]: I0209 19:52:09.028911 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-bpf-maps\") pod \"cilium-pl4v8\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " pod="kube-system/cilium-pl4v8" Feb 9 19:52:09.029039 kubelet[1385]: I0209 19:52:09.028930 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-hostproc\") pod \"cilium-pl4v8\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " pod="kube-system/cilium-pl4v8" Feb 9 19:52:09.029039 kubelet[1385]: I0209 19:52:09.028946 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-host-proc-sys-net\") pod \"cilium-pl4v8\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " pod="kube-system/cilium-pl4v8" Feb 9 19:52:09.029039 kubelet[1385]: I0209 19:52:09.028975 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d5c098c6-625e-4f3f-a248-bad959f23572-cilium-ipsec-secrets\") pod \"cilium-pl4v8\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " pod="kube-system/cilium-pl4v8" Feb 9 19:52:09.029039 kubelet[1385]: I0209 19:52:09.028990 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5c098c6-625e-4f3f-a248-bad959f23572-hubble-tls\") pod \"cilium-pl4v8\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " pod="kube-system/cilium-pl4v8" Feb 9 19:52:09.029039 kubelet[1385]: I0209 19:52:09.029005 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-lib-modules\") pod \"cilium-pl4v8\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " pod="kube-system/cilium-pl4v8" Feb 9 19:52:09.029039 kubelet[1385]: I0209 19:52:09.029022 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5c098c6-625e-4f3f-a248-bad959f23572-clustermesh-secrets\") pod \"cilium-pl4v8\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " pod="kube-system/cilium-pl4v8" Feb 9 19:52:09.029193 kubelet[1385]: I0209 19:52:09.029091 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-etc-cni-netd\") pod \"cilium-pl4v8\" (UID: 
\"d5c098c6-625e-4f3f-a248-bad959f23572\") " pod="kube-system/cilium-pl4v8" Feb 9 19:52:09.029320 kubelet[1385]: I0209 19:52:09.029276 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-host-proc-sys-kernel\") pod \"cilium-pl4v8\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " pod="kube-system/cilium-pl4v8" Feb 9 19:52:09.120529 kubelet[1385]: E0209 19:52:09.120490 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:52:09.121056 env[1114]: time="2024-02-09T19:52:09.121019299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-b7bdn,Uid:3abe9505-9d2c-48c0-9726-9b5053959fec,Namespace:kube-system,Attempt:0,}" Feb 9 19:52:09.139687 env[1114]: time="2024-02-09T19:52:09.139624060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:52:09.139687 env[1114]: time="2024-02-09T19:52:09.139687559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:52:09.139842 env[1114]: time="2024-02-09T19:52:09.139707798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:52:09.139842 env[1114]: time="2024-02-09T19:52:09.139811262Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/28ea53e129d651e5b6425123c990007c3a6e25a7a84ebe9a44cf5acace6c296d pid=2946 runtime=io.containerd.runc.v2 Feb 9 19:52:09.150141 systemd[1]: Started cri-containerd-28ea53e129d651e5b6425123c990007c3a6e25a7a84ebe9a44cf5acace6c296d.scope. 
Feb 9 19:52:09.152877 kubelet[1385]: E0209 19:52:09.152857 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:52:09.153192 env[1114]: time="2024-02-09T19:52:09.153150461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pl4v8,Uid:d5c098c6-625e-4f3f-a248-bad959f23572,Namespace:kube-system,Attempt:0,}" Feb 9 19:52:09.166612 env[1114]: time="2024-02-09T19:52:09.166497796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:52:09.166612 env[1114]: time="2024-02-09T19:52:09.166563209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:52:09.166973 env[1114]: time="2024-02-09T19:52:09.166919037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:52:09.167283 env[1114]: time="2024-02-09T19:52:09.167247435Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/49d3758952ea1190fda0629c4b75abba0cc20ef0af33f4cea31b3dfe9901a5b8 pid=2980 runtime=io.containerd.runc.v2 Feb 9 19:52:09.176473 systemd[1]: Started cri-containerd-49d3758952ea1190fda0629c4b75abba0cc20ef0af33f4cea31b3dfe9901a5b8.scope. 
Feb 9 19:52:09.189774 env[1114]: time="2024-02-09T19:52:09.189735862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-b7bdn,Uid:3abe9505-9d2c-48c0-9726-9b5053959fec,Namespace:kube-system,Attempt:0,} returns sandbox id \"28ea53e129d651e5b6425123c990007c3a6e25a7a84ebe9a44cf5acace6c296d\"" Feb 9 19:52:09.190477 kubelet[1385]: E0209 19:52:09.190453 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:52:09.191594 env[1114]: time="2024-02-09T19:52:09.191559628Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:52:09.196831 env[1114]: time="2024-02-09T19:52:09.196784353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pl4v8,Uid:d5c098c6-625e-4f3f-a248-bad959f23572,Namespace:kube-system,Attempt:0,} returns sandbox id \"49d3758952ea1190fda0629c4b75abba0cc20ef0af33f4cea31b3dfe9901a5b8\"" Feb 9 19:52:09.197513 kubelet[1385]: E0209 19:52:09.197340 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:52:09.198868 env[1114]: time="2024-02-09T19:52:09.198840086Z" level=info msg="CreateContainer within sandbox \"49d3758952ea1190fda0629c4b75abba0cc20ef0af33f4cea31b3dfe9901a5b8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:52:09.212699 env[1114]: time="2024-02-09T19:52:09.212668103Z" level=info msg="CreateContainer within sandbox \"49d3758952ea1190fda0629c4b75abba0cc20ef0af33f4cea31b3dfe9901a5b8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f97dc5cb2d9506e67545a06588f602ca4325d533a529e4cd6780bdf398e6248e\"" Feb 9 19:52:09.212996 env[1114]: time="2024-02-09T19:52:09.212954481Z" level=info 
msg="StartContainer for \"f97dc5cb2d9506e67545a06588f602ca4325d533a529e4cd6780bdf398e6248e\"" Feb 9 19:52:09.224602 systemd[1]: Started cri-containerd-f97dc5cb2d9506e67545a06588f602ca4325d533a529e4cd6780bdf398e6248e.scope. Feb 9 19:52:09.233297 systemd[1]: cri-containerd-f97dc5cb2d9506e67545a06588f602ca4325d533a529e4cd6780bdf398e6248e.scope: Deactivated successfully. Feb 9 19:52:09.233558 systemd[1]: Stopped cri-containerd-f97dc5cb2d9506e67545a06588f602ca4325d533a529e4cd6780bdf398e6248e.scope. Feb 9 19:52:09.246150 env[1114]: time="2024-02-09T19:52:09.246109518Z" level=info msg="shim disconnected" id=f97dc5cb2d9506e67545a06588f602ca4325d533a529e4cd6780bdf398e6248e Feb 9 19:52:09.246322 env[1114]: time="2024-02-09T19:52:09.246151987Z" level=warning msg="cleaning up after shim disconnected" id=f97dc5cb2d9506e67545a06588f602ca4325d533a529e4cd6780bdf398e6248e namespace=k8s.io Feb 9 19:52:09.246322 env[1114]: time="2024-02-09T19:52:09.246159882Z" level=info msg="cleaning up dead shim" Feb 9 19:52:09.252504 env[1114]: time="2024-02-09T19:52:09.252467392Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:52:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3043 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:52:09Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f97dc5cb2d9506e67545a06588f602ca4325d533a529e4cd6780bdf398e6248e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:52:09.252744 env[1114]: time="2024-02-09T19:52:09.252665653Z" level=error msg="copy shim log" error="read /proc/self/fd/68: file already closed" Feb 9 19:52:09.252948 env[1114]: time="2024-02-09T19:52:09.252889554Z" level=error msg="Failed to pipe stderr of container \"f97dc5cb2d9506e67545a06588f602ca4325d533a529e4cd6780bdf398e6248e\"" error="reading from a closed fifo" Feb 9 19:52:09.258246 env[1114]: time="2024-02-09T19:52:09.258213256Z" level=error msg="Failed to pipe 
stdout of container \"f97dc5cb2d9506e67545a06588f602ca4325d533a529e4cd6780bdf398e6248e\"" error="reading from a closed fifo" Feb 9 19:52:09.260396 env[1114]: time="2024-02-09T19:52:09.260360920Z" level=error msg="StartContainer for \"f97dc5cb2d9506e67545a06588f602ca4325d533a529e4cd6780bdf398e6248e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:52:09.260602 kubelet[1385]: E0209 19:52:09.260572 1385 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f97dc5cb2d9506e67545a06588f602ca4325d533a529e4cd6780bdf398e6248e" Feb 9 19:52:09.260730 kubelet[1385]: E0209 19:52:09.260694 1385 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:52:09.260730 kubelet[1385]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:52:09.260730 kubelet[1385]: rm /hostbin/cilium-mount Feb 9 19:52:09.260835 kubelet[1385]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vpcps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-pl4v8_kube-system(d5c098c6-625e-4f3f-a248-bad959f23572): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:52:09.260835 kubelet[1385]: E0209 19:52:09.260733 1385 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable 
to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pl4v8" podUID=d5c098c6-625e-4f3f-a248-bad959f23572 Feb 9 19:52:09.778188 kubelet[1385]: E0209 19:52:09.778160 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:52:09.908913 env[1114]: time="2024-02-09T19:52:09.908870500Z" level=info msg="StopPodSandbox for \"49d3758952ea1190fda0629c4b75abba0cc20ef0af33f4cea31b3dfe9901a5b8\"" Feb 9 19:52:09.908913 env[1114]: time="2024-02-09T19:52:09.908915846Z" level=info msg="Container to stop \"f97dc5cb2d9506e67545a06588f602ca4325d533a529e4cd6780bdf398e6248e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:52:09.914760 systemd[1]: cri-containerd-49d3758952ea1190fda0629c4b75abba0cc20ef0af33f4cea31b3dfe9901a5b8.scope: Deactivated successfully. Feb 9 19:52:09.932048 env[1114]: time="2024-02-09T19:52:09.932004731Z" level=info msg="shim disconnected" id=49d3758952ea1190fda0629c4b75abba0cc20ef0af33f4cea31b3dfe9901a5b8 Feb 9 19:52:09.932048 env[1114]: time="2024-02-09T19:52:09.932045518Z" level=warning msg="cleaning up after shim disconnected" id=49d3758952ea1190fda0629c4b75abba0cc20ef0af33f4cea31b3dfe9901a5b8 namespace=k8s.io Feb 9 19:52:09.932270 env[1114]: time="2024-02-09T19:52:09.932054444Z" level=info msg="cleaning up dead shim" Feb 9 19:52:09.937584 env[1114]: time="2024-02-09T19:52:09.937559737Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:52:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3073 runtime=io.containerd.runc.v2\n" Feb 9 19:52:09.937831 env[1114]: time="2024-02-09T19:52:09.937803044Z" level=info msg="TearDown network for sandbox \"49d3758952ea1190fda0629c4b75abba0cc20ef0af33f4cea31b3dfe9901a5b8\" successfully" Feb 9 19:52:09.937831 env[1114]: time="2024-02-09T19:52:09.937822300Z" level=info msg="StopPodSandbox for 
\"49d3758952ea1190fda0629c4b75abba0cc20ef0af33f4cea31b3dfe9901a5b8\" returns successfully" Feb 9 19:52:10.135680 kubelet[1385]: I0209 19:52:10.135298 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-host-proc-sys-kernel\") pod \"d5c098c6-625e-4f3f-a248-bad959f23572\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " Feb 9 19:52:10.135680 kubelet[1385]: I0209 19:52:10.135346 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5c098c6-625e-4f3f-a248-bad959f23572-cilium-config-path\") pod \"d5c098c6-625e-4f3f-a248-bad959f23572\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " Feb 9 19:52:10.135680 kubelet[1385]: I0209 19:52:10.135363 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-cilium-run\") pod \"d5c098c6-625e-4f3f-a248-bad959f23572\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " Feb 9 19:52:10.135680 kubelet[1385]: I0209 19:52:10.135379 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5c098c6-625e-4f3f-a248-bad959f23572-hubble-tls\") pod \"d5c098c6-625e-4f3f-a248-bad959f23572\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " Feb 9 19:52:10.135680 kubelet[1385]: I0209 19:52:10.135396 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-cilium-cgroup\") pod \"d5c098c6-625e-4f3f-a248-bad959f23572\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " Feb 9 19:52:10.135680 kubelet[1385]: I0209 19:52:10.135410 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-cni-path\") pod \"d5c098c6-625e-4f3f-a248-bad959f23572\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " Feb 9 19:52:10.135680 kubelet[1385]: I0209 19:52:10.135425 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-bpf-maps\") pod \"d5c098c6-625e-4f3f-a248-bad959f23572\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " Feb 9 19:52:10.135680 kubelet[1385]: I0209 19:52:10.135412 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d5c098c6-625e-4f3f-a248-bad959f23572" (UID: "d5c098c6-625e-4f3f-a248-bad959f23572"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:52:10.135680 kubelet[1385]: I0209 19:52:10.135442 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-host-proc-sys-net\") pod \"d5c098c6-625e-4f3f-a248-bad959f23572\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " Feb 9 19:52:10.135680 kubelet[1385]: I0209 19:52:10.135442 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d5c098c6-625e-4f3f-a248-bad959f23572" (UID: "d5c098c6-625e-4f3f-a248-bad959f23572"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:52:10.135680 kubelet[1385]: I0209 19:52:10.135420 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d5c098c6-625e-4f3f-a248-bad959f23572" (UID: "d5c098c6-625e-4f3f-a248-bad959f23572"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:52:10.135680 kubelet[1385]: I0209 19:52:10.135458 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-xtables-lock\") pod \"d5c098c6-625e-4f3f-a248-bad959f23572\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " Feb 9 19:52:10.135680 kubelet[1385]: I0209 19:52:10.135470 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d5c098c6-625e-4f3f-a248-bad959f23572" (UID: "d5c098c6-625e-4f3f-a248-bad959f23572"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:52:10.135680 kubelet[1385]: I0209 19:52:10.135474 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-lib-modules\") pod \"d5c098c6-625e-4f3f-a248-bad959f23572\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " Feb 9 19:52:10.135680 kubelet[1385]: I0209 19:52:10.135493 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d5c098c6-625e-4f3f-a248-bad959f23572" (UID: "d5c098c6-625e-4f3f-a248-bad959f23572"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:52:10.136146 kubelet[1385]: I0209 19:52:10.135512 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d5c098c6-625e-4f3f-a248-bad959f23572" (UID: "d5c098c6-625e-4f3f-a248-bad959f23572"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:52:10.136146 kubelet[1385]: I0209 19:52:10.135530 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d5c098c6-625e-4f3f-a248-bad959f23572" (UID: "d5c098c6-625e-4f3f-a248-bad959f23572"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:52:10.136146 kubelet[1385]: I0209 19:52:10.135539 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5c098c6-625e-4f3f-a248-bad959f23572-clustermesh-secrets\") pod \"d5c098c6-625e-4f3f-a248-bad959f23572\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") " Feb 9 19:52:10.136146 kubelet[1385]: I0209 19:52:10.135547 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-cni-path" (OuterVolumeSpecName: "cni-path") pod "d5c098c6-625e-4f3f-a248-bad959f23572" (UID: "d5c098c6-625e-4f3f-a248-bad959f23572"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:52:10.136146 kubelet[1385]: I0209 19:52:10.135562 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpcps\" (UniqueName: \"kubernetes.io/projected/d5c098c6-625e-4f3f-a248-bad959f23572-kube-api-access-vpcps\") pod \"d5c098c6-625e-4f3f-a248-bad959f23572\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") "
Feb 9 19:52:10.136146 kubelet[1385]: I0209 19:52:10.135583 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-hostproc\") pod \"d5c098c6-625e-4f3f-a248-bad959f23572\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") "
Feb 9 19:52:10.136146 kubelet[1385]: I0209 19:52:10.135600 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-etc-cni-netd\") pod \"d5c098c6-625e-4f3f-a248-bad959f23572\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") "
Feb 9 19:52:10.136146 kubelet[1385]: I0209 19:52:10.135617 1385 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d5c098c6-625e-4f3f-a248-bad959f23572-cilium-ipsec-secrets\") pod \"d5c098c6-625e-4f3f-a248-bad959f23572\" (UID: \"d5c098c6-625e-4f3f-a248-bad959f23572\") "
Feb 9 19:52:10.136146 kubelet[1385]: I0209 19:52:10.135643 1385 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-bpf-maps\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:10.136146 kubelet[1385]: I0209 19:52:10.135653 1385 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-cilium-run\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:10.136146 kubelet[1385]: I0209 19:52:10.135662 1385 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-cilium-cgroup\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:10.136146 kubelet[1385]: I0209 19:52:10.135670 1385 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-cni-path\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:10.136146 kubelet[1385]: I0209 19:52:10.135679 1385 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-host-proc-sys-net\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:10.136146 kubelet[1385]: I0209 19:52:10.135687 1385 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-xtables-lock\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:10.136146 kubelet[1385]: I0209 19:52:10.135694 1385 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-lib-modules\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:10.136146 kubelet[1385]: I0209 19:52:10.135702 1385 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-host-proc-sys-kernel\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:10.136146 kubelet[1385]: I0209 19:52:10.135736 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-hostproc" (OuterVolumeSpecName: "hostproc") pod "d5c098c6-625e-4f3f-a248-bad959f23572" (UID: "d5c098c6-625e-4f3f-a248-bad959f23572"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:52:10.136572 kubelet[1385]: I0209 19:52:10.135895 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d5c098c6-625e-4f3f-a248-bad959f23572" (UID: "d5c098c6-625e-4f3f-a248-bad959f23572"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:52:10.136572 kubelet[1385]: W0209 19:52:10.135894 1385 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d5c098c6-625e-4f3f-a248-bad959f23572/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 19:52:10.137909 kubelet[1385]: I0209 19:52:10.137558 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5c098c6-625e-4f3f-a248-bad959f23572-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d5c098c6-625e-4f3f-a248-bad959f23572" (UID: "d5c098c6-625e-4f3f-a248-bad959f23572"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 19:52:10.138306 kubelet[1385]: I0209 19:52:10.138285 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5c098c6-625e-4f3f-a248-bad959f23572-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d5c098c6-625e-4f3f-a248-bad959f23572" (UID: "d5c098c6-625e-4f3f-a248-bad959f23572"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:52:10.139053 systemd[1]: var-lib-kubelet-pods-d5c098c6\x2d625e\x2d4f3f\x2da248\x2dbad959f23572-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvpcps.mount: Deactivated successfully.
Feb 9 19:52:10.139148 systemd[1]: var-lib-kubelet-pods-d5c098c6\x2d625e\x2d4f3f\x2da248\x2dbad959f23572-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 19:52:10.139593 kubelet[1385]: I0209 19:52:10.139573 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5c098c6-625e-4f3f-a248-bad959f23572-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d5c098c6-625e-4f3f-a248-bad959f23572" (UID: "d5c098c6-625e-4f3f-a248-bad959f23572"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 19:52:10.140346 kubelet[1385]: I0209 19:52:10.140316 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5c098c6-625e-4f3f-a248-bad959f23572-kube-api-access-vpcps" (OuterVolumeSpecName: "kube-api-access-vpcps") pod "d5c098c6-625e-4f3f-a248-bad959f23572" (UID: "d5c098c6-625e-4f3f-a248-bad959f23572"). InnerVolumeSpecName "kube-api-access-vpcps". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:52:10.140721 systemd[1]: var-lib-kubelet-pods-d5c098c6\x2d625e\x2d4f3f\x2da248\x2dbad959f23572-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 19:52:10.141102 kubelet[1385]: I0209 19:52:10.141082 1385 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5c098c6-625e-4f3f-a248-bad959f23572-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d5c098c6-625e-4f3f-a248-bad959f23572" (UID: "d5c098c6-625e-4f3f-a248-bad959f23572"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 19:52:10.142015 systemd[1]: var-lib-kubelet-pods-d5c098c6\x2d625e\x2d4f3f\x2da248\x2dbad959f23572-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 9 19:52:10.236562 kubelet[1385]: I0209 19:52:10.236526 1385 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d5c098c6-625e-4f3f-a248-bad959f23572-cilium-ipsec-secrets\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:10.236562 kubelet[1385]: I0209 19:52:10.236554 1385 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vpcps\" (UniqueName: \"kubernetes.io/projected/d5c098c6-625e-4f3f-a248-bad959f23572-kube-api-access-vpcps\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:10.236562 kubelet[1385]: I0209 19:52:10.236564 1385 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-hostproc\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:10.236562 kubelet[1385]: I0209 19:52:10.236573 1385 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5c098c6-625e-4f3f-a248-bad959f23572-etc-cni-netd\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:10.236761 kubelet[1385]: I0209 19:52:10.236583 1385 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5c098c6-625e-4f3f-a248-bad959f23572-cilium-config-path\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:10.236761 kubelet[1385]: I0209 19:52:10.236591 1385 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5c098c6-625e-4f3f-a248-bad959f23572-hubble-tls\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:10.236761 kubelet[1385]: I0209 19:52:10.236599 1385 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5c098c6-625e-4f3f-a248-bad959f23572-clustermesh-secrets\") on node \"10.0.0.131\" DevicePath \"\""
Feb 9 19:52:10.779128 kubelet[1385]: E0209 19:52:10.779086 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:52:10.808515 systemd[1]: Removed slice kubepods-burstable-podd5c098c6_625e_4f3f_a248_bad959f23572.slice.
Feb 9 19:52:10.911546 kubelet[1385]: I0209 19:52:10.911521 1385 scope.go:115] "RemoveContainer" containerID="f97dc5cb2d9506e67545a06588f602ca4325d533a529e4cd6780bdf398e6248e"
Feb 9 19:52:10.914590 env[1114]: time="2024-02-09T19:52:10.914536639Z" level=info msg="RemoveContainer for \"f97dc5cb2d9506e67545a06588f602ca4325d533a529e4cd6780bdf398e6248e\""
Feb 9 19:52:10.921690 env[1114]: time="2024-02-09T19:52:10.921653318Z" level=info msg="RemoveContainer for \"f97dc5cb2d9506e67545a06588f602ca4325d533a529e4cd6780bdf398e6248e\" returns successfully"
Feb 9 19:52:10.922867 env[1114]: time="2024-02-09T19:52:10.922830449Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:52:10.924541 env[1114]: time="2024-02-09T19:52:10.924510564Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:52:10.926491 env[1114]: time="2024-02-09T19:52:10.926454506Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:52:10.926901 env[1114]: time="2024-02-09T19:52:10.926869485Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 9 19:52:10.928460 env[1114]: time="2024-02-09T19:52:10.928433103Z" level=info msg="CreateContainer within sandbox \"28ea53e129d651e5b6425123c990007c3a6e25a7a84ebe9a44cf5acace6c296d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 9 19:52:10.933919 kubelet[1385]: I0209 19:52:10.933884 1385 topology_manager.go:212] "Topology Admit Handler"
Feb 9 19:52:10.934029 kubelet[1385]: E0209 19:52:10.933934 1385 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5c098c6-625e-4f3f-a248-bad959f23572" containerName="mount-cgroup"
Feb 9 19:52:10.934029 kubelet[1385]: I0209 19:52:10.933960 1385 memory_manager.go:346] "RemoveStaleState removing state" podUID="d5c098c6-625e-4f3f-a248-bad959f23572" containerName="mount-cgroup"
Feb 9 19:52:10.938995 systemd[1]: Created slice kubepods-burstable-pod53b2d3fb_fb0e_4cf6_9232_86de3140d80a.slice.
Feb 9 19:52:10.939316 kubelet[1385]: I0209 19:52:10.939290 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/53b2d3fb-fb0e-4cf6-9232-86de3140d80a-cni-path\") pod \"cilium-x8wkg\" (UID: \"53b2d3fb-fb0e-4cf6-9232-86de3140d80a\") " pod="kube-system/cilium-x8wkg"
Feb 9 19:52:10.939316 kubelet[1385]: I0209 19:52:10.939332 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53b2d3fb-fb0e-4cf6-9232-86de3140d80a-xtables-lock\") pod \"cilium-x8wkg\" (UID: \"53b2d3fb-fb0e-4cf6-9232-86de3140d80a\") " pod="kube-system/cilium-x8wkg"
Feb 9 19:52:10.939484 kubelet[1385]: I0209 19:52:10.939362 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53b2d3fb-fb0e-4cf6-9232-86de3140d80a-cilium-config-path\") pod \"cilium-x8wkg\" (UID: \"53b2d3fb-fb0e-4cf6-9232-86de3140d80a\") " pod="kube-system/cilium-x8wkg"
Feb 9 19:52:10.939484 kubelet[1385]: I0209 19:52:10.939390 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/53b2d3fb-fb0e-4cf6-9232-86de3140d80a-hubble-tls\") pod \"cilium-x8wkg\" (UID: \"53b2d3fb-fb0e-4cf6-9232-86de3140d80a\") " pod="kube-system/cilium-x8wkg"
Feb 9 19:52:10.939484 kubelet[1385]: I0209 19:52:10.939414 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/53b2d3fb-fb0e-4cf6-9232-86de3140d80a-cilium-run\") pod \"cilium-x8wkg\" (UID: \"53b2d3fb-fb0e-4cf6-9232-86de3140d80a\") " pod="kube-system/cilium-x8wkg"
Feb 9 19:52:10.939484 kubelet[1385]: I0209 19:52:10.939439 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/53b2d3fb-fb0e-4cf6-9232-86de3140d80a-hostproc\") pod \"cilium-x8wkg\" (UID: \"53b2d3fb-fb0e-4cf6-9232-86de3140d80a\") " pod="kube-system/cilium-x8wkg"
Feb 9 19:52:10.939484 kubelet[1385]: I0209 19:52:10.939459 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53b2d3fb-fb0e-4cf6-9232-86de3140d80a-etc-cni-netd\") pod \"cilium-x8wkg\" (UID: \"53b2d3fb-fb0e-4cf6-9232-86de3140d80a\") " pod="kube-system/cilium-x8wkg"
Feb 9 19:52:10.939484 kubelet[1385]: I0209 19:52:10.939482 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/53b2d3fb-fb0e-4cf6-9232-86de3140d80a-bpf-maps\") pod \"cilium-x8wkg\" (UID: \"53b2d3fb-fb0e-4cf6-9232-86de3140d80a\") " pod="kube-system/cilium-x8wkg"
Feb 9 19:52:10.939621 kubelet[1385]: I0209 19:52:10.939507 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/53b2d3fb-fb0e-4cf6-9232-86de3140d80a-cilium-ipsec-secrets\") pod \"cilium-x8wkg\" (UID: \"53b2d3fb-fb0e-4cf6-9232-86de3140d80a\") " pod="kube-system/cilium-x8wkg"
Feb 9 19:52:10.939621 kubelet[1385]: I0209 19:52:10.939529 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld9nc\" (UniqueName: \"kubernetes.io/projected/53b2d3fb-fb0e-4cf6-9232-86de3140d80a-kube-api-access-ld9nc\") pod \"cilium-x8wkg\" (UID: \"53b2d3fb-fb0e-4cf6-9232-86de3140d80a\") " pod="kube-system/cilium-x8wkg"
Feb 9 19:52:10.939621 kubelet[1385]: I0209 19:52:10.939555 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/53b2d3fb-fb0e-4cf6-9232-86de3140d80a-host-proc-sys-kernel\") pod \"cilium-x8wkg\" (UID: \"53b2d3fb-fb0e-4cf6-9232-86de3140d80a\") " pod="kube-system/cilium-x8wkg"
Feb 9 19:52:10.939621 kubelet[1385]: I0209 19:52:10.939576 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/53b2d3fb-fb0e-4cf6-9232-86de3140d80a-cilium-cgroup\") pod \"cilium-x8wkg\" (UID: \"53b2d3fb-fb0e-4cf6-9232-86de3140d80a\") " pod="kube-system/cilium-x8wkg"
Feb 9 19:52:10.939621 kubelet[1385]: I0209 19:52:10.939602 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53b2d3fb-fb0e-4cf6-9232-86de3140d80a-lib-modules\") pod \"cilium-x8wkg\" (UID: \"53b2d3fb-fb0e-4cf6-9232-86de3140d80a\") " pod="kube-system/cilium-x8wkg"
Feb 9 19:52:10.939735 kubelet[1385]: I0209 19:52:10.939628 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/53b2d3fb-fb0e-4cf6-9232-86de3140d80a-clustermesh-secrets\") pod \"cilium-x8wkg\" (UID: \"53b2d3fb-fb0e-4cf6-9232-86de3140d80a\") " pod="kube-system/cilium-x8wkg"
Feb 9 19:52:10.939735 kubelet[1385]: I0209 19:52:10.939654 1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/53b2d3fb-fb0e-4cf6-9232-86de3140d80a-host-proc-sys-net\") pod \"cilium-x8wkg\" (UID: \"53b2d3fb-fb0e-4cf6-9232-86de3140d80a\") " pod="kube-system/cilium-x8wkg"
Feb 9 19:52:10.940668 env[1114]: time="2024-02-09T19:52:10.940627669Z" level=info msg="CreateContainer within sandbox \"28ea53e129d651e5b6425123c990007c3a6e25a7a84ebe9a44cf5acace6c296d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2e984d6dac0cd446dee767e8bf9fbfadbd31b5c239beb16420c19058987bedcd\""
Feb 9 19:52:10.941057 env[1114]: time="2024-02-09T19:52:10.941030897Z" level=info msg="StartContainer for \"2e984d6dac0cd446dee767e8bf9fbfadbd31b5c239beb16420c19058987bedcd\""
Feb 9 19:52:10.955200 systemd[1]: Started cri-containerd-2e984d6dac0cd446dee767e8bf9fbfadbd31b5c239beb16420c19058987bedcd.scope.
Feb 9 19:52:11.038770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3107986225.mount: Deactivated successfully.
Feb 9 19:52:11.162697 env[1114]: time="2024-02-09T19:52:11.162631898Z" level=info msg="StartContainer for \"2e984d6dac0cd446dee767e8bf9fbfadbd31b5c239beb16420c19058987bedcd\" returns successfully"
Feb 9 19:52:11.246037 kubelet[1385]: E0209 19:52:11.246015 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:52:11.246486 env[1114]: time="2024-02-09T19:52:11.246443673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x8wkg,Uid:53b2d3fb-fb0e-4cf6-9232-86de3140d80a,Namespace:kube-system,Attempt:0,}"
Feb 9 19:52:11.257665 env[1114]: time="2024-02-09T19:52:11.257610226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:52:11.257665 env[1114]: time="2024-02-09T19:52:11.257646774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:52:11.257665 env[1114]: time="2024-02-09T19:52:11.257656723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:52:11.257832 env[1114]: time="2024-02-09T19:52:11.257783931Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75b769f2b3557a73f2852ed909c26c2e0250652eb1610e8402e295cf9aec5394 pid=3138 runtime=io.containerd.runc.v2
Feb 9 19:52:11.266732 systemd[1]: Started cri-containerd-75b769f2b3557a73f2852ed909c26c2e0250652eb1610e8402e295cf9aec5394.scope.
Feb 9 19:52:11.284899 env[1114]: time="2024-02-09T19:52:11.284374604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x8wkg,Uid:53b2d3fb-fb0e-4cf6-9232-86de3140d80a,Namespace:kube-system,Attempt:0,} returns sandbox id \"75b769f2b3557a73f2852ed909c26c2e0250652eb1610e8402e295cf9aec5394\""
Feb 9 19:52:11.285878 kubelet[1385]: E0209 19:52:11.285459 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:52:11.287219 env[1114]: time="2024-02-09T19:52:11.287165015Z" level=info msg="CreateContainer within sandbox \"75b769f2b3557a73f2852ed909c26c2e0250652eb1610e8402e295cf9aec5394\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 19:52:11.299558 env[1114]: time="2024-02-09T19:52:11.299484484Z" level=info msg="CreateContainer within sandbox \"75b769f2b3557a73f2852ed909c26c2e0250652eb1610e8402e295cf9aec5394\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"efda58d85194401ac328a166beeb99a0e3a75d8cbe15485646062df080f4222c\""
Feb 9 19:52:11.299920 env[1114]: time="2024-02-09T19:52:11.299890316Z" level=info msg="StartContainer for \"efda58d85194401ac328a166beeb99a0e3a75d8cbe15485646062df080f4222c\""
Feb 9 19:52:11.312234 systemd[1]: Started cri-containerd-efda58d85194401ac328a166beeb99a0e3a75d8cbe15485646062df080f4222c.scope.
Feb 9 19:52:11.338780 env[1114]: time="2024-02-09T19:52:11.338728090Z" level=info msg="StartContainer for \"efda58d85194401ac328a166beeb99a0e3a75d8cbe15485646062df080f4222c\" returns successfully"
Feb 9 19:52:11.340277 systemd[1]: cri-containerd-efda58d85194401ac328a166beeb99a0e3a75d8cbe15485646062df080f4222c.scope: Deactivated successfully.
Feb 9 19:52:11.358653 env[1114]: time="2024-02-09T19:52:11.358589494Z" level=info msg="shim disconnected" id=efda58d85194401ac328a166beeb99a0e3a75d8cbe15485646062df080f4222c
Feb 9 19:52:11.358653 env[1114]: time="2024-02-09T19:52:11.358631754Z" level=warning msg="cleaning up after shim disconnected" id=efda58d85194401ac328a166beeb99a0e3a75d8cbe15485646062df080f4222c namespace=k8s.io
Feb 9 19:52:11.358653 env[1114]: time="2024-02-09T19:52:11.358640500Z" level=info msg="cleaning up dead shim"
Feb 9 19:52:11.364983 env[1114]: time="2024-02-09T19:52:11.364945882Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:52:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3222 runtime=io.containerd.runc.v2\n"
Feb 9 19:52:11.780011 kubelet[1385]: E0209 19:52:11.779979 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:52:11.789304 kubelet[1385]: I0209 19:52:11.789281 1385 setters.go:548] "Node became not ready" node="10.0.0.131" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:52:11.789225028 +0000 UTC m=+63.227382795 LastTransitionTime:2024-02-09 19:52:11.789225028 +0000 UTC m=+63.227382795 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 19:52:11.914870 kubelet[1385]: E0209 19:52:11.914839 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:52:11.915797 kubelet[1385]: E0209 19:52:11.915776 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:52:11.917343 env[1114]: time="2024-02-09T19:52:11.917302302Z" level=info msg="CreateContainer within sandbox \"75b769f2b3557a73f2852ed909c26c2e0250652eb1610e8402e295cf9aec5394\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 19:52:11.930897 env[1114]: time="2024-02-09T19:52:11.930847574Z" level=info msg="CreateContainer within sandbox \"75b769f2b3557a73f2852ed909c26c2e0250652eb1610e8402e295cf9aec5394\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a4f1321df0186f997dd24c4a0179ffd930d006191a7d422c518b61dc57b58c89\""
Feb 9 19:52:11.931325 env[1114]: time="2024-02-09T19:52:11.931286407Z" level=info msg="StartContainer for \"a4f1321df0186f997dd24c4a0179ffd930d006191a7d422c518b61dc57b58c89\""
Feb 9 19:52:11.936471 kubelet[1385]: I0209 19:52:11.936451 1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-b7bdn" podStartSLOduration=2.200628629 podCreationTimestamp="2024-02-09 19:52:08 +0000 UTC" firstStartedPulling="2024-02-09 19:52:09.191243224 +0000 UTC m=+60.629400991" lastFinishedPulling="2024-02-09 19:52:10.927035557 +0000 UTC m=+62.365193334" observedRunningTime="2024-02-09 19:52:11.921505728 +0000 UTC m=+63.359663495" watchObservedRunningTime="2024-02-09 19:52:11.936420972 +0000 UTC m=+63.374578739"
Feb 9 19:52:11.944273 systemd[1]: Started cri-containerd-a4f1321df0186f997dd24c4a0179ffd930d006191a7d422c518b61dc57b58c89.scope.
Feb 9 19:52:11.965453 env[1114]: time="2024-02-09T19:52:11.965401022Z" level=info msg="StartContainer for \"a4f1321df0186f997dd24c4a0179ffd930d006191a7d422c518b61dc57b58c89\" returns successfully"
Feb 9 19:52:11.968213 systemd[1]: cri-containerd-a4f1321df0186f997dd24c4a0179ffd930d006191a7d422c518b61dc57b58c89.scope: Deactivated successfully.
Feb 9 19:52:11.985907 env[1114]: time="2024-02-09T19:52:11.985862632Z" level=info msg="shim disconnected" id=a4f1321df0186f997dd24c4a0179ffd930d006191a7d422c518b61dc57b58c89
Feb 9 19:52:11.985907 env[1114]: time="2024-02-09T19:52:11.985905232Z" level=warning msg="cleaning up after shim disconnected" id=a4f1321df0186f997dd24c4a0179ffd930d006191a7d422c518b61dc57b58c89 namespace=k8s.io
Feb 9 19:52:11.986042 env[1114]: time="2024-02-09T19:52:11.985913628Z" level=info msg="cleaning up dead shim"
Feb 9 19:52:11.991682 env[1114]: time="2024-02-09T19:52:11.991650935Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:52:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3285 runtime=io.containerd.runc.v2\n"
Feb 9 19:52:12.350717 kubelet[1385]: W0209 19:52:12.350674 1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5c098c6_625e_4f3f_a248_bad959f23572.slice/cri-containerd-f97dc5cb2d9506e67545a06588f602ca4325d533a529e4cd6780bdf398e6248e.scope WatchSource:0}: container "f97dc5cb2d9506e67545a06588f602ca4325d533a529e4cd6780bdf398e6248e" in namespace "k8s.io": not found
Feb 9 19:52:12.780375 kubelet[1385]: E0209 19:52:12.780343 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:52:12.804365 kubelet[1385]: I0209 19:52:12.804324 1385 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=d5c098c6-625e-4f3f-a248-bad959f23572 path="/var/lib/kubelet/pods/d5c098c6-625e-4f3f-a248-bad959f23572/volumes"
Feb 9 19:52:12.919513 kubelet[1385]: E0209 19:52:12.919483 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:52:12.919693 kubelet[1385]: E0209 19:52:12.919563 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:52:12.920721 env[1114]: time="2024-02-09T19:52:12.920687391Z" level=info msg="CreateContainer within sandbox \"75b769f2b3557a73f2852ed909c26c2e0250652eb1610e8402e295cf9aec5394\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 19:52:12.933334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3578011060.mount: Deactivated successfully.
Feb 9 19:52:12.937219 env[1114]: time="2024-02-09T19:52:12.937153374Z" level=info msg="CreateContainer within sandbox \"75b769f2b3557a73f2852ed909c26c2e0250652eb1610e8402e295cf9aec5394\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f42511f1d161084f039c00deb1cf1f2d332ecafc0375ff3deaaf4f38d41cf7de\""
Feb 9 19:52:12.937601 env[1114]: time="2024-02-09T19:52:12.937563364Z" level=info msg="StartContainer for \"f42511f1d161084f039c00deb1cf1f2d332ecafc0375ff3deaaf4f38d41cf7de\""
Feb 9 19:52:12.952252 systemd[1]: Started cri-containerd-f42511f1d161084f039c00deb1cf1f2d332ecafc0375ff3deaaf4f38d41cf7de.scope.
Feb 9 19:52:12.975243 systemd[1]: cri-containerd-f42511f1d161084f039c00deb1cf1f2d332ecafc0375ff3deaaf4f38d41cf7de.scope: Deactivated successfully.
Feb 9 19:52:12.975414 env[1114]: time="2024-02-09T19:52:12.975337002Z" level=info msg="StartContainer for \"f42511f1d161084f039c00deb1cf1f2d332ecafc0375ff3deaaf4f38d41cf7de\" returns successfully"
Feb 9 19:52:12.995582 env[1114]: time="2024-02-09T19:52:12.995528782Z" level=info msg="shim disconnected" id=f42511f1d161084f039c00deb1cf1f2d332ecafc0375ff3deaaf4f38d41cf7de
Feb 9 19:52:12.995582 env[1114]: time="2024-02-09T19:52:12.995583414Z" level=warning msg="cleaning up after shim disconnected" id=f42511f1d161084f039c00deb1cf1f2d332ecafc0375ff3deaaf4f38d41cf7de namespace=k8s.io
Feb 9 19:52:12.995812 env[1114]: time="2024-02-09T19:52:12.995596318Z" level=info msg="cleaning up dead shim"
Feb 9 19:52:13.001324 env[1114]: time="2024-02-09T19:52:13.001298948Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:52:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3342 runtime=io.containerd.runc.v2\n"
Feb 9 19:52:13.037446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f42511f1d161084f039c00deb1cf1f2d332ecafc0375ff3deaaf4f38d41cf7de-rootfs.mount: Deactivated successfully.
Feb 9 19:52:13.780650 kubelet[1385]: E0209 19:52:13.780621 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:52:13.787305 kubelet[1385]: E0209 19:52:13.787284 1385 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 19:52:13.922560 kubelet[1385]: E0209 19:52:13.922542 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:52:13.923878 env[1114]: time="2024-02-09T19:52:13.923845879Z" level=info msg="CreateContainer within sandbox \"75b769f2b3557a73f2852ed909c26c2e0250652eb1610e8402e295cf9aec5394\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 19:52:13.935768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount324557623.mount: Deactivated successfully.
Feb 9 19:52:13.937234 env[1114]: time="2024-02-09T19:52:13.937195127Z" level=info msg="CreateContainer within sandbox \"75b769f2b3557a73f2852ed909c26c2e0250652eb1610e8402e295cf9aec5394\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c87f7080faba9c3472679d46121e90f5666b8a7b28a523974cedc5b51f5196fc\""
Feb 9 19:52:13.937611 env[1114]: time="2024-02-09T19:52:13.937569500Z" level=info msg="StartContainer for \"c87f7080faba9c3472679d46121e90f5666b8a7b28a523974cedc5b51f5196fc\""
Feb 9 19:52:13.952524 systemd[1]: Started cri-containerd-c87f7080faba9c3472679d46121e90f5666b8a7b28a523974cedc5b51f5196fc.scope.
Feb 9 19:52:13.969923 systemd[1]: cri-containerd-c87f7080faba9c3472679d46121e90f5666b8a7b28a523974cedc5b51f5196fc.scope: Deactivated successfully.
Feb 9 19:52:13.972127 env[1114]: time="2024-02-09T19:52:13.972079723Z" level=info msg="StartContainer for \"c87f7080faba9c3472679d46121e90f5666b8a7b28a523974cedc5b51f5196fc\" returns successfully"
Feb 9 19:52:13.992271 env[1114]: time="2024-02-09T19:52:13.992221224Z" level=info msg="shim disconnected" id=c87f7080faba9c3472679d46121e90f5666b8a7b28a523974cedc5b51f5196fc
Feb 9 19:52:13.992271 env[1114]: time="2024-02-09T19:52:13.992263655Z" level=warning msg="cleaning up after shim disconnected" id=c87f7080faba9c3472679d46121e90f5666b8a7b28a523974cedc5b51f5196fc namespace=k8s.io
Feb 9 19:52:13.992271 env[1114]: time="2024-02-09T19:52:13.992272552Z" level=info msg="cleaning up dead shim"
Feb 9 19:52:13.997904 env[1114]: time="2024-02-09T19:52:13.997881664Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:52:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3399 runtime=io.containerd.runc.v2\n"
Feb 9 19:52:14.037515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c87f7080faba9c3472679d46121e90f5666b8a7b28a523974cedc5b51f5196fc-rootfs.mount: Deactivated successfully.
Feb 9 19:52:14.780913 kubelet[1385]: E0209 19:52:14.780878 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:52:14.926848 kubelet[1385]: E0209 19:52:14.926816 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:52:14.928718 env[1114]: time="2024-02-09T19:52:14.928671373Z" level=info msg="CreateContainer within sandbox \"75b769f2b3557a73f2852ed909c26c2e0250652eb1610e8402e295cf9aec5394\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 19:52:14.942881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount993246885.mount: Deactivated successfully.
Feb 9 19:52:14.943393 env[1114]: time="2024-02-09T19:52:14.943341558Z" level=info msg="CreateContainer within sandbox \"75b769f2b3557a73f2852ed909c26c2e0250652eb1610e8402e295cf9aec5394\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8e57b929861e33b9b92cda4ef484d3f5aecd0d8ba6e035735fc093df27e48eae\""
Feb 9 19:52:14.943824 env[1114]: time="2024-02-09T19:52:14.943792895Z" level=info msg="StartContainer for \"8e57b929861e33b9b92cda4ef484d3f5aecd0d8ba6e035735fc093df27e48eae\""
Feb 9 19:52:14.958212 systemd[1]: Started cri-containerd-8e57b929861e33b9b92cda4ef484d3f5aecd0d8ba6e035735fc093df27e48eae.scope.
Feb 9 19:52:14.983206 env[1114]: time="2024-02-09T19:52:14.983122232Z" level=info msg="StartContainer for \"8e57b929861e33b9b92cda4ef484d3f5aecd0d8ba6e035735fc093df27e48eae\" returns successfully"
Feb 9 19:52:15.229208 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 9 19:52:15.459732 kubelet[1385]: W0209 19:52:15.459690 1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53b2d3fb_fb0e_4cf6_9232_86de3140d80a.slice/cri-containerd-efda58d85194401ac328a166beeb99a0e3a75d8cbe15485646062df080f4222c.scope WatchSource:0}: task efda58d85194401ac328a166beeb99a0e3a75d8cbe15485646062df080f4222c not found: not found
Feb 9 19:52:15.781289 kubelet[1385]: E0209 19:52:15.781257 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:52:15.931462 kubelet[1385]: E0209 19:52:15.931437 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:52:15.941450 kubelet[1385]: I0209 19:52:15.941426 1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-x8wkg" podStartSLOduration=5.941391239 podCreationTimestamp="2024-02-09 19:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:52:15.941064587 +0000 UTC m=+67.379222354" watchObservedRunningTime="2024-02-09 19:52:15.941391239 +0000 UTC m=+67.379548996"
Feb 9 19:52:16.781754 kubelet[1385]: E0209 19:52:16.781705 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:52:17.247912 kubelet[1385]: E0209 19:52:17.247880 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:52:17.286592 systemd[1]: run-containerd-runc-k8s.io-8e57b929861e33b9b92cda4ef484d3f5aecd0d8ba6e035735fc093df27e48eae-runc.S0Bf1Z.mount: Deactivated successfully.
Feb 9 19:52:17.597323 systemd-networkd[1015]: lxc_health: Link UP
Feb 9 19:52:17.608346 systemd-networkd[1015]: lxc_health: Gained carrier
Feb 9 19:52:17.609201 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:52:17.782662 kubelet[1385]: E0209 19:52:17.782601 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:52:18.565663 kubelet[1385]: W0209 19:52:18.565605 1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53b2d3fb_fb0e_4cf6_9232_86de3140d80a.slice/cri-containerd-a4f1321df0186f997dd24c4a0179ffd930d006191a7d422c518b61dc57b58c89.scope WatchSource:0}: task a4f1321df0186f997dd24c4a0179ffd930d006191a7d422c518b61dc57b58c89 not found: not found
Feb 9 19:52:18.654379 systemd-networkd[1015]: lxc_health: Gained IPv6LL
Feb 9 19:52:18.783663 kubelet[1385]: E0209 19:52:18.783606 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb
9 19:52:19.248208 kubelet[1385]: E0209 19:52:19.248165 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:52:19.783979 kubelet[1385]: E0209 19:52:19.783936 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:52:19.937559 kubelet[1385]: E0209 19:52:19.937534 1385 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:52:20.784260 kubelet[1385]: E0209 19:52:20.784197 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:52:21.673753 kubelet[1385]: W0209 19:52:21.673702 1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53b2d3fb_fb0e_4cf6_9232_86de3140d80a.slice/cri-containerd-f42511f1d161084f039c00deb1cf1f2d332ecafc0375ff3deaaf4f38d41cf7de.scope WatchSource:0}: task f42511f1d161084f039c00deb1cf1f2d332ecafc0375ff3deaaf4f38d41cf7de not found: not found Feb 9 19:52:21.784700 kubelet[1385]: E0209 19:52:21.784648 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:52:22.784938 kubelet[1385]: E0209 19:52:22.784902 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:52:23.535667 systemd[1]: run-containerd-runc-k8s.io-8e57b929861e33b9b92cda4ef484d3f5aecd0d8ba6e035735fc093df27e48eae-runc.u4SeMa.mount: Deactivated successfully. 
Feb 9 19:52:23.785092 kubelet[1385]: E0209 19:52:23.785038 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:52:24.780678 kubelet[1385]: W0209 19:52:24.780633 1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53b2d3fb_fb0e_4cf6_9232_86de3140d80a.slice/cri-containerd-c87f7080faba9c3472679d46121e90f5666b8a7b28a523974cedc5b51f5196fc.scope WatchSource:0}: task c87f7080faba9c3472679d46121e90f5666b8a7b28a523974cedc5b51f5196fc not found: not found Feb 9 19:52:24.785132 kubelet[1385]: E0209 19:52:24.785115 1385 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"