Feb 12 19:34:11.882807 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 19:34:11.882838 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 19:34:11.882852 kernel: BIOS-provided physical RAM map:
Feb 12 19:34:11.882860 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 12 19:34:11.882868 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 12 19:34:11.882875 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 12 19:34:11.882884 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 12 19:34:11.882893 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 12 19:34:11.882901 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 12 19:34:11.882911 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 12 19:34:11.882918 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Feb 12 19:34:11.882926 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb 12 19:34:11.882934 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 12 19:34:11.882943 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 12 19:34:11.882953 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 12 19:34:11.882963 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 12 19:34:11.882971 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 12 19:34:11.882980 kernel: NX (Execute Disable) protection: active
Feb 12 19:34:11.882988 kernel: e820: update [mem 0x9b3fa018-0x9b403c57] usable ==> usable
Feb 12 19:34:11.882997 kernel: e820: update [mem 0x9b3fa018-0x9b403c57] usable ==> usable
Feb 12 19:34:11.883005 kernel: e820: update [mem 0x9b3bd018-0x9b3f9e57] usable ==> usable
Feb 12 19:34:11.883013 kernel: e820: update [mem 0x9b3bd018-0x9b3f9e57] usable ==> usable
Feb 12 19:34:11.883026 kernel: extended physical RAM map:
Feb 12 19:34:11.883034 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 12 19:34:11.883043 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 12 19:34:11.883053 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 12 19:34:11.883062 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 12 19:34:11.883070 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 12 19:34:11.883079 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 12 19:34:11.883087 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 12 19:34:11.883095 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b3bd017] usable
Feb 12 19:34:11.883104 kernel: reserve setup_data: [mem 0x000000009b3bd018-0x000000009b3f9e57] usable
Feb 12 19:34:11.883112 kernel: reserve setup_data: [mem 0x000000009b3f9e58-0x000000009b3fa017] usable
Feb 12 19:34:11.883120 kernel: reserve setup_data: [mem 0x000000009b3fa018-0x000000009b403c57] usable
Feb 12 19:34:11.883129 kernel: reserve setup_data: [mem 0x000000009b403c58-0x000000009c8eefff] usable
Feb 12 19:34:11.883137 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb 12 19:34:11.883147 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 12 19:34:11.883156 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 12 19:34:11.883164 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 12 19:34:11.883173 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 12 19:34:11.883185 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 12 19:34:11.883194 kernel: efi: EFI v2.70 by EDK II
Feb 12 19:34:11.883203 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018
Feb 12 19:34:11.883213 kernel: random: crng init done
Feb 12 19:34:11.883222 kernel: SMBIOS 2.8 present.
Feb 12 19:34:11.883231 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Feb 12 19:34:11.883240 kernel: Hypervisor detected: KVM
Feb 12 19:34:11.883249 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 19:34:11.883258 kernel: kvm-clock: cpu 0, msr 5dfaa001, primary cpu clock
Feb 12 19:34:11.883267 kernel: kvm-clock: using sched offset of 5392462002 cycles
Feb 12 19:34:11.883277 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 19:34:11.883286 kernel: tsc: Detected 2794.750 MHz processor
Feb 12 19:34:11.883300 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 19:34:11.883310 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 19:34:11.883319 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Feb 12 19:34:11.883376 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 19:34:11.883386 kernel: Using GB pages for direct mapping
Feb 12 19:34:11.883396 kernel: Secure boot disabled
Feb 12 19:34:11.883405 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:34:11.883414 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 12 19:34:11.883424 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Feb 12 19:34:11.883444 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:34:11.883454 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:34:11.883463 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 12 19:34:11.883472 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:34:11.883482 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:34:11.883491 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:34:11.883500 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 12 19:34:11.883510 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Feb 12 19:34:11.883522 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Feb 12 19:34:11.883533 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 12 19:34:11.883542 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Feb 12 19:34:11.883552 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Feb 12 19:34:11.883561 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Feb 12 19:34:11.883571 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Feb 12 19:34:11.883580 kernel: No NUMA configuration found
Feb 12 19:34:11.883589 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Feb 12 19:34:11.883598 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Feb 12 19:34:11.883607 kernel: Zone ranges:
Feb 12 19:34:11.883618 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 19:34:11.883627 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Feb 12 19:34:11.883637 kernel: Normal empty
Feb 12 19:34:11.883646 kernel: Movable zone start for each node
Feb 12 19:34:11.883655 kernel: Early memory node ranges
Feb 12 19:34:11.883667 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 12 19:34:11.883677 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 12 19:34:11.883686 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 12 19:34:11.883696 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Feb 12 19:34:11.883706 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Feb 12 19:34:11.883716 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Feb 12 19:34:11.883725 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Feb 12 19:34:11.883734 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 19:34:11.883744 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 12 19:34:11.883753 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 12 19:34:11.883762 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 19:34:11.883771 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Feb 12 19:34:11.883781 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Feb 12 19:34:11.883792 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Feb 12 19:34:11.883801 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 12 19:34:11.883810 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 19:34:11.883819 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 12 19:34:11.883829 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 12 19:34:11.883838 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 19:34:11.883848 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 19:34:11.883857 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 19:34:11.883867 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 19:34:11.883878 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 19:34:11.883887 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 12 19:34:11.883896 kernel: TSC deadline timer available
Feb 12 19:34:11.883905 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 12 19:34:11.883914 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 12 19:34:11.883923 kernel: kvm-guest: setup PV sched yield
Feb 12 19:34:11.883935 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Feb 12 19:34:11.883944 kernel: Booting paravirtualized kernel on KVM
Feb 12 19:34:11.883954 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 19:34:11.883963 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb 12 19:34:11.883977 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb 12 19:34:11.883986 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb 12 19:34:11.884003 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 12 19:34:11.884014 kernel: kvm-guest: setup async PF for cpu 0
Feb 12 19:34:11.884023 kernel: kvm-guest: stealtime: cpu 0, msr 9b01c0c0
Feb 12 19:34:11.884033 kernel: kvm-guest: PV spinlocks enabled
Feb 12 19:34:11.884043 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 12 19:34:11.884053 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Feb 12 19:34:11.884063 kernel: Policy zone: DMA32
Feb 12 19:34:11.884074 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 19:34:11.884084 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:34:11.884095 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 19:34:11.884105 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 19:34:11.884115 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:34:11.884126 kernel: Memory: 2400512K/2567000K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 166228K reserved, 0K cma-reserved)
Feb 12 19:34:11.884137 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 12 19:34:11.884147 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 19:34:11.884156 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 19:34:11.884164 kernel: rcu: Hierarchical RCU implementation.
Feb 12 19:34:11.884174 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:34:11.884183 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 12 19:34:11.884193 kernel: Rude variant of Tasks RCU enabled.
Feb 12 19:34:11.884202 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 19:34:11.884212 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:34:11.884224 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 12 19:34:11.884234 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 12 19:34:11.884244 kernel: Console: colour dummy device 80x25
Feb 12 19:34:11.884254 kernel: printk: console [ttyS0] enabled
Feb 12 19:34:11.884263 kernel: ACPI: Core revision 20210730
Feb 12 19:34:11.884273 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 12 19:34:11.884283 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 19:34:11.884291 kernel: x2apic enabled
Feb 12 19:34:11.884301 kernel: Switched APIC routing to physical x2apic.
Feb 12 19:34:11.884310 kernel: kvm-guest: setup PV IPIs
Feb 12 19:34:11.884322 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 12 19:34:11.884347 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 12 19:34:11.884356 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 12 19:34:11.884365 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 12 19:34:11.884374 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 12 19:34:11.884383 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 12 19:34:11.884392 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 19:34:11.884401 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 19:34:11.884413 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 19:34:11.884422 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 19:34:11.884439 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 12 19:34:11.884449 kernel: RETBleed: Mitigation: untrained return thunk
Feb 12 19:34:11.884461 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 12 19:34:11.884470 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 12 19:34:11.884479 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 19:34:11.884489 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 19:34:11.884501 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 19:34:11.884514 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 12 19:34:11.884524 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 12 19:34:11.884533 kernel: Freeing SMP alternatives memory: 32K
Feb 12 19:34:11.884543 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:34:11.884552 kernel: LSM: Security Framework initializing
Feb 12 19:34:11.884562 kernel: SELinux: Initializing.
Feb 12 19:34:11.884572 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:34:11.884582 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:34:11.884592 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 12 19:34:11.884603 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 12 19:34:11.884613 kernel: ... version:                0
Feb 12 19:34:11.884623 kernel: ... bit width:              48
Feb 12 19:34:11.884632 kernel: ... generic registers:      6
Feb 12 19:34:11.884642 kernel: ... value mask:             0000ffffffffffff
Feb 12 19:34:11.884652 kernel: ... max period:             00007fffffffffff
Feb 12 19:34:11.884662 kernel: ... fixed-purpose events:   0
Feb 12 19:34:11.884671 kernel: ... event mask:             000000000000003f
Feb 12 19:34:11.884681 kernel: signal: max sigframe size: 1776
Feb 12 19:34:11.884692 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:34:11.884702 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:34:11.884711 kernel: x86: Booting SMP configuration:
Feb 12 19:34:11.884721 kernel: .... node #0, CPUs: #1
Feb 12 19:34:11.884731 kernel: kvm-clock: cpu 1, msr 5dfaa041, secondary cpu clock
Feb 12 19:34:11.884740 kernel: kvm-guest: setup async PF for cpu 1
Feb 12 19:34:11.884750 kernel: kvm-guest: stealtime: cpu 1, msr 9b09c0c0
Feb 12 19:34:11.884759 kernel: #2
Feb 12 19:34:11.884769 kernel: kvm-clock: cpu 2, msr 5dfaa081, secondary cpu clock
Feb 12 19:34:11.884779 kernel: kvm-guest: setup async PF for cpu 2
Feb 12 19:34:11.884790 kernel: kvm-guest: stealtime: cpu 2, msr 9b11c0c0
Feb 12 19:34:11.884800 kernel: #3
Feb 12 19:34:11.884809 kernel: kvm-clock: cpu 3, msr 5dfaa0c1, secondary cpu clock
Feb 12 19:34:11.884818 kernel: kvm-guest: setup async PF for cpu 3
Feb 12 19:34:11.884828 kernel: kvm-guest: stealtime: cpu 3, msr 9b19c0c0
Feb 12 19:34:11.884838 kernel: smp: Brought up 1 node, 4 CPUs
Feb 12 19:34:11.884847 kernel: smpboot: Max logical packages: 1
Feb 12 19:34:11.884857 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 12 19:34:11.884866 kernel: devtmpfs: initialized
Feb 12 19:34:11.884878 kernel: x86/mm: Memory block size: 128MB
Feb 12 19:34:11.884888 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 12 19:34:11.884897 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 12 19:34:11.884907 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Feb 12 19:34:11.884917 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 12 19:34:11.884930 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 12 19:34:11.884941 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:34:11.884950 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 12 19:34:11.884960 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:34:11.884972 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:34:11.884982 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:34:11.884991 kernel: audit: type=2000 audit(1707766452.028:1): state=initialized audit_enabled=0 res=1
Feb 12 19:34:11.885001 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:34:11.885010 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 19:34:11.885020 kernel: cpuidle: using governor menu
Feb 12 19:34:11.885030 kernel: ACPI: bus type PCI registered
Feb 12 19:34:11.885039 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:34:11.885049 kernel: dca service started, version 1.12.1
Feb 12 19:34:11.885061 kernel: PCI: Using configuration type 1 for base access
Feb 12 19:34:11.885070 kernel: PCI: Using configuration type 1 for extended access
Feb 12 19:34:11.885080 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 19:34:11.885090 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 19:34:11.885099 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:34:11.885109 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:34:11.885119 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:34:11.885128 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:34:11.885138 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:34:11.885149 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:34:11.885159 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:34:11.885168 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:34:11.885178 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:34:11.885188 kernel: ACPI: Interpreter enabled
Feb 12 19:34:11.885197 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 12 19:34:11.885207 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 19:34:11.885217 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 19:34:11.885226 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 12 19:34:11.885238 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 19:34:11.885438 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 19:34:11.885458 kernel: acpiphp: Slot [3] registered
Feb 12 19:34:11.885468 kernel: acpiphp: Slot [4] registered
Feb 12 19:34:11.885477 kernel: acpiphp: Slot [5] registered
Feb 12 19:34:11.885486 kernel: acpiphp: Slot [6] registered
Feb 12 19:34:11.885495 kernel: acpiphp: Slot [7] registered
Feb 12 19:34:11.885504 kernel: acpiphp: Slot [8] registered
Feb 12 19:34:11.885528 kernel: acpiphp: Slot [9] registered
Feb 12 19:34:11.885538 kernel: acpiphp: Slot [10] registered
Feb 12 19:34:11.885546 kernel: acpiphp: Slot [11] registered
Feb 12 19:34:11.885555 kernel: acpiphp: Slot [12] registered
Feb 12 19:34:11.885563 kernel: acpiphp: Slot [13] registered
Feb 12 19:34:11.885571 kernel: acpiphp: Slot [14] registered
Feb 12 19:34:11.885579 kernel: acpiphp: Slot [15] registered
Feb 12 19:34:11.885587 kernel: acpiphp: Slot [16] registered
Feb 12 19:34:11.885595 kernel: acpiphp: Slot [17] registered
Feb 12 19:34:11.885604 kernel: acpiphp: Slot [18] registered
Feb 12 19:34:11.885614 kernel: acpiphp: Slot [19] registered
Feb 12 19:34:11.885622 kernel: acpiphp: Slot [20] registered
Feb 12 19:34:11.885630 kernel: acpiphp: Slot [21] registered
Feb 12 19:34:11.885638 kernel: acpiphp: Slot [22] registered
Feb 12 19:34:11.885646 kernel: acpiphp: Slot [23] registered
Feb 12 19:34:11.885654 kernel: acpiphp: Slot [24] registered
Feb 12 19:34:11.885662 kernel: acpiphp: Slot [25] registered
Feb 12 19:34:11.885671 kernel: acpiphp: Slot [26] registered
Feb 12 19:34:11.885679 kernel: acpiphp: Slot [27] registered
Feb 12 19:34:11.885688 kernel: acpiphp: Slot [28] registered
Feb 12 19:34:11.885696 kernel: acpiphp: Slot [29] registered
Feb 12 19:34:11.885704 kernel: acpiphp: Slot [30] registered
Feb 12 19:34:11.885712 kernel: acpiphp: Slot [31] registered
Feb 12 19:34:11.885720 kernel: PCI host bridge to bus 0000:00
Feb 12 19:34:11.885831 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 12 19:34:11.885908 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 12 19:34:11.885982 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 19:34:11.886057 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb 12 19:34:11.886130 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Feb 12 19:34:11.886201 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 19:34:11.886309 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 19:34:11.886446 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 12 19:34:11.886561 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 12 19:34:11.886647 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Feb 12 19:34:11.886728 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 12 19:34:11.886808 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 12 19:34:11.886889 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 12 19:34:11.886969 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 12 19:34:11.887062 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 12 19:34:11.887145 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 12 19:34:11.887230 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 12 19:34:11.887364 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb 12 19:34:11.887462 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 12 19:34:11.887545 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Feb 12 19:34:11.887624 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 12 19:34:11.887704 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Feb 12 19:34:11.887786 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 12 19:34:11.887893 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 19:34:11.887982 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf]
Feb 12 19:34:11.888081 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 12 19:34:11.888178 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Feb 12 19:34:11.888278 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 12 19:34:11.888404 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 12 19:34:11.888493 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 12 19:34:11.888572 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Feb 12 19:34:11.888682 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb 12 19:34:11.888784 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 12 19:34:11.888886 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Feb 12 19:34:11.888990 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Feb 12 19:34:11.889490 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 12 19:34:11.889509 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 19:34:11.889522 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 19:34:11.889532 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 19:34:11.889541 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 19:34:11.889551 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 19:34:11.889561 kernel: iommu: Default domain type: Translated
Feb 12 19:34:11.889570 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 19:34:11.889689 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 12 19:34:11.889798 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 12 19:34:11.889921 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 12 19:34:11.889940 kernel: vgaarb: loaded
Feb 12 19:34:11.889949 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:34:11.889959 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 19:34:11.889968 kernel: PTP clock support registered
Feb 12 19:34:11.889978 kernel: Registered efivars operations
Feb 12 19:34:11.889987 kernel: PCI: Using ACPI for IRQ routing
Feb 12 19:34:11.889997 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 19:34:11.890006 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 12 19:34:11.890015 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Feb 12 19:34:11.890028 kernel: e820: reserve RAM buffer [mem 0x9b3bd018-0x9bffffff]
Feb 12 19:34:11.890038 kernel: e820: reserve RAM buffer [mem 0x9b3fa018-0x9bffffff]
Feb 12 19:34:11.890073 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Feb 12 19:34:11.890083 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Feb 12 19:34:11.890093 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 12 19:34:11.890103 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 12 19:34:11.890114 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 19:34:11.890138 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:34:11.890150 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:34:11.890164 kernel: pnp: PnP ACPI init
Feb 12 19:34:11.890376 kernel: pnp 00:02: [dma 2]
Feb 12 19:34:11.890409 kernel: pnp: PnP ACPI: found 6 devices
Feb 12 19:34:11.890419 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 19:34:11.890430 kernel: NET: Registered PF_INET protocol family
Feb 12 19:34:11.890454 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 19:34:11.890471 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 19:34:11.890481 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:34:11.890495 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 19:34:11.890504 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 19:34:11.890513 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 19:34:11.890523 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:34:11.890533 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:34:11.890543 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:34:11.890552 kernel: NET: Registered PF_XDP protocol family
Feb 12 19:34:11.890707 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 12 19:34:11.890866 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 12 19:34:11.891002 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 12 19:34:11.891127 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 12 19:34:11.891283 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 19:34:11.891455 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb 12 19:34:11.891598 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Feb 12 19:34:11.891731 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 12 19:34:11.891872 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 19:34:11.892025 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 12 19:34:11.892040 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:34:11.892063 kernel: Initialise system trusted keyrings
Feb 12 19:34:11.892074 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 19:34:11.892084 kernel: Key type asymmetric registered
Feb 12 19:34:11.892093 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:34:11.892103 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:34:11.892112 kernel: io scheduler mq-deadline registered
Feb 12 19:34:11.892125 kernel: io scheduler kyber registered
Feb 12 19:34:11.892135 kernel: io scheduler bfq registered
Feb 12 19:34:11.892144 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 19:34:11.892152 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 12 19:34:11.892159 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 12 19:34:11.892166 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 12 19:34:11.892173 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:34:11.892181 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 19:34:11.892188 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 19:34:11.892195 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 19:34:11.892205 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 19:34:11.892314 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb 12 19:34:11.892352 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 19:34:11.892429 kernel: rtc_cmos 00:05: registered as rtc0
Feb 12 19:34:11.892511 kernel: rtc_cmos 00:05: setting system clock to 2024-02-12T19:34:11 UTC (1707766451)
Feb 12 19:34:11.892578 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 12 19:34:11.892587 kernel: efifb: probing for efifb
Feb 12 19:34:11.892595 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 12 19:34:11.892602 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 12 19:34:11.892609 kernel: efifb: scrolling: redraw
Feb 12 19:34:11.892616 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 12 19:34:11.892624 kernel: Console: switching to colour frame buffer device 160x50
Feb 12 19:34:11.892632 kernel: fb0: EFI VGA frame buffer device
Feb 12 19:34:11.892644 kernel: pstore: Registered efi as persistent store backend
Feb 12 19:34:11.892653 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:34:11.892661 kernel: Segment Routing with IPv6
Feb 12 19:34:11.892668 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:34:11.892676 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:34:11.892683 kernel: Key type dns_resolver registered
Feb 12 19:34:11.892690 kernel: IPI shorthand broadcast: enabled
Feb 12 19:34:11.892697 kernel: sched_clock: Marking stable (448140789, 88353302)->(555351323, -18857232)
Feb 12 19:34:11.892705 kernel: registered taskstats version 1
Feb 12 19:34:11.892714 kernel: Loading compiled-in X.509 certificates
Feb 12 19:34:11.892722 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 19:34:11.892729 kernel: Key type .fscrypt registered
Feb 12 19:34:11.892737 kernel: Key type fscrypt-provisioning registered
Feb 12 19:34:11.892744 kernel: pstore: Using crash dump compression: deflate
Feb 12 19:34:11.892751 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:34:11.892759 kernel: ima: Allocated hash algorithm: sha1 Feb 12 19:34:11.892766 kernel: ima: No architecture policies found Feb 12 19:34:11.892773 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 12 19:34:11.892782 kernel: Write protecting the kernel read-only data: 28672k Feb 12 19:34:11.892790 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 12 19:34:11.892798 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 12 19:34:11.892805 kernel: Run /init as init process Feb 12 19:34:11.892812 kernel: with arguments: Feb 12 19:34:11.892819 kernel: /init Feb 12 19:34:11.892826 kernel: with environment: Feb 12 19:34:11.892833 kernel: HOME=/ Feb 12 19:34:11.892840 kernel: TERM=linux Feb 12 19:34:11.892849 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 12 19:34:11.892858 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:34:11.892868 systemd[1]: Detected virtualization kvm. Feb 12 19:34:11.892876 systemd[1]: Detected architecture x86-64. Feb 12 19:34:11.892883 systemd[1]: Running in initrd. Feb 12 19:34:11.892891 systemd[1]: No hostname configured, using default hostname. Feb 12 19:34:11.892898 systemd[1]: Hostname set to <localhost>. Feb 12 19:34:11.892907 systemd[1]: Initializing machine ID from VM UUID. Feb 12 19:34:11.892915 systemd[1]: Queued start job for default target initrd.target. Feb 12 19:34:11.892923 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:34:11.892930 systemd[1]: Reached target cryptsetup.target. Feb 12 19:34:11.892938 systemd[1]: Reached target paths.target. Feb 12 19:34:11.892945 systemd[1]: Reached target slices.target. Feb 12 19:34:11.892953 systemd[1]: Reached target swap.target. 
Feb 12 19:34:11.892960 systemd[1]: Reached target timers.target. Feb 12 19:34:11.892970 systemd[1]: Listening on iscsid.socket. Feb 12 19:34:11.892978 systemd[1]: Listening on iscsiuio.socket. Feb 12 19:34:11.892985 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 19:34:11.892993 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 19:34:11.893001 systemd[1]: Listening on systemd-journald.socket. Feb 12 19:34:11.893009 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:34:11.893016 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:34:11.893024 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:34:11.893031 systemd[1]: Reached target sockets.target. Feb 12 19:34:11.893040 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:34:11.893048 systemd[1]: Finished network-cleanup.service. Feb 12 19:34:11.893056 systemd[1]: Starting systemd-fsck-usr.service... Feb 12 19:34:11.893064 systemd[1]: Starting systemd-journald.service... Feb 12 19:34:11.893071 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:34:11.893079 systemd[1]: Starting systemd-resolved.service... Feb 12 19:34:11.893087 systemd[1]: Starting systemd-vconsole-setup.service... Feb 12 19:34:11.893094 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:34:11.893102 systemd[1]: Finished systemd-fsck-usr.service. Feb 12 19:34:11.893111 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:34:11.893119 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 19:34:11.893126 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 19:34:11.893134 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:34:11.893142 kernel: audit: type=1130 audit(1707766451.884:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:34:11.893153 systemd-journald[198]: Journal started Feb 12 19:34:11.893193 systemd-journald[198]: Runtime Journal (/run/log/journal/e25864dacb6042cfad1be58fc94297f2) is 6.0M, max 48.4M, 42.4M free. Feb 12 19:34:11.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:11.879714 systemd-modules-load[199]: Inserted module 'overlay' Feb 12 19:34:11.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:11.894375 systemd[1]: Started systemd-journald.service. Feb 12 19:34:11.894403 kernel: audit: type=1130 audit(1707766451.893:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:11.897030 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 19:34:11.901291 kernel: audit: type=1130 audit(1707766451.897:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:11.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:11.900170 systemd[1]: Starting dracut-cmdline.service... Feb 12 19:34:11.904724 systemd-resolved[200]: Positive Trust Anchors: Feb 12 19:34:11.904739 systemd-resolved[200]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:34:11.904768 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:34:11.906912 systemd-resolved[200]: Defaulting to hostname 'linux'. Feb 12 19:34:11.914739 kernel: audit: type=1130 audit(1707766451.911:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:11.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:11.914790 dracut-cmdline[216]: dracut-dracut-053 Feb 12 19:34:11.914790 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 19:34:11.907740 systemd[1]: Started systemd-resolved.service. Feb 12 19:34:11.912000 systemd[1]: Reached target nss-lookup.target. Feb 12 19:34:11.924364 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Feb 12 19:34:11.926070 systemd-modules-load[199]: Inserted module 'br_netfilter' Feb 12 19:34:11.926963 kernel: Bridge firewalling registered Feb 12 19:34:11.941351 kernel: SCSI subsystem initialized Feb 12 19:34:11.951465 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 19:34:11.951490 kernel: device-mapper: uevent: version 1.0.3 Feb 12 19:34:11.952367 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 19:34:11.955051 systemd-modules-load[199]: Inserted module 'dm_multipath' Feb 12 19:34:11.956293 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:34:11.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:11.958219 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:34:11.960424 kernel: audit: type=1130 audit(1707766451.957:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:11.965344 kernel: Loading iSCSI transport class v2.0-870. Feb 12 19:34:11.965438 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:34:11.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:11.969346 kernel: audit: type=1130 audit(1707766451.965:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:34:11.975343 kernel: iscsi: registered transport (tcp) Feb 12 19:34:11.994351 kernel: iscsi: registered transport (qla4xxx) Feb 12 19:34:11.994383 kernel: QLogic iSCSI HBA Driver Feb 12 19:34:12.023782 systemd[1]: Finished dracut-cmdline.service. Feb 12 19:34:12.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:12.024703 systemd[1]: Starting dracut-pre-udev.service... Feb 12 19:34:12.027650 kernel: audit: type=1130 audit(1707766452.023:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:12.069384 kernel: raid6: avx2x4 gen() 27777 MB/s Feb 12 19:34:12.086352 kernel: raid6: avx2x4 xor() 5950 MB/s Feb 12 19:34:12.103356 kernel: raid6: avx2x2 gen() 32077 MB/s Feb 12 19:34:12.130352 kernel: raid6: avx2x2 xor() 19296 MB/s Feb 12 19:34:12.147354 kernel: raid6: avx2x1 gen() 26701 MB/s Feb 12 19:34:12.164354 kernel: raid6: avx2x1 xor() 15331 MB/s Feb 12 19:34:12.181356 kernel: raid6: sse2x4 gen() 14824 MB/s Feb 12 19:34:12.198357 kernel: raid6: sse2x4 xor() 7220 MB/s Feb 12 19:34:12.215356 kernel: raid6: sse2x2 gen() 16309 MB/s Feb 12 19:34:12.232351 kernel: raid6: sse2x2 xor() 9859 MB/s Feb 12 19:34:12.250355 kernel: raid6: sse2x1 gen() 12392 MB/s Feb 12 19:34:12.267793 kernel: raid6: sse2x1 xor() 7824 MB/s Feb 12 19:34:12.267814 kernel: raid6: using algorithm avx2x2 gen() 32077 MB/s Feb 12 19:34:12.267824 kernel: raid6: .... 
xor() 19296 MB/s, rmw enabled Feb 12 19:34:12.267832 kernel: raid6: using avx2x2 recovery algorithm Feb 12 19:34:12.279358 kernel: xor: automatically using best checksumming function avx Feb 12 19:34:12.375379 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 19:34:12.384483 systemd[1]: Finished dracut-pre-udev.service. Feb 12 19:34:12.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:12.385000 audit: BPF prog-id=7 op=LOAD Feb 12 19:34:12.387941 systemd[1]: Starting systemd-udevd.service... Feb 12 19:34:12.389173 kernel: audit: type=1130 audit(1707766452.384:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:12.389196 kernel: audit: type=1334 audit(1707766452.385:10): prog-id=7 op=LOAD Feb 12 19:34:12.387000 audit: BPF prog-id=8 op=LOAD Feb 12 19:34:12.400287 systemd-udevd[400]: Using default interface naming scheme 'v252'. Feb 12 19:34:12.404227 systemd[1]: Started systemd-udevd.service. Feb 12 19:34:12.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:12.427530 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 19:34:12.436612 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation Feb 12 19:34:12.459502 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 19:34:12.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:12.460942 systemd[1]: Starting systemd-udev-trigger.service... 
Feb 12 19:34:12.499691 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:34:12.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:12.521383 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 12 19:34:12.525401 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 19:34:12.525437 kernel: GPT:9289727 != 19775487 Feb 12 19:34:12.525446 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 19:34:12.525455 kernel: GPT:9289727 != 19775487 Feb 12 19:34:12.525463 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 19:34:12.525472 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:34:12.531360 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 19:34:12.543848 kernel: libata version 3.00 loaded. Feb 12 19:34:12.543880 kernel: AVX2 version of gcm_enc/dec engaged. Feb 12 19:34:12.545811 kernel: AES CTR mode by8 optimization enabled Feb 12 19:34:12.546348 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 12 19:34:12.546511 kernel: scsi host0: ata_piix Feb 12 19:34:12.550730 kernel: scsi host1: ata_piix Feb 12 19:34:12.552917 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 12 19:34:12.552940 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 12 19:34:12.562348 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (452) Feb 12 19:34:12.564061 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 19:34:12.575813 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:34:12.618003 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 19:34:12.621054 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
Feb 12 19:34:12.621888 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 19:34:12.624114 systemd[1]: Starting disk-uuid.service... Feb 12 19:34:12.708362 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 12 19:34:12.708403 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 12 19:34:12.727242 disk-uuid[519]: Primary Header is updated. Feb 12 19:34:12.727242 disk-uuid[519]: Secondary Entries is updated. Feb 12 19:34:12.727242 disk-uuid[519]: Secondary Header is updated. Feb 12 19:34:12.730351 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:34:12.733354 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:34:12.739352 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 12 19:34:12.739522 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 12 19:34:12.755353 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 12 19:34:13.734121 disk-uuid[532]: The operation has completed successfully. Feb 12 19:34:13.735278 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:34:13.756774 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 19:34:13.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:13.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:13.756853 systemd[1]: Finished disk-uuid.service. Feb 12 19:34:13.762934 systemd[1]: Starting verity-setup.service... Feb 12 19:34:13.774349 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 12 19:34:13.791173 systemd[1]: Found device dev-mapper-usr.device. Feb 12 19:34:13.793454 systemd[1]: Mounting sysusr-usr.mount... Feb 12 19:34:13.795395 systemd[1]: Finished verity-setup.service. 
Feb 12 19:34:13.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:13.850232 systemd[1]: Mounted sysusr-usr.mount. Feb 12 19:34:13.851352 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 19:34:13.850907 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 19:34:13.851633 systemd[1]: Starting ignition-setup.service... Feb 12 19:34:13.853310 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 19:34:13.859800 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:34:13.859837 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:34:13.859851 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:34:13.868274 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 19:34:13.876246 systemd[1]: Finished ignition-setup.service. Feb 12 19:34:13.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:13.877610 systemd[1]: Starting ignition-fetch-offline.service... 
Feb 12 19:34:13.914219 ignition[628]: Ignition 2.14.0 Feb 12 19:34:13.914230 ignition[628]: Stage: fetch-offline Feb 12 19:34:13.914315 ignition[628]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:34:13.914324 ignition[628]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:34:13.914447 ignition[628]: parsed url from cmdline: "" Feb 12 19:34:13.914450 ignition[628]: no config URL provided Feb 12 19:34:13.914454 ignition[628]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:34:13.914462 ignition[628]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:34:13.918698 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 19:34:13.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:13.920000 audit: BPF prog-id=9 op=LOAD Feb 12 19:34:13.914488 ignition[628]: op(1): [started] loading QEMU firmware config module Feb 12 19:34:13.914492 ignition[628]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 12 19:34:13.921315 systemd[1]: Starting systemd-networkd.service... Feb 12 19:34:13.918202 ignition[628]: op(1): [finished] loading QEMU firmware config module Feb 12 19:34:13.931878 ignition[628]: parsing config with SHA512: f79c49a070a0abb9c4817650f980ff83d281513cfd9084a257ac7c8cdf4fdf23d3c0c4894f8d7b7695b7d3d8b6022208d9f242da6b9cdb230be8f9d6e1b2ef8f Feb 12 19:34:13.950901 systemd-networkd[713]: lo: Link UP Feb 12 19:34:13.950912 systemd-networkd[713]: lo: Gained carrier Feb 12 19:34:13.952534 systemd-networkd[713]: Enumeration completed Feb 12 19:34:13.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:13.952641 systemd[1]: Started systemd-networkd.service. 
Feb 12 19:34:13.953485 systemd[1]: Reached target network.target. Feb 12 19:34:13.954150 systemd-networkd[713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:34:13.955047 systemd-networkd[713]: eth0: Link UP Feb 12 19:34:13.955051 systemd-networkd[713]: eth0: Gained carrier Feb 12 19:34:13.955245 systemd[1]: Starting iscsiuio.service... Feb 12 19:34:13.959430 systemd[1]: Started iscsiuio.service. Feb 12 19:34:13.961548 unknown[628]: fetched base config from "system" Feb 12 19:34:13.961562 unknown[628]: fetched user config from "qemu" Feb 12 19:34:13.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:13.962084 ignition[628]: fetch-offline: fetch-offline passed Feb 12 19:34:13.962146 ignition[628]: Ignition finished successfully Feb 12 19:34:13.965130 systemd[1]: Starting iscsid.service... Feb 12 19:34:13.966724 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 19:34:13.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:13.968297 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 12 19:34:13.969703 iscsid[718]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:34:13.969703 iscsid[718]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 19:34:13.969703 iscsid[718]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 19:34:13.969703 iscsid[718]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 19:34:13.969703 iscsid[718]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:34:13.969703 iscsid[718]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 19:34:13.971409 systemd-networkd[713]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 19:34:13.978770 systemd[1]: Starting ignition-kargs.service... Feb 12 19:34:13.980032 systemd[1]: Started iscsid.service. Feb 12 19:34:13.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:13.981960 systemd[1]: Starting dracut-initqueue.service... Feb 12 19:34:13.988465 ignition[719]: Ignition 2.14.0 Feb 12 19:34:13.988736 ignition[719]: Stage: kargs Feb 12 19:34:13.988848 ignition[719]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:34:13.988861 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:34:13.990234 ignition[719]: kargs: kargs passed Feb 12 19:34:13.990286 ignition[719]: Ignition finished successfully Feb 12 19:34:13.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:13.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:13.992648 systemd[1]: Finished ignition-kargs.service. 
Feb 12 19:34:13.993540 systemd[1]: Finished dracut-initqueue.service. Feb 12 19:34:13.994608 systemd[1]: Reached target remote-fs-pre.target. Feb 12 19:34:13.995885 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:34:13.996639 systemd[1]: Reached target remote-fs.target. Feb 12 19:34:13.998247 systemd[1]: Starting dracut-pre-mount.service... Feb 12 19:34:14.000046 systemd[1]: Starting ignition-disks.service... Feb 12 19:34:14.005427 systemd[1]: Finished dracut-pre-mount.service. Feb 12 19:34:14.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:14.007179 ignition[734]: Ignition 2.14.0 Feb 12 19:34:14.007188 ignition[734]: Stage: disks Feb 12 19:34:14.007282 ignition[734]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:34:14.007292 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:34:14.008110 ignition[734]: disks: disks passed Feb 12 19:34:14.009005 systemd[1]: Finished ignition-disks.service. Feb 12 19:34:14.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:14.008146 ignition[734]: Ignition finished successfully Feb 12 19:34:14.009649 systemd[1]: Reached target initrd-root-device.target. Feb 12 19:34:14.010477 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:34:14.012061 systemd[1]: Reached target local-fs.target. Feb 12 19:34:14.013132 systemd[1]: Reached target sysinit.target. Feb 12 19:34:14.014245 systemd[1]: Reached target basic.target. Feb 12 19:34:14.016600 systemd[1]: Starting systemd-fsck-root.service... Feb 12 19:34:14.028115 systemd-fsck[746]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 12 19:34:14.062119 systemd[1]: Finished systemd-fsck-root.service. 
Feb 12 19:34:14.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:14.063847 systemd[1]: Mounting sysroot.mount... Feb 12 19:34:14.069354 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 19:34:14.069753 systemd[1]: Mounted sysroot.mount. Feb 12 19:34:14.070268 systemd[1]: Reached target initrd-root-fs.target. Feb 12 19:34:14.071919 systemd[1]: Mounting sysroot-usr.mount... Feb 12 19:34:14.072635 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 19:34:14.072666 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 19:34:14.072685 systemd[1]: Reached target ignition-diskful.target. Feb 12 19:34:14.074129 systemd[1]: Mounted sysroot-usr.mount. Feb 12 19:34:14.075499 systemd[1]: Starting initrd-setup-root.service... Feb 12 19:34:14.080118 initrd-setup-root[756]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 19:34:14.083501 initrd-setup-root[764]: cut: /sysroot/etc/group: No such file or directory Feb 12 19:34:14.086236 initrd-setup-root[772]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 19:34:14.089712 initrd-setup-root[780]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 19:34:14.111704 systemd[1]: Finished initrd-setup-root.service. Feb 12 19:34:14.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:14.112485 systemd[1]: Starting ignition-mount.service... Feb 12 19:34:14.113941 systemd[1]: Starting sysroot-boot.service... Feb 12 19:34:14.118389 bash[797]: umount: /sysroot/usr/share/oem: not mounted. 
Feb 12 19:34:14.126561 ignition[799]: INFO : Ignition 2.14.0 Feb 12 19:34:14.126561 ignition[799]: INFO : Stage: mount Feb 12 19:34:14.127966 ignition[799]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 19:34:14.127966 ignition[799]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:34:14.127966 ignition[799]: INFO : mount: mount passed Feb 12 19:34:14.127966 ignition[799]: INFO : Ignition finished successfully Feb 12 19:34:14.131640 systemd[1]: Finished ignition-mount.service. Feb 12 19:34:14.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:14.132644 systemd[1]: Finished sysroot-boot.service. Feb 12 19:34:14.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:14.801795 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:34:14.808608 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (807) Feb 12 19:34:14.808640 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:34:14.808655 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:34:14.809696 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:34:14.812626 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:34:14.813685 systemd[1]: Starting ignition-files.service... 
Feb 12 19:34:14.827570 ignition[827]: INFO : Ignition 2.14.0
Feb 12 19:34:14.827570 ignition[827]: INFO : Stage: files
Feb 12 19:34:14.828832 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:34:14.828832 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:34:14.830620 ignition[827]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 19:34:14.830620 ignition[827]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 19:34:14.830620 ignition[827]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 19:34:14.833385 ignition[827]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 19:34:14.833385 ignition[827]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 19:34:14.833385 ignition[827]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 19:34:14.833157 unknown[827]: wrote ssh authorized keys file for user: core
Feb 12 19:34:14.837480 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 19:34:14.837480 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 19:34:14.837480 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 19:34:14.837480 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 12 19:34:15.170490 systemd-networkd[713]: eth0: Gained IPv6LL
Feb 12 19:34:15.181136 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 12 19:34:15.294420 ignition[827]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 12 19:34:15.296789 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 19:34:15.296789 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 19:34:15.296789 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 12 19:34:15.594185 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 12 19:34:15.667244 ignition[827]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 12 19:34:15.669606 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 19:34:15.669606 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:34:15.672493 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 12 19:34:15.783253 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 12 19:34:16.090915 ignition[827]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 12 19:34:16.090915 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:34:16.094160 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:34:16.094160 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 12 19:34:16.138727 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 19:34:16.533726 ignition[827]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 12 19:34:16.533726 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:34:16.537391 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 19:34:16.537391 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 19:34:16.537391 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:34:16.537391 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:34:16.537391 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:34:16.537391 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:34:16.537391 ignition[827]: INFO : files: op(b): [started] processing unit "containerd.service"
Feb 12 19:34:16.537391 ignition[827]: INFO : files: op(b): op(c): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 19:34:16.537391 ignition[827]: INFO : files: op(b): op(c): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 19:34:16.537391 ignition[827]: INFO : files: op(b): [finished] processing unit "containerd.service"
Feb 12 19:34:16.537391 ignition[827]: INFO : files: op(d): [started] processing unit "prepare-cni-plugins.service"
Feb 12 19:34:16.537391 ignition[827]: INFO : files: op(d): op(e): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:34:16.537391 ignition[827]: INFO : files: op(d): op(e): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:34:16.537391 ignition[827]: INFO : files: op(d): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 19:34:16.537391 ignition[827]: INFO : files: op(f): [started] processing unit "prepare-critools.service"
Feb 12 19:34:16.537391 ignition[827]: INFO : files: op(f): op(10): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:34:16.537391 ignition[827]: INFO : files: op(f): op(10): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:34:16.562646 ignition[827]: INFO : files: op(f): [finished] processing unit "prepare-critools.service"
Feb 12 19:34:16.562646 ignition[827]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Feb 12 19:34:16.562646 ignition[827]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 19:34:16.562646 ignition[827]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 19:34:16.562646 ignition[827]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Feb 12 19:34:16.562646 ignition[827]: INFO : files: op(13): [started] setting preset to enabled for "prepare-critools.service"
Feb 12 19:34:16.562646 ignition[827]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 19:34:16.562646 ignition[827]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service"
Feb 12 19:34:16.562646 ignition[827]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 19:34:16.562646 ignition[827]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 19:34:16.562646 ignition[827]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 12 19:34:16.562646 ignition[827]: INFO : files: op(16): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:34:16.562646 ignition[827]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:34:16.562646 ignition[827]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:34:16.562646 ignition[827]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:34:16.562646 ignition[827]: INFO : files: files passed
Feb 12 19:34:16.562646 ignition[827]: INFO : Ignition finished successfully
Feb 12 19:34:16.592894 kernel: kauditd_printk_skb: 22 callbacks suppressed
Feb 12 19:34:16.592920 kernel: audit: type=1130 audit(1707766456.563:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.592932 kernel: audit: type=1130 audit(1707766456.572:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.592942 kernel: audit: type=1131 audit(1707766456.572:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.592951 kernel: audit: type=1130 audit(1707766456.578:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.562771 systemd[1]: Finished ignition-files.service.
Feb 12 19:34:16.565202 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 19:34:16.594656 initrd-setup-root-after-ignition[852]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 12 19:34:16.568376 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 19:34:16.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.597942 initrd-setup-root-after-ignition[854]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 19:34:16.604950 kernel: audit: type=1130 audit(1707766456.597:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.604977 kernel: audit: type=1131 audit(1707766456.597:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.568951 systemd[1]: Starting ignition-quench.service...
Feb 12 19:34:16.572155 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 19:34:16.572250 systemd[1]: Finished ignition-quench.service.
Feb 12 19:34:16.573138 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 19:34:16.578684 systemd[1]: Reached target ignition-complete.target.
Feb 12 19:34:16.583174 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 19:34:16.595713 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 19:34:16.595794 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 19:34:16.597857 systemd[1]: Reached target initrd-fs.target.
Feb 12 19:34:16.602464 systemd[1]: Reached target initrd.target.
Feb 12 19:34:16.602728 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 19:34:16.603776 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 19:34:16.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.613680 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 19:34:16.615600 systemd[1]: Starting initrd-cleanup.service...
Feb 12 19:34:16.618812 kernel: audit: type=1130 audit(1707766456.614:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.623997 systemd[1]: Stopped target nss-lookup.target.
Feb 12 19:34:16.624739 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 19:34:16.625438 systemd[1]: Stopped target timers.target.
Feb 12 19:34:16.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.626482 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 19:34:16.631274 kernel: audit: type=1131 audit(1707766456.627:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.626576 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 19:34:16.627724 systemd[1]: Stopped target initrd.target.
Feb 12 19:34:16.630651 systemd[1]: Stopped target basic.target.
Feb 12 19:34:16.631879 systemd[1]: Stopped target ignition-complete.target.
Feb 12 19:34:16.632452 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 19:34:16.632671 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 19:34:16.633001 systemd[1]: Stopped target remote-fs.target.
Feb 12 19:34:16.633225 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 19:34:16.633486 systemd[1]: Stopped target sysinit.target.
Feb 12 19:34:16.637879 systemd[1]: Stopped target local-fs.target.
Feb 12 19:34:16.638412 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 19:34:16.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.638724 systemd[1]: Stopped target swap.target.
Feb 12 19:34:16.644925 kernel: audit: type=1131 audit(1707766456.641:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.638917 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 19:34:16.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.639002 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 19:34:16.649019 kernel: audit: type=1131 audit(1707766456.645:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.641627 systemd[1]: Stopped target cryptsetup.target.
Feb 12 19:34:16.644511 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 19:34:16.644598 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 19:34:16.645549 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 19:34:16.645688 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 19:34:16.648660 systemd[1]: Stopped target paths.target.
Feb 12 19:34:16.649557 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 19:34:16.653369 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 19:34:16.653635 systemd[1]: Stopped target slices.target.
Feb 12 19:34:16.653836 systemd[1]: Stopped target sockets.target.
Feb 12 19:34:16.654070 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 19:34:16.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.654171 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 19:34:16.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.656566 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 19:34:16.656653 systemd[1]: Stopped ignition-files.service.
Feb 12 19:34:16.659597 systemd[1]: Stopping ignition-mount.service...
Feb 12 19:34:16.661020 iscsid[718]: iscsid shutting down.
Feb 12 19:34:16.661009 systemd[1]: Stopping iscsid.service...
Feb 12 19:34:16.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.661449 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 19:34:16.661582 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 19:34:16.665288 systemd[1]: Stopping sysroot-boot.service...
Feb 12 19:34:16.665415 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 19:34:16.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.665575 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 19:34:16.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.666985 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 19:34:16.667095 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 19:34:16.670127 systemd[1]: iscsid.service: Deactivated successfully.
Feb 12 19:34:16.670228 systemd[1]: Stopped iscsid.service.
Feb 12 19:34:16.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.672293 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 19:34:16.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.672402 systemd[1]: Finished initrd-cleanup.service.
Feb 12 19:34:16.673918 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 19:34:16.673945 systemd[1]: Closed iscsid.socket.
Feb 12 19:34:16.674859 systemd[1]: Stopping iscsiuio.service...
Feb 12 19:34:16.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.677542 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 12 19:34:16.677636 systemd[1]: Stopped iscsiuio.service.
Feb 12 19:34:16.678261 systemd[1]: Stopped target network.target.
Feb 12 19:34:16.678835 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 19:34:16.678864 systemd[1]: Closed iscsiuio.socket.
Feb 12 19:34:16.680158 systemd[1]: Stopping systemd-networkd.service...
Feb 12 19:34:16.681184 systemd[1]: Stopping systemd-resolved.service...
Feb 12 19:34:16.683090 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 12 19:34:16.685691 ignition[868]: INFO : Ignition 2.14.0
Feb 12 19:34:16.685691 ignition[868]: INFO : Stage: umount
Feb 12 19:34:16.685691 ignition[868]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:34:16.685691 ignition[868]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:34:16.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.690436 ignition[868]: INFO : umount: umount passed
Feb 12 19:34:16.690436 ignition[868]: INFO : Ignition finished successfully
Feb 12 19:34:16.686378 systemd-networkd[713]: eth0: DHCPv6 lease lost
Feb 12 19:34:16.686982 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 19:34:16.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.693000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 19:34:16.687084 systemd[1]: Stopped ignition-mount.service.
Feb 12 19:34:16.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.687456 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 19:34:16.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.687530 systemd[1]: Stopped systemd-networkd.service.
Feb 12 19:34:16.690579 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 19:34:16.690620 systemd[1]: Closed systemd-networkd.socket.
Feb 12 19:34:16.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.692058 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 19:34:16.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.692101 systemd[1]: Stopped ignition-disks.service.
Feb 12 19:34:16.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.693101 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 19:34:16.693133 systemd[1]: Stopped ignition-kargs.service.
Feb 12 19:34:16.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.694233 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 19:34:16.694266 systemd[1]: Stopped ignition-setup.service.
Feb 12 19:34:16.696010 systemd[1]: Stopping network-cleanup.service...
Feb 12 19:34:16.696982 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 19:34:16.697025 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 19:34:16.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.698280 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 19:34:16.698324 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 19:34:16.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.709000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 19:34:16.699367 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 19:34:16.699405 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 19:34:16.700652 systemd[1]: Stopping systemd-udevd.service...
Feb 12 19:34:16.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.703229 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 12 19:34:16.703636 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 19:34:16.703718 systemd[1]: Stopped systemd-resolved.service.
Feb 12 19:34:16.706298 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 19:34:16.706857 systemd[1]: Stopped sysroot-boot.service.
Feb 12 19:34:16.709083 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 19:34:16.709132 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 19:34:16.711666 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 19:34:16.711749 systemd[1]: Stopped network-cleanup.service.
Feb 12 19:34:16.719767 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 19:34:16.719881 systemd[1]: Stopped systemd-udevd.service.
Feb 12 19:34:16.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.721546 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 19:34:16.721590 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 19:34:16.721761 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 19:34:16.721786 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 19:34:16.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.721958 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 19:34:16.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.721987 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 19:34:16.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.722207 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 19:34:16.722235 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 19:34:16.722605 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 19:34:16.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.722634 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 19:34:16.728418 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 19:34:16.729834 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 19:34:16.729876 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 19:34:16.741576 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 19:34:16.741672 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 19:34:16.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:34:16.742717 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 19:34:16.745526 systemd[1]: Starting initrd-switch-root.service...
Feb 12 19:34:16.752283 systemd[1]: Switching root.
Feb 12 19:34:16.753000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 19:34:16.753000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 19:34:16.753000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 19:34:16.756000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 19:34:16.756000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 19:34:16.775230 systemd-journald[198]: Journal stopped
Feb 12 19:34:19.585938 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
Feb 12 19:34:19.585992 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 12 19:34:19.586009 kernel: SELinux: Class anon_inode not defined in policy.
Feb 12 19:34:19.586021 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 19:34:19.586031 kernel: SELinux: policy capability network_peer_controls=1
Feb 12 19:34:19.586040 kernel: SELinux: policy capability open_perms=1
Feb 12 19:34:19.586050 kernel: SELinux: policy capability extended_socket_class=1
Feb 12 19:34:19.586059 kernel: SELinux: policy capability always_check_network=0
Feb 12 19:34:19.586069 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 12 19:34:19.586080 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 12 19:34:19.586090 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 12 19:34:19.586100 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 12 19:34:19.586110 systemd[1]: Successfully loaded SELinux policy in 38.302ms.
Feb 12 19:34:19.586127 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.777ms.
Feb 12 19:34:19.586138 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:34:19.586709 systemd[1]: Detected virtualization kvm.
Feb 12 19:34:19.586731 systemd[1]: Detected architecture x86-64.
Feb 12 19:34:19.586743 systemd[1]: Detected first boot.
Feb 12 19:34:19.586754 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 19:34:19.586767 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 19:34:19.586780 systemd[1]: Populated /etc with preset unit settings.
Feb 12 19:34:19.586791 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:34:19.586802 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:34:19.586813 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:34:19.586825 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 19:34:19.586835 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 12 19:34:19.586847 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 19:34:19.586865 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 19:34:19.586880 systemd[1]: Created slice system-getty.slice.
Feb 12 19:34:19.586894 systemd[1]: Created slice system-modprobe.slice.
Feb 12 19:34:19.586907 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 19:34:19.586922 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 19:34:19.586936 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 19:34:19.586948 systemd[1]: Created slice user.slice.
Feb 12 19:34:19.586962 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:34:19.586977 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 19:34:19.586991 systemd[1]: Set up automount boot.automount.
Feb 12 19:34:19.587004 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 19:34:19.587018 systemd[1]: Reached target integritysetup.target. Feb 12 19:34:19.587036 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:34:19.587046 systemd[1]: Reached target remote-fs.target. Feb 12 19:34:19.587058 systemd[1]: Reached target slices.target. Feb 12 19:34:19.587072 systemd[1]: Reached target swap.target. Feb 12 19:34:19.587086 systemd[1]: Reached target torcx.target. Feb 12 19:34:19.587103 systemd[1]: Reached target veritysetup.target. Feb 12 19:34:19.587116 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:34:19.587132 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:34:19.587144 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 19:34:19.587154 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 19:34:19.587164 systemd[1]: Listening on systemd-journald.socket. Feb 12 19:34:19.587175 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:34:19.587185 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:34:19.587196 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:34:19.587206 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:34:19.587219 systemd[1]: Mounting dev-hugepages.mount... Feb 12 19:34:19.587229 systemd[1]: Mounting dev-mqueue.mount... Feb 12 19:34:19.587239 systemd[1]: Mounting media.mount... Feb 12 19:34:19.587262 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:34:19.587278 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 19:34:19.587290 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:34:19.587300 systemd[1]: Mounting tmp.mount... Feb 12 19:34:19.587310 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:34:19.587322 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Feb 12 19:34:19.587346 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:34:19.587358 systemd[1]: Starting modprobe@configfs.service... Feb 12 19:34:19.587368 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:34:19.587378 systemd[1]: Starting modprobe@drm.service... Feb 12 19:34:19.587389 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:34:19.587399 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:34:19.587409 systemd[1]: Starting modprobe@loop.service... Feb 12 19:34:19.587419 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:34:19.587430 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 12 19:34:19.587441 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 12 19:34:19.587451 systemd[1]: Starting systemd-journald.service... Feb 12 19:34:19.587461 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:34:19.587473 systemd[1]: Starting systemd-network-generator.service... Feb 12 19:34:19.587487 systemd[1]: Starting systemd-remount-fs.service... Feb 12 19:34:19.587502 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:34:19.587513 kernel: loop: module loaded Feb 12 19:34:19.587523 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:34:19.587534 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:34:19.587546 systemd[1]: Mounted dev-mqueue.mount. Feb 12 19:34:19.587556 kernel: fuse: init (API version 7.34) Feb 12 19:34:19.587567 systemd[1]: Mounted media.mount. Feb 12 19:34:19.587576 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:34:19.587586 systemd[1]: Mounted sys-kernel-tracing.mount. 
Feb 12 19:34:19.587599 systemd-journald[1010]: Journal started Feb 12 19:34:19.588154 systemd-journald[1010]: Runtime Journal (/run/log/journal/e25864dacb6042cfad1be58fc94297f2) is 6.0M, max 48.4M, 42.4M free. Feb 12 19:34:19.512000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:34:19.512000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 19:34:19.584000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:34:19.584000 audit[1010]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd78f40910 a2=4000 a3=7ffd78f409ac items=0 ppid=1 pid=1010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:34:19.590254 systemd[1]: Mounted tmp.mount. Feb 12 19:34:19.590534 systemd[1]: Started systemd-journald.service. Feb 12 19:34:19.584000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:34:19.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.591213 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:34:19.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:34:19.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.592037 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:34:19.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.592252 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:34:19.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.593051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:34:19.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:34:19.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.593269 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:34:19.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.594214 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:34:19.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.594580 systemd[1]: Finished modprobe@drm.service. Feb 12 19:34:19.595463 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:34:19.595735 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:34:19.596751 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:34:19.597014 systemd[1]: Finished modprobe@fuse.service. 
Feb 12 19:34:19.597872 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:34:19.598109 systemd[1]: Finished modprobe@loop.service. Feb 12 19:34:19.599204 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:34:19.600708 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:34:19.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.603348 systemd[1]: Finished systemd-remount-fs.service. Feb 12 19:34:19.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.604493 systemd[1]: Reached target network-pre.target. Feb 12 19:34:19.606601 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:34:19.608591 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:34:19.609280 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:34:19.611086 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:34:19.613053 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:34:19.613893 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:34:19.614929 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:34:19.615613 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:34:19.616589 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:34:19.620127 systemd-journald[1010]: Time spent on flushing to /var/log/journal/e25864dacb6042cfad1be58fc94297f2 is 24.721ms for 1104 entries. 
Feb 12 19:34:19.620127 systemd-journald[1010]: System Journal (/var/log/journal/e25864dacb6042cfad1be58fc94297f2) is 8.0M, max 195.6M, 187.6M free. Feb 12 19:34:19.674829 systemd-journald[1010]: Received client request to flush runtime journal. Feb 12 19:34:19.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.621220 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 19:34:19.621948 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:34:19.622691 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:34:19.624834 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:34:19.629633 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:34:19.630835 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:34:19.631643 systemd[1]: Reached target first-boot-complete.target. 
Feb 12 19:34:19.643473 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:34:19.645613 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:34:19.669984 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:34:19.674423 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:34:19.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.677196 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:34:19.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:19.683487 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:34:19.687817 udevadm[1059]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 12 19:34:20.106038 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 19:34:20.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:20.108375 systemd[1]: Starting systemd-udevd.service... Feb 12 19:34:20.125945 systemd-udevd[1063]: Using default interface naming scheme 'v252'. Feb 12 19:34:20.138716 systemd[1]: Started systemd-udevd.service. Feb 12 19:34:20.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:20.141626 systemd[1]: Starting systemd-networkd.service... 
Feb 12 19:34:20.146425 systemd[1]: Starting systemd-userdbd.service... Feb 12 19:34:20.173741 systemd[1]: Found device dev-ttyS0.device. Feb 12 19:34:20.187702 systemd[1]: Started systemd-userdbd.service. Feb 12 19:34:20.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:20.215249 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:34:20.223361 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 12 19:34:20.227357 kernel: ACPI: button: Power Button [PWRF] Feb 12 19:34:20.243769 systemd-networkd[1074]: lo: Link UP Feb 12 19:34:20.243784 systemd-networkd[1074]: lo: Gained carrier Feb 12 19:34:20.244170 systemd-networkd[1074]: Enumeration completed Feb 12 19:34:20.244289 systemd-networkd[1074]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:34:20.244291 systemd[1]: Started systemd-networkd.service. Feb 12 19:34:20.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:34:20.253091 systemd-networkd[1074]: eth0: Link UP Feb 12 19:34:20.253258 systemd-networkd[1074]: eth0: Gained carrier Feb 12 19:34:20.243000 audit[1082]: AVC avc: denied { confidentiality } for pid=1082 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:34:20.243000 audit[1082]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5562d1184bf0 a1=32194 a2=7f486c39dbc5 a3=5 items=108 ppid=1063 pid=1082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:34:20.243000 audit: CWD cwd="/" Feb 12 19:34:20.243000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=1 name=(null) inode=12891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=2 name=(null) inode=12891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=3 name=(null) inode=12892 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=4 name=(null) inode=12891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=5 name=(null) inode=12893 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=6 name=(null) inode=12891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=7 name=(null) inode=12894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=8 name=(null) inode=12894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=9 name=(null) inode=12895 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=10 name=(null) inode=12894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=11 name=(null) inode=12896 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=12 name=(null) inode=12894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=13 name=(null) inode=12897 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=14 name=(null) inode=12894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=15 name=(null) inode=12898 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=16 name=(null) inode=12894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=17 name=(null) inode=12899 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=18 name=(null) inode=12891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=19 name=(null) inode=12900 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=20 name=(null) inode=12900 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=21 name=(null) inode=12901 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=22 name=(null) inode=12900 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=23 name=(null) inode=12902 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=24 name=(null) inode=12900 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=25 name=(null) inode=12903 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=26 name=(null) inode=12900 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=27 name=(null) inode=12904 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=28 name=(null) inode=12900 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=29 name=(null) inode=12905 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=30 name=(null) inode=12891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=31 name=(null) inode=12906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=32 name=(null) inode=12906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
19:34:20.243000 audit: PATH item=33 name=(null) inode=12907 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=34 name=(null) inode=12906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=35 name=(null) inode=12908 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=36 name=(null) inode=12906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=37 name=(null) inode=12909 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=38 name=(null) inode=12906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=39 name=(null) inode=12910 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=40 name=(null) inode=12906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=41 name=(null) inode=12911 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=42 
name=(null) inode=12891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=43 name=(null) inode=12912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=44 name=(null) inode=12912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=45 name=(null) inode=12913 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=46 name=(null) inode=12912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=47 name=(null) inode=12914 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=48 name=(null) inode=12912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=49 name=(null) inode=12915 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=50 name=(null) inode=12912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=51 name=(null) inode=12916 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=52 name=(null) inode=12912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=53 name=(null) inode=12917 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=55 name=(null) inode=12918 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=56 name=(null) inode=12918 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=57 name=(null) inode=12919 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=58 name=(null) inode=12918 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=59 name=(null) inode=12920 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=60 name=(null) inode=12918 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=61 name=(null) inode=12921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=62 name=(null) inode=12921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=63 name=(null) inode=12922 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=64 name=(null) inode=12921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=65 name=(null) inode=12923 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=66 name=(null) inode=12921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=67 name=(null) inode=12924 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=68 name=(null) inode=12921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=69 name=(null) inode=12925 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=70 name=(null) inode=12921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=71 name=(null) inode=12926 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=72 name=(null) inode=12918 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=73 name=(null) inode=12927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=74 name=(null) inode=12927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=75 name=(null) inode=12928 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=76 name=(null) inode=12927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=77 name=(null) inode=12929 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=78 name=(null) inode=12927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=79 name=(null) inode=12930 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=80 name=(null) inode=12927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=81 name=(null) inode=12931 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=82 name=(null) inode=12927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=83 name=(null) inode=12932 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=84 name=(null) inode=12918 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=85 name=(null) inode=12933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=86 name=(null) inode=12933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=87 name=(null) inode=12934 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
19:34:20.243000 audit: PATH item=88 name=(null) inode=12933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=89 name=(null) inode=12935 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=90 name=(null) inode=12933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=91 name=(null) inode=12936 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=92 name=(null) inode=12933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=93 name=(null) inode=12937 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=94 name=(null) inode=12933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=95 name=(null) inode=12938 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=96 name=(null) inode=12918 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=97 
name=(null) inode=12939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=98 name=(null) inode=12939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=99 name=(null) inode=12940 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=100 name=(null) inode=12939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=101 name=(null) inode=12941 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=102 name=(null) inode=12939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=103 name=(null) inode=12942 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=104 name=(null) inode=12939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=105 name=(null) inode=12943 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=106 name=(null) inode=12939 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PATH item=107 name=(null) inode=12944 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:34:20.243000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 19:34:20.269533 systemd-networkd[1074]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 19:34:20.291404 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Feb 12 19:34:20.295355 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 19:34:20.298350 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 19:34:20.332645 kernel: kvm: Nested Virtualization enabled Feb 12 19:34:20.332960 kernel: SVM: kvm: Nested Paging enabled Feb 12 19:34:20.333035 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 12 19:34:20.333098 kernel: SVM: Virtual GIF supported Feb 12 19:34:20.349361 kernel: EDAC MC: Ver: 3.0.0 Feb 12 19:34:20.370770 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:34:20.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:20.372759 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:34:20.381687 lvm[1101]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:34:20.408368 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:34:20.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:34:20.409193 systemd[1]: Reached target cryptsetup.target. Feb 12 19:34:20.410959 systemd[1]: Starting lvm2-activation.service... Feb 12 19:34:20.415867 lvm[1103]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:34:20.447368 systemd[1]: Finished lvm2-activation.service. Feb 12 19:34:20.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:20.448121 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:34:20.448702 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:34:20.448719 systemd[1]: Reached target local-fs.target. Feb 12 19:34:20.449248 systemd[1]: Reached target machines.target. Feb 12 19:34:20.450976 systemd[1]: Starting ldconfig.service... Feb 12 19:34:20.451736 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 19:34:20.451796 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:34:20.452785 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:34:20.454270 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:34:20.456403 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:34:20.457144 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:34:20.457208 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:34:20.458208 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Feb 12 19:34:20.461477 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1106 (bootctl) Feb 12 19:34:20.462718 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:34:20.468648 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:34:20.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:20.469104 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:34:20.469198 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:34:20.471185 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:34:20.494197 systemd-fsck[1115]: fsck.fat 4.2 (2021-01-31) Feb 12 19:34:20.494197 systemd-fsck[1115]: /dev/vda1: 790 files, 115362/258078 clusters Feb 12 19:34:20.495745 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:34:20.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:20.498238 systemd[1]: Mounting boot.mount... Feb 12 19:34:20.509520 systemd[1]: Mounted boot.mount. Feb 12 19:34:20.843588 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:34:20.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:34:20.911160 ldconfig[1105]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:34:20.917004 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:34:20.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:20.919061 systemd[1]: Starting audit-rules.service... Feb 12 19:34:20.920609 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:34:20.922192 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:34:20.924490 systemd[1]: Starting systemd-resolved.service... Feb 12 19:34:20.926308 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:34:20.927823 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:34:20.929156 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:34:20.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:20.930016 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 19:34:20.931000 audit[1133]: SYSTEM_BOOT pid=1133 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:34:20.935281 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:34:20.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:34:20.965448 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:34:20.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:21.021133 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:34:22.025112 systemd-timesyncd[1130]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 12 19:34:22.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:34:22.025180 systemd-timesyncd[1130]: Initial clock synchronization to Mon 2024-02-12 19:34:22.024980 UTC. Feb 12 19:34:22.025866 systemd[1]: Reached target time-set.target. Feb 12 19:34:22.026502 systemd-resolved[1127]: Positive Trust Anchors: Feb 12 19:34:22.026516 systemd-resolved[1127]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:34:22.026545 systemd-resolved[1127]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:34:22.029000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:34:22.029000 audit[1146]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd85fb4cb0 a2=420 a3=0 items=0 ppid=1122 pid=1146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:34:22.029000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:34:22.031358 augenrules[1146]: No rules Feb 12 19:34:22.031914 systemd[1]: Finished audit-rules.service. Feb 12 19:34:22.034949 systemd-resolved[1127]: Defaulting to hostname 'linux'. Feb 12 19:34:22.036441 systemd[1]: Started systemd-resolved.service. Feb 12 19:34:22.037097 systemd[1]: Reached target network.target. Feb 12 19:34:22.037628 systemd[1]: Reached target nss-lookup.target. Feb 12 19:34:22.095310 systemd[1]: Finished ldconfig.service. Feb 12 19:34:22.097521 systemd[1]: Starting systemd-update-done.service... Feb 12 19:34:22.112639 systemd[1]: Finished systemd-update-done.service. Feb 12 19:34:22.113483 systemd[1]: Reached target sysinit.target. Feb 12 19:34:22.114142 systemd[1]: Started motdgen.path. 
Feb 12 19:34:22.114694 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:34:22.115681 systemd[1]: Started logrotate.timer. Feb 12 19:34:22.116289 systemd[1]: Started mdadm.timer. Feb 12 19:34:22.116769 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:34:22.117372 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:34:22.117392 systemd[1]: Reached target paths.target. Feb 12 19:34:22.117943 systemd[1]: Reached target timers.target. Feb 12 19:34:22.127262 systemd[1]: Listening on dbus.socket. Feb 12 19:34:22.129612 systemd[1]: Starting docker.socket... Feb 12 19:34:22.140906 systemd[1]: Listening on sshd.socket. Feb 12 19:34:22.141626 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:34:22.142120 systemd[1]: Listening on docker.socket. Feb 12 19:34:22.142929 systemd[1]: Reached target sockets.target. Feb 12 19:34:22.143535 systemd[1]: Reached target basic.target. Feb 12 19:34:22.144266 systemd[1]: System is tainted: cgroupsv1 Feb 12 19:34:22.144305 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:34:22.144322 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:34:22.145500 systemd[1]: Starting containerd.service... Feb 12 19:34:22.147025 systemd[1]: Starting dbus.service... Feb 12 19:34:22.149228 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:34:22.151499 systemd[1]: Starting extend-filesystems.service... Feb 12 19:34:22.152482 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). 
Feb 12 19:34:22.153770 systemd[1]: Starting motdgen.service... Feb 12 19:34:22.189201 jq[1160]: false Feb 12 19:34:22.156250 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:34:22.179432 dbus-daemon[1158]: [system] SELinux support is enabled Feb 12 19:34:22.158201 systemd[1]: Starting prepare-critools.service... Feb 12 19:34:22.160255 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:34:22.162283 systemd[1]: Starting sshd-keygen.service... Feb 12 19:34:22.165346 systemd[1]: Starting systemd-logind.service... Feb 12 19:34:22.166168 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:34:22.166234 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 19:34:22.167648 systemd[1]: Starting update-engine.service... Feb 12 19:34:22.190303 jq[1178]: true Feb 12 19:34:22.171471 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:34:22.174264 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:34:22.175890 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:34:22.180021 systemd[1]: Started dbus.service. Feb 12 19:34:22.183397 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:34:22.183694 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 19:34:22.185396 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:34:22.185681 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:34:22.196782 tar[1184]: ./ Feb 12 19:34:22.196782 tar[1184]: ./macvlan Feb 12 19:34:22.200660 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Feb 12 19:34:22.203506 tar[1185]: crictl Feb 12 19:34:22.203764 extend-filesystems[1161]: Found sr0 Feb 12 19:34:22.203764 extend-filesystems[1161]: Found vda Feb 12 19:34:22.203764 extend-filesystems[1161]: Found vda1 Feb 12 19:34:22.203764 extend-filesystems[1161]: Found vda2 Feb 12 19:34:22.203764 extend-filesystems[1161]: Found vda3 Feb 12 19:34:22.203764 extend-filesystems[1161]: Found usr Feb 12 19:34:22.203764 extend-filesystems[1161]: Found vda4 Feb 12 19:34:22.203764 extend-filesystems[1161]: Found vda6 Feb 12 19:34:22.203764 extend-filesystems[1161]: Found vda7 Feb 12 19:34:22.203764 extend-filesystems[1161]: Found vda9 Feb 12 19:34:22.203764 extend-filesystems[1161]: Checking size of /dev/vda9 Feb 12 19:34:22.200709 systemd[1]: Reached target system-config.target. Feb 12 19:34:22.201732 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:34:22.201749 systemd[1]: Reached target user-config.target. Feb 12 19:34:22.212582 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:34:22.228415 jq[1193]: true Feb 12 19:34:22.212961 systemd[1]: Finished motdgen.service. Feb 12 19:34:22.238166 update_engine[1175]: I0212 19:34:22.237767 1175 main.cc:92] Flatcar Update Engine starting Feb 12 19:34:22.241145 update_engine[1175]: I0212 19:34:22.241065 1175 update_check_scheduler.cc:74] Next update check in 10m59s Feb 12 19:34:22.241074 systemd[1]: Started update-engine.service. 
Feb 12 19:34:22.247537 env[1194]: time="2024-02-12T19:34:22.247479932Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:34:22.249825 tar[1184]: ./static Feb 12 19:34:22.258590 systemd-logind[1173]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 19:34:22.258617 systemd-logind[1173]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 19:34:22.258899 systemd-logind[1173]: New seat seat0. Feb 12 19:34:22.259186 extend-filesystems[1161]: Resized partition /dev/vda9 Feb 12 19:34:22.266539 extend-filesystems[1210]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 19:34:22.271182 systemd[1]: Started systemd-logind.service. Feb 12 19:34:22.274190 systemd[1]: Started locksmithd.service. Feb 12 19:34:22.278866 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 12 19:34:22.292951 tar[1184]: ./vlan Feb 12 19:34:22.299868 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 12 19:34:22.309177 env[1194]: time="2024-02-12T19:34:22.309136237Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 19:34:22.315879 env[1194]: time="2024-02-12T19:34:22.315042431Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:34:22.316170 env[1194]: time="2024-02-12T19:34:22.316131744Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:34:22.316170 env[1194]: time="2024-02-12T19:34:22.316162932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 12 19:34:22.316373 env[1194]: time="2024-02-12T19:34:22.316345174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:34:22.316373 env[1194]: time="2024-02-12T19:34:22.316366815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 19:34:22.316460 env[1194]: time="2024-02-12T19:34:22.316379448Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:34:22.316460 env[1194]: time="2024-02-12T19:34:22.316389066Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:34:22.316460 env[1194]: time="2024-02-12T19:34:22.316450612Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:34:22.316547 extend-filesystems[1210]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 19:34:22.316547 extend-filesystems[1210]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 19:34:22.316547 extend-filesystems[1210]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 12 19:34:22.323604 extend-filesystems[1161]: Resized filesystem in /dev/vda9 Feb 12 19:34:22.324559 bash[1226]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:34:22.324678 env[1194]: time="2024-02-12T19:34:22.316639626Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:34:22.324678 env[1194]: time="2024-02-12T19:34:22.316778977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:34:22.324678 env[1194]: time="2024-02-12T19:34:22.316793655Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 19:34:22.324678 env[1194]: time="2024-02-12T19:34:22.316836676Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:34:22.324678 env[1194]: time="2024-02-12T19:34:22.316860070Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:34:22.317624 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:34:22.317877 systemd[1]: Finished extend-filesystems.service. Feb 12 19:34:22.321861 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 19:34:22.328820 env[1194]: time="2024-02-12T19:34:22.326589262Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:34:22.328820 env[1194]: time="2024-02-12T19:34:22.326637582Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 19:34:22.328820 env[1194]: time="2024-02-12T19:34:22.326655887Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:34:22.328820 env[1194]: time="2024-02-12T19:34:22.326697725Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:34:22.328820 env[1194]: time="2024-02-12T19:34:22.326724806Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:34:22.328820 env[1194]: time="2024-02-12T19:34:22.326740586Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 12 19:34:22.328820 env[1194]: time="2024-02-12T19:34:22.326755263Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:34:22.328820 env[1194]: time="2024-02-12T19:34:22.326769780Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:34:22.328820 env[1194]: time="2024-02-12T19:34:22.326783506Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:34:22.328820 env[1194]: time="2024-02-12T19:34:22.326799376Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:34:22.328820 env[1194]: time="2024-02-12T19:34:22.326817019Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:34:22.328820 env[1194]: time="2024-02-12T19:34:22.326835043Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:34:22.328820 env[1194]: time="2024-02-12T19:34:22.327133452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:34:22.328820 env[1194]: time="2024-02-12T19:34:22.327217009Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:34:22.329280 env[1194]: time="2024-02-12T19:34:22.327563128Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:34:22.329280 env[1194]: time="2024-02-12T19:34:22.327594957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 19:34:22.329280 env[1194]: time="2024-02-12T19:34:22.327613382Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Feb 12 19:34:22.329280 env[1194]: time="2024-02-12T19:34:22.327665580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:34:22.329280 env[1194]: time="2024-02-12T19:34:22.327681259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:34:22.329280 env[1194]: time="2024-02-12T19:34:22.327696398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 19:34:22.329280 env[1194]: time="2024-02-12T19:34:22.327710975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:34:22.329280 env[1194]: time="2024-02-12T19:34:22.327736282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:34:22.329280 env[1194]: time="2024-02-12T19:34:22.327750128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:34:22.329280 env[1194]: time="2024-02-12T19:34:22.327762371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 19:34:22.329280 env[1194]: time="2024-02-12T19:34:22.327774344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:34:22.329280 env[1194]: time="2024-02-12T19:34:22.327789432Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:34:22.329280 env[1194]: time="2024-02-12T19:34:22.327936528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 19:34:22.329280 env[1194]: time="2024-02-12T19:34:22.327956285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 12 19:34:22.329280 env[1194]: time="2024-02-12T19:34:22.327971644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:34:22.329652 env[1194]: time="2024-02-12T19:34:22.327986452Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:34:22.329652 env[1194]: time="2024-02-12T19:34:22.328006118Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:34:22.329652 env[1194]: time="2024-02-12T19:34:22.328019223Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:34:22.329652 env[1194]: time="2024-02-12T19:34:22.328038990Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:34:22.329652 env[1194]: time="2024-02-12T19:34:22.328078334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 12 19:34:22.329804 env[1194]: time="2024-02-12T19:34:22.328299058Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:34:22.329804 env[1194]: time="2024-02-12T19:34:22.328365142Z" level=info msg="Connect containerd service" Feb 12 19:34:22.329804 env[1194]: time="2024-02-12T19:34:22.328399827Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:34:22.332405 env[1194]: time="2024-02-12T19:34:22.330596135Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:34:22.332405 env[1194]: time="2024-02-12T19:34:22.330794436Z" level=info msg="Start subscribing containerd event" Feb 12 19:34:22.332405 env[1194]: time="2024-02-12T19:34:22.330854199Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 19:34:22.332405 env[1194]: time="2024-02-12T19:34:22.330854800Z" level=info msg="Start recovering state" Feb 12 19:34:22.332405 env[1194]: time="2024-02-12T19:34:22.330906096Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 19:34:22.332405 env[1194]: time="2024-02-12T19:34:22.330945660Z" level=info msg="Start event monitor" Feb 12 19:34:22.332405 env[1194]: time="2024-02-12T19:34:22.330959787Z" level=info msg="Start snapshots syncer" Feb 12 19:34:22.332405 env[1194]: time="2024-02-12T19:34:22.330968262Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:34:22.332405 env[1194]: time="2024-02-12T19:34:22.330988661Z" level=info msg="Start streaming server" Feb 12 19:34:22.331050 systemd[1]: Started containerd.service. 
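The "no network config found in /etc/cni/net.d" error above is expected at this point in boot: the CRI plugin probes that directory before any CNI configuration has been installed. For reference, a minimal bridge conflist of the kind the plugin looks for might resemble the following (the network name, bridge name, and subnet here are illustrative assumptions, not taken from this host):

```json
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    }
  ]
}
```

Such a file would typically live at a path like /etc/cni/net.d/10-mynet.conflist; note that `bridge` and `host-local` are among the plugin binaries being unpacked by the tar invocations in this same log.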
Feb 12 19:34:22.334119 env[1194]: time="2024-02-12T19:34:22.333126980Z" level=info msg="containerd successfully booted in 0.088979s" Feb 12 19:34:22.344665 tar[1184]: ./portmap Feb 12 19:34:22.360639 locksmithd[1218]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:34:22.373482 tar[1184]: ./host-local Feb 12 19:34:22.398866 tar[1184]: ./vrf Feb 12 19:34:22.427569 tar[1184]: ./bridge Feb 12 19:34:22.462289 tar[1184]: ./tuning Feb 12 19:34:22.489744 tar[1184]: ./firewall Feb 12 19:34:22.525224 tar[1184]: ./host-device Feb 12 19:34:22.548097 systemd-networkd[1074]: eth0: Gained IPv6LL Feb 12 19:34:22.556275 tar[1184]: ./sbr Feb 12 19:34:22.584509 tar[1184]: ./loopback Feb 12 19:34:22.611527 tar[1184]: ./dhcp Feb 12 19:34:22.634665 systemd[1]: Finished prepare-critools.service. Feb 12 19:34:22.680605 tar[1184]: ./ptp Feb 12 19:34:22.708679 tar[1184]: ./ipvlan Feb 12 19:34:22.736001 tar[1184]: ./bandwidth Feb 12 19:34:22.770294 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 19:34:23.011717 sshd_keygen[1181]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:34:23.030181 systemd[1]: Finished sshd-keygen.service. Feb 12 19:34:23.032632 systemd[1]: Starting issuegen.service... Feb 12 19:34:23.037916 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:34:23.038145 systemd[1]: Finished issuegen.service. Feb 12 19:34:23.040301 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:34:23.045806 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:34:23.047944 systemd[1]: Started getty@tty1.service. Feb 12 19:34:23.051074 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 19:34:23.052233 systemd[1]: Reached target getty.target. Feb 12 19:34:23.052874 systemd[1]: Reached target multi-user.target. Feb 12 19:34:23.055015 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:34:23.064103 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Feb 12 19:34:23.064377 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:34:23.065470 systemd[1]: Startup finished in 5.750s (kernel) + 5.271s (userspace) = 11.021s. Feb 12 19:34:26.517194 systemd[1]: Created slice system-sshd.slice. Feb 12 19:34:26.518509 systemd[1]: Started sshd@0-10.0.0.52:22-10.0.0.1:48118.service. Feb 12 19:34:26.564029 sshd[1262]: Accepted publickey for core from 10.0.0.1 port 48118 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:34:26.565260 sshd[1262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:34:26.573113 systemd-logind[1173]: New session 1 of user core. Feb 12 19:34:26.573882 systemd[1]: Created slice user-500.slice. Feb 12 19:34:26.574705 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:34:26.582452 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:34:26.583967 systemd[1]: Starting user@500.service... Feb 12 19:34:26.586575 (systemd)[1267]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:34:26.659396 systemd[1267]: Queued start job for default target default.target. Feb 12 19:34:26.659641 systemd[1267]: Reached target paths.target. Feb 12 19:34:26.659661 systemd[1267]: Reached target sockets.target. Feb 12 19:34:26.659676 systemd[1267]: Reached target timers.target. Feb 12 19:34:26.659689 systemd[1267]: Reached target basic.target. Feb 12 19:34:26.659735 systemd[1267]: Reached target default.target. Feb 12 19:34:26.659762 systemd[1267]: Startup finished in 68ms. Feb 12 19:34:26.659864 systemd[1]: Started user@500.service. Feb 12 19:34:26.661107 systemd[1]: Started session-1.scope. Feb 12 19:34:26.709894 systemd[1]: Started sshd@1-10.0.0.52:22-10.0.0.1:48122.service. 
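The startup summary above ("Startup finished in 5.750s (kernel) + 5.271s (userspace) = 11.021s") follows systemd's fixed wording, so it can be parsed and sanity-checked mechanically. A small sketch, assuming the line matches that usual format:

```python
import re

def parse_startup(line: str) -> tuple[float, float, float]:
    """Extract (kernel, userspace, total) seconds from a systemd
    'Startup finished' log line."""
    m = re.search(
        r"Startup finished in ([\d.]+)s \(kernel\) "
        r"\+ ([\d.]+)s \(userspace\) = ([\d.]+)s",
        line,
    )
    if m is None:
        raise ValueError("not a startup summary line")
    return tuple(float(g) for g in m.groups())

line = ("Feb 12 19:34:23.065470 systemd[1]: Startup finished in "
        "5.750s (kernel) + 5.271s (userspace) = 11.021s.")
kernel, user, total = parse_startup(line)
# The reported total should equal the sum of the parts
# (within floating-point tolerance).
assert abs((kernel + user) - total) < 1e-9
print(kernel, user, total)
```

For this log the arithmetic checks out: 5.750 + 5.271 = 11.021.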
Feb 12 19:34:26.753414 sshd[1276]: Accepted publickey for core from 10.0.0.1 port 48122 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:34:26.754301 sshd[1276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:34:26.757342 systemd-logind[1173]: New session 2 of user core. Feb 12 19:34:26.758141 systemd[1]: Started session-2.scope. Feb 12 19:34:26.811006 sshd[1276]: pam_unix(sshd:session): session closed for user core Feb 12 19:34:26.813169 systemd[1]: Started sshd@2-10.0.0.52:22-10.0.0.1:48128.service. Feb 12 19:34:26.813632 systemd[1]: sshd@1-10.0.0.52:22-10.0.0.1:48122.service: Deactivated successfully. Feb 12 19:34:26.814678 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 19:34:26.814711 systemd-logind[1173]: Session 2 logged out. Waiting for processes to exit. Feb 12 19:34:26.815779 systemd-logind[1173]: Removed session 2. Feb 12 19:34:26.854980 sshd[1281]: Accepted publickey for core from 10.0.0.1 port 48128 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:34:26.855916 sshd[1281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:34:26.859013 systemd-logind[1173]: New session 3 of user core. Feb 12 19:34:26.859907 systemd[1]: Started session-3.scope. Feb 12 19:34:26.908657 sshd[1281]: pam_unix(sshd:session): session closed for user core Feb 12 19:34:26.911227 systemd[1]: Started sshd@3-10.0.0.52:22-10.0.0.1:48144.service. Feb 12 19:34:26.911794 systemd[1]: sshd@2-10.0.0.52:22-10.0.0.1:48128.service: Deactivated successfully. Feb 12 19:34:26.912679 systemd-logind[1173]: Session 3 logged out. Waiting for processes to exit. Feb 12 19:34:26.912747 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 19:34:26.914047 systemd-logind[1173]: Removed session 3. 
Feb 12 19:34:26.950432 sshd[1289]: Accepted publickey for core from 10.0.0.1 port 48144 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:34:26.951274 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:34:26.954082 systemd-logind[1173]: New session 4 of user core. Feb 12 19:34:26.954896 systemd[1]: Started session-4.scope. Feb 12 19:34:27.006688 sshd[1289]: pam_unix(sshd:session): session closed for user core Feb 12 19:34:27.008794 systemd[1]: Started sshd@4-10.0.0.52:22-10.0.0.1:48150.service. Feb 12 19:34:27.009220 systemd[1]: sshd@3-10.0.0.52:22-10.0.0.1:48144.service: Deactivated successfully. Feb 12 19:34:27.010155 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:34:27.010667 systemd-logind[1173]: Session 4 logged out. Waiting for processes to exit. Feb 12 19:34:27.011487 systemd-logind[1173]: Removed session 4. Feb 12 19:34:27.048148 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 48150 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:34:27.048962 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:34:27.051824 systemd-logind[1173]: New session 5 of user core. Feb 12 19:34:27.052584 systemd[1]: Started session-5.scope. Feb 12 19:34:27.106896 sudo[1301]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:34:27.107064 sudo[1301]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:34:27.606779 systemd[1]: Reloading. 
Feb 12 19:34:27.665302 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-12T19:34:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:34:27.665328 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-12T19:34:27Z" level=info msg="torcx already run" Feb 12 19:34:27.722401 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:34:27.722415 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:34:27.738680 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:34:27.804215 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:34:27.809452 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:34:27.809880 systemd[1]: Reached target network-online.target. Feb 12 19:34:27.811232 systemd[1]: Started kubelet.service. Feb 12 19:34:27.820688 systemd[1]: Starting coreos-metadata.service... Feb 12 19:34:27.826924 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 12 19:34:27.827148 systemd[1]: Finished coreos-metadata.service. 
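The warnings above flag locksmithd.service for the deprecated cgroup-v1 directives CPUShares= and MemoryLimit=, whose cgroup-v2 successors are CPUWeight= and MemoryMax=. One conventional way to supply the new directives without editing the vendor unit is a drop-in; a sketch only, with illustrative values (a drop-in adds settings, so the deprecated lines in the vendor unit would still need to be removed upstream to silence the warning):

```ini
# /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
# Hypothetical drop-in: cgroup-v2 replacements for the directives
# systemd warns about on lines 8-9 of the vendor unit.
[Service]
# CPUShares=  ->  CPUWeight=  (default weight is 100)
CPUWeight=100
# MemoryLimit=  ->  MemoryMax=
MemoryMax=128M
```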
Feb 12 19:34:27.859615 kubelet[1379]: E0212 19:34:27.859460 1379 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:34:27.861619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:34:27.861754 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:34:27.992278 systemd[1]: Stopped kubelet.service. Feb 12 19:34:28.005078 systemd[1]: Reloading. Feb 12 19:34:28.064782 /usr/lib/systemd/system-generators/torcx-generator[1452]: time="2024-02-12T19:34:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:34:28.065197 /usr/lib/systemd/system-generators/torcx-generator[1452]: time="2024-02-12T19:34:28Z" level=info msg="torcx already run" Feb 12 19:34:28.127411 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:34:28.127428 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:34:28.144366 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:34:28.214580 systemd[1]: Started kubelet.service. Feb 12 19:34:28.259939 kubelet[1500]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
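The first kubelet start above exits immediately because --container-runtime-endpoint was left empty. Given that containerd is serving on /run/containerd/containerd.sock earlier in this log, the flag would typically point at that socket. An illustrative systemd drop-in (the file path, variable name, and how the unit's ExecStart consumes it are assumptions about a typical setup, not taken from this host):

```ini
# /etc/systemd/system/kubelet.service.d/10-container-runtime.conf
# Hypothetical drop-in; assumes kubelet.service expands this
# environment variable into its ExecStart line.
[Service]
Environment="KUBELET_RUNTIME_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
```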
Feb 12 19:34:28.259939 kubelet[1500]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:34:28.260286 kubelet[1500]: I0212 19:34:28.259960 1500 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:34:28.263338 kubelet[1500]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:34:28.263338 kubelet[1500]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:34:28.593910 kubelet[1500]: I0212 19:34:28.593856 1500 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:34:28.593910 kubelet[1500]: I0212 19:34:28.593892 1500 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:34:28.594114 kubelet[1500]: I0212 19:34:28.594097 1500 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:34:28.596068 kubelet[1500]: I0212 19:34:28.596022 1500 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:34:28.601466 kubelet[1500]: I0212 19:34:28.601444 1500 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:34:28.601747 kubelet[1500]: I0212 19:34:28.601727 1500 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:34:28.601812 kubelet[1500]: I0212 19:34:28.601793 1500 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:34:28.601900 kubelet[1500]: I0212 19:34:28.601818 1500 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:34:28.601900 kubelet[1500]: I0212 19:34:28.601830 1500 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:34:28.601961 kubelet[1500]: I0212 19:34:28.601923 1500 state_mem.go:36] "Initialized new 
in-memory state store" Feb 12 19:34:28.604605 kubelet[1500]: I0212 19:34:28.604585 1500 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:34:28.604666 kubelet[1500]: I0212 19:34:28.604617 1500 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:34:28.604666 kubelet[1500]: I0212 19:34:28.604641 1500 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:34:28.604666 kubelet[1500]: I0212 19:34:28.604657 1500 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:34:28.604737 kubelet[1500]: E0212 19:34:28.604716 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:28.604778 kubelet[1500]: E0212 19:34:28.604763 1500 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:28.605207 kubelet[1500]: I0212 19:34:28.605189 1500 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:34:28.605424 kubelet[1500]: W0212 19:34:28.605393 1500 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 19:34:28.605717 kubelet[1500]: I0212 19:34:28.605702 1500 server.go:1186] "Started kubelet" Feb 12 19:34:28.606571 kubelet[1500]: E0212 19:34:28.606544 1500 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:34:28.606680 kubelet[1500]: E0212 19:34:28.606583 1500 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:34:28.606720 kubelet[1500]: I0212 19:34:28.606677 1500 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:34:28.607554 kubelet[1500]: I0212 19:34:28.607534 1500 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:34:28.609003 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 12 19:34:28.609101 kubelet[1500]: I0212 19:34:28.609084 1500 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:34:28.609150 kubelet[1500]: I0212 19:34:28.609129 1500 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:34:28.609495 kubelet[1500]: E0212 19:34:28.609321 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:28.609917 kubelet[1500]: I0212 19:34:28.609893 1500 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:34:28.616666 kubelet[1500]: W0212 19:34:28.616627 1500 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:34:28.616666 kubelet[1500]: E0212 19:34:28.616662 1500 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:34:28.616814 kubelet[1500]: W0212 19:34:28.616753 1500 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:34:28.616814 kubelet[1500]: 
E0212 19:34:28.616769 1500 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:34:28.616941 kubelet[1500]: E0212 19:34:28.616804 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1890b1f1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 605686257, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 605686257, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:34:28.617125 kubelet[1500]: E0212 19:34:28.617103 1500 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.52" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:34:28.617179 kubelet[1500]: W0212 19:34:28.617166 1500 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.52" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:34:28.617218 kubelet[1500]: E0212 19:34:28.617181 1500 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.52" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:34:28.617541 kubelet[1500]: E0212 19:34:28.617485 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e189e28ea", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 606568682, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 606568682, time.Local), Count:1, 
Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:34:28.642720 kubelet[1500]: I0212 19:34:28.642697 1500 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:34:28.642900 kubelet[1500]: I0212 19:34:28.642884 1500 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:34:28.642948 kubelet[1500]: I0212 19:34:28.642905 1500 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:34:28.643113 kubelet[1500]: E0212 19:34:28.643040 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd6a23", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.52 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642171427, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642171427, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:34:28.643658 kubelet[1500]: E0212 19:34:28.643611 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd8dd4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.52 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642180564, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642180564, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:34:28.644359 kubelet[1500]: E0212 19:34:28.644267 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd9c31", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.52 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642184241, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642184241, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:34:28.645907 kubelet[1500]: I0212 19:34:28.645887 1500 policy_none.go:49] "None policy: Start"
Feb 12 19:34:28.646357 kubelet[1500]: I0212 19:34:28.646335 1500 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 19:34:28.646437 kubelet[1500]: I0212 19:34:28.646362 1500 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 19:34:28.652202 kubelet[1500]: I0212 19:34:28.652168 1500 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 19:34:28.652365 kubelet[1500]: I0212 19:34:28.652337 1500 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 19:34:28.653488 kubelet[1500]: E0212 19:34:28.653464 1500 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.52\" not found"
Feb 12 19:34:28.653540 kubelet[1500]: E0212 19:34:28.653469 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1b5e0e0a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 652699146, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 652699146, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:34:28.711091 kubelet[1500]: I0212 19:34:28.711046 1500 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.52"
Feb 12 19:34:28.712296 kubelet[1500]: E0212 19:34:28.712273 1500 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.52"
Feb 12 19:34:28.712705 kubelet[1500]: E0212 19:34:28.712634 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd6a23", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.52 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642171427, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 710992316, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b3348e1abd6a23" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:34:28.713541 kubelet[1500]: E0212 19:34:28.713489 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd8dd4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.52 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642180564, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 711011532, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b3348e1abd8dd4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:34:28.714349 kubelet[1500]: E0212 19:34:28.714247 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd9c31", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.52 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642184241, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 711014227, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b3348e1abd9c31" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:34:28.724574 kubelet[1500]: I0212 19:34:28.724545 1500 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 19:34:28.742532 kubelet[1500]: I0212 19:34:28.742504 1500 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 19:34:28.742532 kubelet[1500]: I0212 19:34:28.742526 1500 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 12 19:34:28.742631 kubelet[1500]: I0212 19:34:28.742544 1500 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 12 19:34:28.742631 kubelet[1500]: E0212 19:34:28.742595 1500 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 12 19:34:28.743382 kubelet[1500]: W0212 19:34:28.743364 1500 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 19:34:28.743444 kubelet[1500]: E0212 19:34:28.743390 1500 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 19:34:28.818510 kubelet[1500]: E0212 19:34:28.818467 1500 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.52" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 12 19:34:28.913669 kubelet[1500]: I0212 19:34:28.913563 1500 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.52"
Feb 12 19:34:28.914489 kubelet[1500]: E0212 19:34:28.914409 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd6a23", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.52 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642171427, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 913505668, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b3348e1abd6a23" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:34:28.914768 kubelet[1500]: E0212 19:34:28.914752 1500 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.52"
Feb 12 19:34:28.915251 kubelet[1500]: E0212 19:34:28.915205 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd8dd4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.52 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642180564, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 913519384, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b3348e1abd8dd4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:34:29.007184 kubelet[1500]: E0212 19:34:29.007121 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd9c31", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.52 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642184241, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 913523021, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b3348e1abd9c31" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:34:29.220126 kubelet[1500]: E0212 19:34:29.219996 1500 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.52" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 12 19:34:29.316071 kubelet[1500]: I0212 19:34:29.316041 1500 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.52"
Feb 12 19:34:29.316781 kubelet[1500]: E0212 19:34:29.316736 1500 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.52"
Feb 12 19:34:29.317224 kubelet[1500]: E0212 19:34:29.317171 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd6a23", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.52 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642171427, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 29, 315999224, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b3348e1abd6a23" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:34:29.407163 kubelet[1500]: E0212 19:34:29.407105 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd8dd4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.52 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642180564, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 29, 316010465, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b3348e1abd8dd4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:34:29.562801 kubelet[1500]: W0212 19:34:29.562773 1500 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 19:34:29.562801 kubelet[1500]: E0212 19:34:29.562796 1500 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 19:34:29.605081 kubelet[1500]: E0212 19:34:29.605040 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:34:29.606974 kubelet[1500]: E0212 19:34:29.606897 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd9c31", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.52 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642184241, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 29, 316013731, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b3348e1abd9c31" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:34:29.898469 kubelet[1500]: W0212 19:34:29.898349 1500 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.52" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 19:34:29.898469 kubelet[1500]: E0212 19:34:29.898380 1500 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.52" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 19:34:29.951597 kubelet[1500]: W0212 19:34:29.951559 1500 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 19:34:29.951597 kubelet[1500]: E0212 19:34:29.951591 1500 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 19:34:29.998579 kubelet[1500]: W0212 19:34:29.998550 1500 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 19:34:29.998579 kubelet[1500]: E0212 19:34:29.998570 1500 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 19:34:30.023161 kubelet[1500]: E0212 19:34:30.023131 1500 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.52" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 12 19:34:30.118366 kubelet[1500]: I0212 19:34:30.118337 1500 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.52"
Feb 12 19:34:30.119321 kubelet[1500]: E0212 19:34:30.119304 1500 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.52"
Feb 12 19:34:30.119369 kubelet[1500]: E0212 19:34:30.119289 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd6a23", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.52 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642171427, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 30, 118291351, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b3348e1abd6a23" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:34:30.120051 kubelet[1500]: E0212 19:34:30.119976 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd8dd4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.52 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642180564, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 30, 118302352, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b3348e1abd8dd4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:34:30.207404 kubelet[1500]: E0212 19:34:30.207288 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd9c31", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.52 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642184241, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 30, 118305888, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b3348e1abd9c31" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:34:30.605435 kubelet[1500]: E0212 19:34:30.605366 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:34:31.582296 kubelet[1500]: W0212 19:34:31.582246 1500 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.52" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 19:34:31.582296 kubelet[1500]: E0212 19:34:31.582286 1500 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.52" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 19:34:31.605617 kubelet[1500]: E0212 19:34:31.605566 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:34:31.627665 kubelet[1500]: E0212 19:34:31.627626 1500 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.52" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 12 19:34:31.720882 kubelet[1500]: I0212 19:34:31.720800 1500 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.52"
Feb 12 19:34:31.721693 kubelet[1500]: E0212 19:34:31.721670 1500 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.52"
Feb 12 19:34:31.722018 kubelet[1500]: E0212 19:34:31.721952 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd6a23", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.52 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642171427, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 31, 720753207, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b3348e1abd6a23" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:34:31.722759 kubelet[1500]: E0212 19:34:31.722709 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd8dd4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.52 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642180564, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 31, 720764879, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b3348e1abd8dd4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:34:31.723537 kubelet[1500]: E0212 19:34:31.723462 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd9c31", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.52 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642184241, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 31, 720768415, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b3348e1abd9c31" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:34:32.206168 kubelet[1500]: W0212 19:34:32.206129 1500 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 19:34:32.206168 kubelet[1500]: E0212 19:34:32.206163 1500 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 19:34:32.606026 kubelet[1500]: E0212 19:34:32.605983 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:34:32.843097 kubelet[1500]: W0212 19:34:32.843048 1500 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 19:34:32.843097 kubelet[1500]: E0212 19:34:32.843089 1500 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 19:34:32.979825 kubelet[1500]: W0212 19:34:32.979685 1500 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 19:34:32.979825 kubelet[1500]: E0212 19:34:32.979722 1500 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 19:34:33.606164 kubelet[1500]: E0212 19:34:33.606093 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:34:34.606712 kubelet[1500]: E0212 19:34:34.606651 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:34:34.829135 kubelet[1500]: E0212 19:34:34.829088 1500 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.52" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 12 19:34:34.923271 kubelet[1500]: I0212 19:34:34.923167 1500 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.52"
Feb 12 19:34:34.924304 kubelet[1500]: E0212 19:34:34.924205 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd6a23", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.52 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642171427, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 34, 923129411, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b3348e1abd6a23" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:34:34.924477 kubelet[1500]: E0212 19:34:34.924450 1500 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.52"
Feb 12 19:34:34.925357 kubelet[1500]: E0212 19:34:34.925305 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd8dd4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.52 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642180564, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 34, 923137847, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}':
'events "10.0.0.52.17b3348e1abd8dd4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:34:34.926092 kubelet[1500]: E0212 19:34:34.925996 1500 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b3348e1abd9c31", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.52 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 34, 28, 642184241, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 34, 34, 923140332, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b3348e1abd9c31" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:34:35.607157 kubelet[1500]: E0212 19:34:35.607086 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:36.607873 kubelet[1500]: E0212 19:34:36.607803 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:37.291488 kubelet[1500]: W0212 19:34:37.291437 1500 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:34:37.291488 kubelet[1500]: E0212 19:34:37.291482 1500 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:34:37.327778 kubelet[1500]: W0212 19:34:37.327719 1500 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:34:37.327778 kubelet[1500]: E0212 19:34:37.327763 1500 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:34:37.608178 kubelet[1500]: E0212 19:34:37.608123 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:37.951170 kubelet[1500]: W0212 19:34:37.951041 1500 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.52" is forbidden: User 
"system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:34:37.951170 kubelet[1500]: E0212 19:34:37.951082 1500 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.52" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:34:38.596343 kubelet[1500]: I0212 19:34:38.596264 1500 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 12 19:34:38.608515 kubelet[1500]: E0212 19:34:38.608459 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:38.653951 kubelet[1500]: E0212 19:34:38.653910 1500 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.52\" not found" Feb 12 19:34:38.981752 kubelet[1500]: E0212 19:34:38.981638 1500 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.52" not found Feb 12 19:34:39.609018 kubelet[1500]: E0212 19:34:39.608951 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:40.219671 kubelet[1500]: E0212 19:34:40.219630 1500 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.52" not found Feb 12 19:34:40.609163 kubelet[1500]: E0212 19:34:40.609115 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:41.235596 kubelet[1500]: E0212 19:34:41.235549 1500 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.52\" not found" node="10.0.0.52" Feb 12 19:34:41.325590 
kubelet[1500]: I0212 19:34:41.325562 1500 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.52" Feb 12 19:34:41.610184 kubelet[1500]: E0212 19:34:41.610131 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:41.620690 kubelet[1500]: I0212 19:34:41.620673 1500 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.52" Feb 12 19:34:41.699352 kubelet[1500]: E0212 19:34:41.699317 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:41.781807 sudo[1301]: pam_unix(sudo:session): session closed for user root Feb 12 19:34:41.783353 sshd[1296]: pam_unix(sshd:session): session closed for user core Feb 12 19:34:41.785395 systemd[1]: sshd@4-10.0.0.52:22-10.0.0.1:48150.service: Deactivated successfully. Feb 12 19:34:41.786348 systemd-logind[1173]: Session 5 logged out. Waiting for processes to exit. Feb 12 19:34:41.786378 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:34:41.787318 systemd-logind[1173]: Removed session 5. 
Feb 12 19:34:41.799955 kubelet[1500]: E0212 19:34:41.799906 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:41.901168 kubelet[1500]: E0212 19:34:41.901011 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:42.001768 kubelet[1500]: E0212 19:34:42.001692 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:42.102608 kubelet[1500]: E0212 19:34:42.102557 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:42.203158 kubelet[1500]: E0212 19:34:42.203043 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:42.303516 kubelet[1500]: E0212 19:34:42.303443 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:42.404176 kubelet[1500]: E0212 19:34:42.404104 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:42.504955 kubelet[1500]: E0212 19:34:42.504789 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:42.605832 kubelet[1500]: E0212 19:34:42.605750 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:42.611092 kubelet[1500]: E0212 19:34:42.611042 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:42.706776 kubelet[1500]: E0212 19:34:42.706706 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:42.807681 kubelet[1500]: E0212 19:34:42.807518 1500 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:42.908175 kubelet[1500]: E0212 19:34:42.908098 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:43.008766 kubelet[1500]: E0212 19:34:43.008699 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:43.109549 kubelet[1500]: E0212 19:34:43.109491 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:43.210081 kubelet[1500]: E0212 19:34:43.210015 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:43.310604 kubelet[1500]: E0212 19:34:43.310545 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:43.410736 kubelet[1500]: E0212 19:34:43.410634 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:43.511192 kubelet[1500]: E0212 19:34:43.511161 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:43.611964 kubelet[1500]: E0212 19:34:43.611918 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:43.611964 kubelet[1500]: E0212 19:34:43.611918 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:43.712380 kubelet[1500]: E0212 19:34:43.712267 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:43.812382 kubelet[1500]: E0212 19:34:43.812337 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node 
\"10.0.0.52\" not found" Feb 12 19:34:43.913020 kubelet[1500]: E0212 19:34:43.912954 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:44.013618 kubelet[1500]: E0212 19:34:44.013502 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:44.114246 kubelet[1500]: E0212 19:34:44.114178 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:44.214808 kubelet[1500]: E0212 19:34:44.214760 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:44.315193 kubelet[1500]: E0212 19:34:44.315152 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:44.415726 kubelet[1500]: E0212 19:34:44.415691 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:44.516279 kubelet[1500]: E0212 19:34:44.516240 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:44.613003 kubelet[1500]: E0212 19:34:44.612870 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:44.617239 kubelet[1500]: E0212 19:34:44.617195 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:44.717933 kubelet[1500]: E0212 19:34:44.717875 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:44.818915 kubelet[1500]: E0212 19:34:44.818835 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:44.919229 kubelet[1500]: E0212 19:34:44.919091 
1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:45.019677 kubelet[1500]: E0212 19:34:45.019627 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:45.120332 kubelet[1500]: E0212 19:34:45.120291 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:45.221025 kubelet[1500]: E0212 19:34:45.220897 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:45.321712 kubelet[1500]: E0212 19:34:45.321652 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:45.422353 kubelet[1500]: E0212 19:34:45.422288 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:45.523034 kubelet[1500]: E0212 19:34:45.522900 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:45.613780 kubelet[1500]: E0212 19:34:45.613723 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:45.624020 kubelet[1500]: E0212 19:34:45.623967 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:45.724767 kubelet[1500]: E0212 19:34:45.724704 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:45.825505 kubelet[1500]: E0212 19:34:45.825450 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:45.926057 kubelet[1500]: E0212 19:34:45.926005 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node 
\"10.0.0.52\" not found" Feb 12 19:34:46.026695 kubelet[1500]: E0212 19:34:46.026646 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:46.127438 kubelet[1500]: E0212 19:34:46.127311 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:46.227919 kubelet[1500]: E0212 19:34:46.227879 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:46.328621 kubelet[1500]: E0212 19:34:46.328568 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:46.429244 kubelet[1500]: E0212 19:34:46.429102 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:46.529787 kubelet[1500]: E0212 19:34:46.529733 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:46.614530 kubelet[1500]: E0212 19:34:46.614478 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:46.630784 kubelet[1500]: E0212 19:34:46.630725 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:46.731489 kubelet[1500]: E0212 19:34:46.731352 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:46.831745 kubelet[1500]: E0212 19:34:46.831694 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:46.932147 kubelet[1500]: E0212 19:34:46.932081 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:47.032677 kubelet[1500]: E0212 19:34:47.032554 
1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:47.133219 kubelet[1500]: E0212 19:34:47.133168 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:47.233797 kubelet[1500]: E0212 19:34:47.233749 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:47.334373 kubelet[1500]: E0212 19:34:47.334316 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:47.435012 kubelet[1500]: E0212 19:34:47.434954 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:47.535740 kubelet[1500]: E0212 19:34:47.535678 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:47.615509 kubelet[1500]: E0212 19:34:47.615372 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:47.636549 kubelet[1500]: E0212 19:34:47.636512 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:47.737467 kubelet[1500]: E0212 19:34:47.737405 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:47.838295 kubelet[1500]: E0212 19:34:47.838225 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:47.938935 kubelet[1500]: E0212 19:34:47.938806 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:48.039430 kubelet[1500]: E0212 19:34:48.039377 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node 
\"10.0.0.52\" not found" Feb 12 19:34:48.140105 kubelet[1500]: E0212 19:34:48.140058 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:48.240780 kubelet[1500]: E0212 19:34:48.240657 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:48.341578 kubelet[1500]: E0212 19:34:48.341527 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:48.442193 kubelet[1500]: E0212 19:34:48.442129 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:48.542869 kubelet[1500]: E0212 19:34:48.542724 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:48.605328 kubelet[1500]: E0212 19:34:48.605279 1500 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:48.615702 kubelet[1500]: E0212 19:34:48.615653 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:48.642966 kubelet[1500]: E0212 19:34:48.642923 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:48.655139 kubelet[1500]: E0212 19:34:48.655114 1500 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.52\" not found" Feb 12 19:34:48.743633 kubelet[1500]: E0212 19:34:48.743574 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:48.843880 kubelet[1500]: E0212 19:34:48.843795 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:48.944497 
kubelet[1500]: E0212 19:34:48.944428 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:49.044987 kubelet[1500]: E0212 19:34:49.044934 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:49.145737 kubelet[1500]: E0212 19:34:49.145643 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:49.246311 kubelet[1500]: E0212 19:34:49.246256 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:49.347066 kubelet[1500]: E0212 19:34:49.347000 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:49.447748 kubelet[1500]: E0212 19:34:49.447621 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:49.548185 kubelet[1500]: E0212 19:34:49.548137 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:49.616746 kubelet[1500]: E0212 19:34:49.616689 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:49.649115 kubelet[1500]: E0212 19:34:49.649086 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:49.749976 kubelet[1500]: E0212 19:34:49.749882 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:49.850109 kubelet[1500]: E0212 19:34:49.850079 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:49.950738 kubelet[1500]: E0212 19:34:49.950683 1500 kubelet_node_status.go:458] "Error getting the 
current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:50.051436 kubelet[1500]: E0212 19:34:50.051315 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:50.152008 kubelet[1500]: E0212 19:34:50.151982 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:50.252525 kubelet[1500]: E0212 19:34:50.252475 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:50.353122 kubelet[1500]: E0212 19:34:50.353072 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:50.453697 kubelet[1500]: E0212 19:34:50.453649 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 12 19:34:50.554589 kubelet[1500]: I0212 19:34:50.554558 1500 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 12 19:34:50.554883 env[1194]: time="2024-02-12T19:34:50.554828663Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 12 19:34:50.555283 kubelet[1500]: I0212 19:34:50.554999 1500 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 12 19:34:50.616978 kubelet[1500]: I0212 19:34:50.616870 1500 apiserver.go:52] "Watching apiserver" Feb 12 19:34:50.616978 kubelet[1500]: E0212 19:34:50.616886 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:50.619466 kubelet[1500]: I0212 19:34:50.619443 1500 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:34:50.619544 kubelet[1500]: I0212 19:34:50.619505 1500 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:34:50.710507 kubelet[1500]: I0212 19:34:50.710468 1500 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:34:50.734209 kubelet[1500]: I0212 19:34:50.734185 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7b1f7362-abbb-4e48-8350-08c9935813ed-clustermesh-secrets\") pod \"cilium-5pcng\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") " pod="kube-system/cilium-5pcng" Feb 12 19:34:50.734271 kubelet[1500]: I0212 19:34:50.734213 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-host-proc-sys-kernel\") pod \"cilium-5pcng\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") " pod="kube-system/cilium-5pcng" Feb 12 19:34:50.734271 kubelet[1500]: I0212 19:34:50.734233 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7b1f7362-abbb-4e48-8350-08c9935813ed-hubble-tls\") pod \"cilium-5pcng\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") " pod="kube-system/cilium-5pcng" Feb 12 19:34:50.734338 
kubelet[1500]: I0212 19:34:50.734310 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a365fb8-83e3-49b9-9654-ede2f99b78f6-kube-proxy\") pod \"kube-proxy-mnrnn\" (UID: \"3a365fb8-83e3-49b9-9654-ede2f99b78f6\") " pod="kube-system/kube-proxy-mnrnn" Feb 12 19:34:50.734389 kubelet[1500]: I0212 19:34:50.734343 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlzb4\" (UniqueName: \"kubernetes.io/projected/3a365fb8-83e3-49b9-9654-ede2f99b78f6-kube-api-access-rlzb4\") pod \"kube-proxy-mnrnn\" (UID: \"3a365fb8-83e3-49b9-9654-ede2f99b78f6\") " pod="kube-system/kube-proxy-mnrnn" Feb 12 19:34:50.734389 kubelet[1500]: I0212 19:34:50.734379 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-cilium-run\") pod \"cilium-5pcng\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") " pod="kube-system/cilium-5pcng" Feb 12 19:34:50.734451 kubelet[1500]: I0212 19:34:50.734402 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a365fb8-83e3-49b9-9654-ede2f99b78f6-xtables-lock\") pod \"kube-proxy-mnrnn\" (UID: \"3a365fb8-83e3-49b9-9654-ede2f99b78f6\") " pod="kube-system/kube-proxy-mnrnn" Feb 12 19:34:50.734451 kubelet[1500]: I0212 19:34:50.734420 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-bpf-maps\") pod \"cilium-5pcng\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") " pod="kube-system/cilium-5pcng" Feb 12 19:34:50.734451 kubelet[1500]: I0212 19:34:50.734434 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-cni-path\") pod \"cilium-5pcng\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") " pod="kube-system/cilium-5pcng" Feb 12 19:34:50.734554 kubelet[1500]: I0212 19:34:50.734506 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-xtables-lock\") pod \"cilium-5pcng\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") " pod="kube-system/cilium-5pcng" Feb 12 19:34:50.734610 kubelet[1500]: I0212 19:34:50.734595 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b1f7362-abbb-4e48-8350-08c9935813ed-cilium-config-path\") pod \"cilium-5pcng\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") " pod="kube-system/cilium-5pcng" Feb 12 19:34:50.734669 kubelet[1500]: I0212 19:34:50.734650 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-host-proc-sys-net\") pod \"cilium-5pcng\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") " pod="kube-system/cilium-5pcng" Feb 12 19:34:50.734723 kubelet[1500]: I0212 19:34:50.734713 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a365fb8-83e3-49b9-9654-ede2f99b78f6-lib-modules\") pod \"kube-proxy-mnrnn\" (UID: \"3a365fb8-83e3-49b9-9654-ede2f99b78f6\") " pod="kube-system/kube-proxy-mnrnn" Feb 12 19:34:50.734765 kubelet[1500]: I0212 19:34:50.734757 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-hostproc\") pod 
\"cilium-5pcng\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") " pod="kube-system/cilium-5pcng" Feb 12 19:34:50.734791 kubelet[1500]: I0212 19:34:50.734788 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-etc-cni-netd\") pod \"cilium-5pcng\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") " pod="kube-system/cilium-5pcng" Feb 12 19:34:50.734831 kubelet[1500]: I0212 19:34:50.734824 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-cilium-cgroup\") pod \"cilium-5pcng\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") " pod="kube-system/cilium-5pcng" Feb 12 19:34:50.734899 kubelet[1500]: I0212 19:34:50.734886 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-lib-modules\") pod \"cilium-5pcng\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") " pod="kube-system/cilium-5pcng" Feb 12 19:34:50.734924 kubelet[1500]: I0212 19:34:50.734917 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnrsv\" (UniqueName: \"kubernetes.io/projected/7b1f7362-abbb-4e48-8350-08c9935813ed-kube-api-access-cnrsv\") pod \"cilium-5pcng\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") " pod="kube-system/cilium-5pcng" Feb 12 19:34:50.734955 kubelet[1500]: I0212 19:34:50.734930 1500 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:34:50.922035 kubelet[1500]: E0212 19:34:50.921931 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:34:50.922613 env[1194]: 
time="2024-02-12T19:34:50.922552395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mnrnn,Uid:3a365fb8-83e3-49b9-9654-ede2f99b78f6,Namespace:kube-system,Attempt:0,}" Feb 12 19:34:51.223228 kubelet[1500]: E0212 19:34:51.223130 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:34:51.223532 env[1194]: time="2024-02-12T19:34:51.223495573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5pcng,Uid:7b1f7362-abbb-4e48-8350-08c9935813ed,Namespace:kube-system,Attempt:0,}" Feb 12 19:34:51.617630 kubelet[1500]: E0212 19:34:51.617594 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:51.707704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2785850413.mount: Deactivated successfully. Feb 12 19:34:51.712770 env[1194]: time="2024-02-12T19:34:51.712730860Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:34:51.713558 env[1194]: time="2024-02-12T19:34:51.713534556Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:34:51.716269 env[1194]: time="2024-02-12T19:34:51.716239839Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:34:51.717710 env[1194]: time="2024-02-12T19:34:51.717672445Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 
19:34:51.719052 env[1194]: time="2024-02-12T19:34:51.719026844Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:34:51.720869 env[1194]: time="2024-02-12T19:34:51.720828763Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:34:51.722863 env[1194]: time="2024-02-12T19:34:51.722815929Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:34:51.723981 env[1194]: time="2024-02-12T19:34:51.723921061Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:34:51.741622 env[1194]: time="2024-02-12T19:34:51.741555886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:34:51.741622 env[1194]: time="2024-02-12T19:34:51.741597635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:34:51.741622 env[1194]: time="2024-02-12T19:34:51.741610910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:34:51.741880 env[1194]: time="2024-02-12T19:34:51.741750992Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c pid=1594 runtime=io.containerd.runc.v2 Feb 12 19:34:51.750911 env[1194]: time="2024-02-12T19:34:51.750826539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:34:51.751082 env[1194]: time="2024-02-12T19:34:51.750920715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:34:51.751082 env[1194]: time="2024-02-12T19:34:51.750955030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:34:51.751178 env[1194]: time="2024-02-12T19:34:51.751132162Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8ff732cbecabb3909c85e0596616c6b487a2716d6120d9f4e5f465e656d5901 pid=1611 runtime=io.containerd.runc.v2 Feb 12 19:34:51.805939 env[1194]: time="2024-02-12T19:34:51.805313121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5pcng,Uid:7b1f7362-abbb-4e48-8350-08c9935813ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c\"" Feb 12 19:34:51.807557 kubelet[1500]: E0212 19:34:51.807181 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:34:51.808185 env[1194]: time="2024-02-12T19:34:51.808159999Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 
19:34:51.812577 env[1194]: time="2024-02-12T19:34:51.812552475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mnrnn,Uid:3a365fb8-83e3-49b9-9654-ede2f99b78f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8ff732cbecabb3909c85e0596616c6b487a2716d6120d9f4e5f465e656d5901\"" Feb 12 19:34:51.813113 kubelet[1500]: E0212 19:34:51.813094 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:34:52.618475 kubelet[1500]: E0212 19:34:52.618442 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:53.619387 kubelet[1500]: E0212 19:34:53.619328 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:54.619706 kubelet[1500]: E0212 19:34:54.619642 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:55.620623 kubelet[1500]: E0212 19:34:55.620571 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:56.621802 kubelet[1500]: E0212 19:34:56.621739 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:57.458085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3906829049.mount: Deactivated successfully. 
Feb 12 19:34:57.622048 kubelet[1500]: E0212 19:34:57.622004 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:58.622430 kubelet[1500]: E0212 19:34:58.622384 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:34:59.623024 kubelet[1500]: E0212 19:34:59.622961 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:35:00.623657 kubelet[1500]: E0212 19:35:00.623624 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:35:01.623799 kubelet[1500]: E0212 19:35:01.623753 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:35:02.624372 kubelet[1500]: E0212 19:35:02.624333 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:35:03.413675 env[1194]: time="2024-02-12T19:35:03.413591920Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:35:03.415257 env[1194]: time="2024-02-12T19:35:03.415211780Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:35:03.416868 env[1194]: time="2024-02-12T19:35:03.416816105Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:35:03.417503 
env[1194]: time="2024-02-12T19:35:03.417446318Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 19:35:03.418210 env[1194]: time="2024-02-12T19:35:03.418186798Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 19:35:03.419688 env[1194]: time="2024-02-12T19:35:03.419662509Z" level=info msg="CreateContainer within sandbox \"f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:35:03.432968 env[1194]: time="2024-02-12T19:35:03.432917483Z" level=info msg="CreateContainer within sandbox \"f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"79475be727c0576099f3eaa5c1d7b053c6a979ce1a34826481f2fbe0db6e0098\"" Feb 12 19:35:03.433608 env[1194]: time="2024-02-12T19:35:03.433564991Z" level=info msg="StartContainer for \"79475be727c0576099f3eaa5c1d7b053c6a979ce1a34826481f2fbe0db6e0098\"" Feb 12 19:35:03.470789 env[1194]: time="2024-02-12T19:35:03.470738171Z" level=info msg="StartContainer for \"79475be727c0576099f3eaa5c1d7b053c6a979ce1a34826481f2fbe0db6e0098\" returns successfully" Feb 12 19:35:03.624695 kubelet[1500]: E0212 19:35:03.624618 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:35:03.792384 kubelet[1500]: E0212 19:35:03.792254 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:35:03.987095 env[1194]: time="2024-02-12T19:35:03.987048076Z" level=info msg="shim disconnected" id=79475be727c0576099f3eaa5c1d7b053c6a979ce1a34826481f2fbe0db6e0098 Feb 12 19:35:03.987095 
env[1194]: time="2024-02-12T19:35:03.987092302Z" level=warning msg="cleaning up after shim disconnected" id=79475be727c0576099f3eaa5c1d7b053c6a979ce1a34826481f2fbe0db6e0098 namespace=k8s.io Feb 12 19:35:03.987095 env[1194]: time="2024-02-12T19:35:03.987102318Z" level=info msg="cleaning up dead shim" Feb 12 19:35:03.993908 env[1194]: time="2024-02-12T19:35:03.993864793Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:35:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1720 runtime=io.containerd.runc.v2\n" Feb 12 19:35:04.427907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79475be727c0576099f3eaa5c1d7b053c6a979ce1a34826481f2fbe0db6e0098-rootfs.mount: Deactivated successfully. Feb 12 19:35:04.624806 kubelet[1500]: E0212 19:35:04.624752 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:35:04.794551 kubelet[1500]: E0212 19:35:04.794322 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:35:04.795885 env[1194]: time="2024-02-12T19:35:04.795838743Z" level=info msg="CreateContainer within sandbox \"f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:35:04.810309 env[1194]: time="2024-02-12T19:35:04.810262330Z" level=info msg="CreateContainer within sandbox \"f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ab8fa3e140674ffa5647f917054f46c2b3673b2c2f775d0f0546aeaac659a43c\"" Feb 12 19:35:04.810724 env[1194]: time="2024-02-12T19:35:04.810685540Z" level=info msg="StartContainer for \"ab8fa3e140674ffa5647f917054f46c2b3673b2c2f775d0f0546aeaac659a43c\"" Feb 12 19:35:04.850884 env[1194]: 
time="2024-02-12T19:35:04.849613778Z" level=info msg="StartContainer for \"ab8fa3e140674ffa5647f917054f46c2b3673b2c2f775d0f0546aeaac659a43c\" returns successfully" Feb 12 19:35:04.856674 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:35:04.856945 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:35:04.857121 systemd[1]: Stopping systemd-sysctl.service... Feb 12 19:35:04.858572 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:35:04.865854 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:35:04.965330 env[1194]: time="2024-02-12T19:35:04.965262817Z" level=info msg="shim disconnected" id=ab8fa3e140674ffa5647f917054f46c2b3673b2c2f775d0f0546aeaac659a43c Feb 12 19:35:04.965330 env[1194]: time="2024-02-12T19:35:04.965331642Z" level=warning msg="cleaning up after shim disconnected" id=ab8fa3e140674ffa5647f917054f46c2b3673b2c2f775d0f0546aeaac659a43c namespace=k8s.io Feb 12 19:35:04.965602 env[1194]: time="2024-02-12T19:35:04.965347557Z" level=info msg="cleaning up dead shim" Feb 12 19:35:04.972660 env[1194]: time="2024-02-12T19:35:04.972607640Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:35:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1783 runtime=io.containerd.runc.v2\n" Feb 12 19:35:05.427788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab8fa3e140674ffa5647f917054f46c2b3673b2c2f775d0f0546aeaac659a43c-rootfs.mount: Deactivated successfully. Feb 12 19:35:05.427970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2406072656.mount: Deactivated successfully. 
Feb 12 19:35:05.531800 env[1194]: time="2024-02-12T19:35:05.531712181Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:35:05.534228 env[1194]: time="2024-02-12T19:35:05.534181082Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:35:05.535885 env[1194]: time="2024-02-12T19:35:05.535817589Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:35:05.537174 env[1194]: time="2024-02-12T19:35:05.537149591Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:35:05.539679 env[1194]: time="2024-02-12T19:35:05.539633296Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 12 19:35:05.541341 env[1194]: time="2024-02-12T19:35:05.541310346Z" level=info msg="CreateContainer within sandbox \"f8ff732cbecabb3909c85e0596616c6b487a2716d6120d9f4e5f465e656d5901\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:35:05.552008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount464297085.mount: Deactivated successfully. 
Feb 12 19:35:05.554548 env[1194]: time="2024-02-12T19:35:05.554498047Z" level=info msg="CreateContainer within sandbox \"f8ff732cbecabb3909c85e0596616c6b487a2716d6120d9f4e5f465e656d5901\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"62b20d081e91b7a1e8da47804814c823aaafeb890142d48832ecb26e02a2be04\"" Feb 12 19:35:05.555066 env[1194]: time="2024-02-12T19:35:05.555015372Z" level=info msg="StartContainer for \"62b20d081e91b7a1e8da47804814c823aaafeb890142d48832ecb26e02a2be04\"" Feb 12 19:35:05.596856 env[1194]: time="2024-02-12T19:35:05.596790297Z" level=info msg="StartContainer for \"62b20d081e91b7a1e8da47804814c823aaafeb890142d48832ecb26e02a2be04\" returns successfully" Feb 12 19:35:05.625608 kubelet[1500]: E0212 19:35:05.625550 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:35:05.796823 kubelet[1500]: E0212 19:35:05.796732 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:35:05.798553 kubelet[1500]: E0212 19:35:05.798537 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:35:05.800425 env[1194]: time="2024-02-12T19:35:05.800367309Z" level=info msg="CreateContainer within sandbox \"f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:35:05.808035 kubelet[1500]: I0212 19:35:05.807994 1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mnrnn" podStartSLOduration=-9.223372012046825e+09 pod.CreationTimestamp="2024-02-12 19:34:41 +0000 UTC" firstStartedPulling="2024-02-12 19:34:51.813442875 +0000 UTC m=+23.595629320" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2024-02-12 19:35:05.807783509 +0000 UTC m=+37.589969954" watchObservedRunningTime="2024-02-12 19:35:05.807950238 +0000 UTC m=+37.590136703" Feb 12 19:35:05.969787 env[1194]: time="2024-02-12T19:35:05.969724241Z" level=info msg="CreateContainer within sandbox \"f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cdd53d73c30da38b2e0f4a1698f9a5ddeb61a6df63ab9169aa80aaeec34bcfb6\"" Feb 12 19:35:05.970356 env[1194]: time="2024-02-12T19:35:05.970274658Z" level=info msg="StartContainer for \"cdd53d73c30da38b2e0f4a1698f9a5ddeb61a6df63ab9169aa80aaeec34bcfb6\"" Feb 12 19:35:06.076914 env[1194]: time="2024-02-12T19:35:06.076858827Z" level=info msg="StartContainer for \"cdd53d73c30da38b2e0f4a1698f9a5ddeb61a6df63ab9169aa80aaeec34bcfb6\" returns successfully" Feb 12 19:35:06.595447 env[1194]: time="2024-02-12T19:35:06.595377702Z" level=info msg="shim disconnected" id=cdd53d73c30da38b2e0f4a1698f9a5ddeb61a6df63ab9169aa80aaeec34bcfb6 Feb 12 19:35:06.595691 env[1194]: time="2024-02-12T19:35:06.595441072Z" level=warning msg="cleaning up after shim disconnected" id=cdd53d73c30da38b2e0f4a1698f9a5ddeb61a6df63ab9169aa80aaeec34bcfb6 namespace=k8s.io Feb 12 19:35:06.595691 env[1194]: time="2024-02-12T19:35:06.595469977Z" level=info msg="cleaning up dead shim" Feb 12 19:35:06.601599 env[1194]: time="2024-02-12T19:35:06.601543059Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:35:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1983 runtime=io.containerd.runc.v2\n" Feb 12 19:35:06.625951 kubelet[1500]: E0212 19:35:06.625878 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:35:06.838868 kubelet[1500]: E0212 19:35:06.838823 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:35:06.838868 kubelet[1500]: E0212 19:35:06.838866 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:35:06.840500 env[1194]: time="2024-02-12T19:35:06.840461028Z" level=info msg="CreateContainer within sandbox \"f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:35:07.069653 update_engine[1175]: I0212 19:35:07.069612 1175 update_attempter.cc:509] Updating boot flags... Feb 12 19:35:07.119032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2949980097.mount: Deactivated successfully. Feb 12 19:35:07.406751 env[1194]: time="2024-02-12T19:35:07.406630915Z" level=info msg="CreateContainer within sandbox \"f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"453bbfe18bce1b1a791e83774e51c943d6009fc93ae57a295b47d7c08911810b\"" Feb 12 19:35:07.407233 env[1194]: time="2024-02-12T19:35:07.407213654Z" level=info msg="StartContainer for \"453bbfe18bce1b1a791e83774e51c943d6009fc93ae57a295b47d7c08911810b\"" Feb 12 19:35:07.534894 env[1194]: time="2024-02-12T19:35:07.534835995Z" level=info msg="StartContainer for \"453bbfe18bce1b1a791e83774e51c943d6009fc93ae57a295b47d7c08911810b\" returns successfully" Feb 12 19:35:07.546692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-453bbfe18bce1b1a791e83774e51c943d6009fc93ae57a295b47d7c08911810b-rootfs.mount: Deactivated successfully. 
Feb 12 19:35:07.626160 kubelet[1500]: E0212 19:35:07.626114 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:35:07.843383 kubelet[1500]: E0212 19:35:07.843349 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:35:07.919158 env[1194]: time="2024-02-12T19:35:07.919110531Z" level=info msg="shim disconnected" id=453bbfe18bce1b1a791e83774e51c943d6009fc93ae57a295b47d7c08911810b Feb 12 19:35:07.919158 env[1194]: time="2024-02-12T19:35:07.919151006Z" level=warning msg="cleaning up after shim disconnected" id=453bbfe18bce1b1a791e83774e51c943d6009fc93ae57a295b47d7c08911810b namespace=k8s.io Feb 12 19:35:07.919158 env[1194]: time="2024-02-12T19:35:07.919159679Z" level=info msg="cleaning up dead shim" Feb 12 19:35:07.925699 env[1194]: time="2024-02-12T19:35:07.925660767Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:35:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2050 runtime=io.containerd.runc.v2\n" Feb 12 19:35:08.605107 kubelet[1500]: E0212 19:35:08.605055 1500 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:35:08.626498 kubelet[1500]: E0212 19:35:08.626459 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:35:08.848910 kubelet[1500]: E0212 19:35:08.848879 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:35:08.850773 env[1194]: time="2024-02-12T19:35:08.850713495Z" level=info msg="CreateContainer within sandbox \"f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" 
Feb 12 19:35:08.865083 env[1194]: time="2024-02-12T19:35:08.864955427Z" level=info msg="CreateContainer within sandbox \"f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec\"" Feb 12 19:35:08.865584 env[1194]: time="2024-02-12T19:35:08.865542884Z" level=info msg="StartContainer for \"134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec\"" Feb 12 19:35:08.880331 systemd[1]: run-containerd-runc-k8s.io-134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec-runc.ELsRe6.mount: Deactivated successfully. Feb 12 19:35:08.909529 env[1194]: time="2024-02-12T19:35:08.909468141Z" level=info msg="StartContainer for \"134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec\" returns successfully" Feb 12 19:35:09.048158 kubelet[1500]: I0212 19:35:09.048127 1500 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:35:09.208868 kernel: Initializing XFRM netlink socket Feb 12 19:35:09.626859 kubelet[1500]: E0212 19:35:09.626793 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:35:09.853659 kubelet[1500]: E0212 19:35:09.853623 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:35:10.627308 kubelet[1500]: E0212 19:35:10.627235 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:35:10.822519 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 19:35:10.822630 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 19:35:10.822686 systemd-networkd[1074]: cilium_host: Link UP Feb 12 19:35:10.824306 systemd-networkd[1074]: 
cilium_net: Link UP
Feb 12 19:35:10.824467 systemd-networkd[1074]: cilium_net: Gained carrier
Feb 12 19:35:10.824608 systemd-networkd[1074]: cilium_host: Gained carrier
Feb 12 19:35:10.854613 kubelet[1500]: E0212 19:35:10.854568 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:35:10.900116 systemd-networkd[1074]: cilium_vxlan: Link UP
Feb 12 19:35:10.900131 systemd-networkd[1074]: cilium_vxlan: Gained carrier
Feb 12 19:35:11.012086 systemd-networkd[1074]: cilium_net: Gained IPv6LL
Feb 12 19:35:11.120881 kernel: NET: Registered PF_ALG protocol family
Feb 12 19:35:11.444010 systemd-networkd[1074]: cilium_host: Gained IPv6LL
Feb 12 19:35:11.627497 kubelet[1500]: E0212 19:35:11.627430 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:11.665628 systemd-networkd[1074]: lxc_health: Link UP
Feb 12 19:35:11.677738 systemd-networkd[1074]: lxc_health: Gained carrier
Feb 12 19:35:11.677946 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 19:35:11.856140 kubelet[1500]: E0212 19:35:11.856105 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:35:12.084055 systemd-networkd[1074]: cilium_vxlan: Gained IPv6LL
Feb 12 19:35:12.627996 kubelet[1500]: E0212 19:35:12.627950 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:12.725055 systemd-networkd[1074]: lxc_health: Gained IPv6LL
Feb 12 19:35:12.857931 kubelet[1500]: E0212 19:35:12.857892 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:35:13.256995 kubelet[1500]: I0212 19:35:13.256956 1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5pcng" podStartSLOduration=-9.22337200459786e+09 pod.CreationTimestamp="2024-02-12 19:34:41 +0000 UTC" firstStartedPulling="2024-02-12 19:34:51.807827195 +0000 UTC m=+23.590013640" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:35:09.864692786 +0000 UTC m=+41.646879231" watchObservedRunningTime="2024-02-12 19:35:13.256915335 +0000 UTC m=+45.039101780"
Feb 12 19:35:13.628286 kubelet[1500]: E0212 19:35:13.628226 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:13.859496 kubelet[1500]: E0212 19:35:13.859434 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:35:14.295689 kubelet[1500]: I0212 19:35:14.295650 1500 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:35:14.362285 kubelet[1500]: I0212 19:35:14.362237 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wc86\" (UniqueName: \"kubernetes.io/projected/fb5d8cda-2397-492a-ac10-71a7bc5931bf-kube-api-access-5wc86\") pod \"nginx-deployment-8ffc5cf85-cs5rs\" (UID: \"fb5d8cda-2397-492a-ac10-71a7bc5931bf\") " pod="default/nginx-deployment-8ffc5cf85-cs5rs"
Feb 12 19:35:14.628958 kubelet[1500]: E0212 19:35:14.628901 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:14.860177 kubelet[1500]: E0212 19:35:14.860144 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:35:14.899336 env[1194]: time="2024-02-12T19:35:14.899230029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-cs5rs,Uid:fb5d8cda-2397-492a-ac10-71a7bc5931bf,Namespace:default,Attempt:0,}"
Feb 12 19:35:15.595128 systemd-networkd[1074]: lxc57bb1510c970: Link UP
Feb 12 19:35:15.602957 kernel: eth0: renamed from tmp306d0
Feb 12 19:35:15.611951 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 19:35:15.612088 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc57bb1510c970: link becomes ready
Feb 12 19:35:15.610324 systemd-networkd[1074]: lxc57bb1510c970: Gained carrier
Feb 12 19:35:15.629449 kubelet[1500]: E0212 19:35:15.629412 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:15.976247 env[1194]: time="2024-02-12T19:35:15.976093613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:35:15.976247 env[1194]: time="2024-02-12T19:35:15.976129815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:35:15.976247 env[1194]: time="2024-02-12T19:35:15.976139191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:35:15.976623 env[1194]: time="2024-02-12T19:35:15.976269112Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/306d04e53ea78bcc7b6bf67a89160a0dade4c705cd24b6c74b3bf95c4f399722 pid=2592 runtime=io.containerd.runc.v2
Feb 12 19:35:15.994676 systemd-resolved[1127]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 12 19:35:16.016575 env[1194]: time="2024-02-12T19:35:16.016526627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-cs5rs,Uid:fb5d8cda-2397-492a-ac10-71a7bc5931bf,Namespace:default,Attempt:0,} returns sandbox id \"306d04e53ea78bcc7b6bf67a89160a0dade4c705cd24b6c74b3bf95c4f399722\""
Feb 12 19:35:16.017955 env[1194]: time="2024-02-12T19:35:16.017928967Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 12 19:35:16.629924 kubelet[1500]: E0212 19:35:16.629887 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:16.947975 systemd-networkd[1074]: lxc57bb1510c970: Gained IPv6LL
Feb 12 19:35:17.630439 kubelet[1500]: E0212 19:35:17.630401 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:18.630539 kubelet[1500]: E0212 19:35:18.630498 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:19.630779 kubelet[1500]: E0212 19:35:19.630727 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:19.687400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3058739861.mount: Deactivated successfully.
Feb 12 19:35:20.602085 env[1194]: time="2024-02-12T19:35:20.602017309Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:35:20.604426 env[1194]: time="2024-02-12T19:35:20.604358314Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:35:20.606151 env[1194]: time="2024-02-12T19:35:20.606104818Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:35:20.607504 env[1194]: time="2024-02-12T19:35:20.607463340Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:35:20.608112 env[1194]: time="2024-02-12T19:35:20.608077538Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 12 19:35:20.609653 env[1194]: time="2024-02-12T19:35:20.609612340Z" level=info msg="CreateContainer within sandbox \"306d04e53ea78bcc7b6bf67a89160a0dade4c705cd24b6c74b3bf95c4f399722\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 12 19:35:20.620813 env[1194]: time="2024-02-12T19:35:20.620759771Z" level=info msg="CreateContainer within sandbox \"306d04e53ea78bcc7b6bf67a89160a0dade4c705cd24b6c74b3bf95c4f399722\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"bbbd77a6e514f692df3727b473b41b77dc0717c7681b2f614956822a1e539e48\""
Feb 12 19:35:20.621259 env[1194]: time="2024-02-12T19:35:20.621215951Z" level=info msg="StartContainer for \"bbbd77a6e514f692df3727b473b41b77dc0717c7681b2f614956822a1e539e48\""
Feb 12 19:35:20.631331 kubelet[1500]: E0212 19:35:20.631294 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:20.658099 env[1194]: time="2024-02-12T19:35:20.658054462Z" level=info msg="StartContainer for \"bbbd77a6e514f692df3727b473b41b77dc0717c7681b2f614956822a1e539e48\" returns successfully"
Feb 12 19:35:20.883036 kubelet[1500]: I0212 19:35:20.882883 1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-cs5rs" podStartSLOduration=-9.223372029971956e+09 pod.CreationTimestamp="2024-02-12 19:35:14 +0000 UTC" firstStartedPulling="2024-02-12 19:35:16.01746477 +0000 UTC m=+47.799651205" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:35:20.882612696 +0000 UTC m=+52.664799141" watchObservedRunningTime="2024-02-12 19:35:20.882819799 +0000 UTC m=+52.665006244"
Feb 12 19:35:21.632446 kubelet[1500]: E0212 19:35:21.632393 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:22.632879 kubelet[1500]: E0212 19:35:22.632793 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:23.633471 kubelet[1500]: E0212 19:35:23.633419 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:24.634047 kubelet[1500]: E0212 19:35:24.633989 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:25.634616 kubelet[1500]: E0212 19:35:25.634565 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:26.174255 kubelet[1500]: I0212 19:35:26.174215 1500 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:35:26.220312 kubelet[1500]: I0212 19:35:26.220273 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2lnp\" (UniqueName: \"kubernetes.io/projected/4a3eb962-3e35-4088-a651-e5be2dbb1abf-kube-api-access-x2lnp\") pod \"nfs-server-provisioner-0\" (UID: \"4a3eb962-3e35-4088-a651-e5be2dbb1abf\") " pod="default/nfs-server-provisioner-0"
Feb 12 19:35:26.220312 kubelet[1500]: I0212 19:35:26.220311 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/4a3eb962-3e35-4088-a651-e5be2dbb1abf-data\") pod \"nfs-server-provisioner-0\" (UID: \"4a3eb962-3e35-4088-a651-e5be2dbb1abf\") " pod="default/nfs-server-provisioner-0"
Feb 12 19:35:26.478181 env[1194]: time="2024-02-12T19:35:26.478050923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4a3eb962-3e35-4088-a651-e5be2dbb1abf,Namespace:default,Attempt:0,}"
Feb 12 19:35:26.505977 systemd-networkd[1074]: lxc206f2ec4b63a: Link UP
Feb 12 19:35:26.520508 kernel: eth0: renamed from tmp751a9
Feb 12 19:35:26.524872 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 19:35:26.525025 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc206f2ec4b63a: link becomes ready
Feb 12 19:35:26.525022 systemd-networkd[1074]: lxc206f2ec4b63a: Gained carrier
Feb 12 19:35:26.634790 kubelet[1500]: E0212 19:35:26.634730 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:26.895371 env[1194]: time="2024-02-12T19:35:26.895273901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:35:26.895371 env[1194]: time="2024-02-12T19:35:26.895308864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:35:26.895371 env[1194]: time="2024-02-12T19:35:26.895318011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:35:26.895567 env[1194]: time="2024-02-12T19:35:26.895421656Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/751a9210238c8624b637e3b36c18dcaf548afb52df418488c069f59a03295d52 pid=2768 runtime=io.containerd.runc.v2
Feb 12 19:35:26.945554 systemd-resolved[1127]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 12 19:35:26.970311 env[1194]: time="2024-02-12T19:35:26.970268340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4a3eb962-3e35-4088-a651-e5be2dbb1abf,Namespace:default,Attempt:0,} returns sandbox id \"751a9210238c8624b637e3b36c18dcaf548afb52df418488c069f59a03295d52\""
Feb 12 19:35:26.971388 env[1194]: time="2024-02-12T19:35:26.971362241Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 12 19:35:27.635734 kubelet[1500]: E0212 19:35:27.635688 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:27.828106 systemd-networkd[1074]: lxc206f2ec4b63a: Gained IPv6LL
Feb 12 19:35:28.605307 kubelet[1500]: E0212 19:35:28.605248 1500 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:28.636039 kubelet[1500]: E0212 19:35:28.636003 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:29.596250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2051022712.mount: Deactivated successfully.
Feb 12 19:35:29.636519 kubelet[1500]: E0212 19:35:29.636471 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:30.636759 kubelet[1500]: E0212 19:35:30.636699 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:31.637381 kubelet[1500]: E0212 19:35:31.637339 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:32.638141 kubelet[1500]: E0212 19:35:32.638073 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:32.646449 env[1194]: time="2024-02-12T19:35:32.646405101Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:35:32.648386 env[1194]: time="2024-02-12T19:35:32.648349529Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:35:32.649914 env[1194]: time="2024-02-12T19:35:32.649886926Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:35:32.651362 env[1194]: time="2024-02-12T19:35:32.651321354Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:35:32.652131 env[1194]: time="2024-02-12T19:35:32.652101685Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 12 19:35:32.653996 env[1194]: time="2024-02-12T19:35:32.653960316Z" level=info msg="CreateContainer within sandbox \"751a9210238c8624b637e3b36c18dcaf548afb52df418488c069f59a03295d52\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 12 19:35:32.663294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1247020353.mount: Deactivated successfully.
Feb 12 19:35:32.663937 env[1194]: time="2024-02-12T19:35:32.663898605Z" level=info msg="CreateContainer within sandbox \"751a9210238c8624b637e3b36c18dcaf548afb52df418488c069f59a03295d52\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"46ffc0be78831e03d2e4954d12b6e722016bbaf844e0b564acbcdcd5f6e4903b\""
Feb 12 19:35:32.664306 env[1194]: time="2024-02-12T19:35:32.664270692Z" level=info msg="StartContainer for \"46ffc0be78831e03d2e4954d12b6e722016bbaf844e0b564acbcdcd5f6e4903b\""
Feb 12 19:35:32.711955 env[1194]: time="2024-02-12T19:35:32.707980488Z" level=info msg="StartContainer for \"46ffc0be78831e03d2e4954d12b6e722016bbaf844e0b564acbcdcd5f6e4903b\" returns successfully"
Feb 12 19:35:32.900245 kubelet[1500]: I0212 19:35:32.900133 1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.22337202995468e+09 pod.CreationTimestamp="2024-02-12 19:35:26 +0000 UTC" firstStartedPulling="2024-02-12 19:35:26.97114939 +0000 UTC m=+58.753335835" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:35:32.899578979 +0000 UTC m=+64.681765424" watchObservedRunningTime="2024-02-12 19:35:32.90009602 +0000 UTC m=+64.682282465"
Feb 12 19:35:33.638668 kubelet[1500]: E0212 19:35:33.638615 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:34.639198 kubelet[1500]: E0212 19:35:34.639142 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:35.640280 kubelet[1500]: E0212 19:35:35.640235 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:36.640795 kubelet[1500]: E0212 19:35:36.640738 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:37.641693 kubelet[1500]: E0212 19:35:37.641645 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:38.642651 kubelet[1500]: E0212 19:35:38.642582 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:39.643072 kubelet[1500]: E0212 19:35:39.643006 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:40.643909 kubelet[1500]: E0212 19:35:40.643863 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:41.644961 kubelet[1500]: E0212 19:35:41.644919 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:42.577000 kubelet[1500]: I0212 19:35:42.576950 1500 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:35:42.646038 kubelet[1500]: E0212 19:35:42.645992 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:42.762389 kubelet[1500]: I0212 19:35:42.762351 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkktj\" (UniqueName: \"kubernetes.io/projected/e4377ca2-6f02-4a46-8377-bf622aebaebc-kube-api-access-pkktj\") pod \"test-pod-1\" (UID: \"e4377ca2-6f02-4a46-8377-bf622aebaebc\") " pod="default/test-pod-1"
Feb 12 19:35:42.762389 kubelet[1500]: I0212 19:35:42.762390 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2b385cf9-a3eb-4e75-95d6-62fade97220a\" (UniqueName: \"kubernetes.io/nfs/e4377ca2-6f02-4a46-8377-bf622aebaebc-pvc-2b385cf9-a3eb-4e75-95d6-62fade97220a\") pod \"test-pod-1\" (UID: \"e4377ca2-6f02-4a46-8377-bf622aebaebc\") " pod="default/test-pod-1"
Feb 12 19:35:43.016868 kernel: FS-Cache: Loaded
Feb 12 19:35:43.051221 kernel: RPC: Registered named UNIX socket transport module.
Feb 12 19:35:43.051296 kernel: RPC: Registered udp transport module.
Feb 12 19:35:43.051322 kernel: RPC: Registered tcp transport module.
Feb 12 19:35:43.052284 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 12 19:35:43.089861 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 12 19:35:43.267924 kernel: NFS: Registering the id_resolver key type
Feb 12 19:35:43.268114 kernel: Key type id_resolver registered
Feb 12 19:35:43.268141 kernel: Key type id_legacy registered
Feb 12 19:35:43.288477 nfsidmap[2912]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 12 19:35:43.291715 nfsidmap[2915]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 12 19:35:43.481302 env[1194]: time="2024-02-12T19:35:43.481241998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e4377ca2-6f02-4a46-8377-bf622aebaebc,Namespace:default,Attempt:0,}"
Feb 12 19:35:43.510027 systemd-networkd[1074]: lxcc34826de21e3: Link UP
Feb 12 19:35:43.519872 kernel: eth0: renamed from tmp617bc
Feb 12 19:35:43.525957 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 19:35:43.526013 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc34826de21e3: link becomes ready
Feb 12 19:35:43.526017 systemd-networkd[1074]: lxcc34826de21e3: Gained carrier
Feb 12 19:35:43.646828 kubelet[1500]: E0212 19:35:43.646769 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:43.726773 env[1194]: time="2024-02-12T19:35:43.726698723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:35:43.726773 env[1194]: time="2024-02-12T19:35:43.726735470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:35:43.726773 env[1194]: time="2024-02-12T19:35:43.726745418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:35:43.727034 env[1194]: time="2024-02-12T19:35:43.726886429Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/617bccc33df759f2cf652351034d91479d72eb1e8e918ee1a2efa433eb6a3a8b pid=2949 runtime=io.containerd.runc.v2
Feb 12 19:35:43.750953 systemd-resolved[1127]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 12 19:35:43.780808 env[1194]: time="2024-02-12T19:35:43.780767487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e4377ca2-6f02-4a46-8377-bf622aebaebc,Namespace:default,Attempt:0,} returns sandbox id \"617bccc33df759f2cf652351034d91479d72eb1e8e918ee1a2efa433eb6a3a8b\""
Feb 12 19:35:43.781982 env[1194]: time="2024-02-12T19:35:43.781954941Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 12 19:35:44.213215 env[1194]: time="2024-02-12T19:35:44.213157876Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:35:44.214808 env[1194]: time="2024-02-12T19:35:44.214754919Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:35:44.216252 env[1194]: time="2024-02-12T19:35:44.216219978Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:35:44.217905 env[1194]: time="2024-02-12T19:35:44.217831539Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:35:44.218579 env[1194]: time="2024-02-12T19:35:44.218543936Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 12 19:35:44.220325 env[1194]: time="2024-02-12T19:35:44.220295175Z" level=info msg="CreateContainer within sandbox \"617bccc33df759f2cf652351034d91479d72eb1e8e918ee1a2efa433eb6a3a8b\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 12 19:35:44.233729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860518412.mount: Deactivated successfully.
Feb 12 19:35:44.235043 env[1194]: time="2024-02-12T19:35:44.234991963Z" level=info msg="CreateContainer within sandbox \"617bccc33df759f2cf652351034d91479d72eb1e8e918ee1a2efa433eb6a3a8b\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"43bb61e89b47d520a4303caa8360a177829cc0b763fac915962d044468863b26\""
Feb 12 19:35:44.235489 env[1194]: time="2024-02-12T19:35:44.235461261Z" level=info msg="StartContainer for \"43bb61e89b47d520a4303caa8360a177829cc0b763fac915962d044468863b26\""
Feb 12 19:35:44.272920 env[1194]: time="2024-02-12T19:35:44.272865491Z" level=info msg="StartContainer for \"43bb61e89b47d520a4303caa8360a177829cc0b763fac915962d044468863b26\" returns successfully"
Feb 12 19:35:44.647921 kubelet[1500]: E0212 19:35:44.647874 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:44.660023 systemd-networkd[1074]: lxcc34826de21e3: Gained IPv6LL
Feb 12 19:35:44.921182 kubelet[1500]: I0212 19:35:44.921041 1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.22337201793378e+09 pod.CreationTimestamp="2024-02-12 19:35:26 +0000 UTC" firstStartedPulling="2024-02-12 19:35:43.781738872 +0000 UTC m=+75.563925317" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:35:44.920594918 +0000 UTC m=+76.702781363" watchObservedRunningTime="2024-02-12 19:35:44.920994868 +0000 UTC m=+76.703181313"
Feb 12 19:35:45.648432 kubelet[1500]: E0212 19:35:45.648335 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:46.649391 kubelet[1500]: E0212 19:35:46.649326 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:47.650506 kubelet[1500]: E0212 19:35:47.650442 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:48.417737 systemd[1]: run-containerd-runc-k8s.io-134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec-runc.DNCr3W.mount: Deactivated successfully.
Feb 12 19:35:48.430397 env[1194]: time="2024-02-12T19:35:48.430315249Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 19:35:48.436086 env[1194]: time="2024-02-12T19:35:48.436048636Z" level=info msg="StopContainer for \"134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec\" with timeout 1 (s)"
Feb 12 19:35:48.436347 env[1194]: time="2024-02-12T19:35:48.436311704Z" level=info msg="Stop container \"134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec\" with signal terminated"
Feb 12 19:35:48.441685 systemd-networkd[1074]: lxc_health: Link DOWN
Feb 12 19:35:48.441691 systemd-networkd[1074]: lxc_health: Lost carrier
Feb 12 19:35:48.478867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec-rootfs.mount: Deactivated successfully.
Feb 12 19:35:48.487602 env[1194]: time="2024-02-12T19:35:48.487540690Z" level=info msg="shim disconnected" id=134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec
Feb 12 19:35:48.487730 env[1194]: time="2024-02-12T19:35:48.487607945Z" level=warning msg="cleaning up after shim disconnected" id=134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec namespace=k8s.io
Feb 12 19:35:48.487730 env[1194]: time="2024-02-12T19:35:48.487617083Z" level=info msg="cleaning up dead shim"
Feb 12 19:35:48.494268 env[1194]: time="2024-02-12T19:35:48.494236352Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:35:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3080 runtime=io.containerd.runc.v2\n"
Feb 12 19:35:48.497029 env[1194]: time="2024-02-12T19:35:48.496998115Z" level=info msg="StopContainer for \"134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec\" returns successfully"
Feb 12 19:35:48.497684 env[1194]: time="2024-02-12T19:35:48.497630789Z" level=info msg="StopPodSandbox for \"f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c\""
Feb 12 19:35:48.497833 env[1194]: time="2024-02-12T19:35:48.497710768Z" level=info msg="Container to stop \"79475be727c0576099f3eaa5c1d7b053c6a979ce1a34826481f2fbe0db6e0098\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:35:48.497833 env[1194]: time="2024-02-12T19:35:48.497726075Z" level=info msg="Container to stop \"453bbfe18bce1b1a791e83774e51c943d6009fc93ae57a295b47d7c08911810b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:35:48.497833 env[1194]: time="2024-02-12T19:35:48.497737517Z" level=info msg="Container to stop \"ab8fa3e140674ffa5647f917054f46c2b3673b2c2f775d0f0546aeaac659a43c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:35:48.497833 env[1194]: time="2024-02-12T19:35:48.497747876Z" level=info msg="Container to stop \"cdd53d73c30da38b2e0f4a1698f9a5ddeb61a6df63ab9169aa80aaeec34bcfb6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:35:48.497833 env[1194]: time="2024-02-12T19:35:48.497757564Z" level=info msg="Container to stop \"134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:35:48.499619 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c-shm.mount: Deactivated successfully.
Feb 12 19:35:48.523075 env[1194]: time="2024-02-12T19:35:48.523024237Z" level=info msg="shim disconnected" id=f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c
Feb 12 19:35:48.523075 env[1194]: time="2024-02-12T19:35:48.523070393Z" level=warning msg="cleaning up after shim disconnected" id=f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c namespace=k8s.io
Feb 12 19:35:48.523075 env[1194]: time="2024-02-12T19:35:48.523078889Z" level=info msg="cleaning up dead shim"
Feb 12 19:35:48.529395 env[1194]: time="2024-02-12T19:35:48.529342308Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:35:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3113 runtime=io.containerd.runc.v2\n"
Feb 12 19:35:48.529721 env[1194]: time="2024-02-12T19:35:48.529676658Z" level=info msg="TearDown network for sandbox \"f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c\" successfully"
Feb 12 19:35:48.529721 env[1194]: time="2024-02-12T19:35:48.529708668Z" level=info msg="StopPodSandbox for \"f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c\" returns successfully"
Feb 12 19:35:48.605252 kubelet[1500]: E0212 19:35:48.605207 1500 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:48.651570 kubelet[1500]: E0212 19:35:48.651509 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:48.668119 kubelet[1500]: E0212 19:35:48.668033 1500 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 19:35:48.692329 kubelet[1500]: I0212 19:35:48.692297 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-xtables-lock\") pod \"7b1f7362-abbb-4e48-8350-08c9935813ed\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") "
Feb 12 19:35:48.692407 kubelet[1500]: I0212 19:35:48.692336 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-hostproc\") pod \"7b1f7362-abbb-4e48-8350-08c9935813ed\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") "
Feb 12 19:35:48.692407 kubelet[1500]: I0212 19:35:48.692382 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnrsv\" (UniqueName: \"kubernetes.io/projected/7b1f7362-abbb-4e48-8350-08c9935813ed-kube-api-access-cnrsv\") pod \"7b1f7362-abbb-4e48-8350-08c9935813ed\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") "
Feb 12 19:35:48.692483 kubelet[1500]: I0212 19:35:48.692412 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7b1f7362-abbb-4e48-8350-08c9935813ed-hubble-tls\") pod \"7b1f7362-abbb-4e48-8350-08c9935813ed\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") "
Feb 12 19:35:48.692483 kubelet[1500]: I0212 19:35:48.692438 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-cilium-run\") pod \"7b1f7362-abbb-4e48-8350-08c9935813ed\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") "
Feb 12 19:35:48.692483 kubelet[1500]: I0212 19:35:48.692463 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-cni-path\") pod \"7b1f7362-abbb-4e48-8350-08c9935813ed\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") "
Feb 12 19:35:48.692592 kubelet[1500]: I0212 19:35:48.692488 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-cilium-cgroup\") pod \"7b1f7362-abbb-4e48-8350-08c9935813ed\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") "
Feb 12 19:35:48.692592 kubelet[1500]: I0212 19:35:48.692521 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7b1f7362-abbb-4e48-8350-08c9935813ed-clustermesh-secrets\") pod \"7b1f7362-abbb-4e48-8350-08c9935813ed\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") "
Feb 12 19:35:48.692592 kubelet[1500]: I0212 19:35:48.692546 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-etc-cni-netd\") pod \"7b1f7362-abbb-4e48-8350-08c9935813ed\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") "
Feb 12 19:35:48.692592 kubelet[1500]: I0212 19:35:48.692532 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7b1f7362-abbb-4e48-8350-08c9935813ed" (UID: "7b1f7362-abbb-4e48-8350-08c9935813ed"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:35:48.692740 kubelet[1500]: I0212 19:35:48.692602 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7b1f7362-abbb-4e48-8350-08c9935813ed" (UID: "7b1f7362-abbb-4e48-8350-08c9935813ed"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:35:48.692740 kubelet[1500]: I0212 19:35:48.692570 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-lib-modules\") pod \"7b1f7362-abbb-4e48-8350-08c9935813ed\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") "
Feb 12 19:35:48.692740 kubelet[1500]: I0212 19:35:48.692653 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-host-proc-sys-kernel\") pod \"7b1f7362-abbb-4e48-8350-08c9935813ed\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") "
Feb 12 19:35:48.692740 kubelet[1500]: I0212 19:35:48.692677 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-bpf-maps\") pod \"7b1f7362-abbb-4e48-8350-08c9935813ed\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") "
Feb 12 19:35:48.692740 kubelet[1500]: I0212 19:35:48.692701 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-host-proc-sys-net\") pod \"7b1f7362-abbb-4e48-8350-08c9935813ed\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") "
Feb 12 19:35:48.692740 kubelet[1500]: I0212 19:35:48.692731 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b1f7362-abbb-4e48-8350-08c9935813ed-cilium-config-path\") pod \"7b1f7362-abbb-4e48-8350-08c9935813ed\" (UID: \"7b1f7362-abbb-4e48-8350-08c9935813ed\") "
Feb 12 19:35:48.692981 kubelet[1500]: I0212 19:35:48.692763 1500 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-cilium-run\") on node \"10.0.0.52\" DevicePath \"\""
Feb 12 19:35:48.692981 kubelet[1500]: I0212 19:35:48.692780 1500 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-lib-modules\") on node \"10.0.0.52\" DevicePath \"\""
Feb 12 19:35:48.692981 kubelet[1500]: I0212 19:35:48.692905 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-hostproc" (OuterVolumeSpecName: "hostproc") pod "7b1f7362-abbb-4e48-8350-08c9935813ed" (UID: "7b1f7362-abbb-4e48-8350-08c9935813ed"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:35:48.692981 kubelet[1500]: I0212 19:35:48.692941 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7b1f7362-abbb-4e48-8350-08c9935813ed" (UID: "7b1f7362-abbb-4e48-8350-08c9935813ed"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:35:48.692981 kubelet[1500]: I0212 19:35:48.692966 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-cni-path" (OuterVolumeSpecName: "cni-path") pod "7b1f7362-abbb-4e48-8350-08c9935813ed" (UID: "7b1f7362-abbb-4e48-8350-08c9935813ed").
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:35:48.693183 kubelet[1500]: I0212 19:35:48.692986 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7b1f7362-abbb-4e48-8350-08c9935813ed" (UID: "7b1f7362-abbb-4e48-8350-08c9935813ed"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:35:48.693183 kubelet[1500]: W0212 19:35:48.692999 1500 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/7b1f7362-abbb-4e48-8350-08c9935813ed/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:35:48.693264 kubelet[1500]: I0212 19:35:48.693230 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7b1f7362-abbb-4e48-8350-08c9935813ed" (UID: "7b1f7362-abbb-4e48-8350-08c9935813ed"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:35:48.693308 kubelet[1500]: I0212 19:35:48.693275 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7b1f7362-abbb-4e48-8350-08c9935813ed" (UID: "7b1f7362-abbb-4e48-8350-08c9935813ed"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:35:48.693308 kubelet[1500]: I0212 19:35:48.693298 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7b1f7362-abbb-4e48-8350-08c9935813ed" (UID: "7b1f7362-abbb-4e48-8350-08c9935813ed"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:35:48.693391 kubelet[1500]: I0212 19:35:48.693321 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7b1f7362-abbb-4e48-8350-08c9935813ed" (UID: "7b1f7362-abbb-4e48-8350-08c9935813ed"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:35:48.695405 kubelet[1500]: I0212 19:35:48.695374 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b1f7362-abbb-4e48-8350-08c9935813ed-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7b1f7362-abbb-4e48-8350-08c9935813ed" (UID: "7b1f7362-abbb-4e48-8350-08c9935813ed"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:35:48.695796 kubelet[1500]: I0212 19:35:48.695743 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b1f7362-abbb-4e48-8350-08c9935813ed-kube-api-access-cnrsv" (OuterVolumeSpecName: "kube-api-access-cnrsv") pod "7b1f7362-abbb-4e48-8350-08c9935813ed" (UID: "7b1f7362-abbb-4e48-8350-08c9935813ed"). InnerVolumeSpecName "kube-api-access-cnrsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:35:48.695890 kubelet[1500]: I0212 19:35:48.695744 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1f7362-abbb-4e48-8350-08c9935813ed-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7b1f7362-abbb-4e48-8350-08c9935813ed" (UID: "7b1f7362-abbb-4e48-8350-08c9935813ed"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:35:48.696390 kubelet[1500]: I0212 19:35:48.696353 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b1f7362-abbb-4e48-8350-08c9935813ed-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7b1f7362-abbb-4e48-8350-08c9935813ed" (UID: "7b1f7362-abbb-4e48-8350-08c9935813ed"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:35:48.793596 kubelet[1500]: I0212 19:35:48.793527 1500 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-host-proc-sys-kernel\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:48.793596 kubelet[1500]: I0212 19:35:48.793580 1500 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-bpf-maps\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:48.793596 kubelet[1500]: I0212 19:35:48.793596 1500 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-etc-cni-netd\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:48.793596 kubelet[1500]: I0212 19:35:48.793610 1500 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b1f7362-abbb-4e48-8350-08c9935813ed-cilium-config-path\") on node 
\"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:48.793920 kubelet[1500]: I0212 19:35:48.793627 1500 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-host-proc-sys-net\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:48.793920 kubelet[1500]: I0212 19:35:48.793643 1500 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-cnrsv\" (UniqueName: \"kubernetes.io/projected/7b1f7362-abbb-4e48-8350-08c9935813ed-kube-api-access-cnrsv\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:48.793920 kubelet[1500]: I0212 19:35:48.793655 1500 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7b1f7362-abbb-4e48-8350-08c9935813ed-hubble-tls\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:48.793920 kubelet[1500]: I0212 19:35:48.793668 1500 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-xtables-lock\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:48.793920 kubelet[1500]: I0212 19:35:48.793680 1500 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-hostproc\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:48.793920 kubelet[1500]: I0212 19:35:48.793693 1500 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-cilium-cgroup\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:48.793920 kubelet[1500]: I0212 19:35:48.793707 1500 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7b1f7362-abbb-4e48-8350-08c9935813ed-clustermesh-secrets\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:48.793920 kubelet[1500]: I0212 19:35:48.793721 1500 
reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7b1f7362-abbb-4e48-8350-08c9935813ed-cni-path\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:48.922018 kubelet[1500]: I0212 19:35:48.921895 1500 scope.go:115] "RemoveContainer" containerID="134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec" Feb 12 19:35:48.923911 env[1194]: time="2024-02-12T19:35:48.923872512Z" level=info msg="RemoveContainer for \"134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec\"" Feb 12 19:35:48.928664 env[1194]: time="2024-02-12T19:35:48.928626502Z" level=info msg="RemoveContainer for \"134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec\" returns successfully" Feb 12 19:35:48.928795 kubelet[1500]: I0212 19:35:48.928779 1500 scope.go:115] "RemoveContainer" containerID="453bbfe18bce1b1a791e83774e51c943d6009fc93ae57a295b47d7c08911810b" Feb 12 19:35:48.929814 env[1194]: time="2024-02-12T19:35:48.929769341Z" level=info msg="RemoveContainer for \"453bbfe18bce1b1a791e83774e51c943d6009fc93ae57a295b47d7c08911810b\"" Feb 12 19:35:48.932708 env[1194]: time="2024-02-12T19:35:48.932666886Z" level=info msg="RemoveContainer for \"453bbfe18bce1b1a791e83774e51c943d6009fc93ae57a295b47d7c08911810b\" returns successfully" Feb 12 19:35:48.932859 kubelet[1500]: I0212 19:35:48.932834 1500 scope.go:115] "RemoveContainer" containerID="cdd53d73c30da38b2e0f4a1698f9a5ddeb61a6df63ab9169aa80aaeec34bcfb6" Feb 12 19:35:48.933543 env[1194]: time="2024-02-12T19:35:48.933523055Z" level=info msg="RemoveContainer for \"cdd53d73c30da38b2e0f4a1698f9a5ddeb61a6df63ab9169aa80aaeec34bcfb6\"" Feb 12 19:35:48.936430 env[1194]: time="2024-02-12T19:35:48.936405172Z" level=info msg="RemoveContainer for \"cdd53d73c30da38b2e0f4a1698f9a5ddeb61a6df63ab9169aa80aaeec34bcfb6\" returns successfully" Feb 12 19:35:48.936573 kubelet[1500]: I0212 19:35:48.936540 1500 scope.go:115] "RemoveContainer" 
containerID="ab8fa3e140674ffa5647f917054f46c2b3673b2c2f775d0f0546aeaac659a43c" Feb 12 19:35:48.937598 env[1194]: time="2024-02-12T19:35:48.937568781Z" level=info msg="RemoveContainer for \"ab8fa3e140674ffa5647f917054f46c2b3673b2c2f775d0f0546aeaac659a43c\"" Feb 12 19:35:48.940909 env[1194]: time="2024-02-12T19:35:48.940876807Z" level=info msg="RemoveContainer for \"ab8fa3e140674ffa5647f917054f46c2b3673b2c2f775d0f0546aeaac659a43c\" returns successfully" Feb 12 19:35:48.941019 kubelet[1500]: I0212 19:35:48.940997 1500 scope.go:115] "RemoveContainer" containerID="79475be727c0576099f3eaa5c1d7b053c6a979ce1a34826481f2fbe0db6e0098" Feb 12 19:35:48.942106 env[1194]: time="2024-02-12T19:35:48.942048110Z" level=info msg="RemoveContainer for \"79475be727c0576099f3eaa5c1d7b053c6a979ce1a34826481f2fbe0db6e0098\"" Feb 12 19:35:48.945883 env[1194]: time="2024-02-12T19:35:48.945812443Z" level=info msg="RemoveContainer for \"79475be727c0576099f3eaa5c1d7b053c6a979ce1a34826481f2fbe0db6e0098\" returns successfully" Feb 12 19:35:48.946121 kubelet[1500]: I0212 19:35:48.946105 1500 scope.go:115] "RemoveContainer" containerID="134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec" Feb 12 19:35:48.946408 env[1194]: time="2024-02-12T19:35:48.946327388Z" level=error msg="ContainerStatus for \"134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec\": not found" Feb 12 19:35:48.946581 kubelet[1500]: E0212 19:35:48.946555 1500 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec\": not found" containerID="134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec" Feb 12 19:35:48.946630 kubelet[1500]: I0212 19:35:48.946612 1500 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec} err="failed to get container status \"134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"134fe09604d0bd7ea82c59185de2dcf3eec142792c69ee0b6b5b4c811d2b20ec\": not found" Feb 12 19:35:48.946657 kubelet[1500]: I0212 19:35:48.946633 1500 scope.go:115] "RemoveContainer" containerID="453bbfe18bce1b1a791e83774e51c943d6009fc93ae57a295b47d7c08911810b" Feb 12 19:35:48.946867 env[1194]: time="2024-02-12T19:35:48.946808101Z" level=error msg="ContainerStatus for \"453bbfe18bce1b1a791e83774e51c943d6009fc93ae57a295b47d7c08911810b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"453bbfe18bce1b1a791e83774e51c943d6009fc93ae57a295b47d7c08911810b\": not found" Feb 12 19:35:48.946965 kubelet[1500]: E0212 19:35:48.946951 1500 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"453bbfe18bce1b1a791e83774e51c943d6009fc93ae57a295b47d7c08911810b\": not found" containerID="453bbfe18bce1b1a791e83774e51c943d6009fc93ae57a295b47d7c08911810b" Feb 12 19:35:48.946995 kubelet[1500]: I0212 19:35:48.946984 1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:453bbfe18bce1b1a791e83774e51c943d6009fc93ae57a295b47d7c08911810b} err="failed to get container status \"453bbfe18bce1b1a791e83774e51c943d6009fc93ae57a295b47d7c08911810b\": rpc error: code = NotFound desc = an error occurred when try to find container \"453bbfe18bce1b1a791e83774e51c943d6009fc93ae57a295b47d7c08911810b\": not found" Feb 12 19:35:48.946995 kubelet[1500]: I0212 19:35:48.946993 1500 scope.go:115] "RemoveContainer" containerID="cdd53d73c30da38b2e0f4a1698f9a5ddeb61a6df63ab9169aa80aaeec34bcfb6" 
Feb 12 19:35:48.947170 env[1194]: time="2024-02-12T19:35:48.947113878Z" level=error msg="ContainerStatus for \"cdd53d73c30da38b2e0f4a1698f9a5ddeb61a6df63ab9169aa80aaeec34bcfb6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cdd53d73c30da38b2e0f4a1698f9a5ddeb61a6df63ab9169aa80aaeec34bcfb6\": not found" Feb 12 19:35:48.947261 kubelet[1500]: E0212 19:35:48.947237 1500 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cdd53d73c30da38b2e0f4a1698f9a5ddeb61a6df63ab9169aa80aaeec34bcfb6\": not found" containerID="cdd53d73c30da38b2e0f4a1698f9a5ddeb61a6df63ab9169aa80aaeec34bcfb6" Feb 12 19:35:48.947261 kubelet[1500]: I0212 19:35:48.947256 1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:cdd53d73c30da38b2e0f4a1698f9a5ddeb61a6df63ab9169aa80aaeec34bcfb6} err="failed to get container status \"cdd53d73c30da38b2e0f4a1698f9a5ddeb61a6df63ab9169aa80aaeec34bcfb6\": rpc error: code = NotFound desc = an error occurred when try to find container \"cdd53d73c30da38b2e0f4a1698f9a5ddeb61a6df63ab9169aa80aaeec34bcfb6\": not found" Feb 12 19:35:48.947261 kubelet[1500]: I0212 19:35:48.947264 1500 scope.go:115] "RemoveContainer" containerID="ab8fa3e140674ffa5647f917054f46c2b3673b2c2f775d0f0546aeaac659a43c" Feb 12 19:35:48.947533 env[1194]: time="2024-02-12T19:35:48.947407533Z" level=error msg="ContainerStatus for \"ab8fa3e140674ffa5647f917054f46c2b3673b2c2f775d0f0546aeaac659a43c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab8fa3e140674ffa5647f917054f46c2b3673b2c2f775d0f0546aeaac659a43c\": not found" Feb 12 19:35:48.947583 kubelet[1500]: E0212 19:35:48.947545 1500 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"ab8fa3e140674ffa5647f917054f46c2b3673b2c2f775d0f0546aeaac659a43c\": not found" containerID="ab8fa3e140674ffa5647f917054f46c2b3673b2c2f775d0f0546aeaac659a43c" Feb 12 19:35:48.947583 kubelet[1500]: I0212 19:35:48.947562 1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ab8fa3e140674ffa5647f917054f46c2b3673b2c2f775d0f0546aeaac659a43c} err="failed to get container status \"ab8fa3e140674ffa5647f917054f46c2b3673b2c2f775d0f0546aeaac659a43c\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab8fa3e140674ffa5647f917054f46c2b3673b2c2f775d0f0546aeaac659a43c\": not found" Feb 12 19:35:48.947583 kubelet[1500]: I0212 19:35:48.947578 1500 scope.go:115] "RemoveContainer" containerID="79475be727c0576099f3eaa5c1d7b053c6a979ce1a34826481f2fbe0db6e0098" Feb 12 19:35:48.947730 env[1194]: time="2024-02-12T19:35:48.947693513Z" level=error msg="ContainerStatus for \"79475be727c0576099f3eaa5c1d7b053c6a979ce1a34826481f2fbe0db6e0098\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"79475be727c0576099f3eaa5c1d7b053c6a979ce1a34826481f2fbe0db6e0098\": not found" Feb 12 19:35:48.947832 kubelet[1500]: E0212 19:35:48.947813 1500 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"79475be727c0576099f3eaa5c1d7b053c6a979ce1a34826481f2fbe0db6e0098\": not found" containerID="79475be727c0576099f3eaa5c1d7b053c6a979ce1a34826481f2fbe0db6e0098" Feb 12 19:35:48.947832 kubelet[1500]: I0212 19:35:48.947835 1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:79475be727c0576099f3eaa5c1d7b053c6a979ce1a34826481f2fbe0db6e0098} err="failed to get container status \"79475be727c0576099f3eaa5c1d7b053c6a979ce1a34826481f2fbe0db6e0098\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"79475be727c0576099f3eaa5c1d7b053c6a979ce1a34826481f2fbe0db6e0098\": not found" Feb 12 19:35:49.415010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0918d0ca20e21109ccf66c925994287369eb9e93c1f1e1a4a4b8d160236119c-rootfs.mount: Deactivated successfully. Feb 12 19:35:49.415192 systemd[1]: var-lib-kubelet-pods-7b1f7362\x2dabbb\x2d4e48\x2d8350\x2d08c9935813ed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcnrsv.mount: Deactivated successfully. Feb 12 19:35:49.415286 systemd[1]: var-lib-kubelet-pods-7b1f7362\x2dabbb\x2d4e48\x2d8350\x2d08c9935813ed-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:35:49.415373 systemd[1]: var-lib-kubelet-pods-7b1f7362\x2dabbb\x2d4e48\x2d8350\x2d08c9935813ed-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:35:49.652313 kubelet[1500]: E0212 19:35:49.652253 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:35:50.653454 kubelet[1500]: E0212 19:35:50.653321 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:35:50.744746 kubelet[1500]: I0212 19:35:50.744704 1500 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=7b1f7362-abbb-4e48-8350-08c9935813ed path="/var/lib/kubelet/pods/7b1f7362-abbb-4e48-8350-08c9935813ed/volumes" Feb 12 19:35:50.866679 kubelet[1500]: I0212 19:35:50.866645 1500 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:35:50.866679 kubelet[1500]: E0212 19:35:50.866692 1500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b1f7362-abbb-4e48-8350-08c9935813ed" containerName="mount-bpf-fs" Feb 12 19:35:50.866929 kubelet[1500]: E0212 19:35:50.866700 1500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b1f7362-abbb-4e48-8350-08c9935813ed" containerName="mount-cgroup" Feb 12 19:35:50.866929 
kubelet[1500]: E0212 19:35:50.866706 1500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b1f7362-abbb-4e48-8350-08c9935813ed" containerName="apply-sysctl-overwrites" Feb 12 19:35:50.866929 kubelet[1500]: E0212 19:35:50.866712 1500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b1f7362-abbb-4e48-8350-08c9935813ed" containerName="clean-cilium-state" Feb 12 19:35:50.866929 kubelet[1500]: E0212 19:35:50.866719 1500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b1f7362-abbb-4e48-8350-08c9935813ed" containerName="cilium-agent" Feb 12 19:35:50.866929 kubelet[1500]: I0212 19:35:50.866736 1500 memory_manager.go:346] "RemoveStaleState removing state" podUID="7b1f7362-abbb-4e48-8350-08c9935813ed" containerName="cilium-agent" Feb 12 19:35:50.895648 kubelet[1500]: I0212 19:35:50.895606 1500 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:35:51.006707 kubelet[1500]: I0212 19:35:51.006557 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m7tx\" (UniqueName: \"kubernetes.io/projected/54b0eedd-fabe-4f06-805b-32d2d0671af3-kube-api-access-9m7tx\") pod \"cilium-operator-f59cbd8c6-5xnzl\" (UID: \"54b0eedd-fabe-4f06-805b-32d2d0671af3\") " pod="kube-system/cilium-operator-f59cbd8c6-5xnzl" Feb 12 19:35:51.006707 kubelet[1500]: I0212 19:35:51.006619 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-etc-cni-netd\") pod \"cilium-tm8x9\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " pod="kube-system/cilium-tm8x9" Feb 12 19:35:51.006707 kubelet[1500]: I0212 19:35:51.006644 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-xtables-lock\") pod \"cilium-tm8x9\" (UID: 
\"d9deae05-96ca-473b-af3f-42b71c6e2319\") " pod="kube-system/cilium-tm8x9" Feb 12 19:35:51.006707 kubelet[1500]: I0212 19:35:51.006691 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9deae05-96ca-473b-af3f-42b71c6e2319-clustermesh-secrets\") pod \"cilium-tm8x9\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " pod="kube-system/cilium-tm8x9" Feb 12 19:35:51.006958 kubelet[1500]: I0212 19:35:51.006772 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9deae05-96ca-473b-af3f-42b71c6e2319-cilium-config-path\") pod \"cilium-tm8x9\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " pod="kube-system/cilium-tm8x9" Feb 12 19:35:51.006958 kubelet[1500]: I0212 19:35:51.006836 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9deae05-96ca-473b-af3f-42b71c6e2319-hubble-tls\") pod \"cilium-tm8x9\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " pod="kube-system/cilium-tm8x9" Feb 12 19:35:51.006958 kubelet[1500]: I0212 19:35:51.006883 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-host-proc-sys-net\") pod \"cilium-tm8x9\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " pod="kube-system/cilium-tm8x9" Feb 12 19:35:51.006958 kubelet[1500]: I0212 19:35:51.006904 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54b0eedd-fabe-4f06-805b-32d2d0671af3-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-5xnzl\" (UID: \"54b0eedd-fabe-4f06-805b-32d2d0671af3\") " 
pod="kube-system/cilium-operator-f59cbd8c6-5xnzl" Feb 12 19:35:51.006958 kubelet[1500]: I0212 19:35:51.006924 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-cilium-run\") pod \"cilium-tm8x9\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " pod="kube-system/cilium-tm8x9" Feb 12 19:35:51.007149 kubelet[1500]: I0212 19:35:51.006945 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-bpf-maps\") pod \"cilium-tm8x9\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " pod="kube-system/cilium-tm8x9" Feb 12 19:35:51.007149 kubelet[1500]: I0212 19:35:51.006982 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-cilium-cgroup\") pod \"cilium-tm8x9\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " pod="kube-system/cilium-tm8x9" Feb 12 19:35:51.007149 kubelet[1500]: I0212 19:35:51.007011 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-lib-modules\") pod \"cilium-tm8x9\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " pod="kube-system/cilium-tm8x9" Feb 12 19:35:51.007149 kubelet[1500]: I0212 19:35:51.007028 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d9deae05-96ca-473b-af3f-42b71c6e2319-cilium-ipsec-secrets\") pod \"cilium-tm8x9\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " pod="kube-system/cilium-tm8x9" Feb 12 19:35:51.007149 kubelet[1500]: I0212 19:35:51.007082 1500 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lx72\" (UniqueName: \"kubernetes.io/projected/d9deae05-96ca-473b-af3f-42b71c6e2319-kube-api-access-8lx72\") pod \"cilium-tm8x9\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " pod="kube-system/cilium-tm8x9" Feb 12 19:35:51.007149 kubelet[1500]: I0212 19:35:51.007112 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-hostproc\") pod \"cilium-tm8x9\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " pod="kube-system/cilium-tm8x9" Feb 12 19:35:51.007372 kubelet[1500]: I0212 19:35:51.007130 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-cni-path\") pod \"cilium-tm8x9\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " pod="kube-system/cilium-tm8x9" Feb 12 19:35:51.007372 kubelet[1500]: I0212 19:35:51.007146 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-host-proc-sys-kernel\") pod \"cilium-tm8x9\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " pod="kube-system/cilium-tm8x9" Feb 12 19:35:51.169215 kubelet[1500]: E0212 19:35:51.169160 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:35:51.169714 env[1194]: time="2024-02-12T19:35:51.169645717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-5xnzl,Uid:54b0eedd-fabe-4f06-805b-32d2d0671af3,Namespace:kube-system,Attempt:0,}" Feb 12 19:35:51.181480 env[1194]: time="2024-02-12T19:35:51.181421103Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:35:51.181480 env[1194]: time="2024-02-12T19:35:51.181469062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:35:51.181610 env[1194]: time="2024-02-12T19:35:51.181486805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:35:51.181657 env[1194]: time="2024-02-12T19:35:51.181623338Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7e19a98444b28f014feded0091e76979802f94c708382456aa825e48336a97e pid=3142 runtime=io.containerd.runc.v2 Feb 12 19:35:51.198960 kubelet[1500]: E0212 19:35:51.198930 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:35:51.199633 env[1194]: time="2024-02-12T19:35:51.199592116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tm8x9,Uid:d9deae05-96ca-473b-af3f-42b71c6e2319,Namespace:kube-system,Attempt:0,}" Feb 12 19:35:51.211099 env[1194]: time="2024-02-12T19:35:51.211022230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:35:51.211099 env[1194]: time="2024-02-12T19:35:51.211063186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:35:51.211099 env[1194]: time="2024-02-12T19:35:51.211072323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:35:51.211333 env[1194]: time="2024-02-12T19:35:51.211182549Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27d9102e7476eb5addef9e28dc4c473c4fbdcb6ab51ab0aace1feaabb2b166b3 pid=3176 runtime=io.containerd.runc.v2 Feb 12 19:35:51.228468 env[1194]: time="2024-02-12T19:35:51.227778824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-5xnzl,Uid:54b0eedd-fabe-4f06-805b-32d2d0671af3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7e19a98444b28f014feded0091e76979802f94c708382456aa825e48336a97e\"" Feb 12 19:35:51.228695 kubelet[1500]: E0212 19:35:51.228403 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:35:51.229653 env[1194]: time="2024-02-12T19:35:51.229612202Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 19:35:51.242937 env[1194]: time="2024-02-12T19:35:51.242889990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tm8x9,Uid:d9deae05-96ca-473b-af3f-42b71c6e2319,Namespace:kube-system,Attempt:0,} returns sandbox id \"27d9102e7476eb5addef9e28dc4c473c4fbdcb6ab51ab0aace1feaabb2b166b3\"" Feb 12 19:35:51.243621 kubelet[1500]: E0212 19:35:51.243588 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:35:51.245317 env[1194]: time="2024-02-12T19:35:51.245277598Z" level=info msg="CreateContainer within sandbox \"27d9102e7476eb5addef9e28dc4c473c4fbdcb6ab51ab0aace1feaabb2b166b3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:35:51.255759 env[1194]: time="2024-02-12T19:35:51.255709277Z" 
level=info msg="CreateContainer within sandbox \"27d9102e7476eb5addef9e28dc4c473c4fbdcb6ab51ab0aace1feaabb2b166b3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cfedea781542ee00ffcf7739bdf4a9a8b0c31e024f67262e995aeed246f279ef\"" Feb 12 19:35:51.256169 env[1194]: time="2024-02-12T19:35:51.256008683Z" level=info msg="StartContainer for \"cfedea781542ee00ffcf7739bdf4a9a8b0c31e024f67262e995aeed246f279ef\"" Feb 12 19:35:51.296868 env[1194]: time="2024-02-12T19:35:51.296269382Z" level=info msg="StartContainer for \"cfedea781542ee00ffcf7739bdf4a9a8b0c31e024f67262e995aeed246f279ef\" returns successfully" Feb 12 19:35:51.328756 env[1194]: time="2024-02-12T19:35:51.328706450Z" level=info msg="shim disconnected" id=cfedea781542ee00ffcf7739bdf4a9a8b0c31e024f67262e995aeed246f279ef Feb 12 19:35:51.328756 env[1194]: time="2024-02-12T19:35:51.328761543Z" level=warning msg="cleaning up after shim disconnected" id=cfedea781542ee00ffcf7739bdf4a9a8b0c31e024f67262e995aeed246f279ef namespace=k8s.io Feb 12 19:35:51.329011 env[1194]: time="2024-02-12T19:35:51.328771101Z" level=info msg="cleaning up dead shim" Feb 12 19:35:51.335460 env[1194]: time="2024-02-12T19:35:51.335398488Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:35:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3270 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T19:35:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Feb 12 19:35:51.654446 kubelet[1500]: E0212 19:35:51.654391 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:35:51.927926 env[1194]: time="2024-02-12T19:35:51.927697489Z" level=info msg="StopPodSandbox for \"27d9102e7476eb5addef9e28dc4c473c4fbdcb6ab51ab0aace1feaabb2b166b3\"" Feb 12 19:35:51.927926 env[1194]: time="2024-02-12T19:35:51.927740308Z" level=info 
msg="Container to stop \"cfedea781542ee00ffcf7739bdf4a9a8b0c31e024f67262e995aeed246f279ef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:35:51.948164 env[1194]: time="2024-02-12T19:35:51.948112923Z" level=info msg="shim disconnected" id=27d9102e7476eb5addef9e28dc4c473c4fbdcb6ab51ab0aace1feaabb2b166b3 Feb 12 19:35:51.948770 env[1194]: time="2024-02-12T19:35:51.948744117Z" level=warning msg="cleaning up after shim disconnected" id=27d9102e7476eb5addef9e28dc4c473c4fbdcb6ab51ab0aace1feaabb2b166b3 namespace=k8s.io Feb 12 19:35:51.948770 env[1194]: time="2024-02-12T19:35:51.948763783Z" level=info msg="cleaning up dead shim" Feb 12 19:35:51.954475 env[1194]: time="2024-02-12T19:35:51.954434372Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:35:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3302 runtime=io.containerd.runc.v2\n" Feb 12 19:35:51.954752 env[1194]: time="2024-02-12T19:35:51.954727819Z" level=info msg="TearDown network for sandbox \"27d9102e7476eb5addef9e28dc4c473c4fbdcb6ab51ab0aace1feaabb2b166b3\" successfully" Feb 12 19:35:51.954796 env[1194]: time="2024-02-12T19:35:51.954751522Z" level=info msg="StopPodSandbox for \"27d9102e7476eb5addef9e28dc4c473c4fbdcb6ab51ab0aace1feaabb2b166b3\" returns successfully" Feb 12 19:35:52.113970 kubelet[1500]: I0212 19:35:52.113441 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-host-proc-sys-net\") pod \"d9deae05-96ca-473b-af3f-42b71c6e2319\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " Feb 12 19:35:52.113970 kubelet[1500]: I0212 19:35:52.113482 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d9deae05-96ca-473b-af3f-42b71c6e2319" (UID: 
"d9deae05-96ca-473b-af3f-42b71c6e2319"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:35:52.113970 kubelet[1500]: I0212 19:35:52.113500 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d9deae05-96ca-473b-af3f-42b71c6e2319-cilium-ipsec-secrets\") pod \"d9deae05-96ca-473b-af3f-42b71c6e2319\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " Feb 12 19:35:52.113970 kubelet[1500]: I0212 19:35:52.113527 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-hostproc\") pod \"d9deae05-96ca-473b-af3f-42b71c6e2319\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " Feb 12 19:35:52.113970 kubelet[1500]: I0212 19:35:52.113552 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9deae05-96ca-473b-af3f-42b71c6e2319-hubble-tls\") pod \"d9deae05-96ca-473b-af3f-42b71c6e2319\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " Feb 12 19:35:52.113970 kubelet[1500]: I0212 19:35:52.113576 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-cilium-run\") pod \"d9deae05-96ca-473b-af3f-42b71c6e2319\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " Feb 12 19:35:52.114248 kubelet[1500]: I0212 19:35:52.113595 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-hostproc" (OuterVolumeSpecName: "hostproc") pod "d9deae05-96ca-473b-af3f-42b71c6e2319" (UID: "d9deae05-96ca-473b-af3f-42b71c6e2319"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:35:52.114248 kubelet[1500]: I0212 19:35:52.113685 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d9deae05-96ca-473b-af3f-42b71c6e2319" (UID: "d9deae05-96ca-473b-af3f-42b71c6e2319"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:35:52.114248 kubelet[1500]: I0212 19:35:52.113911 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-lib-modules\") pod \"d9deae05-96ca-473b-af3f-42b71c6e2319\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " Feb 12 19:35:52.114248 kubelet[1500]: I0212 19:35:52.113940 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-host-proc-sys-kernel\") pod \"d9deae05-96ca-473b-af3f-42b71c6e2319\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " Feb 12 19:35:52.114248 kubelet[1500]: I0212 19:35:52.113965 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-cni-path\") pod \"d9deae05-96ca-473b-af3f-42b71c6e2319\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " Feb 12 19:35:52.114366 kubelet[1500]: I0212 19:35:52.114034 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d9deae05-96ca-473b-af3f-42b71c6e2319" (UID: "d9deae05-96ca-473b-af3f-42b71c6e2319"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:35:52.114366 kubelet[1500]: I0212 19:35:52.114095 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d9deae05-96ca-473b-af3f-42b71c6e2319" (UID: "d9deae05-96ca-473b-af3f-42b71c6e2319"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:35:52.114366 kubelet[1500]: I0212 19:35:52.114144 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9deae05-96ca-473b-af3f-42b71c6e2319-clustermesh-secrets\") pod \"d9deae05-96ca-473b-af3f-42b71c6e2319\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " Feb 12 19:35:52.114366 kubelet[1500]: I0212 19:35:52.114186 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9deae05-96ca-473b-af3f-42b71c6e2319-cilium-config-path\") pod \"d9deae05-96ca-473b-af3f-42b71c6e2319\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " Feb 12 19:35:52.114366 kubelet[1500]: I0212 19:35:52.114211 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-cilium-cgroup\") pod \"d9deae05-96ca-473b-af3f-42b71c6e2319\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " Feb 12 19:35:52.114522 kubelet[1500]: I0212 19:35:52.114234 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-bpf-maps\") pod \"d9deae05-96ca-473b-af3f-42b71c6e2319\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " Feb 12 19:35:52.114522 kubelet[1500]: I0212 19:35:52.114258 1500 reconciler_common.go:169] 
"operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-xtables-lock\") pod \"d9deae05-96ca-473b-af3f-42b71c6e2319\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " Feb 12 19:35:52.114522 kubelet[1500]: I0212 19:35:52.114283 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lx72\" (UniqueName: \"kubernetes.io/projected/d9deae05-96ca-473b-af3f-42b71c6e2319-kube-api-access-8lx72\") pod \"d9deae05-96ca-473b-af3f-42b71c6e2319\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " Feb 12 19:35:52.114522 kubelet[1500]: W0212 19:35:52.114432 1500 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d9deae05-96ca-473b-af3f-42b71c6e2319/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:35:52.114522 kubelet[1500]: I0212 19:35:52.114452 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d9deae05-96ca-473b-af3f-42b71c6e2319" (UID: "d9deae05-96ca-473b-af3f-42b71c6e2319"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:35:52.114522 kubelet[1500]: I0212 19:35:52.114480 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d9deae05-96ca-473b-af3f-42b71c6e2319" (UID: "d9deae05-96ca-473b-af3f-42b71c6e2319"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:35:52.114656 kubelet[1500]: I0212 19:35:52.114503 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d9deae05-96ca-473b-af3f-42b71c6e2319" (UID: "d9deae05-96ca-473b-af3f-42b71c6e2319"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:35:52.114656 kubelet[1500]: I0212 19:35:52.114520 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-cni-path" (OuterVolumeSpecName: "cni-path") pod "d9deae05-96ca-473b-af3f-42b71c6e2319" (UID: "d9deae05-96ca-473b-af3f-42b71c6e2319"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:35:52.115812 kubelet[1500]: I0212 19:35:52.115628 1500 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-etc-cni-netd\") pod \"d9deae05-96ca-473b-af3f-42b71c6e2319\" (UID: \"d9deae05-96ca-473b-af3f-42b71c6e2319\") " Feb 12 19:35:52.115812 kubelet[1500]: I0212 19:35:52.115678 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d9deae05-96ca-473b-af3f-42b71c6e2319" (UID: "d9deae05-96ca-473b-af3f-42b71c6e2319"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:35:52.115812 kubelet[1500]: I0212 19:35:52.115719 1500 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-hostproc\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:52.115812 kubelet[1500]: I0212 19:35:52.115729 1500 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-host-proc-sys-net\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:52.115812 kubelet[1500]: I0212 19:35:52.115738 1500 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-host-proc-sys-kernel\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:52.115812 kubelet[1500]: I0212 19:35:52.115761 1500 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-cni-path\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:52.115812 kubelet[1500]: I0212 19:35:52.115769 1500 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-cilium-run\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:52.116033 kubelet[1500]: I0212 19:35:52.115776 1500 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-lib-modules\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:52.116033 kubelet[1500]: I0212 19:35:52.115784 1500 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-cilium-cgroup\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:52.116033 kubelet[1500]: I0212 19:35:52.115791 1500 
reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-bpf-maps\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:52.116033 kubelet[1500]: I0212 19:35:52.115799 1500 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-xtables-lock\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:52.116823 kubelet[1500]: I0212 19:35:52.116797 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9deae05-96ca-473b-af3f-42b71c6e2319-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d9deae05-96ca-473b-af3f-42b71c6e2319" (UID: "d9deae05-96ca-473b-af3f-42b71c6e2319"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:35:52.117059 kubelet[1500]: I0212 19:35:52.117033 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9deae05-96ca-473b-af3f-42b71c6e2319-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d9deae05-96ca-473b-af3f-42b71c6e2319" (UID: "d9deae05-96ca-473b-af3f-42b71c6e2319"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:35:52.117835 systemd[1]: var-lib-kubelet-pods-d9deae05\x2d96ca\x2d473b\x2daf3f\x2d42b71c6e2319-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:35:52.117987 systemd[1]: var-lib-kubelet-pods-d9deae05\x2d96ca\x2d473b\x2daf3f\x2d42b71c6e2319-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Feb 12 19:35:52.119651 kubelet[1500]: I0212 19:35:52.119627 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9deae05-96ca-473b-af3f-42b71c6e2319-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d9deae05-96ca-473b-af3f-42b71c6e2319" (UID: "d9deae05-96ca-473b-af3f-42b71c6e2319"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:35:52.119873 kubelet[1500]: I0212 19:35:52.119795 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9deae05-96ca-473b-af3f-42b71c6e2319-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d9deae05-96ca-473b-af3f-42b71c6e2319" (UID: "d9deae05-96ca-473b-af3f-42b71c6e2319"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:35:52.119931 systemd[1]: var-lib-kubelet-pods-d9deae05\x2d96ca\x2d473b\x2daf3f\x2d42b71c6e2319-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:35:52.121098 kubelet[1500]: I0212 19:35:52.121060 1500 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9deae05-96ca-473b-af3f-42b71c6e2319-kube-api-access-8lx72" (OuterVolumeSpecName: "kube-api-access-8lx72") pod "d9deae05-96ca-473b-af3f-42b71c6e2319" (UID: "d9deae05-96ca-473b-af3f-42b71c6e2319"). InnerVolumeSpecName "kube-api-access-8lx72". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:35:52.122677 systemd[1]: var-lib-kubelet-pods-d9deae05\x2d96ca\x2d473b\x2daf3f\x2d42b71c6e2319-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8lx72.mount: Deactivated successfully. 
Feb 12 19:35:52.216438 kubelet[1500]: I0212 19:35:52.216350 1500 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9deae05-96ca-473b-af3f-42b71c6e2319-cilium-config-path\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:52.216438 kubelet[1500]: I0212 19:35:52.216382 1500 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-8lx72\" (UniqueName: \"kubernetes.io/projected/d9deae05-96ca-473b-af3f-42b71c6e2319-kube-api-access-8lx72\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:52.216438 kubelet[1500]: I0212 19:35:52.216393 1500 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9deae05-96ca-473b-af3f-42b71c6e2319-etc-cni-netd\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:52.216438 kubelet[1500]: I0212 19:35:52.216401 1500 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d9deae05-96ca-473b-af3f-42b71c6e2319-cilium-ipsec-secrets\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:52.216438 kubelet[1500]: I0212 19:35:52.216410 1500 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9deae05-96ca-473b-af3f-42b71c6e2319-hubble-tls\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:52.216438 kubelet[1500]: I0212 19:35:52.216418 1500 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9deae05-96ca-473b-af3f-42b71c6e2319-clustermesh-secrets\") on node \"10.0.0.52\" DevicePath \"\"" Feb 12 19:35:52.655103 kubelet[1500]: E0212 19:35:52.655064 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:35:52.930680 kubelet[1500]: I0212 19:35:52.930570 1500 scope.go:115] "RemoveContainer" 
containerID="cfedea781542ee00ffcf7739bdf4a9a8b0c31e024f67262e995aeed246f279ef" Feb 12 19:35:52.931781 env[1194]: time="2024-02-12T19:35:52.931744374Z" level=info msg="RemoveContainer for \"cfedea781542ee00ffcf7739bdf4a9a8b0c31e024f67262e995aeed246f279ef\"" Feb 12 19:35:52.937799 env[1194]: time="2024-02-12T19:35:52.935233332Z" level=info msg="RemoveContainer for \"cfedea781542ee00ffcf7739bdf4a9a8b0c31e024f67262e995aeed246f279ef\" returns successfully" Feb 12 19:35:52.953116 kubelet[1500]: I0212 19:35:52.953090 1500 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:35:52.953232 kubelet[1500]: E0212 19:35:52.953141 1500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9deae05-96ca-473b-af3f-42b71c6e2319" containerName="mount-cgroup" Feb 12 19:35:52.953232 kubelet[1500]: I0212 19:35:52.953170 1500 memory_manager.go:346] "RemoveStaleState removing state" podUID="d9deae05-96ca-473b-af3f-42b71c6e2319" containerName="mount-cgroup" Feb 12 19:35:53.098696 kubelet[1500]: I0212 19:35:53.098667 1500 setters.go:548] "Node became not ready" node="10.0.0.52" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 19:35:53.098628112 +0000 UTC m=+84.880814547 LastTransitionTime:2024-02-12 19:35:53.098628112 +0000 UTC m=+84.880814547 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 19:35:53.120977 kubelet[1500]: I0212 19:35:53.120945 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ec40d7db-c97b-41f3-8c8b-02466e6288e9-hostproc\") pod \"cilium-b755g\" (UID: \"ec40d7db-c97b-41f3-8c8b-02466e6288e9\") " pod="kube-system/cilium-b755g" Feb 12 19:35:53.121094 kubelet[1500]: I0212 19:35:53.120998 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/ec40d7db-c97b-41f3-8c8b-02466e6288e9-cilium-cgroup\") pod \"cilium-b755g\" (UID: \"ec40d7db-c97b-41f3-8c8b-02466e6288e9\") " pod="kube-system/cilium-b755g" Feb 12 19:35:53.121094 kubelet[1500]: I0212 19:35:53.121026 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ec40d7db-c97b-41f3-8c8b-02466e6288e9-cni-path\") pod \"cilium-b755g\" (UID: \"ec40d7db-c97b-41f3-8c8b-02466e6288e9\") " pod="kube-system/cilium-b755g" Feb 12 19:35:53.121094 kubelet[1500]: I0212 19:35:53.121075 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec40d7db-c97b-41f3-8c8b-02466e6288e9-lib-modules\") pod \"cilium-b755g\" (UID: \"ec40d7db-c97b-41f3-8c8b-02466e6288e9\") " pod="kube-system/cilium-b755g" Feb 12 19:35:53.121176 kubelet[1500]: I0212 19:35:53.121117 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec40d7db-c97b-41f3-8c8b-02466e6288e9-xtables-lock\") pod \"cilium-b755g\" (UID: \"ec40d7db-c97b-41f3-8c8b-02466e6288e9\") " pod="kube-system/cilium-b755g" Feb 12 19:35:53.121176 kubelet[1500]: I0212 19:35:53.121147 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec40d7db-c97b-41f3-8c8b-02466e6288e9-cilium-config-path\") pod \"cilium-b755g\" (UID: \"ec40d7db-c97b-41f3-8c8b-02466e6288e9\") " pod="kube-system/cilium-b755g" Feb 12 19:35:53.121225 kubelet[1500]: I0212 19:35:53.121185 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcm66\" (UniqueName: \"kubernetes.io/projected/ec40d7db-c97b-41f3-8c8b-02466e6288e9-kube-api-access-rcm66\") pod \"cilium-b755g\" (UID: 
\"ec40d7db-c97b-41f3-8c8b-02466e6288e9\") " pod="kube-system/cilium-b755g" Feb 12 19:35:53.121250 kubelet[1500]: I0212 19:35:53.121227 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ec40d7db-c97b-41f3-8c8b-02466e6288e9-cilium-run\") pod \"cilium-b755g\" (UID: \"ec40d7db-c97b-41f3-8c8b-02466e6288e9\") " pod="kube-system/cilium-b755g" Feb 12 19:35:53.121279 kubelet[1500]: I0212 19:35:53.121256 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ec40d7db-c97b-41f3-8c8b-02466e6288e9-bpf-maps\") pod \"cilium-b755g\" (UID: \"ec40d7db-c97b-41f3-8c8b-02466e6288e9\") " pod="kube-system/cilium-b755g" Feb 12 19:35:53.121304 kubelet[1500]: I0212 19:35:53.121279 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec40d7db-c97b-41f3-8c8b-02466e6288e9-etc-cni-netd\") pod \"cilium-b755g\" (UID: \"ec40d7db-c97b-41f3-8c8b-02466e6288e9\") " pod="kube-system/cilium-b755g" Feb 12 19:35:53.121328 kubelet[1500]: I0212 19:35:53.121316 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ec40d7db-c97b-41f3-8c8b-02466e6288e9-host-proc-sys-kernel\") pod \"cilium-b755g\" (UID: \"ec40d7db-c97b-41f3-8c8b-02466e6288e9\") " pod="kube-system/cilium-b755g" Feb 12 19:35:53.121384 kubelet[1500]: I0212 19:35:53.121357 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ec40d7db-c97b-41f3-8c8b-02466e6288e9-clustermesh-secrets\") pod \"cilium-b755g\" (UID: \"ec40d7db-c97b-41f3-8c8b-02466e6288e9\") " pod="kube-system/cilium-b755g" Feb 12 19:35:53.121453 kubelet[1500]: I0212 
19:35:53.121426 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ec40d7db-c97b-41f3-8c8b-02466e6288e9-host-proc-sys-net\") pod \"cilium-b755g\" (UID: \"ec40d7db-c97b-41f3-8c8b-02466e6288e9\") " pod="kube-system/cilium-b755g"
Feb 12 19:35:53.121503 kubelet[1500]: I0212 19:35:53.121462 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ec40d7db-c97b-41f3-8c8b-02466e6288e9-hubble-tls\") pod \"cilium-b755g\" (UID: \"ec40d7db-c97b-41f3-8c8b-02466e6288e9\") " pod="kube-system/cilium-b755g"
Feb 12 19:35:53.121547 kubelet[1500]: I0212 19:35:53.121506 1500 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ec40d7db-c97b-41f3-8c8b-02466e6288e9-cilium-ipsec-secrets\") pod \"cilium-b755g\" (UID: \"ec40d7db-c97b-41f3-8c8b-02466e6288e9\") " pod="kube-system/cilium-b755g"
Feb 12 19:35:53.128206 env[1194]: time="2024-02-12T19:35:53.128168626Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:35:53.129790 env[1194]: time="2024-02-12T19:35:53.129756790Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:35:53.131157 env[1194]: time="2024-02-12T19:35:53.131127400Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:35:53.131530 env[1194]: time="2024-02-12T19:35:53.131496397Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 12 19:35:53.132814 env[1194]: time="2024-02-12T19:35:53.132792729Z" level=info msg="CreateContainer within sandbox \"c7e19a98444b28f014feded0091e76979802f94c708382456aa825e48336a97e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 12 19:35:53.142338 env[1194]: time="2024-02-12T19:35:53.142304968Z" level=info msg="CreateContainer within sandbox \"c7e19a98444b28f014feded0091e76979802f94c708382456aa825e48336a97e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4b7861d6c5fd4c4d1018a1e1bf597c15ee22f46de345c74b20189ef3e03b1cdf\""
Feb 12 19:35:53.142634 env[1194]: time="2024-02-12T19:35:53.142608361Z" level=info msg="StartContainer for \"4b7861d6c5fd4c4d1018a1e1bf597c15ee22f46de345c74b20189ef3e03b1cdf\""
Feb 12 19:35:53.180337 env[1194]: time="2024-02-12T19:35:53.180285464Z" level=info msg="StartContainer for \"4b7861d6c5fd4c4d1018a1e1bf597c15ee22f46de345c74b20189ef3e03b1cdf\" returns successfully"
Feb 12 19:35:53.257590 kubelet[1500]: E0212 19:35:53.256638 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:35:53.257718 env[1194]: time="2024-02-12T19:35:53.257324002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b755g,Uid:ec40d7db-c97b-41f3-8c8b-02466e6288e9,Namespace:kube-system,Attempt:0,}"
Feb 12 19:35:53.271398 env[1194]: time="2024-02-12T19:35:53.271327668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:35:53.271606 env[1194]: time="2024-02-12T19:35:53.271575369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:35:53.271737 env[1194]: time="2024-02-12T19:35:53.271707635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:35:53.272183 env[1194]: time="2024-02-12T19:35:53.272123679Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e10e7a1347ad7c02dc93469b4e6c1e5b2ceb0ed1efbbce017a673cb4b20bdce pid=3368 runtime=io.containerd.runc.v2
Feb 12 19:35:53.321522 env[1194]: time="2024-02-12T19:35:53.321221701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b755g,Uid:ec40d7db-c97b-41f3-8c8b-02466e6288e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e10e7a1347ad7c02dc93469b4e6c1e5b2ceb0ed1efbbce017a673cb4b20bdce\""
Feb 12 19:35:53.323084 kubelet[1500]: E0212 19:35:53.323054 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:35:53.325070 env[1194]: time="2024-02-12T19:35:53.325018946Z" level=info msg="CreateContainer within sandbox \"5e10e7a1347ad7c02dc93469b4e6c1e5b2ceb0ed1efbbce017a673cb4b20bdce\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 19:35:53.526027 env[1194]: time="2024-02-12T19:35:53.525876674Z" level=info msg="CreateContainer within sandbox \"5e10e7a1347ad7c02dc93469b4e6c1e5b2ceb0ed1efbbce017a673cb4b20bdce\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a476d28b3a12aa40cebdc67a1640b1f9e2074544d66f26fe636e71590878ef2f\""
Feb 12 19:35:53.526955 env[1194]: time="2024-02-12T19:35:53.526890000Z" level=info msg="StartContainer for \"a476d28b3a12aa40cebdc67a1640b1f9e2074544d66f26fe636e71590878ef2f\""
Feb 12 19:35:53.568515 env[1194]: time="2024-02-12T19:35:53.568463712Z" level=info msg="StartContainer for \"a476d28b3a12aa40cebdc67a1640b1f9e2074544d66f26fe636e71590878ef2f\" returns successfully"
Feb 12 19:35:53.592963 env[1194]: time="2024-02-12T19:35:53.592891983Z" level=info msg="shim disconnected" id=a476d28b3a12aa40cebdc67a1640b1f9e2074544d66f26fe636e71590878ef2f
Feb 12 19:35:53.592963 env[1194]: time="2024-02-12T19:35:53.592957766Z" level=warning msg="cleaning up after shim disconnected" id=a476d28b3a12aa40cebdc67a1640b1f9e2074544d66f26fe636e71590878ef2f namespace=k8s.io
Feb 12 19:35:53.592963 env[1194]: time="2024-02-12T19:35:53.592967334Z" level=info msg="cleaning up dead shim"
Feb 12 19:35:53.599017 env[1194]: time="2024-02-12T19:35:53.598959141Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:35:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3453 runtime=io.containerd.runc.v2\n"
Feb 12 19:35:53.655339 kubelet[1500]: E0212 19:35:53.655298 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:53.669018 kubelet[1500]: E0212 19:35:53.668982 1500 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 19:35:53.933948 kubelet[1500]: E0212 19:35:53.933906 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:35:53.935306 kubelet[1500]: E0212 19:35:53.935277 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:35:53.937128 env[1194]: time="2024-02-12T19:35:53.937090245Z" level=info msg="CreateContainer within sandbox \"5e10e7a1347ad7c02dc93469b4e6c1e5b2ceb0ed1efbbce017a673cb4b20bdce\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 19:35:53.940597 kubelet[1500]: I0212 19:35:53.940575 1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-5xnzl" podStartSLOduration=-9.223372032914232e+09 pod.CreationTimestamp="2024-02-12 19:35:50 +0000 UTC" firstStartedPulling="2024-02-12 19:35:51.229310251 +0000 UTC m=+83.011496706" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:35:53.94031724 +0000 UTC m=+85.722503675" watchObservedRunningTime="2024-02-12 19:35:53.940544492 +0000 UTC m=+85.722730927"
Feb 12 19:35:53.949023 env[1194]: time="2024-02-12T19:35:53.948976201Z" level=info msg="CreateContainer within sandbox \"5e10e7a1347ad7c02dc93469b4e6c1e5b2ceb0ed1efbbce017a673cb4b20bdce\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"05d3aa5975b6c569aee5fce7d2a69195f1e4351f0e8960e2d1218e43596737be\""
Feb 12 19:35:53.949395 env[1194]: time="2024-02-12T19:35:53.949370875Z" level=info msg="StartContainer for \"05d3aa5975b6c569aee5fce7d2a69195f1e4351f0e8960e2d1218e43596737be\""
Feb 12 19:35:53.990085 env[1194]: time="2024-02-12T19:35:53.990017723Z" level=info msg="StartContainer for \"05d3aa5975b6c569aee5fce7d2a69195f1e4351f0e8960e2d1218e43596737be\" returns successfully"
Feb 12 19:35:54.020943 env[1194]: time="2024-02-12T19:35:54.020865605Z" level=info msg="shim disconnected" id=05d3aa5975b6c569aee5fce7d2a69195f1e4351f0e8960e2d1218e43596737be
Feb 12 19:35:54.020943 env[1194]: time="2024-02-12T19:35:54.020939112Z" level=warning msg="cleaning up after shim disconnected" id=05d3aa5975b6c569aee5fce7d2a69195f1e4351f0e8960e2d1218e43596737be namespace=k8s.io
Feb 12 19:35:54.021150 env[1194]: time="2024-02-12T19:35:54.020954841Z" level=info msg="cleaning up dead shim"
Feb 12 19:35:54.027355 env[1194]: time="2024-02-12T19:35:54.027291556Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:35:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3515 runtime=io.containerd.runc.v2\n"
Feb 12 19:35:54.656169 kubelet[1500]: E0212 19:35:54.656102 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:54.745723 kubelet[1500]: I0212 19:35:54.745688 1500 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d9deae05-96ca-473b-af3f-42b71c6e2319 path="/var/lib/kubelet/pods/d9deae05-96ca-473b-af3f-42b71c6e2319/volumes"
Feb 12 19:35:54.940457 kubelet[1500]: E0212 19:35:54.940346 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:35:54.940457 kubelet[1500]: E0212 19:35:54.940402 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:35:54.942140 env[1194]: time="2024-02-12T19:35:54.942093599Z" level=info msg="CreateContainer within sandbox \"5e10e7a1347ad7c02dc93469b4e6c1e5b2ceb0ed1efbbce017a673cb4b20bdce\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 19:35:54.958255 env[1194]: time="2024-02-12T19:35:54.958199841Z" level=info msg="CreateContainer within sandbox \"5e10e7a1347ad7c02dc93469b4e6c1e5b2ceb0ed1efbbce017a673cb4b20bdce\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a1603543237557e2881a77b12800e2243be97f847b0772eaa9a23c02689e8a6b\""
Feb 12 19:35:54.958695 env[1194]: time="2024-02-12T19:35:54.958668174Z" level=info msg="StartContainer for \"a1603543237557e2881a77b12800e2243be97f847b0772eaa9a23c02689e8a6b\""
Feb 12 19:35:55.002508 env[1194]: time="2024-02-12T19:35:55.002457891Z" level=info msg="StartContainer for \"a1603543237557e2881a77b12800e2243be97f847b0772eaa9a23c02689e8a6b\" returns successfully"
Feb 12 19:35:55.024653 env[1194]: time="2024-02-12T19:35:55.024596802Z" level=info msg="shim disconnected" id=a1603543237557e2881a77b12800e2243be97f847b0772eaa9a23c02689e8a6b
Feb 12 19:35:55.024653 env[1194]: time="2024-02-12T19:35:55.024646394Z" level=warning msg="cleaning up after shim disconnected" id=a1603543237557e2881a77b12800e2243be97f847b0772eaa9a23c02689e8a6b namespace=k8s.io
Feb 12 19:35:55.024653 env[1194]: time="2024-02-12T19:35:55.024654810Z" level=info msg="cleaning up dead shim"
Feb 12 19:35:55.032392 env[1194]: time="2024-02-12T19:35:55.032328331Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:35:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3570 runtime=io.containerd.runc.v2\n"
Feb 12 19:35:55.113530 systemd[1]: run-containerd-runc-k8s.io-a1603543237557e2881a77b12800e2243be97f847b0772eaa9a23c02689e8a6b-runc.gt9IgG.mount: Deactivated successfully.
Feb 12 19:35:55.113690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1603543237557e2881a77b12800e2243be97f847b0772eaa9a23c02689e8a6b-rootfs.mount: Deactivated successfully.
Feb 12 19:35:55.656860 kubelet[1500]: E0212 19:35:55.656802 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:55.946833 kubelet[1500]: E0212 19:35:55.946719 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:35:55.948369 env[1194]: time="2024-02-12T19:35:55.948328934Z" level=info msg="CreateContainer within sandbox \"5e10e7a1347ad7c02dc93469b4e6c1e5b2ceb0ed1efbbce017a673cb4b20bdce\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 19:35:55.962015 env[1194]: time="2024-02-12T19:35:55.961952856Z" level=info msg="CreateContainer within sandbox \"5e10e7a1347ad7c02dc93469b4e6c1e5b2ceb0ed1efbbce017a673cb4b20bdce\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aba131a6ed346e2dfb9b9fe77182ad2ea9ca745521974d8bef0c969bec66c89c\""
Feb 12 19:35:55.962486 env[1194]: time="2024-02-12T19:35:55.962449461Z" level=info msg="StartContainer for \"aba131a6ed346e2dfb9b9fe77182ad2ea9ca745521974d8bef0c969bec66c89c\""
Feb 12 19:35:56.000568 env[1194]: time="2024-02-12T19:35:56.000520153Z" level=info msg="StartContainer for \"aba131a6ed346e2dfb9b9fe77182ad2ea9ca745521974d8bef0c969bec66c89c\" returns successfully"
Feb 12 19:35:56.017203 env[1194]: time="2024-02-12T19:35:56.017150837Z" level=info msg="shim disconnected" id=aba131a6ed346e2dfb9b9fe77182ad2ea9ca745521974d8bef0c969bec66c89c
Feb 12 19:35:56.017339 env[1194]: time="2024-02-12T19:35:56.017208204Z" level=warning msg="cleaning up after shim disconnected" id=aba131a6ed346e2dfb9b9fe77182ad2ea9ca745521974d8bef0c969bec66c89c namespace=k8s.io
Feb 12 19:35:56.017339 env[1194]: time="2024-02-12T19:35:56.017218473Z" level=info msg="cleaning up dead shim"
Feb 12 19:35:56.023972 env[1194]: time="2024-02-12T19:35:56.023915440Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:35:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3624 runtime=io.containerd.runc.v2\n"
Feb 12 19:35:56.113906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aba131a6ed346e2dfb9b9fe77182ad2ea9ca745521974d8bef0c969bec66c89c-rootfs.mount: Deactivated successfully.
Feb 12 19:35:56.657183 kubelet[1500]: E0212 19:35:56.657128 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:56.950189 kubelet[1500]: E0212 19:35:56.949955 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:35:56.952048 env[1194]: time="2024-02-12T19:35:56.952004060Z" level=info msg="CreateContainer within sandbox \"5e10e7a1347ad7c02dc93469b4e6c1e5b2ceb0ed1efbbce017a673cb4b20bdce\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 19:35:57.194548 env[1194]: time="2024-02-12T19:35:57.194465882Z" level=info msg="CreateContainer within sandbox \"5e10e7a1347ad7c02dc93469b4e6c1e5b2ceb0ed1efbbce017a673cb4b20bdce\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"152c71456cc22d1af32bd1cd150f255df2560d6126ce7bc45f3639dbcf67f981\""
Feb 12 19:35:57.195069 env[1194]: time="2024-02-12T19:35:57.195013001Z" level=info msg="StartContainer for \"152c71456cc22d1af32bd1cd150f255df2560d6126ce7bc45f3639dbcf67f981\""
Feb 12 19:35:57.239006 env[1194]: time="2024-02-12T19:35:57.238691182Z" level=info msg="StartContainer for \"152c71456cc22d1af32bd1cd150f255df2560d6126ce7bc45f3639dbcf67f981\" returns successfully"
Feb 12 19:35:57.499873 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 12 19:35:57.657766 kubelet[1500]: E0212 19:35:57.657719 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:57.954716 kubelet[1500]: E0212 19:35:57.954591 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:35:57.965416 kubelet[1500]: I0212 19:35:57.965359 1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-b755g" podStartSLOduration=5.965313687 pod.CreationTimestamp="2024-02-12 19:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:35:57.964867816 +0000 UTC m=+89.747054261" watchObservedRunningTime="2024-02-12 19:35:57.965313687 +0000 UTC m=+89.747500132"
Feb 12 19:35:58.658877 kubelet[1500]: E0212 19:35:58.658810 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:58.956318 kubelet[1500]: E0212 19:35:58.956187 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:35:59.322985 systemd[1]: run-containerd-runc-k8s.io-152c71456cc22d1af32bd1cd150f255df2560d6126ce7bc45f3639dbcf67f981-runc.YziPpz.mount: Deactivated successfully.
Feb 12 19:35:59.659612 kubelet[1500]: E0212 19:35:59.659503 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:35:59.957921 kubelet[1500]: E0212 19:35:59.957675 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:35:59.993740 systemd-networkd[1074]: lxc_health: Link UP
Feb 12 19:36:00.001655 systemd-networkd[1074]: lxc_health: Gained carrier
Feb 12 19:36:00.001879 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 19:36:00.660626 kubelet[1500]: E0212 19:36:00.660564 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:36:01.259016 kubelet[1500]: E0212 19:36:01.258972 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:36:01.661247 kubelet[1500]: E0212 19:36:01.661190 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:36:01.814415 systemd-networkd[1074]: lxc_health: Gained IPv6LL
Feb 12 19:36:01.961879 kubelet[1500]: E0212 19:36:01.961745 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:36:02.661689 kubelet[1500]: E0212 19:36:02.661635 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:36:02.963236 kubelet[1500]: E0212 19:36:02.963106 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:36:03.662515 kubelet[1500]: E0212 19:36:03.662467 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:36:04.663465 kubelet[1500]: E0212 19:36:04.663366 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:36:05.664467 kubelet[1500]: E0212 19:36:05.664432 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:36:06.664890 kubelet[1500]: E0212 19:36:06.664762 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:36:07.666086 kubelet[1500]: E0212 19:36:07.665989 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:36:07.743863 kubelet[1500]: E0212 19:36:07.743811 1500 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 19:36:08.605265 kubelet[1500]: E0212 19:36:08.605204 1500 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:36:08.666462 kubelet[1500]: E0212 19:36:08.666390 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"