Feb 9 19:43:50.247471 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:43:50.247495 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:43:50.247508 kernel: BIOS-provided physical RAM map:
Feb 9 19:43:50.247516 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 19:43:50.247523 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 9 19:43:50.247531 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 9 19:43:50.247554 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 9 19:43:50.247562 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 9 19:43:50.247570 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 9 19:43:50.247579 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 9 19:43:50.247587 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Feb 9 19:43:50.247594 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb 9 19:43:50.247602 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 9 19:43:50.247610 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 9 19:43:50.247620 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 9 19:43:50.247630 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 9 19:43:50.247638 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 9 19:43:50.247646 kernel: NX (Execute Disable) protection: active
Feb 9 19:43:50.247654 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable
Feb 9 19:43:50.247663 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable
Feb 9 19:43:50.247671 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable
Feb 9 19:43:50.247679 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable
Feb 9 19:43:50.247687 kernel: extended physical RAM map:
Feb 9 19:43:50.247695 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 19:43:50.247703 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 9 19:43:50.247713 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 9 19:43:50.247721 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 9 19:43:50.247729 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 9 19:43:50.247737 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 9 19:43:50.247745 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 9 19:43:50.247753 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b1aa017] usable
Feb 9 19:43:50.247761 kernel: reserve setup_data: [mem 0x000000009b1aa018-0x000000009b1e6e57] usable
Feb 9 19:43:50.247769 kernel: reserve setup_data: [mem 0x000000009b1e6e58-0x000000009b3f7017] usable
Feb 9 19:43:50.247777 kernel: reserve setup_data: [mem 0x000000009b3f7018-0x000000009b400c57] usable
Feb 9 19:43:50.247794 kernel: reserve setup_data: [mem 0x000000009b400c58-0x000000009c8eefff] usable
Feb 9 19:43:50.247803 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb 9 19:43:50.247814 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 9 19:43:50.247822 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 9 19:43:50.247830 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 9 19:43:50.247838 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 9 19:43:50.247850 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 9 19:43:50.247859 kernel: efi: EFI v2.70 by EDK II
Feb 9 19:43:50.247868 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018
Feb 9 19:43:50.247878 kernel: random: crng init done
Feb 9 19:43:50.247887 kernel: SMBIOS 2.8 present.
Feb 9 19:43:50.247895 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Feb 9 19:43:50.247904 kernel: Hypervisor detected: KVM
Feb 9 19:43:50.247913 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 9 19:43:50.247921 kernel: kvm-clock: cpu 0, msr 36faa001, primary cpu clock
Feb 9 19:43:50.247930 kernel: kvm-clock: using sched offset of 4384835382 cycles
Feb 9 19:43:50.247940 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 9 19:43:50.247949 kernel: tsc: Detected 2794.750 MHz processor
Feb 9 19:43:50.247960 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:43:50.247969 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:43:50.247978 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Feb 9 19:43:50.247987 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:43:50.247996 kernel: Using GB pages for direct mapping
Feb 9 19:43:50.248016 kernel: Secure boot disabled
Feb 9 19:43:50.248025 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:43:50.248034 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 9 19:43:50.248711 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Feb 9 19:43:50.248724 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:43:50.248734 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:43:50.248743 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 9 19:43:50.248751 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:43:50.248760 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:43:50.248769 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:43:50.248778 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 9 19:43:50.248786 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Feb 9 19:43:50.248795 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Feb 9 19:43:50.248807 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 9 19:43:50.248815 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Feb 9 19:43:50.248824 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Feb 9 19:43:50.248833 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Feb 9 19:43:50.248842 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Feb 9 19:43:50.248851 kernel: No NUMA configuration found
Feb 9 19:43:50.248860 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Feb 9 19:43:50.248869 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Feb 9 19:43:50.248878 kernel: Zone ranges:
Feb 9 19:43:50.248888 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:43:50.248897 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Feb 9 19:43:50.248906 kernel: Normal empty
Feb 9 19:43:50.248914 kernel: Movable zone start for each node
Feb 9 19:43:50.248923 kernel: Early memory node ranges
Feb 9 19:43:50.248932 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 9 19:43:50.248941 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 9 19:43:50.248949 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 9 19:43:50.248958 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Feb 9 19:43:50.248968 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Feb 9 19:43:50.248977 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Feb 9 19:43:50.248986 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Feb 9 19:43:50.248995 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:43:50.249004 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 9 19:43:50.249010 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 9 19:43:50.249018 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:43:50.249026 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Feb 9 19:43:50.249033 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Feb 9 19:43:50.249044 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Feb 9 19:43:50.249052 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 9 19:43:50.249061 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 9 19:43:50.249070 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:43:50.249080 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 19:43:50.249088 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 9 19:43:50.249107 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 19:43:50.249117 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 9 19:43:50.249126 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 9 19:43:50.249137 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:43:50.249147 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 19:43:50.249155 kernel: TSC deadline timer available
Feb 9 19:43:50.249189 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 9 19:43:50.249198 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 9 19:43:50.249207 kernel: kvm-guest: setup PV sched yield
Feb 9 19:43:50.249216 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Feb 9 19:43:50.249225 kernel: Booting paravirtualized kernel on KVM
Feb 9 19:43:50.249234 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:43:50.249243 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb 9 19:43:50.249254 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb 9 19:43:50.249263 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb 9 19:43:50.249278 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 9 19:43:50.249288 kernel: kvm-guest: setup async PF for cpu 0
Feb 9 19:43:50.249297 kernel: kvm-guest: stealtime: cpu 0, msr 9ae1c0c0
Feb 9 19:43:50.249306 kernel: kvm-guest: PV spinlocks enabled
Feb 9 19:43:50.249315 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 19:43:50.249324 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Feb 9 19:43:50.249334 kernel: Policy zone: DMA32
Feb 9 19:43:50.249345 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:43:50.249356 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:43:50.249368 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 19:43:50.249378 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 19:43:50.249388 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:43:50.249399 kernel: Memory: 2400436K/2567000K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 166304K reserved, 0K cma-reserved)
Feb 9 19:43:50.249410 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 19:43:50.249420 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:43:50.249430 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:43:50.249439 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:43:50.249450 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:43:50.249460 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 19:43:50.249469 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:43:50.249478 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:43:50.249488 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:43:50.249497 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 19:43:50.249508 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 9 19:43:50.249517 kernel: Console: colour dummy device 80x25
Feb 9 19:43:50.249526 kernel: printk: console [ttyS0] enabled
Feb 9 19:43:50.249547 kernel: ACPI: Core revision 20210730
Feb 9 19:43:50.249557 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 9 19:43:50.249566 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:43:50.249575 kernel: x2apic enabled
Feb 9 19:43:50.249584 kernel: Switched APIC routing to physical x2apic.
Feb 9 19:43:50.249592 kernel: kvm-guest: setup PV IPIs
Feb 9 19:43:50.249604 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 19:43:50.249614 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 9 19:43:50.249624 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 9 19:43:50.249634 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 9 19:43:50.249643 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 9 19:43:50.249652 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 9 19:43:50.249662 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:43:50.249671 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 19:43:50.249681 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:43:50.249692 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 19:43:50.249702 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 9 19:43:50.249711 kernel: RETBleed: Mitigation: untrained return thunk
Feb 9 19:43:50.249721 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 19:43:50.249731 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 19:43:50.249740 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 19:43:50.249750 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 19:43:50.249760 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 19:43:50.249772 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 19:43:50.249782 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 9 19:43:50.249791 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:43:50.249800 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:43:50.249810 kernel: LSM: Security Framework initializing
Feb 9 19:43:50.249819 kernel: SELinux: Initializing.
Feb 9 19:43:50.249829 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 19:43:50.249838 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 19:43:50.249848 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 9 19:43:50.249860 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 9 19:43:50.249870 kernel: ... version: 0
Feb 9 19:43:50.249880 kernel: ... bit width: 48
Feb 9 19:43:50.249889 kernel: ... generic registers: 6
Feb 9 19:43:50.249899 kernel: ... value mask: 0000ffffffffffff
Feb 9 19:43:50.249909 kernel: ... max period: 00007fffffffffff
Feb 9 19:43:50.249918 kernel: ... fixed-purpose events: 0
Feb 9 19:43:50.249928 kernel: ... event mask: 000000000000003f
Feb 9 19:43:50.249937 kernel: signal: max sigframe size: 1776
Feb 9 19:43:50.249947 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:43:50.249958 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:43:50.249968 kernel: x86: Booting SMP configuration:
Feb 9 19:43:50.249977 kernel: .... node #0, CPUs: #1
Feb 9 19:43:50.249987 kernel: kvm-clock: cpu 1, msr 36faa041, secondary cpu clock
Feb 9 19:43:50.249997 kernel: kvm-guest: setup async PF for cpu 1
Feb 9 19:43:50.250006 kernel: kvm-guest: stealtime: cpu 1, msr 9ae9c0c0
Feb 9 19:43:50.250015 kernel: #2
Feb 9 19:43:50.250025 kernel: kvm-clock: cpu 2, msr 36faa081, secondary cpu clock
Feb 9 19:43:50.250035 kernel: kvm-guest: setup async PF for cpu 2
Feb 9 19:43:50.250062 kernel: kvm-guest: stealtime: cpu 2, msr 9af1c0c0
Feb 9 19:43:50.250071 kernel: #3
Feb 9 19:43:50.250081 kernel: kvm-clock: cpu 3, msr 36faa0c1, secondary cpu clock
Feb 9 19:43:50.250090 kernel: kvm-guest: setup async PF for cpu 3
Feb 9 19:43:50.250109 kernel: kvm-guest: stealtime: cpu 3, msr 9af9c0c0
Feb 9 19:43:50.250119 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 19:43:50.250128 kernel: smpboot: Max logical packages: 1
Feb 9 19:43:50.250138 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 9 19:43:50.250147 kernel: devtmpfs: initialized
Feb 9 19:43:50.250159 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:43:50.250169 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 9 19:43:50.250178 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 9 19:43:50.250188 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Feb 9 19:43:50.250209 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 9 19:43:50.250219 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 9 19:43:50.250229 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:43:50.250238 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 19:43:50.250248 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:43:50.250270 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:43:50.250281 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:43:50.250290 kernel: audit: type=2000 audit(1707507828.822:1): state=initialized audit_enabled=0 res=1
Feb 9 19:43:50.250299 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:43:50.250309 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:43:50.250318 kernel: cpuidle: using governor menu
Feb 9 19:43:50.250328 kernel: ACPI: bus type PCI registered
Feb 9 19:43:50.250338 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:43:50.250347 kernel: dca service started, version 1.12.1
Feb 9 19:43:50.250370 kernel: PCI: Using configuration type 1 for base access
Feb 9 19:43:50.250380 kernel: PCI: Using configuration type 1 for extended access
Feb 9 19:43:50.250389 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:43:50.250399 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:43:50.250417 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:43:50.250427 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:43:50.250439 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:43:50.250448 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:43:50.250458 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:43:50.250470 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:43:50.250480 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:43:50.250489 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:43:50.250498 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:43:50.250518 kernel: ACPI: Interpreter enabled
Feb 9 19:43:50.250527 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 9 19:43:50.250549 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:43:50.250559 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:43:50.250568 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 9 19:43:50.250579 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 19:43:50.250742 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 19:43:50.250761 kernel: acpiphp: Slot [3] registered
Feb 9 19:43:50.250771 kernel: acpiphp: Slot [4] registered
Feb 9 19:43:50.250780 kernel: acpiphp: Slot [5] registered
Feb 9 19:43:50.250790 kernel: acpiphp: Slot [6] registered
Feb 9 19:43:50.250799 kernel: acpiphp: Slot [7] registered
Feb 9 19:43:50.250809 kernel: acpiphp: Slot [8] registered
Feb 9 19:43:50.250833 kernel: acpiphp: Slot [9] registered
Feb 9 19:43:50.250843 kernel: acpiphp: Slot [10] registered
Feb 9 19:43:50.250853 kernel: acpiphp: Slot [11] registered
Feb 9 19:43:50.250863 kernel: acpiphp: Slot [12] registered
Feb 9 19:43:50.250872 kernel: acpiphp: Slot [13] registered
Feb 9 19:43:50.250882 kernel: acpiphp: Slot [14] registered
Feb 9 19:43:50.250892 kernel: acpiphp: Slot [15] registered
Feb 9 19:43:50.250901 kernel: acpiphp: Slot [16] registered
Feb 9 19:43:50.250911 kernel: acpiphp: Slot [17] registered
Feb 9 19:43:50.250920 kernel: acpiphp: Slot [18] registered
Feb 9 19:43:50.250933 kernel: acpiphp: Slot [19] registered
Feb 9 19:43:50.250942 kernel: acpiphp: Slot [20] registered
Feb 9 19:43:50.250967 kernel: acpiphp: Slot [21] registered
Feb 9 19:43:50.250978 kernel: acpiphp: Slot [22] registered
Feb 9 19:43:50.250987 kernel: acpiphp: Slot [23] registered
Feb 9 19:43:50.250997 kernel: acpiphp: Slot [24] registered
Feb 9 19:43:50.251006 kernel: acpiphp: Slot [25] registered
Feb 9 19:43:50.251016 kernel: acpiphp: Slot [26] registered
Feb 9 19:43:50.251025 kernel: acpiphp: Slot [27] registered
Feb 9 19:43:50.251036 kernel: acpiphp: Slot [28] registered
Feb 9 19:43:50.251045 kernel: acpiphp: Slot [29] registered
Feb 9 19:43:50.251053 kernel: acpiphp: Slot [30] registered
Feb 9 19:43:50.251063 kernel: acpiphp: Slot [31] registered
Feb 9 19:43:50.251080 kernel: PCI host bridge to bus 0000:00
Feb 9 19:43:50.251194 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 19:43:50.251293 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 19:43:50.251393 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 19:43:50.251495 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb 9 19:43:50.251618 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Feb 9 19:43:50.251727 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 19:43:50.251871 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 9 19:43:50.252021 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 9 19:43:50.252170 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 9 19:43:50.252299 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Feb 9 19:43:50.252398 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 9 19:43:50.252516 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 9 19:43:50.252627 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 9 19:43:50.252720 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 9 19:43:50.252821 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 9 19:43:50.252916 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 9 19:43:50.253017 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 9 19:43:50.253133 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb 9 19:43:50.253233 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 9 19:43:50.253331 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Feb 9 19:43:50.253433 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 9 19:43:50.253544 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Feb 9 19:43:50.253644 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 19:43:50.253760 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 19:43:50.253865 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf]
Feb 9 19:43:50.253970 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 9 19:43:50.254064 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Feb 9 19:43:50.254175 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 9 19:43:50.254272 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 9 19:43:50.254367 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 9 19:43:50.254463 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Feb 9 19:43:50.254577 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb 9 19:43:50.254676 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 9 19:43:50.254777 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Feb 9 19:43:50.254878 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Feb 9 19:43:50.255020 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 9 19:43:50.255036 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 9 19:43:50.255050 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 9 19:43:50.255060 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 9 19:43:50.255070 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 9 19:43:50.255079 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 9 19:43:50.255089 kernel: iommu: Default domain type: Translated
Feb 9 19:43:50.255108 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:43:50.255209 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 9 19:43:50.255309 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 19:43:50.255407 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 9 19:43:50.255424 kernel: vgaarb: loaded
Feb 9 19:43:50.255434 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:43:50.255444 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 19:43:50.255453 kernel: PTP clock support registered
Feb 9 19:43:50.255463 kernel: Registered efivars operations
Feb 9 19:43:50.255473 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:43:50.255482 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 19:43:50.255492 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 9 19:43:50.255501 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Feb 9 19:43:50.255513 kernel: e820: reserve RAM buffer [mem 0x9b1aa018-0x9bffffff]
Feb 9 19:43:50.255522 kernel: e820: reserve RAM buffer [mem 0x9b3f7018-0x9bffffff]
Feb 9 19:43:50.255531 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Feb 9 19:43:50.255603 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Feb 9 19:43:50.255612 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 9 19:43:50.255622 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 9 19:43:50.255631 kernel: clocksource: Switched to clocksource kvm-clock
Feb 9 19:43:50.255641 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:43:50.255653 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:43:50.255662 kernel: pnp: PnP ACPI init
Feb 9 19:43:50.255769 kernel: pnp 00:02: [dma 2]
Feb 9 19:43:50.255785 kernel: pnp: PnP ACPI: found 6 devices
Feb 9 19:43:50.255794 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:43:50.255803 kernel: NET: Registered PF_INET protocol family
Feb 9 19:43:50.255812 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 19:43:50.258826 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 19:43:50.258842 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:43:50.258855 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 19:43:50.258865 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 19:43:50.258874 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 19:43:50.258883 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 19:43:50.258893 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 19:43:50.258902 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:43:50.258912 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:43:50.259019 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 9 19:43:50.259145 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 9 19:43:50.259236 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 19:43:50.259326 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 19:43:50.259412 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 19:43:50.259495 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb 9 19:43:50.259593 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Feb 9 19:43:50.259691 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 9 19:43:50.259790 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 9 19:43:50.259890 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 9 19:43:50.259905 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:43:50.259915 kernel: Initialise system trusted keyrings
Feb 9 19:43:50.259925 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 19:43:50.259935 kernel: Key type asymmetric registered
Feb 9 19:43:50.259945 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:43:50.259955 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:43:50.259965 kernel: io scheduler mq-deadline registered
Feb 9 19:43:50.259978 kernel: io scheduler kyber registered
Feb 9 19:43:50.259988 kernel: io scheduler bfq registered
Feb 9 19:43:50.259997 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 19:43:50.260008 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 9 19:43:50.260018 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 9 19:43:50.260027 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 9 19:43:50.260037 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:43:50.260047 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 19:43:50.260057 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 9 19:43:50.260069 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 9 19:43:50.260079 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 9 19:43:50.260199 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb 9 19:43:50.260219 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 9 19:43:50.260306 kernel: rtc_cmos 00:05: registered as rtc0
Feb 9 19:43:50.260398 kernel: rtc_cmos 00:05: setting system clock to 2024-02-09T19:43:49 UTC (1707507829)
Feb 9 19:43:50.260485 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 9 19:43:50.260499 kernel: efifb: probing for efifb
Feb 9 19:43:50.260510 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 9 19:43:50.260520 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 9 19:43:50.260531 kernel: efifb: scrolling: redraw
Feb 9 19:43:50.260567 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 19:43:50.260578 kernel: Console: switching to colour frame buffer device 160x50
Feb 9 19:43:50.260588 kernel: hpet: Lost 2 RTC interrupts
Feb 9 19:43:50.260601 kernel: fb0: EFI VGA frame buffer device
Feb 9 19:43:50.260612 kernel: pstore: Registered efi as persistent store backend
Feb 9 19:43:50.260623 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:43:50.260633 kernel: Segment Routing with IPv6
Feb 9 19:43:50.260645 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:43:50.260655 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:43:50.260665 kernel: Key type dns_resolver registered
Feb 9 19:43:50.260676 kernel: IPI shorthand broadcast: enabled
Feb 9 19:43:50.260686 kernel: sched_clock: Marking stable (450160757, 94950933)->(568098615, -22986925)
Feb 9 19:43:50.260698 kernel: registered taskstats version 1
Feb 9 19:43:50.260708 kernel: Loading compiled-in X.509 certificates
Feb 9 19:43:50.260719 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 19:43:50.260729 kernel: Key type .fscrypt registered
Feb 9 19:43:50.260740 kernel: Key type fscrypt-provisioning registered
Feb 9 19:43:50.260751 kernel: pstore: Using crash dump compression: deflate
Feb 9 19:43:50.260761 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:43:50.260771 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:43:50.260781 kernel: ima: No architecture policies found
Feb 9 19:43:50.260793 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 19:43:50.260803 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 19:43:50.260814 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 19:43:50.260824 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 19:43:50.260835 kernel: Run /init as init process
Feb 9 19:43:50.260845 kernel: with arguments:
Feb 9 19:43:50.260855 kernel: /init
Feb 9 19:43:50.260865 kernel: with environment:
Feb 9 19:43:50.260875 kernel: HOME=/
Feb 9 19:43:50.260887 kernel: TERM=linux
Feb 9 19:43:50.260897 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:43:50.260909 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:43:50.260923 systemd[1]: Detected virtualization kvm.
Feb 9 19:43:50.260934 systemd[1]: Detected architecture x86-64.
Feb 9 19:43:50.260945 systemd[1]: Running in initrd.
Feb 9 19:43:50.260956 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:43:50.260967 systemd[1]: Hostname set to .
Feb 9 19:43:50.260980 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 19:43:50.260991 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:43:50.261001 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:43:50.261012 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:43:50.261023 systemd[1]: Reached target paths.target.
Feb 9 19:43:50.261034 systemd[1]: Reached target slices.target.
Feb 9 19:43:50.261044 systemd[1]: Reached target swap.target.
Feb 9 19:43:50.261055 systemd[1]: Reached target timers.target.
Feb 9 19:43:50.261068 systemd[1]: Listening on iscsid.socket.
Feb 9 19:43:50.261079 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:43:50.261098 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:43:50.261109 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:43:50.261120 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:43:50.261131 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:43:50.261142 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:43:50.261153 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:43:50.261166 systemd[1]: Reached target sockets.target.
Feb 9 19:43:50.261177 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:43:50.261187 systemd[1]: Finished network-cleanup.service.
Feb 9 19:43:50.261198 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:43:50.261209 systemd[1]: Starting systemd-journald.service...
Feb 9 19:43:50.261220 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:43:50.261230 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:43:50.261241 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:43:50.261252 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:43:50.261265 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:43:50.261276 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:43:50.261287 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:43:50.261297 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:43:50.261308 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:43:50.261319 kernel: audit: type=1130 audit(1707507830.255:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.261333 systemd-journald[196]: Journal started
Feb 9 19:43:50.261390 systemd-journald[196]: Runtime Journal (/run/log/journal/16474b4e7ccf4b3caaeeae1bbe28c2a7) is 6.0M, max 48.4M, 42.4M free.
Feb 9 19:43:50.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.244801 systemd-modules-load[197]: Inserted module 'overlay'
Feb 9 19:43:50.265580 systemd[1]: Started systemd-journald.service.
Feb 9 19:43:50.265601 kernel: audit: type=1130 audit(1707507830.262:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.263627 systemd-resolved[198]: Positive Trust Anchors:
Feb 9 19:43:50.263634 systemd-resolved[198]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:43:50.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.263671 systemd-resolved[198]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:43:50.271558 kernel: audit: type=1130 audit(1707507830.267:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.266049 systemd-resolved[198]: Defaulting to hostname 'linux'.
Feb 9 19:43:50.267608 systemd[1]: Started systemd-resolved.service.
Feb 9 19:43:50.268574 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:43:50.278461 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:43:50.278813 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:43:50.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.280393 systemd[1]: Starting dracut-cmdline.service...
Feb 9 19:43:50.283488 kernel: audit: type=1130 audit(1707507830.279:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.283504 kernel: Bridge firewalling registered
Feb 9 19:43:50.282877 systemd-modules-load[197]: Inserted module 'br_netfilter'
Feb 9 19:43:50.289668 dracut-cmdline[215]: dracut-dracut-053
Feb 9 19:43:50.291403 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:43:50.297553 kernel: SCSI subsystem initialized
Feb 9 19:43:50.307752 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:43:50.307778 kernel: device-mapper: uevent: version 1.0.3
Feb 9 19:43:50.307792 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 19:43:50.311177 systemd-modules-load[197]: Inserted module 'dm_multipath'
Feb 9 19:43:50.311810 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:43:50.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.315894 kernel: audit: type=1130 audit(1707507830.311:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.315591 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:43:50.322913 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:43:50.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.326572 kernel: audit: type=1130 audit(1707507830.322:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.359560 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 19:43:50.369566 kernel: iscsi: registered transport (tcp)
Feb 9 19:43:50.388580 kernel: iscsi: registered transport (qla4xxx)
Feb 9 19:43:50.388643 kernel: QLogic iSCSI HBA Driver
Feb 9 19:43:50.408068 systemd[1]: Finished dracut-cmdline.service.
Feb 9 19:43:50.411119 kernel: audit: type=1130 audit(1707507830.407:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.411131 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 19:43:50.455569 kernel: raid6: avx2x4 gen() 30270 MB/s
Feb 9 19:43:50.472562 kernel: raid6: avx2x4 xor() 7554 MB/s
Feb 9 19:43:50.489562 kernel: raid6: avx2x2 gen() 32346 MB/s
Feb 9 19:43:50.506553 kernel: raid6: avx2x2 xor() 19313 MB/s
Feb 9 19:43:50.523558 kernel: raid6: avx2x1 gen() 26572 MB/s
Feb 9 19:43:50.540559 kernel: raid6: avx2x1 xor() 15376 MB/s
Feb 9 19:43:50.562567 kernel: raid6: sse2x4 gen() 14739 MB/s
Feb 9 19:43:50.579574 kernel: raid6: sse2x4 xor() 7114 MB/s
Feb 9 19:43:50.596566 kernel: raid6: sse2x2 gen() 16300 MB/s
Feb 9 19:43:50.613560 kernel: raid6: sse2x2 xor() 9799 MB/s
Feb 9 19:43:50.630557 kernel: raid6: sse2x1 gen() 11904 MB/s
Feb 9 19:43:50.647991 kernel: raid6: sse2x1 xor() 7776 MB/s
Feb 9 19:43:50.648014 kernel: raid6: using algorithm avx2x2 gen() 32346 MB/s
Feb 9 19:43:50.648028 kernel: raid6: .... xor() 19313 MB/s, rmw enabled
Feb 9 19:43:50.648040 kernel: raid6: using avx2x2 recovery algorithm
Feb 9 19:43:50.659558 kernel: xor: automatically using best checksumming function avx
Feb 9 19:43:50.745590 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 9 19:43:50.751984 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 19:43:50.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.755000 audit: BPF prog-id=7 op=LOAD
Feb 9 19:43:50.755000 audit: BPF prog-id=8 op=LOAD
Feb 9 19:43:50.755563 kernel: audit: type=1130 audit(1707507830.751:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.755682 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:43:50.768398 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Feb 9 19:43:50.772978 systemd[1]: Started systemd-udevd.service.
Feb 9 19:43:50.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.774123 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 19:43:50.782280 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Feb 9 19:43:50.802201 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 19:43:50.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.804125 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:43:50.842911 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:43:50.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:50.877563 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 9 19:43:50.880558 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 19:43:50.880588 kernel: libata version 3.00 loaded.
Feb 9 19:43:50.884555 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 9 19:43:50.893557 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 9 19:43:50.893603 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 19:43:50.893617 kernel: GPT:9289727 != 19775487
Feb 9 19:43:50.894710 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 19:43:50.894741 kernel: GPT:9289727 != 19775487
Feb 9 19:43:50.895771 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 19:43:50.895791 kernel: AES CTR mode by8 optimization enabled
Feb 9 19:43:50.895800 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 19:43:50.898561 kernel: scsi host0: ata_piix
Feb 9 19:43:50.898742 kernel: scsi host1: ata_piix
Feb 9 19:43:50.900247 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Feb 9 19:43:50.900273 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Feb 9 19:43:50.919560 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (453)
Feb 9 19:43:50.920162 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 19:43:50.929032 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 19:43:50.931166 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 19:43:50.938044 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 19:43:50.942458 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:43:50.943670 systemd[1]: Starting disk-uuid.service...
Feb 9 19:43:50.949404 disk-uuid[514]: Primary Header is updated.
Feb 9 19:43:50.949404 disk-uuid[514]: Secondary Entries is updated.
Feb 9 19:43:50.949404 disk-uuid[514]: Secondary Header is updated.
Feb 9 19:43:50.951829 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 19:43:50.958558 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 19:43:51.059569 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 9 19:43:51.059655 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 9 19:43:51.090569 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 9 19:43:51.090781 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 9 19:43:51.107554 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Feb 9 19:43:51.957722 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 19:43:51.957809 disk-uuid[515]: The operation has completed successfully.
Feb 9 19:43:51.981557 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 19:43:51.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:51.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:51.981650 systemd[1]: Finished disk-uuid.service.
Feb 9 19:43:51.989742 systemd[1]: Starting verity-setup.service...
Feb 9 19:43:52.002575 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 9 19:43:52.021382 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 19:43:52.023409 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 19:43:52.025195 systemd[1]: Finished verity-setup.service.
Feb 9 19:43:52.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:52.100560 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 19:43:52.100586 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 19:43:52.100971 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 19:43:52.101726 systemd[1]: Starting ignition-setup.service...
Feb 9 19:43:52.103766 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 19:43:52.114113 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:43:52.114171 kernel: BTRFS info (device vda6): using free space tree
Feb 9 19:43:52.114183 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 19:43:52.121696 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 19:43:52.129987 systemd[1]: Finished ignition-setup.service.
Feb 9 19:43:52.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:52.132329 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 19:43:52.172271 ignition[630]: Ignition 2.14.0
Feb 9 19:43:52.172328 ignition[630]: Stage: fetch-offline
Feb 9 19:43:52.172378 ignition[630]: no configs at "/usr/lib/ignition/base.d"
Feb 9 19:43:52.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:52.173428 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 19:43:52.174000 audit: BPF prog-id=9 op=LOAD
Feb 9 19:43:52.172390 ignition[630]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 19:43:52.175612 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:43:52.172512 ignition[630]: parsed url from cmdline: ""
Feb 9 19:43:52.172517 ignition[630]: no config URL provided
Feb 9 19:43:52.172523 ignition[630]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:43:52.172549 ignition[630]: no config at "/usr/lib/ignition/user.ign"
Feb 9 19:43:52.172573 ignition[630]: op(1): [started] loading QEMU firmware config module
Feb 9 19:43:52.172579 ignition[630]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 9 19:43:52.177305 ignition[630]: op(1): [finished] loading QEMU firmware config module
Feb 9 19:43:52.193326 ignition[630]: parsing config with SHA512: 339a5f0da1a9d4eaa1b0086fa9b7033f54dcf5d8ac29493c10f2d8cbc08a856b3682a34b91f62d097d9e0475e10c1521059636583271318ab1e5c4cb8822d28c
Feb 9 19:43:52.216515 systemd-networkd[709]: lo: Link UP
Feb 9 19:43:52.216527 systemd-networkd[709]: lo: Gained carrier
Feb 9 19:43:52.216975 systemd-networkd[709]: Enumeration completed
Feb 9 19:43:52.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:52.217077 systemd[1]: Started systemd-networkd.service.
Feb 9 19:43:52.217997 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:43:52.218877 systemd[1]: Reached target network.target.
Feb 9 19:43:52.219001 systemd-networkd[709]: eth0: Link UP
Feb 9 19:43:52.219005 systemd-networkd[709]: eth0: Gained carrier
Feb 9 19:43:52.223979 systemd[1]: Starting iscsiuio.service...
Feb 9 19:43:52.226311 unknown[630]: fetched base config from "system"
Feb 9 19:43:52.226328 unknown[630]: fetched user config from "qemu"
Feb 9 19:43:52.226975 ignition[630]: fetch-offline: fetch-offline passed
Feb 9 19:43:52.227081 ignition[630]: Ignition finished successfully
Feb 9 19:43:52.228569 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 19:43:52.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:52.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:52.229678 systemd[1]: Started iscsiuio.service.
Feb 9 19:43:52.230761 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 9 19:43:52.231754 systemd[1]: Starting ignition-kargs.service...
Feb 9 19:43:52.233711 systemd[1]: Starting iscsid.service...
Feb 9 19:43:52.234852 systemd-networkd[709]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 19:43:52.237062 iscsid[715]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:43:52.237062 iscsid[715]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 9 19:43:52.237062 iscsid[715]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 19:43:52.237062 iscsid[715]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 19:43:52.237062 iscsid[715]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:43:52.237062 iscsid[715]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 19:43:52.238754 systemd[1]: Started iscsid.service.
Feb 9 19:43:52.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:52.246940 systemd[1]: Starting dracut-initqueue.service...
Feb 9 19:43:52.251934 ignition[714]: Ignition 2.14.0
Feb 9 19:43:52.251940 ignition[714]: Stage: kargs
Feb 9 19:43:52.252057 ignition[714]: no configs at "/usr/lib/ignition/base.d"
Feb 9 19:43:52.252066 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 19:43:52.255201 ignition[714]: kargs: kargs passed
Feb 9 19:43:52.255246 ignition[714]: Ignition finished successfully
Feb 9 19:43:52.257071 systemd[1]: Finished dracut-initqueue.service.
Feb 9 19:43:52.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:52.257738 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 19:43:52.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:52.258038 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:43:52.258287 systemd[1]: Reached target remote-fs.target.
Feb 9 19:43:52.259134 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 19:43:52.260798 systemd[1]: Finished ignition-kargs.service.
Feb 9 19:43:52.262285 systemd[1]: Starting ignition-disks.service...
Feb 9 19:43:52.267880 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 19:43:52.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:52.270604 ignition[730]: Ignition 2.14.0
Feb 9 19:43:52.270614 ignition[730]: Stage: disks
Feb 9 19:43:52.270702 ignition[730]: no configs at "/usr/lib/ignition/base.d"
Feb 9 19:43:52.270711 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 19:43:52.273806 ignition[730]: disks: disks passed
Feb 9 19:43:52.274278 ignition[730]: Ignition finished successfully
Feb 9 19:43:52.275178 systemd[1]: Finished ignition-disks.service.
Feb 9 19:43:52.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:52.275492 systemd[1]: Reached target initrd-root-device.target.
Feb 9 19:43:52.276844 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:43:52.278141 systemd[1]: Reached target local-fs.target.
Feb 9 19:43:52.278369 systemd[1]: Reached target sysinit.target.
Feb 9 19:43:52.278714 systemd[1]: Reached target basic.target.
Feb 9 19:43:52.279957 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 19:43:52.291317 systemd-fsck[743]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb 9 19:43:52.296139 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 19:43:52.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:52.297769 systemd[1]: Mounting sysroot.mount...
Feb 9 19:43:52.304565 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 19:43:52.305180 systemd[1]: Mounted sysroot.mount.
Feb 9 19:43:52.305756 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 19:43:52.306981 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 19:43:52.308104 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 19:43:52.308147 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 19:43:52.308175 systemd[1]: Reached target ignition-diskful.target.
Feb 9 19:43:52.309725 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 19:43:52.311557 systemd[1]: Starting initrd-setup-root.service...
Feb 9 19:43:52.317983 initrd-setup-root[753]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 19:43:52.321036 initrd-setup-root[761]: cut: /sysroot/etc/group: No such file or directory
Feb 9 19:43:52.324781 initrd-setup-root[769]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 19:43:52.328121 initrd-setup-root[777]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 19:43:52.355431 systemd[1]: Finished initrd-setup-root.service.
Feb 9 19:43:52.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:52.356823 systemd[1]: Starting ignition-mount.service...
Feb 9 19:43:52.357825 systemd[1]: Starting sysroot-boot.service...
Feb 9 19:43:52.364878 bash[795]: umount: /sysroot/usr/share/oem: not mounted.
Feb 9 19:43:52.372261 ignition[796]: INFO : Ignition 2.14.0
Feb 9 19:43:52.372261 ignition[796]: INFO : Stage: mount
Feb 9 19:43:52.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:52.372996 systemd[1]: Finished sysroot-boot.service.
Feb 9 19:43:52.374834 ignition[796]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 19:43:52.374834 ignition[796]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 19:43:52.376951 ignition[796]: INFO : mount: mount passed
Feb 9 19:43:52.377594 ignition[796]: INFO : Ignition finished successfully
Feb 9 19:43:52.378842 systemd[1]: Finished ignition-mount.service.
Feb 9 19:43:52.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:53.033470 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:43:53.038557 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (804)
Feb 9 19:43:53.038591 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:43:53.039950 kernel: BTRFS info (device vda6): using free space tree
Feb 9 19:43:53.039968 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 19:43:53.042560 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:43:53.043412 systemd[1]: Starting ignition-files.service...
Feb 9 19:43:53.056901 ignition[824]: INFO : Ignition 2.14.0
Feb 9 19:43:53.056901 ignition[824]: INFO : Stage: files
Feb 9 19:43:53.058429 ignition[824]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 19:43:53.058429 ignition[824]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 19:43:53.061671 ignition[824]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 19:43:53.063073 ignition[824]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 19:43:53.063073 ignition[824]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 19:43:53.065589 ignition[824]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 19:43:53.066874 ignition[824]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 19:43:53.068488 unknown[824]: wrote ssh authorized keys file for user: core
Feb 9 19:43:53.069510 ignition[824]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 19:43:53.071217 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 19:43:53.072800 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 19:43:53.074385 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 19:43:53.076144 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 9 19:43:53.475422 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 19:43:53.588589 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 9 19:43:53.590711 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 19:43:53.590711 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 19:43:53.590711 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 9 19:43:53.649719 systemd-networkd[709]: eth0: Gained IPv6LL
Feb 9 19:43:53.863831 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 19:43:53.945646 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 9 19:43:53.948590 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 19:43:53.948590 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:43:53.948590 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 9 19:43:54.018391 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 19:43:54.296179 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 9 19:43:54.298362 ignition[824]: INFO
: files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:43:54.298362 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:43:54.298362 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 19:43:54.342817 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 19:43:54.906431 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 19:43:54.908531 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:43:54.908531 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:43:54.908531 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:43:54.908531 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:43:54.908531 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:43:54.952860 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:43:54.954379 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:43:54.954379 ignition[824]: INFO : files: op(b): [started] processing unit "prepare-critools.service" Feb 9 19:43:54.954379 ignition[824]: INFO : 
files: op(b): op(c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:43:54.954379 ignition[824]: INFO : files: op(b): op(c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:43:54.954379 ignition[824]: INFO : files: op(b): [finished] processing unit "prepare-critools.service" Feb 9 19:43:54.954379 ignition[824]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 9 19:43:54.954379 ignition[824]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 19:43:54.954379 ignition[824]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 19:43:54.954379 ignition[824]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 9 19:43:54.954379 ignition[824]: INFO : files: op(f): [started] processing unit "containerd.service" Feb 9 19:43:54.954379 ignition[824]: INFO : files: op(f): op(10): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:43:54.954379 ignition[824]: INFO : files: op(f): op(10): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:43:54.954379 ignition[824]: INFO : files: op(f): [finished] processing unit "containerd.service" Feb 9 19:43:54.954379 ignition[824]: INFO : files: op(11): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:43:54.954379 ignition[824]: INFO : files: op(11): op(12): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:43:54.954379 ignition[824]: INFO : files: op(11): op(12): [finished] writing unit 
"prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:43:54.954379 ignition[824]: INFO : files: op(11): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:43:54.954379 ignition[824]: INFO : files: op(13): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:43:54.985152 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 19:43:54.985180 kernel: audit: type=1130 audit(1707507834.978:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:54.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:54.985269 ignition[824]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:43:54.985269 ignition[824]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service" Feb 9 19:43:54.985269 ignition[824]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 19:43:54.985269 ignition[824]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 19:43:54.985269 ignition[824]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 19:43:54.985269 ignition[824]: INFO : files: op(16): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:43:54.985269 ignition[824]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:43:54.985269 ignition[824]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:43:54.985269 ignition[824]: INFO : files: createResultFile: 
createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:43:54.985269 ignition[824]: INFO : files: files passed Feb 9 19:43:54.985269 ignition[824]: INFO : Ignition finished successfully Feb 9 19:43:55.006180 kernel: audit: type=1130 audit(1707507834.987:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.006198 kernel: audit: type=1131 audit(1707507834.987:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.006208 kernel: audit: type=1130 audit(1707507834.994:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:54.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:54.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:54.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:54.977430 systemd[1]: Finished ignition-files.service. Feb 9 19:43:54.979340 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Feb 9 19:43:55.007788 initrd-setup-root-after-ignition[848]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 19:43:54.983634 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:43:55.010156 initrd-setup-root-after-ignition[851]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:43:54.984382 systemd[1]: Starting ignition-quench.service... Feb 9 19:43:54.987525 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:43:55.019592 kernel: audit: type=1130 audit(1707507835.012:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.019608 kernel: audit: type=1131 audit(1707507835.012:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:54.987607 systemd[1]: Finished ignition-quench.service. Feb 9 19:43:54.988431 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:43:54.995004 systemd[1]: Reached target ignition-complete.target. Feb 9 19:43:55.000292 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:43:55.011729 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:43:55.011838 systemd[1]: Finished initrd-parse-etc.service. 
Feb 9 19:43:55.012723 systemd[1]: Reached target initrd-fs.target. Feb 9 19:43:55.019660 systemd[1]: Reached target initrd.target. Feb 9 19:43:55.020289 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:43:55.021208 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:43:55.029641 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:43:55.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.031964 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:43:55.033897 kernel: audit: type=1130 audit(1707507835.030:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.041107 systemd[1]: Stopped target network.target. Feb 9 19:43:55.042397 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:43:55.043138 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:43:55.044274 systemd[1]: Stopped target timers.target. Feb 9 19:43:55.045402 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:43:55.049445 kernel: audit: type=1131 audit(1707507835.045:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.045584 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:43:55.046609 systemd[1]: Stopped target initrd.target. Feb 9 19:43:55.049617 systemd[1]: Stopped target basic.target. Feb 9 19:43:55.050717 systemd[1]: Stopped target ignition-complete.target. 
Feb 9 19:43:55.051811 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:43:55.052923 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:43:55.054127 systemd[1]: Stopped target remote-fs.target. Feb 9 19:43:55.055259 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:43:55.056444 systemd[1]: Stopped target sysinit.target. Feb 9 19:43:55.057504 systemd[1]: Stopped target local-fs.target. Feb 9 19:43:55.058624 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:43:55.059694 systemd[1]: Stopped target swap.target. Feb 9 19:43:55.064719 kernel: audit: type=1131 audit(1707507835.061:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.060698 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:43:55.060847 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:43:55.069167 kernel: audit: type=1131 audit(1707507835.065:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.061957 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:43:55.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.064817 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Feb 9 19:43:55.064955 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:43:55.066182 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:43:55.066321 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:43:55.069406 systemd[1]: Stopped target paths.target. Feb 9 19:43:55.070367 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:43:55.073583 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:43:55.074720 systemd[1]: Stopped target slices.target. Feb 9 19:43:55.075832 systemd[1]: Stopped target sockets.target. Feb 9 19:43:55.076877 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:43:55.076997 systemd[1]: Closed iscsid.socket. Feb 9 19:43:55.077927 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:43:55.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.078041 systemd[1]: Closed iscsiuio.socket. Feb 9 19:43:55.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.079059 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:43:55.079202 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:43:55.080182 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:43:55.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.080315 systemd[1]: Stopped ignition-files.service. Feb 9 19:43:55.082439 systemd[1]: Stopping ignition-mount.service... 
Feb 9 19:43:55.082834 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:43:55.089462 ignition[865]: INFO : Ignition 2.14.0 Feb 9 19:43:55.089462 ignition[865]: INFO : Stage: umount Feb 9 19:43:55.089462 ignition[865]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 19:43:55.089462 ignition[865]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:43:55.089462 ignition[865]: INFO : umount: umount passed Feb 9 19:43:55.089462 ignition[865]: INFO : Ignition finished successfully Feb 9 19:43:55.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.082990 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:43:55.085276 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:43:55.088871 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:43:55.089784 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:43:55.090740 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:43:55.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.090929 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:43:55.092121 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:43:55.092294 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:43:55.092580 systemd-networkd[709]: eth0: DHCPv6 lease lost Feb 9 19:43:55.097212 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Feb 9 19:43:55.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.097348 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:43:55.101326 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:43:55.101954 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:43:55.102073 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:43:55.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.104951 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:43:55.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.107000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:43:55.107000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:43:55.105046 systemd[1]: Stopped ignition-mount.service. Feb 9 19:43:55.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.106380 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:43:55.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:43:55.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.106441 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:43:55.107443 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:43:55.107508 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:43:55.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.108370 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:43:55.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.108403 systemd[1]: Stopped ignition-disks.service. Feb 9 19:43:55.109044 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:43:55.109073 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:43:55.110151 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:43:55.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.110182 systemd[1]: Stopped ignition-setup.service. 
Feb 9 19:43:55.110814 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:43:55.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.110841 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:43:55.112625 systemd[1]: Stopping network-cleanup.service... Feb 9 19:43:55.113196 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:43:55.113234 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:43:55.114648 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:43:55.114679 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:43:55.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.115827 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:43:55.115856 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:43:55.117236 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:43:55.118636 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:43:55.119053 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:43:55.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.119124 systemd[1]: Finished initrd-cleanup.service. 
Feb 9 19:43:55.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.122299 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:43:55.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.122394 systemd[1]: Stopped network-cleanup.service. Feb 9 19:43:55.127682 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:43:55.127778 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:43:55.129707 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:43:55.129737 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:43:55.130880 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:43:55.130904 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:43:55.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:55.132065 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:43:55.132094 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:43:55.133585 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:43:55.133612 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:43:55.134862 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:43:55.134891 systemd[1]: Stopped dracut-cmdline-ask.service. 
Feb 9 19:43:55.136079 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:43:55.136701 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:43:55.136737 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:43:55.142079 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:43:55.151000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:43:55.151000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:43:55.151000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:43:55.142141 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:43:55.142912 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:43:55.152000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:43:55.152000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:43:55.144656 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:43:55.149961 systemd[1]: Switching root. Feb 9 19:43:55.168845 iscsid[715]: iscsid shutting down. Feb 9 19:43:55.169426 systemd-journald[196]: Journal stopped Feb 9 19:43:57.631908 systemd-journald[196]: Received SIGTERM from PID 1 (systemd). Feb 9 19:43:57.631968 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:43:57.631980 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 19:43:57.631995 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:43:57.632006 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:43:57.632016 kernel: SELinux: policy capability open_perms=1 Feb 9 19:43:57.632025 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:43:57.632034 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:43:57.632044 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:43:57.632053 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:43:57.632063 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:43:57.632073 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:43:57.632084 systemd[1]: Successfully loaded SELinux policy in 34.783ms. Feb 9 19:43:57.632105 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.471ms. Feb 9 19:43:57.632116 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:43:57.632126 systemd[1]: Detected virtualization kvm. Feb 9 19:43:57.632136 systemd[1]: Detected architecture x86-64. Feb 9 19:43:57.632146 systemd[1]: Detected first boot. Feb 9 19:43:57.632156 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:43:57.632166 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:43:57.632177 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:43:57.632188 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 19:43:57.632204 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:43:57.632215 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:43:57.632226 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:43:57.632237 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 19:43:57.632249 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:43:57.632259 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:43:57.632269 systemd[1]: Created slice system-getty.slice. Feb 9 19:43:57.632279 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:43:57.632289 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:43:57.632300 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:43:57.632309 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:43:57.632319 systemd[1]: Created slice user.slice. Feb 9 19:43:57.632329 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:43:57.632340 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:43:57.632350 systemd[1]: Set up automount boot.automount. Feb 9 19:43:57.632360 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:43:57.632369 systemd[1]: Reached target integritysetup.target. Feb 9 19:43:57.632379 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:43:57.632389 systemd[1]: Reached target remote-fs.target. Feb 9 19:43:57.632399 systemd[1]: Reached target slices.target. Feb 9 19:43:57.632409 systemd[1]: Reached target swap.target. Feb 9 19:43:57.632420 systemd[1]: Reached target torcx.target. Feb 9 19:43:57.632430 systemd[1]: Reached target veritysetup.target. 
Feb 9 19:43:57.632439 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 19:43:57.632449 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 19:43:57.632461 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:43:57.632471 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:43:57.632481 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:43:57.632491 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:43:57.632500 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:43:57.632514 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:43:57.632530 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 19:43:57.632555 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 19:43:57.632566 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 19:43:57.632575 systemd[1]: Mounting media.mount...
Feb 9 19:43:57.632585 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:43:57.632595 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 19:43:57.632605 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 19:43:57.632615 systemd[1]: Mounting tmp.mount...
Feb 9 19:43:57.632625 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 19:43:57.632637 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 19:43:57.632647 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:43:57.632657 systemd[1]: Starting modprobe@configfs.service...
Feb 9 19:43:57.632667 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 19:43:57.632677 systemd[1]: Starting modprobe@drm.service...
Feb 9 19:43:57.632686 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 19:43:57.632697 systemd[1]: Starting modprobe@fuse.service...
Feb 9 19:43:57.632707 systemd[1]: Starting modprobe@loop.service...
Feb 9 19:43:57.632718 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 19:43:57.632731 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 9 19:43:57.632741 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Feb 9 19:43:57.632750 systemd[1]: Starting systemd-journald.service...
Feb 9 19:43:57.632760 kernel: fuse: init (API version 7.34)
Feb 9 19:43:57.632769 kernel: loop: module loaded
Feb 9 19:43:57.632779 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:43:57.632789 systemd[1]: Starting systemd-network-generator.service...
Feb 9 19:43:57.632799 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 19:43:57.632810 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:43:57.632820 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:43:57.632830 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 19:43:57.632842 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 19:43:57.632852 systemd[1]: Mounted media.mount.
Feb 9 19:43:57.632862 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 19:43:57.632874 systemd-journald[1009]: Journal started
Feb 9 19:43:57.632913 systemd-journald[1009]: Runtime Journal (/run/log/journal/16474b4e7ccf4b3caaeeae1bbe28c2a7) is 6.0M, max 48.4M, 42.4M free.
Feb 9 19:43:57.560000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:43:57.560000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Feb 9 19:43:57.630000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 19:43:57.630000 audit[1009]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffe21c9f90 a2=4000 a3=7fffe21ca02c items=0 ppid=1 pid=1009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:43:57.630000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 19:43:57.634065 systemd[1]: Started systemd-journald.service.
Feb 9 19:43:57.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.635771 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 19:43:57.636616 systemd[1]: Mounted tmp.mount.
Feb 9 19:43:57.637758 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:43:57.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.638749 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 19:43:57.639045 systemd[1]: Finished modprobe@configfs.service.
Feb 9 19:43:57.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.640065 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 19:43:57.640373 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 19:43:57.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.641430 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 19:43:57.641631 systemd[1]: Finished modprobe@drm.service.
Feb 9 19:43:57.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.642909 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 19:43:57.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.644045 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 19:43:57.644324 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 19:43:57.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.645662 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 19:43:57.645909 systemd[1]: Finished modprobe@fuse.service.
Feb 9 19:43:57.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.646848 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 19:43:57.647145 systemd[1]: Finished modprobe@loop.service.
Feb 9 19:43:57.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.648597 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:43:57.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.650000 systemd[1]: Finished systemd-network-generator.service.
Feb 9 19:43:57.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.651489 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 19:43:57.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.652704 systemd[1]: Reached target network-pre.target.
Feb 9 19:43:57.655247 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 19:43:57.657237 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 19:43:57.658103 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 19:43:57.659979 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 19:43:57.662259 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 19:43:57.663133 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 19:43:57.664371 systemd[1]: Starting systemd-random-seed.service...
Feb 9 19:43:57.665194 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 19:43:57.665955 systemd-journald[1009]: Time spent on flushing to /var/log/journal/16474b4e7ccf4b3caaeeae1bbe28c2a7 is 14.608ms for 1097 entries.
Feb 9 19:43:57.665955 systemd-journald[1009]: System Journal (/var/log/journal/16474b4e7ccf4b3caaeeae1bbe28c2a7) is 8.0M, max 195.6M, 187.6M free.
Feb 9 19:43:57.697733 systemd-journald[1009]: Received client request to flush runtime journal.
Feb 9 19:43:57.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.666763 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:43:57.669027 systemd[1]: Starting systemd-sysusers.service...
Feb 9 19:43:57.673023 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 19:43:57.698903 udevadm[1051]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 9 19:43:57.674239 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:43:57.675099 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 19:43:57.676081 systemd[1]: Finished systemd-random-seed.service.
Feb 9 19:43:57.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:57.676830 systemd[1]: Reached target first-boot-complete.target.
Feb 9 19:43:57.678813 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 19:43:57.681908 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:43:57.690685 systemd[1]: Finished systemd-sysusers.service.
Feb 9 19:43:57.692550 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:43:57.698598 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 19:43:57.709965 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:43:57.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:58.073832 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 19:43:58.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:58.075651 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:43:58.091873 systemd-udevd[1060]: Using default interface naming scheme 'v252'.
Feb 9 19:43:58.103481 systemd[1]: Started systemd-udevd.service.
Feb 9 19:43:58.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:58.105346 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:43:58.129155 systemd[1]: Starting systemd-userdbd.service...
Feb 9 19:43:58.145844 systemd[1]: Found device dev-ttyS0.device.
Feb 9 19:43:58.158080 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:43:58.162734 systemd[1]: Started systemd-userdbd.service.
Feb 9 19:43:58.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:58.170554 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 9 19:43:58.185555 kernel: ACPI: button: Power Button [PWRF]
Feb 9 19:43:58.183000 audit[1076]: AVC avc: denied { confidentiality } for pid=1076 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:43:58.183000 audit[1076]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c46c938e10 a1=32194 a2=7fdd9ea65bc5 a3=5 items=108 ppid=1060 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:43:58.183000 audit: CWD cwd="/"
Feb 9 19:43:58.183000 audit: PATH item=0 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=1 name=(null) inode=14870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=2 name=(null) inode=14870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=3 name=(null) inode=14871 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=4 name=(null) inode=14870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=5 name=(null) inode=14872 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=6 name=(null) inode=14870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=7 name=(null) inode=14873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=8 name=(null) inode=14873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=9 name=(null) inode=14874 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=10 name=(null) inode=14873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=11 name=(null) inode=14875 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=12 name=(null) inode=14873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=13 name=(null) inode=14876 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=14 name=(null) inode=14873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=15 name=(null) inode=14877 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=16 name=(null) inode=14873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=17 name=(null) inode=14878 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=18 name=(null) inode=14870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=19 name=(null) inode=14879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=20 name=(null) inode=14879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=21 name=(null) inode=14880 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=22 name=(null) inode=14879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=23 name=(null) inode=14881 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=24 name=(null) inode=14879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=25 name=(null) inode=14882 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=26 name=(null) inode=14879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=27 name=(null) inode=14883 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=28 name=(null) inode=14879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=29 name=(null) inode=14884 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=30 name=(null) inode=14870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=31 name=(null) inode=14885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=32 name=(null) inode=14885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=33 name=(null) inode=14886 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=34 name=(null) inode=14885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=35 name=(null) inode=14887 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=36 name=(null) inode=14885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=37 name=(null) inode=14888 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=38 name=(null) inode=14885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=39 name=(null) inode=14889 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=40 name=(null) inode=14885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=41 name=(null) inode=14890 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=42 name=(null) inode=14870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=43 name=(null) inode=14891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=44 name=(null) inode=14891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=45 name=(null) inode=14892 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=46 name=(null) inode=14891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=47 name=(null) inode=14893 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=48 name=(null) inode=14891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=49 name=(null) inode=14894 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=50 name=(null) inode=14891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=51 name=(null) inode=14895 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=52 name=(null) inode=14891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=53 name=(null) inode=14896 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=54 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=55 name=(null) inode=14897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=56 name=(null) inode=14897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=57 name=(null) inode=14898 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=58 name=(null) inode=14897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=59 name=(null) inode=14899 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=60 name=(null) inode=14897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=61 name=(null) inode=14900 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=62 name=(null) inode=14900 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=63 name=(null) inode=14901 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=64 name=(null) inode=14900 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=65 name=(null) inode=14902 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=66 name=(null) inode=14900 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=67 name=(null) inode=14903 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=68 name=(null) inode=14900 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=69 name=(null) inode=14904 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=70 name=(null) inode=14900 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=71 name=(null) inode=14905 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=72 name=(null) inode=14897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=73 name=(null) inode=14906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=74 name=(null) inode=14906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=75 name=(null) inode=14907 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=76 name=(null) inode=14906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=77 name=(null) inode=14908 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=78 name=(null) inode=14906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=79 name=(null) inode=14909 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=80 name=(null) inode=14906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=81 name=(null) inode=14910 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=82 name=(null) inode=14906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=83 name=(null) inode=14911 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=84 name=(null) inode=14897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=85 name=(null) inode=14912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=86 name=(null) inode=14912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=87 name=(null) inode=14913 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=88 name=(null) inode=14912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=89 name=(null) inode=14914 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=90 name=(null) inode=14912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=91 name=(null) inode=14915 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=92 name=(null) inode=14912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=93 name=(null) inode=14916 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=94 name=(null) inode=14912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:43:58.183000 audit: PATH item=95 name=(null) inode=14917 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0
cap_frootid=0 Feb 9 19:43:58.183000 audit: PATH item=96 name=(null) inode=14897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:58.183000 audit: PATH item=97 name=(null) inode=14918 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:58.183000 audit: PATH item=98 name=(null) inode=14918 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:58.183000 audit: PATH item=99 name=(null) inode=14919 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:58.183000 audit: PATH item=100 name=(null) inode=14918 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:58.183000 audit: PATH item=101 name=(null) inode=14920 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:58.183000 audit: PATH item=102 name=(null) inode=14918 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:58.183000 audit: PATH item=103 name=(null) inode=14921 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:58.183000 audit: PATH item=104 name=(null) inode=14918 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:58.183000 audit: PATH 
item=105 name=(null) inode=14922 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:58.183000 audit: PATH item=106 name=(null) inode=14918 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:58.183000 audit: PATH item=107 name=(null) inode=14923 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:43:58.183000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:43:58.210565 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 9 19:43:58.223576 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:43:58.230627 systemd-networkd[1068]: lo: Link UP Feb 9 19:43:58.230641 systemd-networkd[1068]: lo: Gained carrier Feb 9 19:43:58.231166 systemd-networkd[1068]: Enumeration completed Feb 9 19:43:58.231292 systemd-networkd[1068]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:43:58.231307 systemd[1]: Started systemd-networkd.service. Feb 9 19:43:58.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:43:58.232924 systemd-networkd[1068]: eth0: Link UP Feb 9 19:43:58.232935 systemd-networkd[1068]: eth0: Gained carrier Feb 9 19:43:58.235555 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Feb 9 19:43:58.246645 systemd-networkd[1068]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 19:43:58.275599 kernel: kvm: Nested Virtualization enabled Feb 9 19:43:58.275644 kernel: SVM: kvm: Nested Paging enabled Feb 9 19:43:58.276561 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 9 19:43:58.276592 kernel: SVM: Virtual GIF supported Feb 9 19:43:58.291568 kernel: EDAC MC: Ver: 3.0.0 Feb 9 19:43:58.310941 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:43:58.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:58.312715 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:43:58.318954 lvm[1097]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:43:58.344651 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:43:58.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:58.345641 systemd[1]: Reached target cryptsetup.target. Feb 9 19:43:58.347360 systemd[1]: Starting lvm2-activation.service... Feb 9 19:43:58.350696 lvm[1099]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:43:58.375153 systemd[1]: Finished lvm2-activation.service. Feb 9 19:43:58.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:43:58.375847 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:43:58.376446 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:43:58.376463 systemd[1]: Reached target local-fs.target. Feb 9 19:43:58.377032 systemd[1]: Reached target machines.target. Feb 9 19:43:58.378468 systemd[1]: Starting ldconfig.service... Feb 9 19:43:58.379142 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:43:58.379196 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:43:58.380020 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:43:58.381742 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:43:58.383442 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:43:58.384238 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:43:58.384276 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:43:58.385555 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:43:58.386529 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1102 (bootctl) Feb 9 19:43:58.387579 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:43:58.394437 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:43:58.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:43:58.395036 systemd-tmpfiles[1105]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:43:58.395943 systemd-tmpfiles[1105]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:43:58.398606 systemd-tmpfiles[1105]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:43:58.422266 systemd-fsck[1111]: fsck.fat 4.2 (2021-01-31) Feb 9 19:43:58.422266 systemd-fsck[1111]: /dev/vda1: 790 files, 115362/258078 clusters Feb 9 19:43:58.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:58.424618 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:43:58.426769 systemd[1]: Mounting boot.mount... Feb 9 19:43:58.451660 systemd[1]: Mounted boot.mount. Feb 9 19:43:58.463202 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:43:58.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:58.472320 ldconfig[1101]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:43:59.094182 systemd[1]: Finished ldconfig.service. Feb 9 19:43:59.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:43:59.097705 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Feb 9 19:43:59.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:59.099807 systemd[1]: Starting audit-rules.service...
Feb 9 19:43:59.102201 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 19:43:59.104098 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 19:43:59.106069 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:43:59.108477 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 19:43:59.110530 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 19:43:59.112387 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 19:43:59.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:59.114693 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 19:43:59.118000 audit[1128]: SYSTEM_BOOT pid=1128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:59.121035 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 19:43:59.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:59.125274 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 19:43:59.126481 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 19:43:59.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:59.127465 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 19:43:59.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:59.129394 systemd[1]: Starting systemd-update-done.service...
Feb 9 19:43:59.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:43:59.136892 systemd[1]: Finished systemd-update-done.service.
Feb 9 19:43:59.139000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 19:43:59.139000 audit[1145]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe924a1390 a2=420 a3=0 items=0 ppid=1119 pid=1145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:43:59.139000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 19:43:59.140219 augenrules[1145]: No rules
Feb 9 19:43:59.140505 systemd[1]: Finished audit-rules.service.
Feb 9 19:43:59.172494 systemd[1]: Started systemd-timesyncd.service.
Feb 9 19:43:59.173360 systemd[1]: Reached target time-set.target.
Feb 9 19:43:59.173509 systemd-timesyncd[1125]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 9 19:43:59.173563 systemd-timesyncd[1125]: Initial clock synchronization to Fri 2024-02-09 19:43:58.931412 UTC.
Feb 9 19:43:59.175457 systemd-resolved[1124]: Positive Trust Anchors:
Feb 9 19:43:59.175468 systemd-resolved[1124]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:43:59.175495 systemd-resolved[1124]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:43:59.181771 systemd-resolved[1124]: Defaulting to hostname 'linux'.
Feb 9 19:43:59.183579 systemd[1]: Started systemd-resolved.service.
Feb 9 19:43:59.184183 systemd[1]: Reached target network.target.
Feb 9 19:43:59.184723 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:43:59.185289 systemd[1]: Reached target sysinit.target.
Feb 9 19:43:59.185895 systemd[1]: Started motdgen.path.
Feb 9 19:43:59.186398 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 19:43:59.187251 systemd[1]: Started logrotate.timer.
Feb 9 19:43:59.187819 systemd[1]: Started mdadm.timer.
Feb 9 19:43:59.188300 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 19:43:59.188903 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 19:43:59.188928 systemd[1]: Reached target paths.target.
Feb 9 19:43:59.189439 systemd[1]: Reached target timers.target.
Feb 9 19:43:59.190234 systemd[1]: Listening on dbus.socket.
Feb 9 19:43:59.191773 systemd[1]: Starting docker.socket...
Feb 9 19:43:59.192997 systemd[1]: Listening on sshd.socket.
Feb 9 19:43:59.193803 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:43:59.194041 systemd[1]: Listening on docker.socket.
Feb 9 19:43:59.194589 systemd[1]: Reached target sockets.target.
Feb 9 19:43:59.195177 systemd[1]: Reached target basic.target.
Feb 9 19:43:59.195805 systemd[1]: System is tainted: cgroupsv1
Feb 9 19:43:59.195842 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:43:59.195858 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:43:59.196717 systemd[1]: Starting containerd.service...
Feb 9 19:43:59.198092 systemd[1]: Starting dbus.service...
Feb 9 19:43:59.199423 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 19:43:59.201035 systemd[1]: Starting extend-filesystems.service...
Feb 9 19:43:59.201713 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 19:43:59.202524 systemd[1]: Starting motdgen.service...
Feb 9 19:43:59.204793 jq[1157]: false
Feb 9 19:43:59.203827 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 19:43:59.205153 systemd[1]: Starting prepare-critools.service...
Feb 9 19:43:59.206447 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 19:43:59.207960 systemd[1]: Starting sshd-keygen.service...
Feb 9 19:43:59.210296 systemd[1]: Starting systemd-logind.service...
Feb 9 19:43:59.211509 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:43:59.211565 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 19:43:59.214646 systemd[1]: Starting update-engine.service...
Feb 9 19:43:59.216083 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 19:43:59.223857 extend-filesystems[1158]: Found sr0
Feb 9 19:43:59.223857 extend-filesystems[1158]: Found vda
Feb 9 19:43:59.223857 extend-filesystems[1158]: Found vda1
Feb 9 19:43:59.223857 extend-filesystems[1158]: Found vda2
Feb 9 19:43:59.223857 extend-filesystems[1158]: Found vda3
Feb 9 19:43:59.223857 extend-filesystems[1158]: Found usr
Feb 9 19:43:59.223857 extend-filesystems[1158]: Found vda4
Feb 9 19:43:59.223857 extend-filesystems[1158]: Found vda6
Feb 9 19:43:59.223857 extend-filesystems[1158]: Found vda7
Feb 9 19:43:59.223857 extend-filesystems[1158]: Found vda9
Feb 9 19:43:59.223336 dbus-daemon[1156]: [system] SELinux support is enabled
Feb 9 19:43:59.217911 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 19:43:59.243620 extend-filesystems[1158]: Checking size of /dev/vda9
Feb 9 19:43:59.218125 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 19:43:59.219378 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 19:43:59.244492 tar[1178]: ./
Feb 9 19:43:59.244492 tar[1178]: ./macvlan
Feb 9 19:43:59.219593 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 19:43:59.244767 tar[1179]: crictl
Feb 9 19:43:59.223630 systemd[1]: Started dbus.service.
Feb 9 19:43:59.245022 jq[1174]: true
Feb 9 19:43:59.226332 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 19:43:59.226349 systemd[1]: Reached target system-config.target.
Feb 9 19:43:59.245284 jq[1189]: true
Feb 9 19:43:59.227016 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 19:43:59.227030 systemd[1]: Reached target user-config.target.
Feb 9 19:43:59.229254 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 19:43:59.229470 systemd[1]: Finished motdgen.service.
Feb 9 19:43:59.257987 update_engine[1171]: I0209 19:43:59.257841 1171 main.cc:92] Flatcar Update Engine starting
Feb 9 19:43:59.259441 systemd[1]: Started update-engine.service.
Feb 9 19:43:59.260433 update_engine[1171]: I0209 19:43:59.259462 1171 update_check_scheduler.cc:74] Next update check in 11m12s
Feb 9 19:43:59.264500 systemd[1]: Started locksmithd.service.
Feb 9 19:43:59.271777 systemd-logind[1168]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 9 19:43:59.272039 systemd-logind[1168]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 9 19:43:59.273054 systemd-logind[1168]: New seat seat0.
Feb 9 19:43:59.275425 systemd[1]: Started systemd-logind.service.
Feb 9 19:43:59.278085 env[1184]: time="2024-02-09T19:43:59.277067750Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 19:43:59.281171 extend-filesystems[1158]: Resized partition /dev/vda9
Feb 9 19:43:59.283463 extend-filesystems[1220]: resize2fs 1.46.5 (30-Dec-2021)
Feb 9 19:43:59.284408 tar[1178]: ./static
Feb 9 19:43:59.301560 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 9 19:43:59.316225 tar[1178]: ./vlan
Feb 9 19:43:59.318922 env[1184]: time="2024-02-09T19:43:59.318867745Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 19:43:59.319056 env[1184]: time="2024-02-09T19:43:59.319029127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:43:59.321185 env[1184]: time="2024-02-09T19:43:59.319992494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:43:59.321185 env[1184]: time="2024-02-09T19:43:59.320019955Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:43:59.321185 env[1184]: time="2024-02-09T19:43:59.320222585Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:43:59.321185 env[1184]: time="2024-02-09T19:43:59.320237784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 19:43:59.321185 env[1184]: time="2024-02-09T19:43:59.320249526Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 19:43:59.321185 env[1184]: time="2024-02-09T19:43:59.320258322Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 19:43:59.321185 env[1184]: time="2024-02-09T19:43:59.320315219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:43:59.321185 env[1184]: time="2024-02-09T19:43:59.320514472Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:43:59.321185 env[1184]: time="2024-02-09T19:43:59.320665716Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:43:59.321185 env[1184]: time="2024-02-09T19:43:59.320681025Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 19:43:59.321399 env[1184]: time="2024-02-09T19:43:59.320724386Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 19:43:59.321399 env[1184]: time="2024-02-09T19:43:59.320734996Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 19:43:59.347771 tar[1178]: ./portmap
Feb 9 19:43:59.376244 tar[1178]: ./host-local
Feb 9 19:43:59.422156 tar[1178]: ./vrf
Feb 9 19:43:59.449515 tar[1178]: ./bridge
Feb 9 19:43:59.482080 tar[1178]: ./tuning
Feb 9 19:43:59.508262 tar[1178]: ./firewall
Feb 9 19:43:59.542645 tar[1178]: ./host-device
Feb 9 19:43:59.572767 tar[1178]: ./sbr
Feb 9 19:43:59.599823 tar[1178]: ./loopback
Feb 9 19:43:59.626014 tar[1178]: ./dhcp
Feb 9 19:43:59.675191 systemd[1]: Finished prepare-critools.service.
Feb 9 19:43:59.701800 tar[1178]: ./ptp
Feb 9 19:43:59.705576 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 9 19:43:59.713378 locksmithd[1205]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 19:44:00.157476 extend-filesystems[1220]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 9 19:44:00.157476 extend-filesystems[1220]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 9 19:44:00.157476 extend-filesystems[1220]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 9 19:44:00.175666 extend-filesystems[1158]: Resized filesystem in /dev/vda9
Feb 9 19:44:00.176403 bash[1221]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 19:44:00.157779 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 19:44:00.176677 env[1184]: time="2024-02-09T19:44:00.162610579Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 19:44:00.176677 env[1184]: time="2024-02-09T19:44:00.162674352Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 19:44:00.176677 env[1184]: time="2024-02-09T19:44:00.162687907Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 19:44:00.176677 env[1184]: time="2024-02-09T19:44:00.162732219Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 19:44:00.176677 env[1184]: time="2024-02-09T19:44:00.162745802Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 19:44:00.176677 env[1184]: time="2024-02-09T19:44:00.162757189Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 19:44:00.176677 env[1184]: time="2024-02-09T19:44:00.162767322Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 19:44:00.176677 env[1184]: time="2024-02-09T19:44:00.162778369Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 19:44:00.176677 env[1184]: time="2024-02-09T19:44:00.162790378Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 19:44:00.176677 env[1184]: time="2024-02-09T19:44:00.162824421Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 19:44:00.176677 env[1184]: time="2024-02-09T19:44:00.162834943Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 19:44:00.176677 env[1184]: time="2024-02-09T19:44:00.162845552Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 19:44:00.176677 env[1184]: time="2024-02-09T19:44:00.162941116Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 19:44:00.176677 env[1184]: time="2024-02-09T19:44:00.163017549Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 19:44:00.158004 systemd[1]: Finished extend-filesystems.service.
Feb 9 19:44:00.177015 env[1184]: time="2024-02-09T19:44:00.163362990Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 19:44:00.177015 env[1184]: time="2024-02-09T19:44:00.163385996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 19:44:00.177015 env[1184]: time="2024-02-09T19:44:00.163409120Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 19:44:00.177015 env[1184]: time="2024-02-09T19:44:00.163449819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 19:44:00.177015 env[1184]: time="2024-02-09T19:44:00.163476265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 19:44:00.177015 env[1184]: time="2024-02-09T19:44:00.163488681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 19:44:00.177015 env[1184]: time="2024-02-09T19:44:00.163500399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 19:44:00.177015 env[1184]: time="2024-02-09T19:44:00.163510804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 19:44:00.177015 env[1184]: time="2024-02-09T19:44:00.163520540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 19:44:00.177015 env[1184]: time="2024-02-09T19:44:00.163554379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 19:44:00.177015 env[1184]: time="2024-02-09T19:44:00.163565086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 19:44:00.177015 env[1184]: time="2024-02-09T19:44:00.163583322Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 19:44:00.177015 env[1184]: time="2024-02-09T19:44:00.163715785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 19:44:00.177015 env[1184]: time="2024-02-09T19:44:00.163730310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 19:44:00.177015 env[1184]: time="2024-02-09T19:44:00.163740268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 19:44:00.168434 systemd[1]: Started containerd.service.
Feb 9 19:44:00.177317 env[1184]: time="2024-02-09T19:44:00.163750315Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 19:44:00.177317 env[1184]: time="2024-02-09T19:44:00.163764305Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 19:44:00.177317 env[1184]: time="2024-02-09T19:44:00.163791208Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 19:44:00.177317 env[1184]: time="2024-02-09T19:44:00.163808143Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 19:44:00.177317 env[1184]: time="2024-02-09T19:44:00.163840573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 19:44:00.169487 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 19:44:00.177461 env[1184]: time="2024-02-09T19:44:00.164038433Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384
DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:44:00.177461 env[1184]: time="2024-02-09T19:44:00.164171314Z" level=info msg="Connect containerd service" Feb 9 19:44:00.177461 env[1184]: time="2024-02-09T19:44:00.164201034Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:44:00.177461 env[1184]: time="2024-02-09T19:44:00.164746521Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:44:00.177461 env[1184]: time="2024-02-09T19:44:00.164953834Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:44:00.177461 env[1184]: time="2024-02-09T19:44:00.164981970Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 19:44:00.177461 env[1184]: time="2024-02-09T19:44:00.165024574Z" level=info msg="containerd successfully booted in 0.895920s" Feb 9 19:44:00.177461 env[1184]: time="2024-02-09T19:44:00.166266148Z" level=info msg="Start subscribing containerd event" Feb 9 19:44:00.177461 env[1184]: time="2024-02-09T19:44:00.166387954Z" level=info msg="Start recovering state" Feb 9 19:44:00.177461 env[1184]: time="2024-02-09T19:44:00.166578031Z" level=info msg="Start event monitor" Feb 9 19:44:00.177461 env[1184]: time="2024-02-09T19:44:00.166594839Z" level=info msg="Start snapshots syncer" Feb 9 19:44:00.177461 env[1184]: time="2024-02-09T19:44:00.166604225Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:44:00.177461 env[1184]: time="2024-02-09T19:44:00.166610375Z" level=info msg="Start streaming server" Feb 9 19:44:00.178631 systemd-networkd[1068]: eth0: Gained IPv6LL Feb 9 19:44:00.185068 tar[1178]: ./ipvlan Feb 9 19:44:00.212349 tar[1178]: ./bandwidth Feb 9 19:44:00.246308 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:44:00.487468 sshd_keygen[1192]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:44:00.504980 systemd[1]: Finished sshd-keygen.service. Feb 9 19:44:00.506952 systemd[1]: Starting issuegen.service... Feb 9 19:44:00.511706 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:44:00.511879 systemd[1]: Finished issuegen.service. Feb 9 19:44:00.513520 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:44:00.517709 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:44:00.519296 systemd[1]: Started getty@tty1.service. Feb 9 19:44:00.520686 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:44:00.521383 systemd[1]: Reached target getty.target. Feb 9 19:44:00.522010 systemd[1]: Reached target multi-user.target. Feb 9 19:44:00.523431 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
Feb 9 19:44:00.528949 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:44:00.529125 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:44:00.530037 systemd[1]: Startup finished in 6.104s (kernel) + 5.320s (userspace) = 11.424s. Feb 9 19:44:04.464801 systemd[1]: Created slice system-sshd.slice. Feb 9 19:44:04.465764 systemd[1]: Started sshd@0-10.0.0.52:22-10.0.0.1:43672.service. Feb 9 19:44:04.563637 sshd[1259]: Accepted publickey for core from 10.0.0.1 port 43672 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:04.564822 sshd[1259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:04.572197 systemd-logind[1168]: New session 1 of user core. Feb 9 19:44:04.573089 systemd[1]: Created slice user-500.slice. Feb 9 19:44:04.573943 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:44:04.580443 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:44:04.581507 systemd[1]: Starting user@500.service... Feb 9 19:44:04.584366 (systemd)[1264]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:04.645049 systemd[1264]: Queued start job for default target default.target. Feb 9 19:44:04.645238 systemd[1264]: Reached target paths.target. Feb 9 19:44:04.645253 systemd[1264]: Reached target sockets.target. Feb 9 19:44:04.645264 systemd[1264]: Reached target timers.target. Feb 9 19:44:04.645274 systemd[1264]: Reached target basic.target. Feb 9 19:44:04.645312 systemd[1264]: Reached target default.target. Feb 9 19:44:04.645336 systemd[1264]: Startup finished in 56ms. Feb 9 19:44:04.645413 systemd[1]: Started user@500.service. Feb 9 19:44:04.646357 systemd[1]: Started session-1.scope. Feb 9 19:44:04.695635 systemd[1]: Started sshd@1-10.0.0.52:22-10.0.0.1:43678.service. 
Feb 9 19:44:04.737017 sshd[1273]: Accepted publickey for core from 10.0.0.1 port 43678 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:04.737995 sshd[1273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:04.741139 systemd-logind[1168]: New session 2 of user core. Feb 9 19:44:04.741830 systemd[1]: Started session-2.scope. Feb 9 19:44:04.794600 sshd[1273]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:04.796483 systemd[1]: Started sshd@2-10.0.0.52:22-10.0.0.1:43684.service. Feb 9 19:44:04.797127 systemd-logind[1168]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:44:04.797326 systemd[1]: sshd@1-10.0.0.52:22-10.0.0.1:43678.service: Deactivated successfully. Feb 9 19:44:04.797873 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:44:04.798313 systemd-logind[1168]: Removed session 2. Feb 9 19:44:04.835305 sshd[1278]: Accepted publickey for core from 10.0.0.1 port 43684 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:04.836383 sshd[1278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:04.839845 systemd-logind[1168]: New session 3 of user core. Feb 9 19:44:04.840777 systemd[1]: Started session-3.scope. Feb 9 19:44:04.889012 sshd[1278]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:04.891507 systemd[1]: Started sshd@3-10.0.0.52:22-10.0.0.1:43690.service. Feb 9 19:44:04.891926 systemd[1]: sshd@2-10.0.0.52:22-10.0.0.1:43684.service: Deactivated successfully. Feb 9 19:44:04.892951 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 19:44:04.893365 systemd-logind[1168]: Session 3 logged out. Waiting for processes to exit. Feb 9 19:44:04.894436 systemd-logind[1168]: Removed session 3. 
Feb 9 19:44:04.931749 sshd[1286]: Accepted publickey for core from 10.0.0.1 port 43690 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:04.932711 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:04.936260 systemd-logind[1168]: New session 4 of user core. Feb 9 19:44:04.936942 systemd[1]: Started session-4.scope. Feb 9 19:44:04.990184 sshd[1286]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:04.992115 systemd[1]: Started sshd@4-10.0.0.52:22-10.0.0.1:43704.service. Feb 9 19:44:04.993647 systemd[1]: sshd@3-10.0.0.52:22-10.0.0.1:43690.service: Deactivated successfully. Feb 9 19:44:04.994794 systemd-logind[1168]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:44:04.994802 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:44:04.995700 systemd-logind[1168]: Removed session 4. Feb 9 19:44:05.031701 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 43704 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:05.032672 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:05.035920 systemd-logind[1168]: New session 5 of user core. Feb 9 19:44:05.036671 systemd[1]: Started session-5.scope. Feb 9 19:44:05.092925 sudo[1298]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:44:05.093078 sudo[1298]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:44:05.608079 systemd[1]: Reloading. 
Feb 9 19:44:05.664959 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-02-09T19:44:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:44:05.664988 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-02-09T19:44:05Z" level=info msg="torcx already run" Feb 9 19:44:05.742875 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:44:05.742893 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:44:05.764732 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:44:05.850309 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:44:05.855163 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:44:05.855699 systemd[1]: Reached target network-online.target. Feb 9 19:44:05.857263 systemd[1]: Started kubelet.service. Feb 9 19:44:05.865906 systemd[1]: Starting coreos-metadata.service... Feb 9 19:44:05.871594 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 9 19:44:05.871770 systemd[1]: Finished coreos-metadata.service. 
Feb 9 19:44:05.912268 kubelet[1376]: E0209 19:44:05.912194 1376 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:44:05.913802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:44:05.913930 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:44:06.040005 systemd[1]: Stopped kubelet.service. Feb 9 19:44:06.055567 systemd[1]: Reloading. Feb 9 19:44:06.109417 /usr/lib/systemd/system-generators/torcx-generator[1452]: time="2024-02-09T19:44:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:44:06.109451 /usr/lib/systemd/system-generators/torcx-generator[1452]: time="2024-02-09T19:44:06Z" level=info msg="torcx already run" Feb 9 19:44:06.180797 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:44:06.180814 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:44:06.197056 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:44:06.283019 systemd[1]: Started kubelet.service. Feb 9 19:44:06.329052 kubelet[1499]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 9 19:44:06.329052 kubelet[1499]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:44:06.329479 kubelet[1499]: I0209 19:44:06.329072 1499 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:44:06.330554 kubelet[1499]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:44:06.330554 kubelet[1499]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:44:06.638316 kubelet[1499]: I0209 19:44:06.638275 1499 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:44:06.638316 kubelet[1499]: I0209 19:44:06.638303 1499 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:44:06.638850 kubelet[1499]: I0209 19:44:06.638831 1499 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:44:06.641213 kubelet[1499]: I0209 19:44:06.641159 1499 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:44:06.645085 kubelet[1499]: I0209 19:44:06.645060 1499 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:44:06.645366 kubelet[1499]: I0209 19:44:06.645347 1499 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:44:06.645435 kubelet[1499]: I0209 19:44:06.645419 1499 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:44:06.645520 kubelet[1499]: I0209 19:44:06.645438 1499 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:44:06.645520 kubelet[1499]: I0209 19:44:06.645448 1499 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:44:06.645591 kubelet[1499]: I0209 19:44:06.645567 1499 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 19:44:06.648690 kubelet[1499]: I0209 19:44:06.648660 1499 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:44:06.648690 kubelet[1499]: I0209 19:44:06.648694 1499 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:44:06.648855 kubelet[1499]: I0209 19:44:06.648725 1499 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:44:06.648855 kubelet[1499]: I0209 19:44:06.648739 1499 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:44:06.648975 kubelet[1499]: E0209 19:44:06.648961 1499 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:06.649083 kubelet[1499]: E0209 19:44:06.649071 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:06.649298 kubelet[1499]: I0209 19:44:06.649283 1499 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:44:06.649502 kubelet[1499]: W0209 19:44:06.649481 1499 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:44:06.649830 kubelet[1499]: I0209 19:44:06.649810 1499 server.go:1186] "Started kubelet" Feb 9 19:44:06.650404 kubelet[1499]: I0209 19:44:06.650393 1499 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:44:06.651054 kubelet[1499]: I0209 19:44:06.651041 1499 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:44:06.651744 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 19:44:06.651784 kubelet[1499]: E0209 19:44:06.650847 1499 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:44:06.651784 kubelet[1499]: E0209 19:44:06.651750 1499 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:44:06.651841 kubelet[1499]: I0209 19:44:06.651828 1499 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:44:06.652039 kubelet[1499]: I0209 19:44:06.652024 1499 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:44:06.652127 kubelet[1499]: I0209 19:44:06.652103 1499 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:44:06.663328 kubelet[1499]: W0209 19:44:06.663291 1499 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.52" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:44:06.663328 kubelet[1499]: E0209 19:44:06.663331 1499 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.52" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:44:06.663475 kubelet[1499]: E0209 19:44:06.663368 1499 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.52" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:44:06.663475 kubelet[1499]: W0209 19:44:06.663410 1499 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" 
cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:44:06.663475 kubelet[1499]: E0209 19:44:06.663422 1499 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:44:06.663575 kubelet[1499]: W0209 19:44:06.663509 1499 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:44:06.663575 kubelet[1499]: E0209 19:44:06.663545 1499 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:44:06.663699 kubelet[1499]: E0209 19:44:06.663593 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fac0cb97", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 649793431, time.Local), 
LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 649793431, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:44:06.664763 kubelet[1499]: E0209 19:44:06.664703 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fade7c14", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 651739156, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 651739156, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:44:06.684919 kubelet[1499]: I0209 19:44:06.684890 1499 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:44:06.684919 kubelet[1499]: I0209 19:44:06.684909 1499 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:44:06.684919 kubelet[1499]: I0209 19:44:06.684924 1499 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:44:06.685234 kubelet[1499]: E0209 19:44:06.685140 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce10f9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.52 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684217593, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684217593, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:44:06.685878 kubelet[1499]: E0209 19:44:06.685835 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce3a8c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.52 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684228236, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684228236, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:44:06.686550 kubelet[1499]: E0209 19:44:06.686495 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce4810", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.52 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684231696, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684231696, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:44:06.687495 kubelet[1499]: I0209 19:44:06.687461 1499 policy_none.go:49] "None policy: Start"
Feb 9 19:44:06.688055 kubelet[1499]: I0209 19:44:06.688038 1499 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 19:44:06.688103 kubelet[1499]: I0209 19:44:06.688058 1499 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 19:44:06.694582 kubelet[1499]: I0209 19:44:06.694517 1499 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 19:44:06.694820 kubelet[1499]: I0209 19:44:06.694797 1499 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 19:44:06.696338 kubelet[1499]: E0209 19:44:06.696317 1499 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.52\" not found"
Feb 9 19:44:06.697190 kubelet[1499]: E0209 19:44:06.697095 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fd7dc8ed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 695733485, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 695733485, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:44:06.753511 kubelet[1499]: I0209 19:44:06.753474 1499 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.52"
Feb 9 19:44:06.754364 kubelet[1499]: E0209 19:44:06.754339 1499 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.52"
Feb 9 19:44:06.754742 kubelet[1499]: E0209 19:44:06.754683 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce10f9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.52 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684217593, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 753431420, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b24956fcce10f9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:44:06.755506 kubelet[1499]: E0209 19:44:06.755434 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce3a8c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.52 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684228236, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 753436915, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b24956fcce3a8c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:44:06.756256 kubelet[1499]: E0209 19:44:06.756210 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce4810", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.52 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684231696, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 753440009, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b24956fcce4810" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:44:06.769407 kubelet[1499]: I0209 19:44:06.769386 1499 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 19:44:06.786212 kubelet[1499]: I0209 19:44:06.786163 1499 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 19:44:06.786212 kubelet[1499]: I0209 19:44:06.786192 1499 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 9 19:44:06.786212 kubelet[1499]: I0209 19:44:06.786216 1499 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 9 19:44:06.786389 kubelet[1499]: E0209 19:44:06.786269 1499 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 9 19:44:06.787043 kubelet[1499]: W0209 19:44:06.786953 1499 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:44:06.787043 kubelet[1499]: E0209 19:44:06.786990 1499 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:44:06.864880 kubelet[1499]: E0209 19:44:06.864851 1499 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.52" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 19:44:06.956018 kubelet[1499]: I0209 19:44:06.955914 1499 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.52"
Feb 9 19:44:06.957228 kubelet[1499]: E0209 19:44:06.957203 1499 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.52"
Feb 9 19:44:06.957228 kubelet[1499]: E0209 19:44:06.957149 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce10f9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.52 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684217593, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 955882095, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b24956fcce10f9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:44:06.957994 kubelet[1499]: E0209 19:44:06.957951 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce3a8c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.52 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684228236, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 955887047, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b24956fcce3a8c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:44:07.051405 kubelet[1499]: E0209 19:44:07.051309 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce4810", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.52 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684231696, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 955889369, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b24956fcce4810" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:44:07.266936 kubelet[1499]: E0209 19:44:07.266790 1499 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.52" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 19:44:07.358931 kubelet[1499]: I0209 19:44:07.358901 1499 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.52"
Feb 9 19:44:07.360184 kubelet[1499]: E0209 19:44:07.360159 1499 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.52"
Feb 9 19:44:07.360269 kubelet[1499]: E0209 19:44:07.360136 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce10f9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.52 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684217593, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 7, 358839750, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b24956fcce10f9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:44:07.451689 kubelet[1499]: E0209 19:44:07.451610 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce3a8c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.52 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684228236, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 7, 358865282, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b24956fcce3a8c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:44:07.623494 kubelet[1499]: W0209 19:44:07.623398 1499 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:44:07.623494 kubelet[1499]: E0209 19:44:07.623427 1499 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:44:07.649640 kubelet[1499]: E0209 19:44:07.649617 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:44:07.652012 kubelet[1499]: E0209 19:44:07.651926 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce4810", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.52 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684231696, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 7, 358872587, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b24956fcce4810" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:44:07.770811 kubelet[1499]: W0209 19:44:07.770780 1499 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.52" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:44:07.770811 kubelet[1499]: E0209 19:44:07.770803 1499 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.52" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:44:08.068742 kubelet[1499]: E0209 19:44:08.068618 1499 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.52" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 19:44:08.161878 kubelet[1499]: I0209 19:44:08.161836 1499 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.52"
Feb 9 19:44:08.163167 kubelet[1499]: E0209 19:44:08.163065 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce10f9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.52 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684217593, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 8, 161778406, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b24956fcce10f9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:44:08.163391 kubelet[1499]: E0209 19:44:08.163357 1499 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.52"
Feb 9 19:44:08.163992 kubelet[1499]: E0209 19:44:08.163905 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce3a8c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.52 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684228236, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 8, 161790531, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b24956fcce3a8c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:44:08.213134 kubelet[1499]: W0209 19:44:08.213104 1499 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:44:08.213134 kubelet[1499]: E0209 19:44:08.213135 1499 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:44:08.251605 kubelet[1499]: E0209 19:44:08.251391 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce4810", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.52 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684231696, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 8, 161794120, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b24956fcce4810" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:44:08.267211 kubelet[1499]: W0209 19:44:08.267164 1499 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:44:08.267211 kubelet[1499]: E0209 19:44:08.267210 1499 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:44:08.650664 kubelet[1499]: E0209 19:44:08.650605 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:44:09.651311 kubelet[1499]: E0209 19:44:09.651267 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:44:09.670386 kubelet[1499]: E0209 19:44:09.670348 1499 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.52" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 19:44:09.764432 kubelet[1499]: I0209 19:44:09.764401 1499 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.52"
Feb 9 19:44:09.765470 kubelet[1499]: E0209 19:44:09.765447 1499 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.52"
Feb 9 19:44:09.765518 kubelet[1499]: E0209 19:44:09.765457 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce10f9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.52 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684217593, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 9, 764343746, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b24956fcce10f9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:44:09.766550 kubelet[1499]: E0209 19:44:09.766497 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce3a8c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.52 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684228236, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 9, 764361486, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b24956fcce3a8c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:44:09.767189 kubelet[1499]: E0209 19:44:09.767144 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce4810", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.52 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684231696, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 9, 764364951, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b24956fcce4810" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:44:10.185057 kubelet[1499]: W0209 19:44:10.185021 1499 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.52" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:44:10.185057 kubelet[1499]: E0209 19:44:10.185051 1499 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.52" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:44:10.563616 kubelet[1499]: W0209 19:44:10.563510 1499 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:44:10.563616 kubelet[1499]: E0209 19:44:10.563549 1499 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:44:10.651971 kubelet[1499]: E0209 19:44:10.651923 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:44:10.910640 kubelet[1499]: W0209 19:44:10.910489 1499 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:44:10.910640 kubelet[1499]: E0209 19:44:10.910524 1499 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:44:10.954737 kubelet[1499]: W0209 19:44:10.954691 1499 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:44:10.954737 kubelet[1499]: E0209 19:44:10.954734 1499 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:44:11.652523 kubelet[1499]: E0209 19:44:11.652469 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:12.653365 kubelet[1499]: E0209 19:44:12.653324 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:12.871347 kubelet[1499]: E0209 19:44:12.871310 1499 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.52" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:44:12.966121 kubelet[1499]: I0209 19:44:12.966032 1499 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.52" Feb 9 19:44:12.967303 kubelet[1499]: E0209 19:44:12.967260 1499 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.52" Feb 9 19:44:12.967354 kubelet[1499]: E0209 19:44:12.967212 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce10f9", GenerateName:"", 
Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.52 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684217593, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 12, 965998506, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b24956fcce10f9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:44:12.968221 kubelet[1499]: E0209 19:44:12.968167 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce3a8c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.52 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684228236, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 12, 966004322, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b24956fcce3a8c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:44:12.968917 kubelet[1499]: E0209 19:44:12.968875 1499 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.52.17b24956fcce4810", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.52", UID:"10.0.0.52", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.52 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.52"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 44, 6, 684231696, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 44, 12, 966007050, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.52.17b24956fcce4810" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:44:13.654350 kubelet[1499]: E0209 19:44:13.654294 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:14.191239 kubelet[1499]: W0209 19:44:14.191201 1499 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:44:14.191239 kubelet[1499]: E0209 19:44:14.191234 1499 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:44:14.403673 kubelet[1499]: W0209 19:44:14.403634 1499 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:44:14.403673 kubelet[1499]: E0209 19:44:14.403673 1499 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:44:14.440462 kubelet[1499]: W0209 19:44:14.440424 1499 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.52" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:44:14.440462 kubelet[1499]: E0209 19:44:14.440461 1499 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.52" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 
Feb 9 19:44:14.655461 kubelet[1499]: E0209 19:44:14.655364 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:15.656020 kubelet[1499]: E0209 19:44:15.655974 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:16.641620 kubelet[1499]: I0209 19:44:16.641574 1499 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 19:44:16.656935 kubelet[1499]: E0209 19:44:16.656899 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:16.696401 kubelet[1499]: E0209 19:44:16.696376 1499 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.52\" not found" Feb 9 19:44:17.010893 kubelet[1499]: E0209 19:44:17.010789 1499 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.52" not found Feb 9 19:44:17.657084 kubelet[1499]: E0209 19:44:17.657041 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:18.266453 kubelet[1499]: E0209 19:44:18.266419 1499 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.52" not found Feb 9 19:44:18.657575 kubelet[1499]: E0209 19:44:18.657444 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:19.275933 kubelet[1499]: E0209 19:44:19.275908 1499 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.52\" not found" node="10.0.0.52" Feb 9 19:44:19.368856 kubelet[1499]: I0209 19:44:19.368835 1499 
kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.52" Feb 9 19:44:19.658456 kubelet[1499]: E0209 19:44:19.658398 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:19.668219 kubelet[1499]: I0209 19:44:19.668187 1499 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.52" Feb 9 19:44:19.681440 kubelet[1499]: E0209 19:44:19.681405 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:19.782127 kubelet[1499]: E0209 19:44:19.782079 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:19.797639 sudo[1298]: pam_unix(sudo:session): session closed for user root Feb 9 19:44:19.799063 sshd[1292]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:19.801802 systemd[1]: sshd@4-10.0.0.52:22-10.0.0.1:43704.service: Deactivated successfully. Feb 9 19:44:19.802811 systemd-logind[1168]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:44:19.802856 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:44:19.803843 systemd-logind[1168]: Removed session 5. 
Feb 9 19:44:19.882805 kubelet[1499]: E0209 19:44:19.882766 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:19.983437 kubelet[1499]: E0209 19:44:19.983318 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:20.083887 kubelet[1499]: E0209 19:44:20.083854 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:20.184345 kubelet[1499]: E0209 19:44:20.184312 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:20.285342 kubelet[1499]: E0209 19:44:20.285240 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:20.385771 kubelet[1499]: E0209 19:44:20.385732 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:20.486206 kubelet[1499]: E0209 19:44:20.486172 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:20.586699 kubelet[1499]: E0209 19:44:20.586597 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:20.659210 kubelet[1499]: E0209 19:44:20.659178 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:20.687328 kubelet[1499]: E0209 19:44:20.687306 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:20.787987 kubelet[1499]: E0209 19:44:20.787956 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:20.888227 kubelet[1499]: E0209 19:44:20.888129 1499 kubelet_node_status.go:458] "Error 
getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:20.988547 kubelet[1499]: E0209 19:44:20.988503 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:21.088936 kubelet[1499]: E0209 19:44:21.088907 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:21.189393 kubelet[1499]: E0209 19:44:21.189324 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:21.289822 kubelet[1499]: E0209 19:44:21.289787 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:21.390323 kubelet[1499]: E0209 19:44:21.390286 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:21.490873 kubelet[1499]: E0209 19:44:21.490743 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:21.591216 kubelet[1499]: E0209 19:44:21.591182 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:21.659795 kubelet[1499]: E0209 19:44:21.659768 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:21.691931 kubelet[1499]: E0209 19:44:21.691911 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:21.792346 kubelet[1499]: E0209 19:44:21.792250 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:21.893361 kubelet[1499]: E0209 19:44:21.893322 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:21.993649 
kubelet[1499]: E0209 19:44:21.993629 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:22.094089 kubelet[1499]: E0209 19:44:22.094007 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:22.194578 kubelet[1499]: E0209 19:44:22.194550 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:22.294936 kubelet[1499]: E0209 19:44:22.294917 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:22.395360 kubelet[1499]: E0209 19:44:22.395280 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:22.495754 kubelet[1499]: E0209 19:44:22.495722 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:22.596118 kubelet[1499]: E0209 19:44:22.596087 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:22.660752 kubelet[1499]: E0209 19:44:22.660685 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:22.696886 kubelet[1499]: E0209 19:44:22.696854 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:22.797182 kubelet[1499]: E0209 19:44:22.797147 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:22.897405 kubelet[1499]: E0209 19:44:22.897370 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:22.998525 kubelet[1499]: E0209 19:44:22.998405 1499 kubelet_node_status.go:458] "Error getting the current 
node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:23.098893 kubelet[1499]: E0209 19:44:23.098869 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:23.199435 kubelet[1499]: E0209 19:44:23.199389 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:23.299947 kubelet[1499]: E0209 19:44:23.299866 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:23.400298 kubelet[1499]: E0209 19:44:23.400267 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:23.500757 kubelet[1499]: E0209 19:44:23.500724 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:23.601217 kubelet[1499]: E0209 19:44:23.601146 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:23.660905 kubelet[1499]: E0209 19:44:23.660875 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:23.702112 kubelet[1499]: E0209 19:44:23.702075 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:23.802856 kubelet[1499]: E0209 19:44:23.802816 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:23.903101 kubelet[1499]: E0209 19:44:23.903012 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:24.003481 kubelet[1499]: E0209 19:44:24.003445 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:24.103877 kubelet[1499]: E0209 
19:44:24.103841 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:24.204554 kubelet[1499]: E0209 19:44:24.204451 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:24.304932 kubelet[1499]: E0209 19:44:24.304905 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:24.405295 kubelet[1499]: E0209 19:44:24.405270 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:24.505814 kubelet[1499]: E0209 19:44:24.505742 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:24.606252 kubelet[1499]: E0209 19:44:24.606210 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:24.662000 kubelet[1499]: E0209 19:44:24.661963 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:24.707219 kubelet[1499]: E0209 19:44:24.707187 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:24.807943 kubelet[1499]: E0209 19:44:24.807850 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:24.908137 kubelet[1499]: E0209 19:44:24.908089 1499 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.52\" not found" Feb 9 19:44:25.009407 kubelet[1499]: I0209 19:44:25.009374 1499 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 19:44:25.009730 env[1184]: time="2024-02-09T19:44:25.009689692Z" level=info msg="No cni config template is specified, wait for other system 
components to drop the config." Feb 9 19:44:25.009991 kubelet[1499]: I0209 19:44:25.009861 1499 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 19:44:25.660693 kubelet[1499]: I0209 19:44:25.660650 1499 apiserver.go:52] "Watching apiserver" Feb 9 19:44:25.662300 kubelet[1499]: E0209 19:44:25.662277 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:25.662767 kubelet[1499]: I0209 19:44:25.662748 1499 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:44:25.662819 kubelet[1499]: I0209 19:44:25.662808 1499 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:44:25.753367 kubelet[1499]: I0209 19:44:25.753333 1499 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:44:25.838934 kubelet[1499]: I0209 19:44:25.838888 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-cilium-run\") pod \"cilium-zhvlr\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " pod="kube-system/cilium-zhvlr" Feb 9 19:44:25.839073 kubelet[1499]: I0209 19:44:25.838957 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-cni-path\") pod \"cilium-zhvlr\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " pod="kube-system/cilium-zhvlr" Feb 9 19:44:25.839073 kubelet[1499]: I0209 19:44:25.838987 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-lib-modules\") pod \"cilium-zhvlr\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " pod="kube-system/cilium-zhvlr" Feb 9 19:44:25.839073 kubelet[1499]: I0209 
19:44:25.839014 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-xtables-lock\") pod \"cilium-zhvlr\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " pod="kube-system/cilium-zhvlr" Feb 9 19:44:25.839073 kubelet[1499]: I0209 19:44:25.839059 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58hwv\" (UniqueName: \"kubernetes.io/projected/870a43b7-8fcf-4396-907f-1bccc87ecbc8-kube-api-access-58hwv\") pod \"cilium-zhvlr\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " pod="kube-system/cilium-zhvlr" Feb 9 19:44:25.839176 kubelet[1499]: I0209 19:44:25.839132 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9a1d1008-3df8-4da2-b5cd-26e36888d692-kube-proxy\") pod \"kube-proxy-rpnh5\" (UID: \"9a1d1008-3df8-4da2-b5cd-26e36888d692\") " pod="kube-system/kube-proxy-rpnh5" Feb 9 19:44:25.839176 kubelet[1499]: I0209 19:44:25.839168 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a1d1008-3df8-4da2-b5cd-26e36888d692-lib-modules\") pod \"kube-proxy-rpnh5\" (UID: \"9a1d1008-3df8-4da2-b5cd-26e36888d692\") " pod="kube-system/kube-proxy-rpnh5" Feb 9 19:44:25.839221 kubelet[1499]: I0209 19:44:25.839206 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/870a43b7-8fcf-4396-907f-1bccc87ecbc8-clustermesh-secrets\") pod \"cilium-zhvlr\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " pod="kube-system/cilium-zhvlr" Feb 9 19:44:25.839246 kubelet[1499]: I0209 19:44:25.839235 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/870a43b7-8fcf-4396-907f-1bccc87ecbc8-cilium-config-path\") pod \"cilium-zhvlr\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " pod="kube-system/cilium-zhvlr" Feb 9 19:44:25.839296 kubelet[1499]: I0209 19:44:25.839280 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/870a43b7-8fcf-4396-907f-1bccc87ecbc8-hubble-tls\") pod \"cilium-zhvlr\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " pod="kube-system/cilium-zhvlr" Feb 9 19:44:25.839374 kubelet[1499]: I0209 19:44:25.839361 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k9nx\" (UniqueName: \"kubernetes.io/projected/9a1d1008-3df8-4da2-b5cd-26e36888d692-kube-api-access-5k9nx\") pod \"kube-proxy-rpnh5\" (UID: \"9a1d1008-3df8-4da2-b5cd-26e36888d692\") " pod="kube-system/kube-proxy-rpnh5" Feb 9 19:44:25.839396 kubelet[1499]: I0209 19:44:25.839394 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-host-proc-sys-kernel\") pod \"cilium-zhvlr\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " pod="kube-system/cilium-zhvlr" Feb 9 19:44:25.839417 kubelet[1499]: I0209 19:44:25.839413 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-hostproc\") pod \"cilium-zhvlr\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " pod="kube-system/cilium-zhvlr" Feb 9 19:44:25.839442 kubelet[1499]: I0209 19:44:25.839432 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-cilium-cgroup\") pod \"cilium-zhvlr\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " pod="kube-system/cilium-zhvlr" Feb 9 19:44:25.839484 kubelet[1499]: I0209 19:44:25.839469 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-etc-cni-netd\") pod \"cilium-zhvlr\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " pod="kube-system/cilium-zhvlr" Feb 9 19:44:25.839519 kubelet[1499]: I0209 19:44:25.839508 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-host-proc-sys-net\") pod \"cilium-zhvlr\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " pod="kube-system/cilium-zhvlr" Feb 9 19:44:25.839566 kubelet[1499]: I0209 19:44:25.839555 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a1d1008-3df8-4da2-b5cd-26e36888d692-xtables-lock\") pod \"kube-proxy-rpnh5\" (UID: \"9a1d1008-3df8-4da2-b5cd-26e36888d692\") " pod="kube-system/kube-proxy-rpnh5" Feb 9 19:44:25.839611 kubelet[1499]: I0209 19:44:25.839598 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-bpf-maps\") pod \"cilium-zhvlr\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " pod="kube-system/cilium-zhvlr" Feb 9 19:44:25.839641 kubelet[1499]: I0209 19:44:25.839625 1499 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:44:25.965888 kubelet[1499]: E0209 19:44:25.965751 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:25.966499 env[1184]: time="2024-02-09T19:44:25.966449871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zhvlr,Uid:870a43b7-8fcf-4396-907f-1bccc87ecbc8,Namespace:kube-system,Attempt:0,}" Feb 9 19:44:26.267040 kubelet[1499]: E0209 19:44:26.266918 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:26.267473 env[1184]: time="2024-02-09T19:44:26.267427479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rpnh5,Uid:9a1d1008-3df8-4da2-b5cd-26e36888d692,Namespace:kube-system,Attempt:0,}" Feb 9 19:44:26.643290 env[1184]: time="2024-02-09T19:44:26.643164037Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:26.649166 kubelet[1499]: E0209 19:44:26.649122 1499 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:26.658854 env[1184]: time="2024-02-09T19:44:26.658808993Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:26.662495 kubelet[1499]: E0209 19:44:26.662467 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:26.677907 env[1184]: time="2024-02-09T19:44:26.677851730Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:26.680194 env[1184]: time="2024-02-09T19:44:26.680146537Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 19:44:26.681023 env[1184]: time="2024-02-09T19:44:26.680976776Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:26.682289 env[1184]: time="2024-02-09T19:44:26.682259706Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:26.694209 env[1184]: time="2024-02-09T19:44:26.694155953Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:26.706729 env[1184]: time="2024-02-09T19:44:26.706688676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:26.723721 env[1184]: time="2024-02-09T19:44:26.723665679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:44:26.723721 env[1184]: time="2024-02-09T19:44:26.723704454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:44:26.723721 env[1184]: time="2024-02-09T19:44:26.723721171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:44:26.723943 env[1184]: time="2024-02-09T19:44:26.723855036Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3 pid=1593 runtime=io.containerd.runc.v2 Feb 9 19:44:26.727257 env[1184]: time="2024-02-09T19:44:26.727204101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:44:26.727257 env[1184]: time="2024-02-09T19:44:26.727236992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:44:26.727257 env[1184]: time="2024-02-09T19:44:26.727246110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:44:26.727424 env[1184]: time="2024-02-09T19:44:26.727339588Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b386d5dde3ef2acc06400d84f60a4b14df3460a8e696c4b35229865681deb0b8 pid=1611 runtime=io.containerd.runc.v2 Feb 9 19:44:26.760089 env[1184]: time="2024-02-09T19:44:26.760038248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zhvlr,Uid:870a43b7-8fcf-4396-907f-1bccc87ecbc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3\"" Feb 9 19:44:26.761560 env[1184]: time="2024-02-09T19:44:26.761027655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rpnh5,Uid:9a1d1008-3df8-4da2-b5cd-26e36888d692,Namespace:kube-system,Attempt:0,} returns sandbox id \"b386d5dde3ef2acc06400d84f60a4b14df3460a8e696c4b35229865681deb0b8\"" Feb 9 19:44:26.762584 kubelet[1499]: E0209 19:44:26.762179 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:26.762858 kubelet[1499]: E0209 19:44:26.762848 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:26.763604 env[1184]: time="2024-02-09T19:44:26.763583267Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:44:26.947176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2914204360.mount: Deactivated successfully. Feb 9 19:44:27.662795 kubelet[1499]: E0209 19:44:27.662754 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:28.663805 kubelet[1499]: E0209 19:44:28.663768 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:29.663982 kubelet[1499]: E0209 19:44:29.663939 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:30.664650 kubelet[1499]: E0209 19:44:30.664612 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:31.665620 kubelet[1499]: E0209 19:44:31.665579 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:32.019873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2754173705.mount: Deactivated successfully. 
Feb 9 19:44:32.666198 kubelet[1499]: E0209 19:44:32.666170 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:33.666589 kubelet[1499]: E0209 19:44:33.666561 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:34.667656 kubelet[1499]: E0209 19:44:34.667601 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:35.667986 kubelet[1499]: E0209 19:44:35.667949 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:35.914625 env[1184]: time="2024-02-09T19:44:35.914574592Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:35.916338 env[1184]: time="2024-02-09T19:44:35.916296081Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:35.918294 env[1184]: time="2024-02-09T19:44:35.918216157Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:35.918949 env[1184]: time="2024-02-09T19:44:35.918914951Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 19:44:35.919589 env[1184]: time="2024-02-09T19:44:35.919568908Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:44:35.920485 env[1184]: time="2024-02-09T19:44:35.920463503Z" level=info msg="CreateContainer within sandbox \"48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:44:35.931171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968442204.mount: Deactivated successfully. Feb 9 19:44:35.934166 env[1184]: time="2024-02-09T19:44:35.934132644Z" level=info msg="CreateContainer within sandbox \"48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"093056d377e3eacb87ec30d1306d42a97778fee08ba514f30de3c523e03d2f16\"" Feb 9 19:44:35.934733 env[1184]: time="2024-02-09T19:44:35.934701996Z" level=info msg="StartContainer for \"093056d377e3eacb87ec30d1306d42a97778fee08ba514f30de3c523e03d2f16\"" Feb 9 19:44:35.974867 env[1184]: time="2024-02-09T19:44:35.974812142Z" level=info msg="StartContainer for \"093056d377e3eacb87ec30d1306d42a97778fee08ba514f30de3c523e03d2f16\" returns successfully" Feb 9 19:44:36.513210 env[1184]: time="2024-02-09T19:44:36.513139655Z" level=info msg="shim disconnected" id=093056d377e3eacb87ec30d1306d42a97778fee08ba514f30de3c523e03d2f16 Feb 9 19:44:36.513210 env[1184]: time="2024-02-09T19:44:36.513196476Z" level=warning msg="cleaning up after shim disconnected" id=093056d377e3eacb87ec30d1306d42a97778fee08ba514f30de3c523e03d2f16 namespace=k8s.io Feb 9 19:44:36.513210 env[1184]: time="2024-02-09T19:44:36.513209431Z" level=info msg="cleaning up dead shim" Feb 9 19:44:36.520619 env[1184]: time="2024-02-09T19:44:36.520594151Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:44:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1721 runtime=io.containerd.runc.v2\n" Feb 9 19:44:36.668392 kubelet[1499]: E0209 19:44:36.668361 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:36.828033 kubelet[1499]: E0209 19:44:36.827793 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:36.829320 env[1184]: time="2024-02-09T19:44:36.829290087Z" level=info msg="CreateContainer within sandbox \"48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:44:36.855613 env[1184]: time="2024-02-09T19:44:36.855572030Z" level=info msg="CreateContainer within sandbox \"48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e6f23b61ca43e3a849a70173dffb2527e9f293f77e2b1545c6990b739d608c2e\"" Feb 9 19:44:36.856406 env[1184]: time="2024-02-09T19:44:36.856381697Z" level=info msg="StartContainer for \"e6f23b61ca43e3a849a70173dffb2527e9f293f77e2b1545c6990b739d608c2e\"" Feb 9 19:44:36.891773 env[1184]: time="2024-02-09T19:44:36.891718672Z" level=info msg="StartContainer for \"e6f23b61ca43e3a849a70173dffb2527e9f293f77e2b1545c6990b739d608c2e\" returns successfully" Feb 9 19:44:36.899898 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:44:36.900137 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:44:36.900269 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:44:36.901767 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:44:36.908004 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 19:44:36.920687 env[1184]: time="2024-02-09T19:44:36.920641527Z" level=info msg="shim disconnected" id=e6f23b61ca43e3a849a70173dffb2527e9f293f77e2b1545c6990b739d608c2e Feb 9 19:44:36.920687 env[1184]: time="2024-02-09T19:44:36.920682126Z" level=warning msg="cleaning up after shim disconnected" id=e6f23b61ca43e3a849a70173dffb2527e9f293f77e2b1545c6990b739d608c2e namespace=k8s.io Feb 9 19:44:36.920687 env[1184]: time="2024-02-09T19:44:36.920690774Z" level=info msg="cleaning up dead shim" Feb 9 19:44:36.927555 env[1184]: time="2024-02-09T19:44:36.926965381Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:44:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1786 runtime=io.containerd.runc.v2\n" Feb 9 19:44:36.928782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-093056d377e3eacb87ec30d1306d42a97778fee08ba514f30de3c523e03d2f16-rootfs.mount: Deactivated successfully. Feb 9 19:44:37.454936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2765126587.mount: Deactivated successfully. 
Feb 9 19:44:37.669010 kubelet[1499]: E0209 19:44:37.668966 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:37.830929 kubelet[1499]: E0209 19:44:37.830658 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:37.833794 env[1184]: time="2024-02-09T19:44:37.833730917Z" level=info msg="CreateContainer within sandbox \"48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:44:37.847480 env[1184]: time="2024-02-09T19:44:37.847439766Z" level=info msg="CreateContainer within sandbox \"48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"636fb5b48ad37a7cfe7d343dac7f0a37ebbe9be69c6162b051d94ea1e0165556\"" Feb 9 19:44:37.847957 env[1184]: time="2024-02-09T19:44:37.847929938Z" level=info msg="StartContainer for \"636fb5b48ad37a7cfe7d343dac7f0a37ebbe9be69c6162b051d94ea1e0165556\"" Feb 9 19:44:37.887710 env[1184]: time="2024-02-09T19:44:37.887667468Z" level=info msg="StartContainer for \"636fb5b48ad37a7cfe7d343dac7f0a37ebbe9be69c6162b051d94ea1e0165556\" returns successfully" Feb 9 19:44:37.901931 env[1184]: time="2024-02-09T19:44:37.901886549Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:38.143940 env[1184]: time="2024-02-09T19:44:38.143826344Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:38.145960 env[1184]: time="2024-02-09T19:44:38.145915806Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:38.146453 env[1184]: time="2024-02-09T19:44:38.146414083Z" level=info msg="shim disconnected" id=636fb5b48ad37a7cfe7d343dac7f0a37ebbe9be69c6162b051d94ea1e0165556 Feb 9 19:44:38.146529 env[1184]: time="2024-02-09T19:44:38.146456265Z" level=warning msg="cleaning up after shim disconnected" id=636fb5b48ad37a7cfe7d343dac7f0a37ebbe9be69c6162b051d94ea1e0165556 namespace=k8s.io Feb 9 19:44:38.146529 env[1184]: time="2024-02-09T19:44:38.146465493Z" level=info msg="cleaning up dead shim" Feb 9 19:44:38.148617 env[1184]: time="2024-02-09T19:44:38.148585174Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:38.149133 env[1184]: time="2024-02-09T19:44:38.149105694Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:44:38.150751 env[1184]: time="2024-02-09T19:44:38.150717971Z" level=info msg="CreateContainer within sandbox \"b386d5dde3ef2acc06400d84f60a4b14df3460a8e696c4b35229865681deb0b8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:44:38.153259 env[1184]: time="2024-02-09T19:44:38.153229363Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:44:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1842 runtime=io.containerd.runc.v2\n" Feb 9 19:44:38.162809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2807615899.mount: Deactivated successfully. 
Feb 9 19:44:38.164471 env[1184]: time="2024-02-09T19:44:38.164434796Z" level=info msg="CreateContainer within sandbox \"b386d5dde3ef2acc06400d84f60a4b14df3460a8e696c4b35229865681deb0b8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7504f30c0a6e23468bc75a789fe435abce03de7284946b1eceb9be304d97bdc0\"" Feb 9 19:44:38.164921 env[1184]: time="2024-02-09T19:44:38.164894308Z" level=info msg="StartContainer for \"7504f30c0a6e23468bc75a789fe435abce03de7284946b1eceb9be304d97bdc0\"" Feb 9 19:44:38.210253 env[1184]: time="2024-02-09T19:44:38.210174763Z" level=info msg="StartContainer for \"7504f30c0a6e23468bc75a789fe435abce03de7284946b1eceb9be304d97bdc0\" returns successfully" Feb 9 19:44:38.669118 kubelet[1499]: E0209 19:44:38.669074 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:38.833252 kubelet[1499]: E0209 19:44:38.833221 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:38.834746 kubelet[1499]: E0209 19:44:38.834709 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:38.836318 env[1184]: time="2024-02-09T19:44:38.836284920Z" level=info msg="CreateContainer within sandbox \"48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:44:38.841359 kubelet[1499]: I0209 19:44:38.841337 1499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rpnh5" podStartSLOduration=-9.223372017013475e+09 pod.CreationTimestamp="2024-02-09 19:44:19 +0000 UTC" firstStartedPulling="2024-02-09 19:44:26.763335955 +0000 UTC m=+20.477192538" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2024-02-09 19:44:38.841281331 +0000 UTC m=+32.555137924" watchObservedRunningTime="2024-02-09 19:44:38.841300057 +0000 UTC m=+32.555156640" Feb 9 19:44:38.849592 env[1184]: time="2024-02-09T19:44:38.849529710Z" level=info msg="CreateContainer within sandbox \"48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"48cfff3eb125656f08c4ececf9d72176c19ac2fa6c9d6d37c854d7c668bb4508\"" Feb 9 19:44:38.850002 env[1184]: time="2024-02-09T19:44:38.849972498Z" level=info msg="StartContainer for \"48cfff3eb125656f08c4ececf9d72176c19ac2fa6c9d6d37c854d7c668bb4508\"" Feb 9 19:44:38.893796 env[1184]: time="2024-02-09T19:44:38.893741763Z" level=info msg="StartContainer for \"48cfff3eb125656f08c4ececf9d72176c19ac2fa6c9d6d37c854d7c668bb4508\" returns successfully" Feb 9 19:44:38.909526 env[1184]: time="2024-02-09T19:44:38.909476824Z" level=info msg="shim disconnected" id=48cfff3eb125656f08c4ececf9d72176c19ac2fa6c9d6d37c854d7c668bb4508 Feb 9 19:44:38.909526 env[1184]: time="2024-02-09T19:44:38.909519677Z" level=warning msg="cleaning up after shim disconnected" id=48cfff3eb125656f08c4ececf9d72176c19ac2fa6c9d6d37c854d7c668bb4508 namespace=k8s.io Feb 9 19:44:38.909526 env[1184]: time="2024-02-09T19:44:38.909529395Z" level=info msg="cleaning up dead shim" Feb 9 19:44:38.915443 env[1184]: time="2024-02-09T19:44:38.915414280Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:44:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2039 runtime=io.containerd.runc.v2\n" Feb 9 19:44:39.669243 kubelet[1499]: E0209 19:44:39.669193 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:39.838744 kubelet[1499]: E0209 19:44:39.838720 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:39.838899 kubelet[1499]: E0209 19:44:39.838794 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:39.840493 env[1184]: time="2024-02-09T19:44:39.840457529Z" level=info msg="CreateContainer within sandbox \"48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:44:39.851969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4227746887.mount: Deactivated successfully. Feb 9 19:44:39.854089 env[1184]: time="2024-02-09T19:44:39.854030449Z" level=info msg="CreateContainer within sandbox \"48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd\"" Feb 9 19:44:39.854587 env[1184]: time="2024-02-09T19:44:39.854562690Z" level=info msg="StartContainer for \"39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd\"" Feb 9 19:44:39.893247 env[1184]: time="2024-02-09T19:44:39.893185141Z" level=info msg="StartContainer for \"39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd\" returns successfully" Feb 9 19:44:40.027674 kubelet[1499]: I0209 19:44:40.027504 1499 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:44:40.192576 kernel: Initializing XFRM netlink socket Feb 9 19:44:40.669560 kubelet[1499]: E0209 19:44:40.669498 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:40.841999 kubelet[1499]: E0209 19:44:40.841971 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:40.851923 
kubelet[1499]: I0209 19:44:40.851893 1499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zhvlr" podStartSLOduration=-9.22337201500291e+09 pod.CreationTimestamp="2024-02-09 19:44:19 +0000 UTC" firstStartedPulling="2024-02-09 19:44:26.763155266 +0000 UTC m=+20.477011849" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:44:40.851490765 +0000 UTC m=+34.565347348" watchObservedRunningTime="2024-02-09 19:44:40.851864348 +0000 UTC m=+34.565720931" Feb 9 19:44:41.312503 kubelet[1499]: I0209 19:44:41.312457 1499 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:44:41.424559 kubelet[1499]: I0209 19:44:41.424506 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh4jv\" (UniqueName: \"kubernetes.io/projected/29696c38-afd1-4bb1-ab63-b53ad8e37ec4-kube-api-access-xh4jv\") pod \"nginx-deployment-8ffc5cf85-5sp6l\" (UID: \"29696c38-afd1-4bb1-ab63-b53ad8e37ec4\") " pod="default/nginx-deployment-8ffc5cf85-5sp6l" Feb 9 19:44:41.615200 env[1184]: time="2024-02-09T19:44:41.615094749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-5sp6l,Uid:29696c38-afd1-4bb1-ab63-b53ad8e37ec4,Namespace:default,Attempt:0,}" Feb 9 19:44:41.670616 kubelet[1499]: E0209 19:44:41.670583 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:41.804373 systemd-networkd[1068]: cilium_host: Link UP Feb 9 19:44:41.804479 systemd-networkd[1068]: cilium_net: Link UP Feb 9 19:44:41.804482 systemd-networkd[1068]: cilium_net: Gained carrier Feb 9 19:44:41.804616 systemd-networkd[1068]: cilium_host: Gained carrier Feb 9 19:44:41.805867 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:44:41.805994 systemd-networkd[1068]: cilium_host: Gained IPv6LL Feb 9 19:44:41.844298 kubelet[1499]: E0209 19:44:41.844250 1499 
dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:41.880181 systemd-networkd[1068]: cilium_vxlan: Link UP Feb 9 19:44:41.880190 systemd-networkd[1068]: cilium_vxlan: Gained carrier Feb 9 19:44:42.051570 kernel: NET: Registered PF_ALG protocol family Feb 9 19:44:42.607663 systemd-networkd[1068]: lxc_health: Link UP Feb 9 19:44:42.614644 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:44:42.617139 systemd-networkd[1068]: lxc_health: Gained carrier Feb 9 19:44:42.671119 kubelet[1499]: E0209 19:44:42.671073 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:42.737709 systemd-networkd[1068]: cilium_net: Gained IPv6LL Feb 9 19:44:42.845235 kubelet[1499]: E0209 19:44:42.845208 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:43.057683 systemd-networkd[1068]: cilium_vxlan: Gained IPv6LL Feb 9 19:44:43.148688 systemd-networkd[1068]: lxc255fbefd05aa: Link UP Feb 9 19:44:43.158655 kernel: eth0: renamed from tmp3f8e6 Feb 9 19:44:43.163276 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:44:43.163338 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc255fbefd05aa: link becomes ready Feb 9 19:44:43.163423 systemd-networkd[1068]: lxc255fbefd05aa: Gained carrier Feb 9 19:44:43.672300 kubelet[1499]: E0209 19:44:43.672262 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:43.967731 kubelet[1499]: E0209 19:44:43.967634 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 
19:44:44.121653 update_engine[1171]: I0209 19:44:44.121604 1171 update_attempter.cc:509] Updating boot flags... Feb 9 19:44:44.593746 systemd-networkd[1068]: lxc_health: Gained IPv6LL Feb 9 19:44:44.672706 kubelet[1499]: E0209 19:44:44.672646 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:44.849063 kubelet[1499]: E0209 19:44:44.848964 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:44:44.977677 systemd-networkd[1068]: lxc255fbefd05aa: Gained IPv6LL Feb 9 19:44:45.672821 kubelet[1499]: E0209 19:44:45.672752 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:46.433730 env[1184]: time="2024-02-09T19:44:46.433642229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:44:46.433730 env[1184]: time="2024-02-09T19:44:46.433682346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:44:46.433730 env[1184]: time="2024-02-09T19:44:46.433692495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:44:46.434370 env[1184]: time="2024-02-09T19:44:46.434308848Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f8e699612caafd819ca021451f2894aa7ad1f68363b858f3171dfcee6800747 pid=2591 runtime=io.containerd.runc.v2 Feb 9 19:44:46.445712 systemd[1]: run-containerd-runc-k8s.io-3f8e699612caafd819ca021451f2894aa7ad1f68363b858f3171dfcee6800747-runc.O1SdR2.mount: Deactivated successfully. 
Feb 9 19:44:46.454988 systemd-resolved[1124]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:44:46.477756 env[1184]: time="2024-02-09T19:44:46.477703431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-5sp6l,Uid:29696c38-afd1-4bb1-ab63-b53ad8e37ec4,Namespace:default,Attempt:0,} returns sandbox id \"3f8e699612caafd819ca021451f2894aa7ad1f68363b858f3171dfcee6800747\"" Feb 9 19:44:46.479246 env[1184]: time="2024-02-09T19:44:46.479222453Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:44:46.649233 kubelet[1499]: E0209 19:44:46.649177 1499 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:46.673798 kubelet[1499]: E0209 19:44:46.673732 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:47.674369 kubelet[1499]: E0209 19:44:47.674325 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:48.674834 kubelet[1499]: E0209 19:44:48.674782 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:49.675975 kubelet[1499]: E0209 19:44:49.675905 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:49.978457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1492064562.mount: Deactivated successfully. 
Feb 9 19:44:50.676954 kubelet[1499]: E0209 19:44:50.676905 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:50.799601 env[1184]: time="2024-02-09T19:44:50.799528338Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:50.801031 env[1184]: time="2024-02-09T19:44:50.800989780Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:50.802786 env[1184]: time="2024-02-09T19:44:50.802752687Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:50.804109 env[1184]: time="2024-02-09T19:44:50.804058642Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:50.804613 env[1184]: time="2024-02-09T19:44:50.804577703Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:44:50.806233 env[1184]: time="2024-02-09T19:44:50.806207697Z" level=info msg="CreateContainer within sandbox \"3f8e699612caafd819ca021451f2894aa7ad1f68363b858f3171dfcee6800747\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 19:44:50.816838 env[1184]: time="2024-02-09T19:44:50.816784441Z" level=info msg="CreateContainer within sandbox \"3f8e699612caafd819ca021451f2894aa7ad1f68363b858f3171dfcee6800747\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id 
\"5cf82668bd55782e4000e045ac641ece24cdb17cf0df6af77108c933fcc6edad\"" Feb 9 19:44:50.817255 env[1184]: time="2024-02-09T19:44:50.817208040Z" level=info msg="StartContainer for \"5cf82668bd55782e4000e045ac641ece24cdb17cf0df6af77108c933fcc6edad\"" Feb 9 19:44:50.853665 env[1184]: time="2024-02-09T19:44:50.853636906Z" level=info msg="StartContainer for \"5cf82668bd55782e4000e045ac641ece24cdb17cf0df6af77108c933fcc6edad\" returns successfully" Feb 9 19:44:50.864327 kubelet[1499]: I0209 19:44:50.864245 1499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-5sp6l" podStartSLOduration=-9.22337202699056e+09 pod.CreationTimestamp="2024-02-09 19:44:41 +0000 UTC" firstStartedPulling="2024-02-09 19:44:46.478907249 +0000 UTC m=+40.192763822" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:44:50.864134508 +0000 UTC m=+44.577991091" watchObservedRunningTime="2024-02-09 19:44:50.864215583 +0000 UTC m=+44.578072166" Feb 9 19:44:51.677702 kubelet[1499]: E0209 19:44:51.677650 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:52.677932 kubelet[1499]: E0209 19:44:52.677879 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:52.800738 kubelet[1499]: I0209 19:44:52.800712 1499 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:44:52.982331 kubelet[1499]: I0209 19:44:52.982226 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/8ade6434-8d85-4385-88e0-96878f73b76d-data\") pod \"nfs-server-provisioner-0\" (UID: \"8ade6434-8d85-4385-88e0-96878f73b76d\") " pod="default/nfs-server-provisioner-0" Feb 9 19:44:52.982331 kubelet[1499]: I0209 19:44:52.982266 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-8ndvc\" (UniqueName: \"kubernetes.io/projected/8ade6434-8d85-4385-88e0-96878f73b76d-kube-api-access-8ndvc\") pod \"nfs-server-provisioner-0\" (UID: \"8ade6434-8d85-4385-88e0-96878f73b76d\") " pod="default/nfs-server-provisioner-0" Feb 9 19:44:53.103600 env[1184]: time="2024-02-09T19:44:53.103530662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8ade6434-8d85-4385-88e0-96878f73b76d,Namespace:default,Attempt:0,}" Feb 9 19:44:53.205665 systemd-networkd[1068]: lxc33ff0c91bab7: Link UP Feb 9 19:44:53.209562 kernel: eth0: renamed from tmp1983c Feb 9 19:44:53.215714 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:44:53.215826 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc33ff0c91bab7: link becomes ready Feb 9 19:44:53.215926 systemd-networkd[1068]: lxc33ff0c91bab7: Gained carrier Feb 9 19:44:53.444635 env[1184]: time="2024-02-09T19:44:53.444569801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:44:53.444635 env[1184]: time="2024-02-09T19:44:53.444612223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:44:53.444635 env[1184]: time="2024-02-09T19:44:53.444622673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:44:53.444874 env[1184]: time="2024-02-09T19:44:53.444766336Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1983c2ef087e52842bb697adbaf82dbbf2469633fce3919bacb8d10a9fc55b5e pid=2767 runtime=io.containerd.runc.v2 Feb 9 19:44:53.465688 systemd-resolved[1124]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:44:53.490047 env[1184]: time="2024-02-09T19:44:53.489999791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8ade6434-8d85-4385-88e0-96878f73b76d,Namespace:default,Attempt:0,} returns sandbox id \"1983c2ef087e52842bb697adbaf82dbbf2469633fce3919bacb8d10a9fc55b5e\"" Feb 9 19:44:53.491398 env[1184]: time="2024-02-09T19:44:53.491367527Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 9 19:44:53.678639 kubelet[1499]: E0209 19:44:53.678594 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:54.678891 kubelet[1499]: E0209 19:44:54.678842 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:55.025829 systemd-networkd[1068]: lxc33ff0c91bab7: Gained IPv6LL Feb 9 19:44:55.679512 kubelet[1499]: E0209 19:44:55.679452 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:56.301937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4260122172.mount: Deactivated successfully. 
Feb 9 19:44:56.680207 kubelet[1499]: E0209 19:44:56.680111 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:57.681164 kubelet[1499]: E0209 19:44:57.681123 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:58.682171 kubelet[1499]: E0209 19:44:58.682108 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:58.915443 env[1184]: time="2024-02-09T19:44:58.915371022Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:58.917013 env[1184]: time="2024-02-09T19:44:58.916979439Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:58.918838 env[1184]: time="2024-02-09T19:44:58.918784348Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:58.920577 env[1184]: time="2024-02-09T19:44:58.920520366Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:58.921218 env[1184]: time="2024-02-09T19:44:58.921168758Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 9 19:44:58.923220 env[1184]: time="2024-02-09T19:44:58.923180270Z" level=info 
msg="CreateContainer within sandbox \"1983c2ef087e52842bb697adbaf82dbbf2469633fce3919bacb8d10a9fc55b5e\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 9 19:44:58.933260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3770424798.mount: Deactivated successfully. Feb 9 19:44:58.935218 env[1184]: time="2024-02-09T19:44:58.935170342Z" level=info msg="CreateContainer within sandbox \"1983c2ef087e52842bb697adbaf82dbbf2469633fce3919bacb8d10a9fc55b5e\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"1e670d3896d4f9c00d8950b71f283682b7a87d4d9016fb28c033d36aafe7caf2\"" Feb 9 19:44:58.935677 env[1184]: time="2024-02-09T19:44:58.935640505Z" level=info msg="StartContainer for \"1e670d3896d4f9c00d8950b71f283682b7a87d4d9016fb28c033d36aafe7caf2\"" Feb 9 19:44:58.975293 env[1184]: time="2024-02-09T19:44:58.972953241Z" level=info msg="StartContainer for \"1e670d3896d4f9c00d8950b71f283682b7a87d4d9016fb28c033d36aafe7caf2\" returns successfully" Feb 9 19:44:59.683285 kubelet[1499]: E0209 19:44:59.683223 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:44:59.886079 kubelet[1499]: I0209 19:44:59.884070 1499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372028970743e+09 pod.CreationTimestamp="2024-02-09 19:44:52 +0000 UTC" firstStartedPulling="2024-02-09 19:44:53.491073267 +0000 UTC m=+47.204929850" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:44:59.883599297 +0000 UTC m=+53.597455880" watchObservedRunningTime="2024-02-09 19:44:59.884033001 +0000 UTC m=+53.597889584" Feb 9 19:45:00.683406 kubelet[1499]: E0209 19:45:00.683362 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:01.683929 kubelet[1499]: E0209 19:45:01.683866 
1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:02.684249 kubelet[1499]: E0209 19:45:02.684193 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:03.684934 kubelet[1499]: E0209 19:45:03.684862 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:04.685903 kubelet[1499]: E0209 19:45:04.685834 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:05.686440 kubelet[1499]: E0209 19:45:05.686395 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:06.649195 kubelet[1499]: E0209 19:45:06.649136 1499 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:06.686557 kubelet[1499]: E0209 19:45:06.686499 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:07.687065 kubelet[1499]: E0209 19:45:07.687009 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:08.422787 kubelet[1499]: I0209 19:45:08.422751 1499 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:08.559849 kubelet[1499]: I0209 19:45:08.559806 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e5a81943-53fe-4e3c-94ad-6feacc473ab3\" (UniqueName: \"kubernetes.io/nfs/f435accb-4049-4d95-a3ea-cc7bf04de720-pvc-e5a81943-53fe-4e3c-94ad-6feacc473ab3\") pod \"test-pod-1\" (UID: \"f435accb-4049-4d95-a3ea-cc7bf04de720\") " pod="default/test-pod-1" Feb 9 19:45:08.560029 kubelet[1499]: I0209 19:45:08.559865 1499 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc9xv\" (UniqueName: \"kubernetes.io/projected/f435accb-4049-4d95-a3ea-cc7bf04de720-kube-api-access-pc9xv\") pod \"test-pod-1\" (UID: \"f435accb-4049-4d95-a3ea-cc7bf04de720\") " pod="default/test-pod-1" Feb 9 19:45:08.681559 kernel: FS-Cache: Loaded Feb 9 19:45:08.688051 kubelet[1499]: E0209 19:45:08.688022 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:08.723591 kernel: RPC: Registered named UNIX socket transport module. Feb 9 19:45:08.723720 kernel: RPC: Registered udp transport module. Feb 9 19:45:08.723740 kernel: RPC: Registered tcp transport module. Feb 9 19:45:08.724934 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 9 19:45:08.772572 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 19:45:09.012644 kernel: NFS: Registering the id_resolver key type Feb 9 19:45:09.012835 kernel: Key type id_resolver registered Feb 9 19:45:09.012863 kernel: Key type id_legacy registered Feb 9 19:45:09.240336 nfsidmap[2908]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 9 19:45:09.243488 nfsidmap[2912]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 9 19:45:09.325904 env[1184]: time="2024-02-09T19:45:09.325846087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f435accb-4049-4d95-a3ea-cc7bf04de720,Namespace:default,Attempt:0,}" Feb 9 19:45:09.351816 systemd-networkd[1068]: lxc0633346fc51d: Link UP Feb 9 19:45:09.360573 kernel: eth0: renamed from tmp70833 Feb 9 19:45:09.368643 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:45:09.368721 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0633346fc51d: link becomes ready Feb 9 19:45:09.368767 systemd-networkd[1068]: lxc0633346fc51d: Gained 
carrier Feb 9 19:45:09.553038 env[1184]: time="2024-02-09T19:45:09.552964994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:45:09.553038 env[1184]: time="2024-02-09T19:45:09.553014658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:45:09.553512 env[1184]: time="2024-02-09T19:45:09.553307912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:45:09.553644 env[1184]: time="2024-02-09T19:45:09.553507720Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/708337b88bd54a3db985341bd8b0623613c6efb639a56c0285ce11fe29305505 pid=2945 runtime=io.containerd.runc.v2 Feb 9 19:45:09.571645 systemd-resolved[1124]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:45:09.593940 env[1184]: time="2024-02-09T19:45:09.593891503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f435accb-4049-4d95-a3ea-cc7bf04de720,Namespace:default,Attempt:0,} returns sandbox id \"708337b88bd54a3db985341bd8b0623613c6efb639a56c0285ce11fe29305505\"" Feb 9 19:45:09.595302 env[1184]: time="2024-02-09T19:45:09.595278950Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:45:09.689123 kubelet[1499]: E0209 19:45:09.689086 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:10.301649 env[1184]: time="2024-02-09T19:45:10.301600557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:10.304564 env[1184]: time="2024-02-09T19:45:10.304512666Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:10.307186 env[1184]: time="2024-02-09T19:45:10.307153723Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:10.308865 env[1184]: time="2024-02-09T19:45:10.308825175Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:10.309371 env[1184]: time="2024-02-09T19:45:10.309349066Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:45:10.311054 env[1184]: time="2024-02-09T19:45:10.311026761Z" level=info msg="CreateContainer within sandbox \"708337b88bd54a3db985341bd8b0623613c6efb639a56c0285ce11fe29305505\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 19:45:10.324851 env[1184]: time="2024-02-09T19:45:10.324786183Z" level=info msg="CreateContainer within sandbox \"708337b88bd54a3db985341bd8b0623613c6efb639a56c0285ce11fe29305505\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"924e6ed9bac47f283caeb7231a17547e200debf36874dc4402548b4e15341fa9\"" Feb 9 19:45:10.325327 env[1184]: time="2024-02-09T19:45:10.325301357Z" level=info msg="StartContainer for \"924e6ed9bac47f283caeb7231a17547e200debf36874dc4402548b4e15341fa9\"" Feb 9 19:45:10.365413 env[1184]: time="2024-02-09T19:45:10.365371598Z" level=info msg="StartContainer for \"924e6ed9bac47f283caeb7231a17547e200debf36874dc4402548b4e15341fa9\" returns successfully" Feb 9 19:45:10.690238 kubelet[1499]: E0209 19:45:10.690114 1499 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:10.993232 kubelet[1499]: I0209 19:45:10.993111 1499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372018861713e+09 pod.CreationTimestamp="2024-02-09 19:44:53 +0000 UTC" firstStartedPulling="2024-02-09 19:45:09.595012275 +0000 UTC m=+63.308868858" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:45:10.992316645 +0000 UTC m=+64.706173248" watchObservedRunningTime="2024-02-09 19:45:10.993062547 +0000 UTC m=+64.706919130" Feb 9 19:45:11.217699 systemd-networkd[1068]: lxc0633346fc51d: Gained IPv6LL Feb 9 19:45:11.690875 kubelet[1499]: E0209 19:45:11.690814 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:12.691208 kubelet[1499]: E0209 19:45:12.691148 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:13.691554 kubelet[1499]: E0209 19:45:13.691507 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:14.692137 kubelet[1499]: E0209 19:45:14.692091 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:15.140425 env[1184]: time="2024-02-09T19:45:15.140357989Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:45:15.144769 env[1184]: time="2024-02-09T19:45:15.144745261Z" level=info msg="StopContainer for \"39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd\" with timeout 1 (s)" Feb 9 19:45:15.144972 env[1184]: time="2024-02-09T19:45:15.144950719Z" 
level=info msg="Stop container \"39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd\" with signal terminated" Feb 9 19:45:15.150097 systemd-networkd[1068]: lxc_health: Link DOWN Feb 9 19:45:15.150102 systemd-networkd[1068]: lxc_health: Lost carrier Feb 9 19:45:15.192103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd-rootfs.mount: Deactivated successfully. Feb 9 19:45:15.201646 env[1184]: time="2024-02-09T19:45:15.201593818Z" level=info msg="shim disconnected" id=39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd Feb 9 19:45:15.201811 env[1184]: time="2024-02-09T19:45:15.201643341Z" level=warning msg="cleaning up after shim disconnected" id=39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd namespace=k8s.io Feb 9 19:45:15.201811 env[1184]: time="2024-02-09T19:45:15.201658971Z" level=info msg="cleaning up dead shim" Feb 9 19:45:15.208606 env[1184]: time="2024-02-09T19:45:15.208519588Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3077 runtime=io.containerd.runc.v2\n" Feb 9 19:45:15.211999 env[1184]: time="2024-02-09T19:45:15.211951965Z" level=info msg="StopContainer for \"39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd\" returns successfully" Feb 9 19:45:15.212638 env[1184]: time="2024-02-09T19:45:15.212612664Z" level=info msg="StopPodSandbox for \"48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3\"" Feb 9 19:45:15.212699 env[1184]: time="2024-02-09T19:45:15.212677086Z" level=info msg="Container to stop \"093056d377e3eacb87ec30d1306d42a97778fee08ba514f30de3c523e03d2f16\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:45:15.212736 env[1184]: time="2024-02-09T19:45:15.212699788Z" level=info msg="Container to stop \"636fb5b48ad37a7cfe7d343dac7f0a37ebbe9be69c6162b051d94ea1e0165556\" must be in running 
or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:45:15.212736 env[1184]: time="2024-02-09T19:45:15.212714175Z" level=info msg="Container to stop \"e6f23b61ca43e3a849a70173dffb2527e9f293f77e2b1545c6990b739d608c2e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:45:15.212736 env[1184]: time="2024-02-09T19:45:15.212728512Z" level=info msg="Container to stop \"48cfff3eb125656f08c4ececf9d72176c19ac2fa6c9d6d37c854d7c668bb4508\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:45:15.212872 env[1184]: time="2024-02-09T19:45:15.212743000Z" level=info msg="Container to stop \"39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:45:15.214813 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3-shm.mount: Deactivated successfully. Feb 9 19:45:15.231404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3-rootfs.mount: Deactivated successfully. 
Feb 9 19:45:15.235827 env[1184]: time="2024-02-09T19:45:15.235787325Z" level=info msg="shim disconnected" id=48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3 Feb 9 19:45:15.235941 env[1184]: time="2024-02-09T19:45:15.235829735Z" level=warning msg="cleaning up after shim disconnected" id=48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3 namespace=k8s.io Feb 9 19:45:15.235941 env[1184]: time="2024-02-09T19:45:15.235837550Z" level=info msg="cleaning up dead shim" Feb 9 19:45:15.242368 env[1184]: time="2024-02-09T19:45:15.242303752Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3109 runtime=io.containerd.runc.v2\n" Feb 9 19:45:15.242689 env[1184]: time="2024-02-09T19:45:15.242661518Z" level=info msg="TearDown network for sandbox \"48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3\" successfully" Feb 9 19:45:15.242756 env[1184]: time="2024-02-09T19:45:15.242690362Z" level=info msg="StopPodSandbox for \"48472beb7d9fb202cd86bd123039819966242d5f1f51ea40061766996d694eb3\" returns successfully" Feb 9 19:45:15.395395 kubelet[1499]: I0209 19:45:15.394429 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-cilium-run\") pod \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " Feb 9 19:45:15.395395 kubelet[1499]: I0209 19:45:15.394478 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-cilium-cgroup\") pod \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " Feb 9 19:45:15.395395 kubelet[1499]: I0209 19:45:15.394501 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-host-proc-sys-net\") pod \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " Feb 9 19:45:15.395395 kubelet[1499]: I0209 19:45:15.394522 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-etc-cni-netd\") pod \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " Feb 9 19:45:15.395395 kubelet[1499]: I0209 19:45:15.394558 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-bpf-maps\") pod \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " Feb 9 19:45:15.395395 kubelet[1499]: I0209 19:45:15.394581 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-lib-modules\") pod \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " Feb 9 19:45:15.395912 kubelet[1499]: I0209 19:45:15.394575 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "870a43b7-8fcf-4396-907f-1bccc87ecbc8" (UID: "870a43b7-8fcf-4396-907f-1bccc87ecbc8"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.395912 kubelet[1499]: I0209 19:45:15.394605 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/870a43b7-8fcf-4396-907f-1bccc87ecbc8-hubble-tls\") pod \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " Feb 9 19:45:15.395912 kubelet[1499]: I0209 19:45:15.394623 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-host-proc-sys-kernel\") pod \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " Feb 9 19:45:15.395912 kubelet[1499]: I0209 19:45:15.394628 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "870a43b7-8fcf-4396-907f-1bccc87ecbc8" (UID: "870a43b7-8fcf-4396-907f-1bccc87ecbc8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.395912 kubelet[1499]: I0209 19:45:15.394642 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58hwv\" (UniqueName: \"kubernetes.io/projected/870a43b7-8fcf-4396-907f-1bccc87ecbc8-kube-api-access-58hwv\") pod \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " Feb 9 19:45:15.396078 kubelet[1499]: I0209 19:45:15.394645 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "870a43b7-8fcf-4396-907f-1bccc87ecbc8" (UID: "870a43b7-8fcf-4396-907f-1bccc87ecbc8"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.396078 kubelet[1499]: I0209 19:45:15.394662 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/870a43b7-8fcf-4396-907f-1bccc87ecbc8-clustermesh-secrets\") pod \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " Feb 9 19:45:15.396078 kubelet[1499]: I0209 19:45:15.394664 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "870a43b7-8fcf-4396-907f-1bccc87ecbc8" (UID: "870a43b7-8fcf-4396-907f-1bccc87ecbc8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.396078 kubelet[1499]: I0209 19:45:15.394652 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "870a43b7-8fcf-4396-907f-1bccc87ecbc8" (UID: "870a43b7-8fcf-4396-907f-1bccc87ecbc8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.396078 kubelet[1499]: I0209 19:45:15.394699 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-cni-path\") pod \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " Feb 9 19:45:15.396239 kubelet[1499]: I0209 19:45:15.394697 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "870a43b7-8fcf-4396-907f-1bccc87ecbc8" (UID: "870a43b7-8fcf-4396-907f-1bccc87ecbc8"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.396239 kubelet[1499]: I0209 19:45:15.394679 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "870a43b7-8fcf-4396-907f-1bccc87ecbc8" (UID: "870a43b7-8fcf-4396-907f-1bccc87ecbc8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.396239 kubelet[1499]: I0209 19:45:15.394718 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-xtables-lock\") pod \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " Feb 9 19:45:15.396239 kubelet[1499]: I0209 19:45:15.394740 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/870a43b7-8fcf-4396-907f-1bccc87ecbc8-cilium-config-path\") pod \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " Feb 9 19:45:15.396239 kubelet[1499]: I0209 19:45:15.394755 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-hostproc\") pod \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\" (UID: \"870a43b7-8fcf-4396-907f-1bccc87ecbc8\") " Feb 9 19:45:15.396239 kubelet[1499]: I0209 19:45:15.394784 1499 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-cilium-run\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:15.396435 kubelet[1499]: I0209 19:45:15.394793 1499 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-cilium-cgroup\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:15.396435 kubelet[1499]: I0209 19:45:15.394803 1499 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-host-proc-sys-net\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:15.396435 kubelet[1499]: I0209 19:45:15.394812 1499 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-etc-cni-netd\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:15.396435 kubelet[1499]: I0209 19:45:15.394821 1499 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-bpf-maps\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:15.396435 kubelet[1499]: I0209 19:45:15.394829 1499 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-lib-modules\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:15.396435 kubelet[1499]: I0209 19:45:15.394837 1499 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-host-proc-sys-kernel\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:15.396435 kubelet[1499]: I0209 19:45:15.394863 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-hostproc" (OuterVolumeSpecName: "hostproc") pod "870a43b7-8fcf-4396-907f-1bccc87ecbc8" (UID: "870a43b7-8fcf-4396-907f-1bccc87ecbc8"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.396672 kubelet[1499]: I0209 19:45:15.394879 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-cni-path" (OuterVolumeSpecName: "cni-path") pod "870a43b7-8fcf-4396-907f-1bccc87ecbc8" (UID: "870a43b7-8fcf-4396-907f-1bccc87ecbc8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.396672 kubelet[1499]: I0209 19:45:15.394891 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "870a43b7-8fcf-4396-907f-1bccc87ecbc8" (UID: "870a43b7-8fcf-4396-907f-1bccc87ecbc8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:15.396672 kubelet[1499]: W0209 19:45:15.394988 1499 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/870a43b7-8fcf-4396-907f-1bccc87ecbc8/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:45:15.396672 kubelet[1499]: I0209 19:45:15.396508 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/870a43b7-8fcf-4396-907f-1bccc87ecbc8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "870a43b7-8fcf-4396-907f-1bccc87ecbc8" (UID: "870a43b7-8fcf-4396-907f-1bccc87ecbc8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:45:15.397554 kubelet[1499]: I0209 19:45:15.397462 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/870a43b7-8fcf-4396-907f-1bccc87ecbc8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "870a43b7-8fcf-4396-907f-1bccc87ecbc8" (UID: "870a43b7-8fcf-4396-907f-1bccc87ecbc8"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:45:15.397740 kubelet[1499]: I0209 19:45:15.397706 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/870a43b7-8fcf-4396-907f-1bccc87ecbc8-kube-api-access-58hwv" (OuterVolumeSpecName: "kube-api-access-58hwv") pod "870a43b7-8fcf-4396-907f-1bccc87ecbc8" (UID: "870a43b7-8fcf-4396-907f-1bccc87ecbc8"). InnerVolumeSpecName "kube-api-access-58hwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:45:15.398468 kubelet[1499]: I0209 19:45:15.398446 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/870a43b7-8fcf-4396-907f-1bccc87ecbc8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "870a43b7-8fcf-4396-907f-1bccc87ecbc8" (UID: "870a43b7-8fcf-4396-907f-1bccc87ecbc8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:45:15.399122 systemd[1]: var-lib-kubelet-pods-870a43b7\x2d8fcf\x2d4396\x2d907f\x2d1bccc87ecbc8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d58hwv.mount: Deactivated successfully. Feb 9 19:45:15.399319 systemd[1]: var-lib-kubelet-pods-870a43b7\x2d8fcf\x2d4396\x2d907f\x2d1bccc87ecbc8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 19:45:15.495888 kubelet[1499]: I0209 19:45:15.495824 1499 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-cni-path\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:15.495888 kubelet[1499]: I0209 19:45:15.495872 1499 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-xtables-lock\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:15.495888 kubelet[1499]: I0209 19:45:15.495886 1499 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/870a43b7-8fcf-4396-907f-1bccc87ecbc8-cilium-config-path\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:15.495888 kubelet[1499]: I0209 19:45:15.495895 1499 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/870a43b7-8fcf-4396-907f-1bccc87ecbc8-hostproc\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:15.495888 kubelet[1499]: I0209 19:45:15.495904 1499 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/870a43b7-8fcf-4396-907f-1bccc87ecbc8-hubble-tls\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:15.496151 kubelet[1499]: I0209 19:45:15.495915 1499 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-58hwv\" (UniqueName: \"kubernetes.io/projected/870a43b7-8fcf-4396-907f-1bccc87ecbc8-kube-api-access-58hwv\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:15.496151 kubelet[1499]: I0209 19:45:15.495923 1499 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/870a43b7-8fcf-4396-907f-1bccc87ecbc8-clustermesh-secrets\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:15.693165 kubelet[1499]: E0209 19:45:15.693029 1499 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:15.902487 kubelet[1499]: I0209 19:45:15.902463 1499 scope.go:115] "RemoveContainer" containerID="39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd" Feb 9 19:45:15.905347 env[1184]: time="2024-02-09T19:45:15.905312995Z" level=info msg="RemoveContainer for \"39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd\"" Feb 9 19:45:15.911457 env[1184]: time="2024-02-09T19:45:15.911414467Z" level=info msg="RemoveContainer for \"39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd\" returns successfully" Feb 9 19:45:15.911653 kubelet[1499]: I0209 19:45:15.911637 1499 scope.go:115] "RemoveContainer" containerID="48cfff3eb125656f08c4ececf9d72176c19ac2fa6c9d6d37c854d7c668bb4508" Feb 9 19:45:15.912355 env[1184]: time="2024-02-09T19:45:15.912329135Z" level=info msg="RemoveContainer for \"48cfff3eb125656f08c4ececf9d72176c19ac2fa6c9d6d37c854d7c668bb4508\"" Feb 9 19:45:15.915132 env[1184]: time="2024-02-09T19:45:15.915102888Z" level=info msg="RemoveContainer for \"48cfff3eb125656f08c4ececf9d72176c19ac2fa6c9d6d37c854d7c668bb4508\" returns successfully" Feb 9 19:45:15.915260 kubelet[1499]: I0209 19:45:15.915235 1499 scope.go:115] "RemoveContainer" containerID="636fb5b48ad37a7cfe7d343dac7f0a37ebbe9be69c6162b051d94ea1e0165556" Feb 9 19:45:15.916411 env[1184]: time="2024-02-09T19:45:15.916384221Z" level=info msg="RemoveContainer for \"636fb5b48ad37a7cfe7d343dac7f0a37ebbe9be69c6162b051d94ea1e0165556\"" Feb 9 19:45:15.920043 env[1184]: time="2024-02-09T19:45:15.920000084Z" level=info msg="RemoveContainer for \"636fb5b48ad37a7cfe7d343dac7f0a37ebbe9be69c6162b051d94ea1e0165556\" returns successfully" Feb 9 19:45:15.920208 kubelet[1499]: I0209 19:45:15.920169 1499 scope.go:115] "RemoveContainer" containerID="e6f23b61ca43e3a849a70173dffb2527e9f293f77e2b1545c6990b739d608c2e" Feb 9 19:45:15.921080 env[1184]: time="2024-02-09T19:45:15.921049979Z" level=info msg="RemoveContainer 
for \"e6f23b61ca43e3a849a70173dffb2527e9f293f77e2b1545c6990b739d608c2e\"" Feb 9 19:45:15.923931 env[1184]: time="2024-02-09T19:45:15.923897220Z" level=info msg="RemoveContainer for \"e6f23b61ca43e3a849a70173dffb2527e9f293f77e2b1545c6990b739d608c2e\" returns successfully" Feb 9 19:45:15.924058 kubelet[1499]: I0209 19:45:15.924010 1499 scope.go:115] "RemoveContainer" containerID="093056d377e3eacb87ec30d1306d42a97778fee08ba514f30de3c523e03d2f16" Feb 9 19:45:15.924803 env[1184]: time="2024-02-09T19:45:15.924781050Z" level=info msg="RemoveContainer for \"093056d377e3eacb87ec30d1306d42a97778fee08ba514f30de3c523e03d2f16\"" Feb 9 19:45:15.927321 env[1184]: time="2024-02-09T19:45:15.927293870Z" level=info msg="RemoveContainer for \"093056d377e3eacb87ec30d1306d42a97778fee08ba514f30de3c523e03d2f16\" returns successfully" Feb 9 19:45:15.927429 kubelet[1499]: I0209 19:45:15.927406 1499 scope.go:115] "RemoveContainer" containerID="39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd" Feb 9 19:45:15.927636 env[1184]: time="2024-02-09T19:45:15.927567777Z" level=error msg="ContainerStatus for \"39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd\": not found" Feb 9 19:45:15.927747 kubelet[1499]: E0209 19:45:15.927727 1499 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd\": not found" containerID="39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd" Feb 9 19:45:15.927779 kubelet[1499]: I0209 19:45:15.927762 1499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd} err="failed to get container status 
\"39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"39ed72253d7b299b60c4d56ba62921b3e5a0d1bb936a93f67cfb688852e558cd\": not found" Feb 9 19:45:15.927779 kubelet[1499]: I0209 19:45:15.927773 1499 scope.go:115] "RemoveContainer" containerID="48cfff3eb125656f08c4ececf9d72176c19ac2fa6c9d6d37c854d7c668bb4508" Feb 9 19:45:15.927940 env[1184]: time="2024-02-09T19:45:15.927898713Z" level=error msg="ContainerStatus for \"48cfff3eb125656f08c4ececf9d72176c19ac2fa6c9d6d37c854d7c668bb4508\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"48cfff3eb125656f08c4ececf9d72176c19ac2fa6c9d6d37c854d7c668bb4508\": not found" Feb 9 19:45:15.928020 kubelet[1499]: E0209 19:45:15.927986 1499 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"48cfff3eb125656f08c4ececf9d72176c19ac2fa6c9d6d37c854d7c668bb4508\": not found" containerID="48cfff3eb125656f08c4ececf9d72176c19ac2fa6c9d6d37c854d7c668bb4508" Feb 9 19:45:15.928020 kubelet[1499]: I0209 19:45:15.928005 1499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:48cfff3eb125656f08c4ececf9d72176c19ac2fa6c9d6d37c854d7c668bb4508} err="failed to get container status \"48cfff3eb125656f08c4ececf9d72176c19ac2fa6c9d6d37c854d7c668bb4508\": rpc error: code = NotFound desc = an error occurred when try to find container \"48cfff3eb125656f08c4ececf9d72176c19ac2fa6c9d6d37c854d7c668bb4508\": not found" Feb 9 19:45:15.928020 kubelet[1499]: I0209 19:45:15.928012 1499 scope.go:115] "RemoveContainer" containerID="636fb5b48ad37a7cfe7d343dac7f0a37ebbe9be69c6162b051d94ea1e0165556" Feb 9 19:45:15.928180 env[1184]: time="2024-02-09T19:45:15.928119962Z" level=error msg="ContainerStatus for \"636fb5b48ad37a7cfe7d343dac7f0a37ebbe9be69c6162b051d94ea1e0165556\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"636fb5b48ad37a7cfe7d343dac7f0a37ebbe9be69c6162b051d94ea1e0165556\": not found" Feb 9 19:45:15.928266 kubelet[1499]: E0209 19:45:15.928249 1499 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"636fb5b48ad37a7cfe7d343dac7f0a37ebbe9be69c6162b051d94ea1e0165556\": not found" containerID="636fb5b48ad37a7cfe7d343dac7f0a37ebbe9be69c6162b051d94ea1e0165556" Feb 9 19:45:15.928327 kubelet[1499]: I0209 19:45:15.928282 1499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:636fb5b48ad37a7cfe7d343dac7f0a37ebbe9be69c6162b051d94ea1e0165556} err="failed to get container status \"636fb5b48ad37a7cfe7d343dac7f0a37ebbe9be69c6162b051d94ea1e0165556\": rpc error: code = NotFound desc = an error occurred when try to find container \"636fb5b48ad37a7cfe7d343dac7f0a37ebbe9be69c6162b051d94ea1e0165556\": not found" Feb 9 19:45:15.928327 kubelet[1499]: I0209 19:45:15.928292 1499 scope.go:115] "RemoveContainer" containerID="e6f23b61ca43e3a849a70173dffb2527e9f293f77e2b1545c6990b739d608c2e" Feb 9 19:45:15.928456 env[1184]: time="2024-02-09T19:45:15.928415029Z" level=error msg="ContainerStatus for \"e6f23b61ca43e3a849a70173dffb2527e9f293f77e2b1545c6990b739d608c2e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e6f23b61ca43e3a849a70173dffb2527e9f293f77e2b1545c6990b739d608c2e\": not found" Feb 9 19:45:15.928517 kubelet[1499]: E0209 19:45:15.928506 1499 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e6f23b61ca43e3a849a70173dffb2527e9f293f77e2b1545c6990b739d608c2e\": not found" containerID="e6f23b61ca43e3a849a70173dffb2527e9f293f77e2b1545c6990b739d608c2e" Feb 9 19:45:15.928573 kubelet[1499]: I0209 19:45:15.928520 1499 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e6f23b61ca43e3a849a70173dffb2527e9f293f77e2b1545c6990b739d608c2e} err="failed to get container status \"e6f23b61ca43e3a849a70173dffb2527e9f293f77e2b1545c6990b739d608c2e\": rpc error: code = NotFound desc = an error occurred when try to find container \"e6f23b61ca43e3a849a70173dffb2527e9f293f77e2b1545c6990b739d608c2e\": not found" Feb 9 19:45:15.928573 kubelet[1499]: I0209 19:45:15.928528 1499 scope.go:115] "RemoveContainer" containerID="093056d377e3eacb87ec30d1306d42a97778fee08ba514f30de3c523e03d2f16" Feb 9 19:45:15.928687 env[1184]: time="2024-02-09T19:45:15.928651777Z" level=error msg="ContainerStatus for \"093056d377e3eacb87ec30d1306d42a97778fee08ba514f30de3c523e03d2f16\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"093056d377e3eacb87ec30d1306d42a97778fee08ba514f30de3c523e03d2f16\": not found" Feb 9 19:45:15.928766 kubelet[1499]: E0209 19:45:15.928753 1499 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"093056d377e3eacb87ec30d1306d42a97778fee08ba514f30de3c523e03d2f16\": not found" containerID="093056d377e3eacb87ec30d1306d42a97778fee08ba514f30de3c523e03d2f16" Feb 9 19:45:15.928799 kubelet[1499]: I0209 19:45:15.928777 1499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:093056d377e3eacb87ec30d1306d42a97778fee08ba514f30de3c523e03d2f16} err="failed to get container status \"093056d377e3eacb87ec30d1306d42a97778fee08ba514f30de3c523e03d2f16\": rpc error: code = NotFound desc = an error occurred when try to find container \"093056d377e3eacb87ec30d1306d42a97778fee08ba514f30de3c523e03d2f16\": not found" Feb 9 19:45:16.129731 systemd[1]: var-lib-kubelet-pods-870a43b7\x2d8fcf\x2d4396\x2d907f\x2d1bccc87ecbc8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated 
successfully. Feb 9 19:45:16.693645 kubelet[1499]: E0209 19:45:16.693596 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:16.705295 kubelet[1499]: E0209 19:45:16.705268 1499 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:45:16.789187 kubelet[1499]: I0209 19:45:16.789144 1499 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=870a43b7-8fcf-4396-907f-1bccc87ecbc8 path="/var/lib/kubelet/pods/870a43b7-8fcf-4396-907f-1bccc87ecbc8/volumes" Feb 9 19:45:17.658502 kubelet[1499]: I0209 19:45:17.658435 1499 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:17.658502 kubelet[1499]: E0209 19:45:17.658495 1499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="870a43b7-8fcf-4396-907f-1bccc87ecbc8" containerName="mount-cgroup" Feb 9 19:45:17.658502 kubelet[1499]: E0209 19:45:17.658507 1499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="870a43b7-8fcf-4396-907f-1bccc87ecbc8" containerName="apply-sysctl-overwrites" Feb 9 19:45:17.658502 kubelet[1499]: E0209 19:45:17.658514 1499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="870a43b7-8fcf-4396-907f-1bccc87ecbc8" containerName="mount-bpf-fs" Feb 9 19:45:17.658502 kubelet[1499]: E0209 19:45:17.658520 1499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="870a43b7-8fcf-4396-907f-1bccc87ecbc8" containerName="clean-cilium-state" Feb 9 19:45:17.658502 kubelet[1499]: E0209 19:45:17.658527 1499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="870a43b7-8fcf-4396-907f-1bccc87ecbc8" containerName="cilium-agent" Feb 9 19:45:17.658832 kubelet[1499]: I0209 19:45:17.658563 1499 memory_manager.go:346] "RemoveStaleState removing state" podUID="870a43b7-8fcf-4396-907f-1bccc87ecbc8" 
containerName="cilium-agent" Feb 9 19:45:17.694178 kubelet[1499]: E0209 19:45:17.694131 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:17.749419 kubelet[1499]: I0209 19:45:17.749382 1499 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:17.809618 kubelet[1499]: I0209 19:45:17.809566 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfck7\" (UniqueName: \"kubernetes.io/projected/79a75a26-dfcd-4d8a-aab7-99609eee590a-kube-api-access-jfck7\") pod \"cilium-operator-f59cbd8c6-sqxg9\" (UID: \"79a75a26-dfcd-4d8a-aab7-99609eee590a\") " pod="kube-system/cilium-operator-f59cbd8c6-sqxg9" Feb 9 19:45:17.809771 kubelet[1499]: I0209 19:45:17.809637 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79a75a26-dfcd-4d8a-aab7-99609eee590a-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-sqxg9\" (UID: \"79a75a26-dfcd-4d8a-aab7-99609eee590a\") " pod="kube-system/cilium-operator-f59cbd8c6-sqxg9" Feb 9 19:45:17.910067 kubelet[1499]: I0209 19:45:17.909950 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cilium-cgroup\") pod \"cilium-4c9fj\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " pod="kube-system/cilium-4c9fj" Feb 9 19:45:17.910067 kubelet[1499]: I0209 19:45:17.909985 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cni-path\") pod \"cilium-4c9fj\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " pod="kube-system/cilium-4c9fj" Feb 9 19:45:17.910067 kubelet[1499]: I0209 19:45:17.910020 1499 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-host-proc-sys-net\") pod \"cilium-4c9fj\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " pod="kube-system/cilium-4c9fj" Feb 9 19:45:17.910067 kubelet[1499]: I0209 19:45:17.910040 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-host-proc-sys-kernel\") pod \"cilium-4c9fj\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " pod="kube-system/cilium-4c9fj" Feb 9 19:45:17.910263 kubelet[1499]: I0209 19:45:17.910117 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz4vj\" (UniqueName: \"kubernetes.io/projected/a1a7af2a-d634-48e2-8a51-c6c669f2134a-kube-api-access-sz4vj\") pod \"cilium-4c9fj\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " pod="kube-system/cilium-4c9fj" Feb 9 19:45:17.910263 kubelet[1499]: I0209 19:45:17.910179 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-hostproc\") pod \"cilium-4c9fj\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " pod="kube-system/cilium-4c9fj" Feb 9 19:45:17.910263 kubelet[1499]: I0209 19:45:17.910226 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1a7af2a-d634-48e2-8a51-c6c669f2134a-clustermesh-secrets\") pod \"cilium-4c9fj\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " pod="kube-system/cilium-4c9fj" Feb 9 19:45:17.910263 kubelet[1499]: I0209 19:45:17.910245 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cilium-config-path\") pod \"cilium-4c9fj\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " pod="kube-system/cilium-4c9fj" Feb 9 19:45:17.910354 kubelet[1499]: I0209 19:45:17.910271 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cilium-ipsec-secrets\") pod \"cilium-4c9fj\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " pod="kube-system/cilium-4c9fj" Feb 9 19:45:17.910354 kubelet[1499]: I0209 19:45:17.910288 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1a7af2a-d634-48e2-8a51-c6c669f2134a-hubble-tls\") pod \"cilium-4c9fj\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " pod="kube-system/cilium-4c9fj" Feb 9 19:45:17.910354 kubelet[1499]: I0209 19:45:17.910317 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-bpf-maps\") pod \"cilium-4c9fj\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " pod="kube-system/cilium-4c9fj" Feb 9 19:45:17.910736 kubelet[1499]: I0209 19:45:17.910707 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cilium-run\") pod \"cilium-4c9fj\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " pod="kube-system/cilium-4c9fj" Feb 9 19:45:17.910794 kubelet[1499]: I0209 19:45:17.910744 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-etc-cni-netd\") pod \"cilium-4c9fj\" (UID: 
\"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " pod="kube-system/cilium-4c9fj" Feb 9 19:45:17.910794 kubelet[1499]: I0209 19:45:17.910763 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-lib-modules\") pod \"cilium-4c9fj\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " pod="kube-system/cilium-4c9fj" Feb 9 19:45:17.910794 kubelet[1499]: I0209 19:45:17.910785 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-xtables-lock\") pod \"cilium-4c9fj\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " pod="kube-system/cilium-4c9fj" Feb 9 19:45:17.961244 kubelet[1499]: E0209 19:45:17.961210 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:17.961803 env[1184]: time="2024-02-09T19:45:17.961760970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-sqxg9,Uid:79a75a26-dfcd-4d8a-aab7-99609eee590a,Namespace:kube-system,Attempt:0,}" Feb 9 19:45:17.974865 env[1184]: time="2024-02-09T19:45:17.974775702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:45:17.974865 env[1184]: time="2024-02-09T19:45:17.974817520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:45:17.974865 env[1184]: time="2024-02-09T19:45:17.974839312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:45:17.975054 env[1184]: time="2024-02-09T19:45:17.975008251Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/64736502c133e2ac349847a674bdf25d2165e8ae356f92be186bba4e595f2b13 pid=3132 runtime=io.containerd.runc.v2 Feb 9 19:45:18.026605 env[1184]: time="2024-02-09T19:45:18.026560419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-sqxg9,Uid:79a75a26-dfcd-4d8a-aab7-99609eee590a,Namespace:kube-system,Attempt:0,} returns sandbox id \"64736502c133e2ac349847a674bdf25d2165e8ae356f92be186bba4e595f2b13\"" Feb 9 19:45:18.027109 kubelet[1499]: E0209 19:45:18.027095 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:18.027690 env[1184]: time="2024-02-09T19:45:18.027662230Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:45:18.052035 kubelet[1499]: E0209 19:45:18.052009 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:18.052531 env[1184]: time="2024-02-09T19:45:18.052495766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4c9fj,Uid:a1a7af2a-d634-48e2-8a51-c6c669f2134a,Namespace:kube-system,Attempt:0,}" Feb 9 19:45:18.065196 env[1184]: time="2024-02-09T19:45:18.065125786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:45:18.065334 env[1184]: time="2024-02-09T19:45:18.065165180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:45:18.065334 env[1184]: time="2024-02-09T19:45:18.065174869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:45:18.065473 env[1184]: time="2024-02-09T19:45:18.065308992Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ac867643c0810ea1e5991757550f7f0c31baf425f038a34fb1b12e95a2847fb pid=3180 runtime=io.containerd.runc.v2 Feb 9 19:45:18.095936 env[1184]: time="2024-02-09T19:45:18.095869896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4c9fj,Uid:a1a7af2a-d634-48e2-8a51-c6c669f2134a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ac867643c0810ea1e5991757550f7f0c31baf425f038a34fb1b12e95a2847fb\"" Feb 9 19:45:18.096906 kubelet[1499]: E0209 19:45:18.096554 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:18.098266 env[1184]: time="2024-02-09T19:45:18.098220056Z" level=info msg="CreateContainer within sandbox \"0ac867643c0810ea1e5991757550f7f0c31baf425f038a34fb1b12e95a2847fb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:45:18.113212 env[1184]: time="2024-02-09T19:45:18.113141215Z" level=info msg="CreateContainer within sandbox \"0ac867643c0810ea1e5991757550f7f0c31baf425f038a34fb1b12e95a2847fb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f96d95539a208bd5fb1c0eedc22261a6e58a337b8bb8026fb319f34d5c5e7e7a\"" Feb 9 19:45:18.113883 env[1184]: time="2024-02-09T19:45:18.113825918Z" level=info msg="StartContainer for \"f96d95539a208bd5fb1c0eedc22261a6e58a337b8bb8026fb319f34d5c5e7e7a\"" Feb 9 19:45:18.160584 env[1184]: time="2024-02-09T19:45:18.160469557Z" level=info msg="StartContainer for 
\"f96d95539a208bd5fb1c0eedc22261a6e58a337b8bb8026fb319f34d5c5e7e7a\" returns successfully" Feb 9 19:45:18.186724 env[1184]: time="2024-02-09T19:45:18.186663762Z" level=info msg="shim disconnected" id=f96d95539a208bd5fb1c0eedc22261a6e58a337b8bb8026fb319f34d5c5e7e7a Feb 9 19:45:18.186724 env[1184]: time="2024-02-09T19:45:18.186710710Z" level=warning msg="cleaning up after shim disconnected" id=f96d95539a208bd5fb1c0eedc22261a6e58a337b8bb8026fb319f34d5c5e7e7a namespace=k8s.io Feb 9 19:45:18.186724 env[1184]: time="2024-02-09T19:45:18.186718525Z" level=info msg="cleaning up dead shim" Feb 9 19:45:18.194328 env[1184]: time="2024-02-09T19:45:18.194292233Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3264 runtime=io.containerd.runc.v2\n" Feb 9 19:45:18.694809 kubelet[1499]: E0209 19:45:18.694746 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:18.910179 env[1184]: time="2024-02-09T19:45:18.910124274Z" level=info msg="StopPodSandbox for \"0ac867643c0810ea1e5991757550f7f0c31baf425f038a34fb1b12e95a2847fb\"" Feb 9 19:45:18.910353 env[1184]: time="2024-02-09T19:45:18.910211418Z" level=info msg="Container to stop \"f96d95539a208bd5fb1c0eedc22261a6e58a337b8bb8026fb319f34d5c5e7e7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:45:19.042155 env[1184]: time="2024-02-09T19:45:19.041888831Z" level=info msg="shim disconnected" id=0ac867643c0810ea1e5991757550f7f0c31baf425f038a34fb1b12e95a2847fb Feb 9 19:45:19.042155 env[1184]: time="2024-02-09T19:45:19.041933906Z" level=warning msg="cleaning up after shim disconnected" id=0ac867643c0810ea1e5991757550f7f0c31baf425f038a34fb1b12e95a2847fb namespace=k8s.io Feb 9 19:45:19.042155 env[1184]: time="2024-02-09T19:45:19.041943204Z" level=info msg="cleaning up dead shim" Feb 9 19:45:19.050562 env[1184]: time="2024-02-09T19:45:19.050510947Z" 
level=warning msg="cleanup warnings time=\"2024-02-09T19:45:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3296 runtime=io.containerd.runc.v2\n" Feb 9 19:45:19.050817 env[1184]: time="2024-02-09T19:45:19.050785326Z" level=info msg="TearDown network for sandbox \"0ac867643c0810ea1e5991757550f7f0c31baf425f038a34fb1b12e95a2847fb\" successfully" Feb 9 19:45:19.050817 env[1184]: time="2024-02-09T19:45:19.050808229Z" level=info msg="StopPodSandbox for \"0ac867643c0810ea1e5991757550f7f0c31baf425f038a34fb1b12e95a2847fb\" returns successfully" Feb 9 19:45:19.220169 kubelet[1499]: I0209 19:45:19.220108 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a1a7af2a-d634-48e2-8a51-c6c669f2134a" (UID: "a1a7af2a-d634-48e2-8a51-c6c669f2134a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:19.220169 kubelet[1499]: I0209 19:45:19.220114 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cilium-cgroup\") pod \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " Feb 9 19:45:19.222048 kubelet[1499]: I0209 19:45:19.220192 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cilium-config-path\") pod \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " Feb 9 19:45:19.222048 kubelet[1499]: I0209 19:45:19.220214 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cilium-ipsec-secrets\") pod 
\"a1a7af2a-d634-48e2-8a51-c6c669f2134a\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " Feb 9 19:45:19.222048 kubelet[1499]: I0209 19:45:19.220243 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a1a7af2a-d634-48e2-8a51-c6c669f2134a" (UID: "a1a7af2a-d634-48e2-8a51-c6c669f2134a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:19.222048 kubelet[1499]: I0209 19:45:19.220272 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-etc-cni-netd\") pod \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " Feb 9 19:45:19.222048 kubelet[1499]: I0209 19:45:19.220291 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-host-proc-sys-kernel\") pod \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " Feb 9 19:45:19.222048 kubelet[1499]: I0209 19:45:19.220311 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz4vj\" (UniqueName: \"kubernetes.io/projected/a1a7af2a-d634-48e2-8a51-c6c669f2134a-kube-api-access-sz4vj\") pod \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " Feb 9 19:45:19.222342 kubelet[1499]: I0209 19:45:19.220328 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1a7af2a-d634-48e2-8a51-c6c669f2134a-hubble-tls\") pod \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " Feb 9 19:45:19.222342 kubelet[1499]: I0209 19:45:19.220352 
1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-xtables-lock\") pod \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " Feb 9 19:45:19.222342 kubelet[1499]: I0209 19:45:19.220371 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1a7af2a-d634-48e2-8a51-c6c669f2134a-clustermesh-secrets\") pod \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " Feb 9 19:45:19.222342 kubelet[1499]: I0209 19:45:19.220388 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-host-proc-sys-net\") pod \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " Feb 9 19:45:19.222342 kubelet[1499]: I0209 19:45:19.220403 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cni-path\") pod \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " Feb 9 19:45:19.222342 kubelet[1499]: I0209 19:45:19.220419 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-hostproc\") pod \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " Feb 9 19:45:19.222514 kubelet[1499]: I0209 19:45:19.220433 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-bpf-maps\") pod \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\" (UID: 
\"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " Feb 9 19:45:19.222514 kubelet[1499]: I0209 19:45:19.220452 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-lib-modules\") pod \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " Feb 9 19:45:19.222514 kubelet[1499]: I0209 19:45:19.220469 1499 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cilium-run\") pod \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\" (UID: \"a1a7af2a-d634-48e2-8a51-c6c669f2134a\") " Feb 9 19:45:19.222514 kubelet[1499]: W0209 19:45:19.220459 1499 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/a1a7af2a-d634-48e2-8a51-c6c669f2134a/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:45:19.222514 kubelet[1499]: I0209 19:45:19.220497 1499 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cilium-cgroup\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:19.222514 kubelet[1499]: I0209 19:45:19.220507 1499 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-etc-cni-netd\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:19.222514 kubelet[1499]: I0209 19:45:19.220521 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a1a7af2a-d634-48e2-8a51-c6c669f2134a" (UID: "a1a7af2a-d634-48e2-8a51-c6c669f2134a"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:19.222700 kubelet[1499]: I0209 19:45:19.220555 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a1a7af2a-d634-48e2-8a51-c6c669f2134a" (UID: "a1a7af2a-d634-48e2-8a51-c6c669f2134a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:19.222700 kubelet[1499]: I0209 19:45:19.220570 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cni-path" (OuterVolumeSpecName: "cni-path") pod "a1a7af2a-d634-48e2-8a51-c6c669f2134a" (UID: "a1a7af2a-d634-48e2-8a51-c6c669f2134a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:19.222700 kubelet[1499]: I0209 19:45:19.220582 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-hostproc" (OuterVolumeSpecName: "hostproc") pod "a1a7af2a-d634-48e2-8a51-c6c669f2134a" (UID: "a1a7af2a-d634-48e2-8a51-c6c669f2134a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:19.222700 kubelet[1499]: I0209 19:45:19.220594 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a1a7af2a-d634-48e2-8a51-c6c669f2134a" (UID: "a1a7af2a-d634-48e2-8a51-c6c669f2134a"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:19.222700 kubelet[1499]: I0209 19:45:19.220606 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a1a7af2a-d634-48e2-8a51-c6c669f2134a" (UID: "a1a7af2a-d634-48e2-8a51-c6c669f2134a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:19.222813 kubelet[1499]: I0209 19:45:19.221165 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a1a7af2a-d634-48e2-8a51-c6c669f2134a" (UID: "a1a7af2a-d634-48e2-8a51-c6c669f2134a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:19.222813 kubelet[1499]: I0209 19:45:19.221197 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a1a7af2a-d634-48e2-8a51-c6c669f2134a" (UID: "a1a7af2a-d634-48e2-8a51-c6c669f2134a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:19.222813 kubelet[1499]: I0209 19:45:19.222782 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a1a7af2a-d634-48e2-8a51-c6c669f2134a" (UID: "a1a7af2a-d634-48e2-8a51-c6c669f2134a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:45:19.225443 kubelet[1499]: I0209 19:45:19.223375 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1a7af2a-d634-48e2-8a51-c6c669f2134a-kube-api-access-sz4vj" (OuterVolumeSpecName: "kube-api-access-sz4vj") pod "a1a7af2a-d634-48e2-8a51-c6c669f2134a" (UID: "a1a7af2a-d634-48e2-8a51-c6c669f2134a"). InnerVolumeSpecName "kube-api-access-sz4vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:45:19.224675 systemd[1]: var-lib-kubelet-pods-a1a7af2a\x2dd634\x2d48e2\x2d8a51\x2dc6c669f2134a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsz4vj.mount: Deactivated successfully. Feb 9 19:45:19.226665 systemd[1]: var-lib-kubelet-pods-a1a7af2a\x2dd634\x2d48e2\x2d8a51\x2dc6c669f2134a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:45:19.228583 kubelet[1499]: I0209 19:45:19.227241 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1a7af2a-d634-48e2-8a51-c6c669f2134a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a1a7af2a-d634-48e2-8a51-c6c669f2134a" (UID: "a1a7af2a-d634-48e2-8a51-c6c669f2134a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:45:19.228908 systemd[1]: var-lib-kubelet-pods-a1a7af2a\x2dd634\x2d48e2\x2d8a51\x2dc6c669f2134a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:45:19.229212 kubelet[1499]: I0209 19:45:19.229119 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1a7af2a-d634-48e2-8a51-c6c669f2134a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a1a7af2a-d634-48e2-8a51-c6c669f2134a" (UID: "a1a7af2a-d634-48e2-8a51-c6c669f2134a"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:45:19.229327 kubelet[1499]: I0209 19:45:19.229312 1499 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a1a7af2a-d634-48e2-8a51-c6c669f2134a" (UID: "a1a7af2a-d634-48e2-8a51-c6c669f2134a"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:45:19.233052 systemd[1]: var-lib-kubelet-pods-a1a7af2a\x2dd634\x2d48e2\x2d8a51\x2dc6c669f2134a-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 19:45:19.321459 kubelet[1499]: I0209 19:45:19.321427 1499 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-sz4vj\" (UniqueName: \"kubernetes.io/projected/a1a7af2a-d634-48e2-8a51-c6c669f2134a-kube-api-access-sz4vj\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:19.321459 kubelet[1499]: I0209 19:45:19.321464 1499 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1a7af2a-d634-48e2-8a51-c6c669f2134a-hubble-tls\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:19.321697 kubelet[1499]: I0209 19:45:19.321481 1499 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-host-proc-sys-kernel\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:19.321697 kubelet[1499]: I0209 19:45:19.321492 1499 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-xtables-lock\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:19.321697 kubelet[1499]: I0209 19:45:19.321505 1499 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/a1a7af2a-d634-48e2-8a51-c6c669f2134a-clustermesh-secrets\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:19.321697 kubelet[1499]: I0209 19:45:19.321517 1499 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-bpf-maps\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:19.321697 kubelet[1499]: I0209 19:45:19.321527 1499 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-lib-modules\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:19.321697 kubelet[1499]: I0209 19:45:19.321556 1499 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cilium-run\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:19.321697 kubelet[1499]: I0209 19:45:19.321570 1499 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-host-proc-sys-net\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:19.321697 kubelet[1499]: I0209 19:45:19.321582 1499 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cni-path\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:19.321933 kubelet[1499]: I0209 19:45:19.321592 1499 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1a7af2a-d634-48e2-8a51-c6c669f2134a-hostproc\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:19.321933 kubelet[1499]: I0209 19:45:19.321604 1499 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cilium-config-path\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:19.321933 
kubelet[1499]: I0209 19:45:19.321615 1499 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a1a7af2a-d634-48e2-8a51-c6c669f2134a-cilium-ipsec-secrets\") on node \"10.0.0.52\" DevicePath \"\"" Feb 9 19:45:19.695219 kubelet[1499]: E0209 19:45:19.695084 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:19.914287 kubelet[1499]: I0209 19:45:19.914240 1499 scope.go:115] "RemoveContainer" containerID="f96d95539a208bd5fb1c0eedc22261a6e58a337b8bb8026fb319f34d5c5e7e7a" Feb 9 19:45:19.915714 env[1184]: time="2024-02-09T19:45:19.915680498Z" level=info msg="RemoveContainer for \"f96d95539a208bd5fb1c0eedc22261a6e58a337b8bb8026fb319f34d5c5e7e7a\"" Feb 9 19:45:19.919373 env[1184]: time="2024-02-09T19:45:19.919310105Z" level=info msg="RemoveContainer for \"f96d95539a208bd5fb1c0eedc22261a6e58a337b8bb8026fb319f34d5c5e7e7a\" returns successfully" Feb 9 19:45:19.937143 kubelet[1499]: I0209 19:45:19.937114 1499 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:19.937143 kubelet[1499]: E0209 19:45:19.937161 1499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a1a7af2a-d634-48e2-8a51-c6c669f2134a" containerName="mount-cgroup" Feb 9 19:45:19.937297 kubelet[1499]: I0209 19:45:19.937182 1499 memory_manager.go:346] "RemoveStaleState removing state" podUID="a1a7af2a-d634-48e2-8a51-c6c669f2134a" containerName="mount-cgroup" Feb 9 19:45:20.026017 kubelet[1499]: I0209 19:45:20.025883 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83c15fff-c9cc-432d-b5cc-db681b50fff5-hostproc\") pod \"cilium-dxrx9\" (UID: \"83c15fff-c9cc-432d-b5cc-db681b50fff5\") " pod="kube-system/cilium-dxrx9" Feb 9 19:45:20.026017 kubelet[1499]: I0209 19:45:20.025923 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83c15fff-c9cc-432d-b5cc-db681b50fff5-cni-path\") pod \"cilium-dxrx9\" (UID: \"83c15fff-c9cc-432d-b5cc-db681b50fff5\") " pod="kube-system/cilium-dxrx9" Feb 9 19:45:20.026017 kubelet[1499]: I0209 19:45:20.025947 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83c15fff-c9cc-432d-b5cc-db681b50fff5-host-proc-sys-net\") pod \"cilium-dxrx9\" (UID: \"83c15fff-c9cc-432d-b5cc-db681b50fff5\") " pod="kube-system/cilium-dxrx9" Feb 9 19:45:20.026017 kubelet[1499]: I0209 19:45:20.025969 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83c15fff-c9cc-432d-b5cc-db681b50fff5-cilium-config-path\") pod \"cilium-dxrx9\" (UID: \"83c15fff-c9cc-432d-b5cc-db681b50fff5\") " pod="kube-system/cilium-dxrx9" Feb 9 19:45:20.026260 kubelet[1499]: I0209 19:45:20.026033 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83c15fff-c9cc-432d-b5cc-db681b50fff5-xtables-lock\") pod \"cilium-dxrx9\" (UID: \"83c15fff-c9cc-432d-b5cc-db681b50fff5\") " pod="kube-system/cilium-dxrx9" Feb 9 19:45:20.026260 kubelet[1499]: I0209 19:45:20.026105 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83c15fff-c9cc-432d-b5cc-db681b50fff5-cilium-run\") pod \"cilium-dxrx9\" (UID: \"83c15fff-c9cc-432d-b5cc-db681b50fff5\") " pod="kube-system/cilium-dxrx9" Feb 9 19:45:20.026260 kubelet[1499]: I0209 19:45:20.026145 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83c15fff-c9cc-432d-b5cc-db681b50fff5-bpf-maps\") pod 
\"cilium-dxrx9\" (UID: \"83c15fff-c9cc-432d-b5cc-db681b50fff5\") " pod="kube-system/cilium-dxrx9" Feb 9 19:45:20.026260 kubelet[1499]: I0209 19:45:20.026162 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83c15fff-c9cc-432d-b5cc-db681b50fff5-cilium-cgroup\") pod \"cilium-dxrx9\" (UID: \"83c15fff-c9cc-432d-b5cc-db681b50fff5\") " pod="kube-system/cilium-dxrx9" Feb 9 19:45:20.026260 kubelet[1499]: I0209 19:45:20.026188 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83c15fff-c9cc-432d-b5cc-db681b50fff5-clustermesh-secrets\") pod \"cilium-dxrx9\" (UID: \"83c15fff-c9cc-432d-b5cc-db681b50fff5\") " pod="kube-system/cilium-dxrx9" Feb 9 19:45:20.026260 kubelet[1499]: I0209 19:45:20.026218 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/83c15fff-c9cc-432d-b5cc-db681b50fff5-cilium-ipsec-secrets\") pod \"cilium-dxrx9\" (UID: \"83c15fff-c9cc-432d-b5cc-db681b50fff5\") " pod="kube-system/cilium-dxrx9" Feb 9 19:45:20.026410 kubelet[1499]: I0209 19:45:20.026256 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2b2q\" (UniqueName: \"kubernetes.io/projected/83c15fff-c9cc-432d-b5cc-db681b50fff5-kube-api-access-b2b2q\") pod \"cilium-dxrx9\" (UID: \"83c15fff-c9cc-432d-b5cc-db681b50fff5\") " pod="kube-system/cilium-dxrx9" Feb 9 19:45:20.026410 kubelet[1499]: I0209 19:45:20.026288 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83c15fff-c9cc-432d-b5cc-db681b50fff5-hubble-tls\") pod \"cilium-dxrx9\" (UID: \"83c15fff-c9cc-432d-b5cc-db681b50fff5\") " pod="kube-system/cilium-dxrx9" Feb 9 
19:45:20.026410 kubelet[1499]: I0209 19:45:20.026306 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83c15fff-c9cc-432d-b5cc-db681b50fff5-lib-modules\") pod \"cilium-dxrx9\" (UID: \"83c15fff-c9cc-432d-b5cc-db681b50fff5\") " pod="kube-system/cilium-dxrx9" Feb 9 19:45:20.026410 kubelet[1499]: I0209 19:45:20.026333 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83c15fff-c9cc-432d-b5cc-db681b50fff5-host-proc-sys-kernel\") pod \"cilium-dxrx9\" (UID: \"83c15fff-c9cc-432d-b5cc-db681b50fff5\") " pod="kube-system/cilium-dxrx9" Feb 9 19:45:20.026410 kubelet[1499]: I0209 19:45:20.026361 1499 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83c15fff-c9cc-432d-b5cc-db681b50fff5-etc-cni-netd\") pod \"cilium-dxrx9\" (UID: \"83c15fff-c9cc-432d-b5cc-db681b50fff5\") " pod="kube-system/cilium-dxrx9" Feb 9 19:45:20.198406 env[1184]: time="2024-02-09T19:45:20.198342776Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:20.200241 env[1184]: time="2024-02-09T19:45:20.200183001Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:20.201897 env[1184]: time="2024-02-09T19:45:20.201873544Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 19:45:20.202360 env[1184]: time="2024-02-09T19:45:20.202338022Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 19:45:20.203678 env[1184]: time="2024-02-09T19:45:20.203657343Z" level=info msg="CreateContainer within sandbox \"64736502c133e2ac349847a674bdf25d2165e8ae356f92be186bba4e595f2b13\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:45:20.213859 env[1184]: time="2024-02-09T19:45:20.213791232Z" level=info msg="CreateContainer within sandbox \"64736502c133e2ac349847a674bdf25d2165e8ae356f92be186bba4e595f2b13\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"10530bd76072c016b118a229b86c567caf93cf3e838d7274d53fd96770f1f706\"" Feb 9 19:45:20.214233 env[1184]: time="2024-02-09T19:45:20.214184015Z" level=info msg="StartContainer for \"10530bd76072c016b118a229b86c567caf93cf3e838d7274d53fd96770f1f706\"" Feb 9 19:45:20.240147 kubelet[1499]: E0209 19:45:20.240107 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:20.240845 env[1184]: time="2024-02-09T19:45:20.240803620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dxrx9,Uid:83c15fff-c9cc-432d-b5cc-db681b50fff5,Namespace:kube-system,Attempt:0,}" Feb 9 19:45:20.496123 env[1184]: time="2024-02-09T19:45:20.496061348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:45:20.496123 env[1184]: time="2024-02-09T19:45:20.496092067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:45:20.496123 env[1184]: time="2024-02-09T19:45:20.496100974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:45:20.496333 env[1184]: time="2024-02-09T19:45:20.496194089Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8efed62932d7e077f6ad4da1ff3362c2a8bc8b1d6b4c8d0041ed69b082d2ddb3 pid=3353 runtime=io.containerd.runc.v2 Feb 9 19:45:20.502729 env[1184]: time="2024-02-09T19:45:20.502673256Z" level=info msg="StartContainer for \"10530bd76072c016b118a229b86c567caf93cf3e838d7274d53fd96770f1f706\" returns successfully" Feb 9 19:45:20.525320 env[1184]: time="2024-02-09T19:45:20.525259504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dxrx9,Uid:83c15fff-c9cc-432d-b5cc-db681b50fff5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8efed62932d7e077f6ad4da1ff3362c2a8bc8b1d6b4c8d0041ed69b082d2ddb3\"" Feb 9 19:45:20.525870 kubelet[1499]: E0209 19:45:20.525847 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:20.527563 env[1184]: time="2024-02-09T19:45:20.527521276Z" level=info msg="CreateContainer within sandbox \"8efed62932d7e077f6ad4da1ff3362c2a8bc8b1d6b4c8d0041ed69b082d2ddb3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:45:20.541352 env[1184]: time="2024-02-09T19:45:20.541299469Z" level=info msg="CreateContainer within sandbox \"8efed62932d7e077f6ad4da1ff3362c2a8bc8b1d6b4c8d0041ed69b082d2ddb3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f0518cfe9cde80f394ced552d25951d278df48391133aa24ed30df688c48dea0\"" Feb 9 19:45:20.541898 env[1184]: time="2024-02-09T19:45:20.541869746Z" level=info msg="StartContainer for 
\"f0518cfe9cde80f394ced552d25951d278df48391133aa24ed30df688c48dea0\"" Feb 9 19:45:20.585302 env[1184]: time="2024-02-09T19:45:20.585241821Z" level=info msg="StartContainer for \"f0518cfe9cde80f394ced552d25951d278df48391133aa24ed30df688c48dea0\" returns successfully" Feb 9 19:45:20.611409 env[1184]: time="2024-02-09T19:45:20.611350232Z" level=info msg="shim disconnected" id=f0518cfe9cde80f394ced552d25951d278df48391133aa24ed30df688c48dea0 Feb 9 19:45:20.611409 env[1184]: time="2024-02-09T19:45:20.611399444Z" level=warning msg="cleaning up after shim disconnected" id=f0518cfe9cde80f394ced552d25951d278df48391133aa24ed30df688c48dea0 namespace=k8s.io Feb 9 19:45:20.611409 env[1184]: time="2024-02-09T19:45:20.611408451Z" level=info msg="cleaning up dead shim" Feb 9 19:45:20.622435 env[1184]: time="2024-02-09T19:45:20.622392637Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3441 runtime=io.containerd.runc.v2\n" Feb 9 19:45:20.695478 kubelet[1499]: E0209 19:45:20.695447 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:45:20.792722 kubelet[1499]: I0209 19:45:20.792633 1499 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a1a7af2a-d634-48e2-8a51-c6c669f2134a path="/var/lib/kubelet/pods/a1a7af2a-d634-48e2-8a51-c6c669f2134a/volumes" Feb 9 19:45:20.917380 kubelet[1499]: E0209 19:45:20.917352 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:20.918240 kubelet[1499]: E0209 19:45:20.918220 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:20.919468 env[1184]: time="2024-02-09T19:45:20.919430174Z" level=info msg="CreateContainer 
within sandbox \"8efed62932d7e077f6ad4da1ff3362c2a8bc8b1d6b4c8d0041ed69b082d2ddb3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:45:20.924848 kubelet[1499]: I0209 19:45:20.924833 1499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-sqxg9" podStartSLOduration=-9.223372032929976e+09 pod.CreationTimestamp="2024-02-09 19:45:17 +0000 UTC" firstStartedPulling="2024-02-09 19:45:18.027425193 +0000 UTC m=+71.741281776" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:45:20.924581212 +0000 UTC m=+74.638437795" watchObservedRunningTime="2024-02-09 19:45:20.924799624 +0000 UTC m=+74.638656208" Feb 9 19:45:20.931145 env[1184]: time="2024-02-09T19:45:20.931095306Z" level=info msg="CreateContainer within sandbox \"8efed62932d7e077f6ad4da1ff3362c2a8bc8b1d6b4c8d0041ed69b082d2ddb3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0237cd5b151c24720e1312d37a890fd3d8aff82526da1b5370075607e0d00f7b\"" Feb 9 19:45:20.931507 env[1184]: time="2024-02-09T19:45:20.931474782Z" level=info msg="StartContainer for \"0237cd5b151c24720e1312d37a890fd3d8aff82526da1b5370075607e0d00f7b\"" Feb 9 19:45:20.974003 env[1184]: time="2024-02-09T19:45:20.973956657Z" level=info msg="StartContainer for \"0237cd5b151c24720e1312d37a890fd3d8aff82526da1b5370075607e0d00f7b\" returns successfully" Feb 9 19:45:20.991829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0237cd5b151c24720e1312d37a890fd3d8aff82526da1b5370075607e0d00f7b-rootfs.mount: Deactivated successfully. 
Feb 9 19:45:20.995758 env[1184]: time="2024-02-09T19:45:20.995706465Z" level=info msg="shim disconnected" id=0237cd5b151c24720e1312d37a890fd3d8aff82526da1b5370075607e0d00f7b
Feb 9 19:45:20.995758 env[1184]: time="2024-02-09T19:45:20.995750437Z" level=warning msg="cleaning up after shim disconnected" id=0237cd5b151c24720e1312d37a890fd3d8aff82526da1b5370075607e0d00f7b namespace=k8s.io
Feb 9 19:45:20.995758 env[1184]: time="2024-02-09T19:45:20.995758773Z" level=info msg="cleaning up dead shim"
Feb 9 19:45:21.001990 env[1184]: time="2024-02-09T19:45:21.001950778Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3503 runtime=io.containerd.runc.v2\n"
Feb 9 19:45:21.041385 kubelet[1499]: I0209 19:45:21.041349 1499 setters.go:548] "Node became not ready" node="10.0.0.52" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:45:21.041290769 +0000 UTC m=+74.755147352 LastTransitionTime:2024-02-09 19:45:21.041290769 +0000 UTC m=+74.755147352 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 19:45:21.696055 kubelet[1499]: E0209 19:45:21.695994 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:45:21.706787 kubelet[1499]: E0209 19:45:21.706756 1499 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 19:45:21.921268 kubelet[1499]: E0209 19:45:21.921243 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:21.921268 kubelet[1499]: E0209 19:45:21.921274 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:21.922862 env[1184]: time="2024-02-09T19:45:21.922821945Z" level=info msg="CreateContainer within sandbox \"8efed62932d7e077f6ad4da1ff3362c2a8bc8b1d6b4c8d0041ed69b082d2ddb3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 19:45:21.953790 env[1184]: time="2024-02-09T19:45:21.953681881Z" level=info msg="CreateContainer within sandbox \"8efed62932d7e077f6ad4da1ff3362c2a8bc8b1d6b4c8d0041ed69b082d2ddb3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e963063b223c7e28f20e7b0202fffe93abd476c36308ba514f41aa06df5b5559\""
Feb 9 19:45:21.954297 env[1184]: time="2024-02-09T19:45:21.954254132Z" level=info msg="StartContainer for \"e963063b223c7e28f20e7b0202fffe93abd476c36308ba514f41aa06df5b5559\""
Feb 9 19:45:21.994123 env[1184]: time="2024-02-09T19:45:21.994086063Z" level=info msg="StartContainer for \"e963063b223c7e28f20e7b0202fffe93abd476c36308ba514f41aa06df5b5559\" returns successfully"
Feb 9 19:45:22.014945 env[1184]: time="2024-02-09T19:45:22.014898172Z" level=info msg="shim disconnected" id=e963063b223c7e28f20e7b0202fffe93abd476c36308ba514f41aa06df5b5559
Feb 9 19:45:22.014945 env[1184]: time="2024-02-09T19:45:22.014946834Z" level=warning msg="cleaning up after shim disconnected" id=e963063b223c7e28f20e7b0202fffe93abd476c36308ba514f41aa06df5b5559 namespace=k8s.io
Feb 9 19:45:22.015156 env[1184]: time="2024-02-09T19:45:22.014957384Z" level=info msg="cleaning up dead shim"
Feb 9 19:45:22.021296 env[1184]: time="2024-02-09T19:45:22.021244275Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3559 runtime=io.containerd.runc.v2\n"
Feb 9 19:45:22.696754 kubelet[1499]: E0209 19:45:22.696672 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:45:22.924037 kubelet[1499]: E0209 19:45:22.924013 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:22.925630 env[1184]: time="2024-02-09T19:45:22.925593867Z" level=info msg="CreateContainer within sandbox \"8efed62932d7e077f6ad4da1ff3362c2a8bc8b1d6b4c8d0041ed69b082d2ddb3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 19:45:22.936618 env[1184]: time="2024-02-09T19:45:22.936572568Z" level=info msg="CreateContainer within sandbox \"8efed62932d7e077f6ad4da1ff3362c2a8bc8b1d6b4c8d0041ed69b082d2ddb3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"de4461749bad2a22975032424013e6aca86792890c076801a9034d29bf0f84b5\""
Feb 9 19:45:22.937056 env[1184]: time="2024-02-09T19:45:22.936962865Z" level=info msg="StartContainer for \"de4461749bad2a22975032424013e6aca86792890c076801a9034d29bf0f84b5\""
Feb 9 19:45:22.947361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e963063b223c7e28f20e7b0202fffe93abd476c36308ba514f41aa06df5b5559-rootfs.mount: Deactivated successfully.
Feb 9 19:45:22.981421 env[1184]: time="2024-02-09T19:45:22.981377139Z" level=info msg="StartContainer for \"de4461749bad2a22975032424013e6aca86792890c076801a9034d29bf0f84b5\" returns successfully"
Feb 9 19:45:22.992969 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de4461749bad2a22975032424013e6aca86792890c076801a9034d29bf0f84b5-rootfs.mount: Deactivated successfully.
Feb 9 19:45:22.996257 env[1184]: time="2024-02-09T19:45:22.996213915Z" level=info msg="shim disconnected" id=de4461749bad2a22975032424013e6aca86792890c076801a9034d29bf0f84b5
Feb 9 19:45:22.996346 env[1184]: time="2024-02-09T19:45:22.996261925Z" level=warning msg="cleaning up after shim disconnected" id=de4461749bad2a22975032424013e6aca86792890c076801a9034d29bf0f84b5 namespace=k8s.io
Feb 9 19:45:22.996346 env[1184]: time="2024-02-09T19:45:22.996270461Z" level=info msg="cleaning up dead shim"
Feb 9 19:45:23.003089 env[1184]: time="2024-02-09T19:45:23.003044201Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3613 runtime=io.containerd.runc.v2\n"
Feb 9 19:45:23.696954 kubelet[1499]: E0209 19:45:23.696903 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:45:23.927409 kubelet[1499]: E0209 19:45:23.927385 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:23.929137 env[1184]: time="2024-02-09T19:45:23.929104625Z" level=info msg="CreateContainer within sandbox \"8efed62932d7e077f6ad4da1ff3362c2a8bc8b1d6b4c8d0041ed69b082d2ddb3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 19:45:23.941622 env[1184]: time="2024-02-09T19:45:23.941570872Z" level=info msg="CreateContainer within sandbox \"8efed62932d7e077f6ad4da1ff3362c2a8bc8b1d6b4c8d0041ed69b082d2ddb3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b9afce384cdd72de09ff546ccf984365f7d2bfa135336c6644b769db9a6c73f0\""
Feb 9 19:45:23.942075 env[1184]: time="2024-02-09T19:45:23.942052641Z" level=info msg="StartContainer for \"b9afce384cdd72de09ff546ccf984365f7d2bfa135336c6644b769db9a6c73f0\""
Feb 9 19:45:23.980066 env[1184]: time="2024-02-09T19:45:23.979759072Z" level=info msg="StartContainer for \"b9afce384cdd72de09ff546ccf984365f7d2bfa135336c6644b769db9a6c73f0\" returns successfully"
Feb 9 19:45:24.230589 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 9 19:45:24.697621 kubelet[1499]: E0209 19:45:24.697553 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:45:24.931121 kubelet[1499]: E0209 19:45:24.931094 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:24.942213 kubelet[1499]: I0209 19:45:24.942192 1499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dxrx9" podStartSLOduration=5.942166704 pod.CreationTimestamp="2024-02-09 19:45:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:45:24.941910822 +0000 UTC m=+78.655767405" watchObservedRunningTime="2024-02-09 19:45:24.942166704 +0000 UTC m=+78.656023277"
Feb 9 19:45:25.698686 kubelet[1499]: E0209 19:45:25.698618 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:45:25.933316 kubelet[1499]: E0209 19:45:25.933278 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:26.649310 kubelet[1499]: E0209 19:45:26.649266 1499 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:45:26.699704 kubelet[1499]: E0209 19:45:26.699665 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:45:26.784672 systemd-networkd[1068]: lxc_health: Link UP
Feb 9 19:45:26.792058 systemd-networkd[1068]: lxc_health: Gained carrier
Feb 9 19:45:26.792574 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:45:26.935438 kubelet[1499]: E0209 19:45:26.935322 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:27.700459 kubelet[1499]: E0209 19:45:27.700343 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:45:28.244209 kubelet[1499]: E0209 19:45:28.244023 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:28.701102 kubelet[1499]: E0209 19:45:28.701037 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:45:28.753812 systemd-networkd[1068]: lxc_health: Gained IPv6LL
Feb 9 19:45:28.938005 kubelet[1499]: E0209 19:45:28.937973 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:29.701352 kubelet[1499]: E0209 19:45:29.701294 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:45:29.939620 kubelet[1499]: E0209 19:45:29.939591 1499 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:30.334088 systemd[1]: run-containerd-runc-k8s.io-b9afce384cdd72de09ff546ccf984365f7d2bfa135336c6644b769db9a6c73f0-runc.I3Kcpb.mount: Deactivated successfully.
Feb 9 19:45:30.701877 kubelet[1499]: E0209 19:45:30.701719 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:45:31.702510 kubelet[1499]: E0209 19:45:31.702459 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:45:32.702647 kubelet[1499]: E0209 19:45:32.702590 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:45:33.703552 kubelet[1499]: E0209 19:45:33.703477 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:45:34.703646 kubelet[1499]: E0209 19:45:34.703601 1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"