Apr 12 18:51:59.798270 kernel: Linux version 5.15.154-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Apr 12 17:19:00 -00 2024
Apr 12 18:51:59.798298 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf
Apr 12 18:51:59.798312 kernel: BIOS-provided physical RAM map:
Apr 12 18:51:59.798321 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 12 18:51:59.798328 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 12 18:51:59.798336 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 12 18:51:59.798345 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 12 18:51:59.798353 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 12 18:51:59.798361 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 12 18:51:59.798371 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 12 18:51:59.798379 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Apr 12 18:51:59.798387 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Apr 12 18:51:59.798395 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 12 18:51:59.798402 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 12 18:51:59.798410 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 12 18:51:59.798420 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 12 18:51:59.798427 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 12 18:51:59.798435 kernel: NX (Execute Disable) protection: active
Apr 12 18:51:59.798444 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable
Apr 12 18:51:59.798452 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable
Apr 12 18:51:59.798460 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable
Apr 12 18:51:59.798467 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable
Apr 12 18:51:59.798478 kernel: extended physical RAM map:
Apr 12 18:51:59.798486 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 12 18:51:59.798493 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 12 18:51:59.798503 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 12 18:51:59.798511 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 12 18:51:59.798520 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 12 18:51:59.798528 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 12 18:51:59.798537 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 12 18:51:59.798545 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b1aa017] usable
Apr 12 18:51:59.798553 kernel: reserve setup_data: [mem 0x000000009b1aa018-0x000000009b1e6e57] usable
Apr 12 18:51:59.798562 kernel: reserve setup_data: [mem 0x000000009b1e6e58-0x000000009b3f7017] usable
Apr 12 18:51:59.798570 kernel: reserve setup_data: [mem 0x000000009b3f7018-0x000000009b400c57] usable
Apr 12 18:51:59.798577 kernel: reserve setup_data: [mem 0x000000009b400c58-0x000000009c8eefff] usable
Apr 12 18:51:59.798585 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Apr 12 18:51:59.798595 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 12 18:51:59.798602 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 12 18:51:59.798610 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 12 18:51:59.798617 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 12 18:51:59.798628 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 12 18:51:59.798636 kernel: efi: EFI v2.70 by EDK II
Apr 12 18:51:59.798645 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018
Apr 12 18:51:59.798655 kernel: random: crng init done
Apr 12 18:51:59.798664 kernel: SMBIOS 2.8 present.
Apr 12 18:51:59.798673 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Apr 12 18:51:59.798682 kernel: Hypervisor detected: KVM
Apr 12 18:51:59.798691 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 12 18:51:59.798700 kernel: kvm-clock: cpu 0, msr 14191001, primary cpu clock
Apr 12 18:51:59.798709 kernel: kvm-clock: using sched offset of 7710291885 cycles
Apr 12 18:51:59.798719 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 12 18:51:59.798728 kernel: tsc: Detected 2794.750 MHz processor
Apr 12 18:51:59.798744 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 12 18:51:59.798770 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 12 18:51:59.798779 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Apr 12 18:51:59.798789 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 12 18:51:59.798797 kernel: Using GB pages for direct mapping
Apr 12 18:51:59.798805 kernel: Secure boot disabled
Apr 12 18:51:59.798814 kernel: ACPI: Early table checksum verification disabled
Apr 12 18:51:59.798824 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 12 18:51:59.798835 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Apr 12 18:51:59.798849 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:51:59.798861 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:51:59.798872 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 12 18:51:59.798883 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:51:59.798895 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:51:59.798907 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:51:59.798918 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 12 18:51:59.798930 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Apr 12 18:51:59.798946 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Apr 12 18:51:59.798960 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 12 18:51:59.798986 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Apr 12 18:51:59.798997 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Apr 12 18:51:59.799008 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Apr 12 18:51:59.799020 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Apr 12 18:51:59.799029 kernel: No NUMA configuration found
Apr 12 18:51:59.799039 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Apr 12 18:51:59.799048 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Apr 12 18:51:59.799057 kernel: Zone ranges:
Apr 12 18:51:59.799069 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 12 18:51:59.799077 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Apr 12 18:51:59.799086 kernel: Normal empty
Apr 12 18:51:59.799097 kernel: Movable zone start for each node
Apr 12 18:51:59.799106 kernel: Early memory node ranges
Apr 12 18:51:59.799114 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 12 18:51:59.799123 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 12 18:51:59.799132 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 12 18:51:59.799141 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Apr 12 18:51:59.799153 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Apr 12 18:51:59.799162 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Apr 12 18:51:59.799170 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Apr 12 18:51:59.799179 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 12 18:51:59.799187 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 12 18:51:59.799196 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 12 18:51:59.799204 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 12 18:51:59.799213 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Apr 12 18:51:59.799222 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 12 18:51:59.799234 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Apr 12 18:51:59.799242 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 12 18:51:59.799251 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 12 18:51:59.799260 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 12 18:51:59.799269 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 12 18:51:59.799277 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 12 18:51:59.799286 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 12 18:51:59.799295 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 12 18:51:59.799304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 12 18:51:59.799314 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 12 18:51:59.799323 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 12 18:51:59.799331 kernel: TSC deadline timer available
Apr 12 18:51:59.799340 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 12 18:51:59.799349 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 12 18:51:59.799357 kernel: kvm-guest: setup PV sched yield
Apr 12 18:51:59.799371 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Apr 12 18:51:59.799380 kernel: Booting paravirtualized kernel on KVM
Apr 12 18:51:59.799390 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 12 18:51:59.799401 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Apr 12 18:51:59.799413 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Apr 12 18:51:59.799423 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Apr 12 18:51:59.799439 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 12 18:51:59.799450 kernel: kvm-guest: setup async PF for cpu 0
Apr 12 18:51:59.799460 kernel: kvm-guest: stealtime: cpu 0, msr 9ae1c0c0
Apr 12 18:51:59.799470 kernel: kvm-guest: PV spinlocks enabled
Apr 12 18:51:59.799480 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 12 18:51:59.799490 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Apr 12 18:51:59.799500 kernel: Policy zone: DMA32
Apr 12 18:51:59.799511 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf
Apr 12 18:51:59.799521 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 12 18:51:59.799533 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 12 18:51:59.799543 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 12 18:51:59.799553 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 12 18:51:59.799563 kernel: Memory: 2398372K/2567000K available (12294K kernel code, 2275K rwdata, 13708K rodata, 47440K init, 4148K bss, 168368K reserved, 0K cma-reserved)
Apr 12 18:51:59.799575 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 12 18:51:59.799585 kernel: ftrace: allocating 34508 entries in 135 pages
Apr 12 18:51:59.799601 kernel: ftrace: allocated 135 pages with 4 groups
Apr 12 18:51:59.799611 kernel: rcu: Hierarchical RCU implementation.
Apr 12 18:51:59.799622 kernel: rcu: RCU event tracing is enabled.
Apr 12 18:51:59.799632 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 12 18:51:59.799642 kernel: Rude variant of Tasks RCU enabled.
Apr 12 18:51:59.799652 kernel: Tracing variant of Tasks RCU enabled.
Apr 12 18:51:59.799662 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 12 18:51:59.799675 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 12 18:51:59.799684 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 12 18:51:59.799694 kernel: Console: colour dummy device 80x25
Apr 12 18:51:59.799704 kernel: printk: console [ttyS0] enabled
Apr 12 18:51:59.799714 kernel: ACPI: Core revision 20210730
Apr 12 18:51:59.799724 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 12 18:51:59.799734 kernel: APIC: Switch to symmetric I/O mode setup
Apr 12 18:51:59.799743 kernel: x2apic enabled
Apr 12 18:51:59.799766 kernel: Switched APIC routing to physical x2apic.
Apr 12 18:51:59.799776 kernel: kvm-guest: setup PV IPIs
Apr 12 18:51:59.799788 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 12 18:51:59.799798 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 12 18:51:59.799808 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Apr 12 18:51:59.799818 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 12 18:51:59.799828 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 12 18:51:59.799839 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 12 18:51:59.799851 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 12 18:51:59.799863 kernel: Spectre V2 : Mitigation: Retpolines
Apr 12 18:51:59.799877 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 12 18:51:59.799890 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 12 18:51:59.799902 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Apr 12 18:51:59.799914 kernel: RETBleed: Mitigation: untrained return thunk
Apr 12 18:51:59.799930 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 12 18:51:59.799943 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Apr 12 18:51:59.799952 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 12 18:51:59.799976 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 12 18:51:59.799988 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 12 18:51:59.800000 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 12 18:51:59.800010 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 12 18:51:59.800019 kernel: Freeing SMP alternatives memory: 32K
Apr 12 18:51:59.800029 kernel: pid_max: default: 32768 minimum: 301
Apr 12 18:51:59.800039 kernel: LSM: Security Framework initializing
Apr 12 18:51:59.800049 kernel: SELinux: Initializing.
Apr 12 18:51:59.800059 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 12 18:51:59.800069 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 12 18:51:59.800079 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Apr 12 18:51:59.800091 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 12 18:51:59.800100 kernel: ... version: 0
Apr 12 18:51:59.800110 kernel: ... bit width: 48
Apr 12 18:51:59.800120 kernel: ... generic registers: 6
Apr 12 18:51:59.800130 kernel: ... value mask: 0000ffffffffffff
Apr 12 18:51:59.800139 kernel: ... max period: 00007fffffffffff
Apr 12 18:51:59.800149 kernel: ... fixed-purpose events: 0
Apr 12 18:51:59.800159 kernel: ... event mask: 000000000000003f
Apr 12 18:51:59.800168 kernel: signal: max sigframe size: 1776
Apr 12 18:51:59.800180 kernel: rcu: Hierarchical SRCU implementation.
Apr 12 18:51:59.800190 kernel: smp: Bringing up secondary CPUs ...
Apr 12 18:51:59.800199 kernel: x86: Booting SMP configuration:
Apr 12 18:51:59.800209 kernel: .... node #0, CPUs: #1
Apr 12 18:51:59.800219 kernel: kvm-clock: cpu 1, msr 14191041, secondary cpu clock
Apr 12 18:51:59.800229 kernel: kvm-guest: setup async PF for cpu 1
Apr 12 18:51:59.800238 kernel: kvm-guest: stealtime: cpu 1, msr 9ae9c0c0
Apr 12 18:51:59.800248 kernel: #2
Apr 12 18:51:59.800258 kernel: kvm-clock: cpu 2, msr 14191081, secondary cpu clock
Apr 12 18:51:59.800268 kernel: kvm-guest: setup async PF for cpu 2
Apr 12 18:51:59.800279 kernel: kvm-guest: stealtime: cpu 2, msr 9af1c0c0
Apr 12 18:51:59.800289 kernel: #3
Apr 12 18:51:59.800299 kernel: kvm-clock: cpu 3, msr 141910c1, secondary cpu clock
Apr 12 18:51:59.800308 kernel: kvm-guest: setup async PF for cpu 3
Apr 12 18:51:59.800318 kernel: kvm-guest: stealtime: cpu 3, msr 9af9c0c0
Apr 12 18:51:59.800327 kernel: smp: Brought up 1 node, 4 CPUs
Apr 12 18:51:59.800337 kernel: smpboot: Max logical packages: 1
Apr 12 18:51:59.800347 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Apr 12 18:51:59.800357 kernel: devtmpfs: initialized
Apr 12 18:51:59.800368 kernel: x86/mm: Memory block size: 128MB
Apr 12 18:51:59.800378 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 12 18:51:59.800388 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 12 18:51:59.800398 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Apr 12 18:51:59.800410 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 12 18:51:59.800420 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 12 18:51:59.800430 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 12 18:51:59.800440 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 12 18:51:59.800450 kernel: pinctrl core: initialized pinctrl subsystem
Apr 12 18:51:59.800461 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 12 18:51:59.800470 kernel: audit: initializing netlink subsys (disabled)
Apr 12 18:51:59.800480 kernel: audit: type=2000 audit(1712947918.143:1): state=initialized audit_enabled=0 res=1
Apr 12 18:51:59.800490 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 12 18:51:59.800500 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 12 18:51:59.800510 kernel: cpuidle: using governor menu
Apr 12 18:51:59.800519 kernel: ACPI: bus type PCI registered
Apr 12 18:51:59.800529 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 12 18:51:59.800539 kernel: dca service started, version 1.12.1
Apr 12 18:51:59.800550 kernel: PCI: Using configuration type 1 for base access
Apr 12 18:51:59.800560 kernel: PCI: Using configuration type 1 for extended access
Apr 12 18:51:59.800570 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 12 18:51:59.800580 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Apr 12 18:51:59.800590 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Apr 12 18:51:59.800599 kernel: ACPI: Added _OSI(Module Device)
Apr 12 18:51:59.800608 kernel: ACPI: Added _OSI(Processor Device)
Apr 12 18:51:59.800618 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 12 18:51:59.800628 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 12 18:51:59.800639 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Apr 12 18:51:59.800649 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Apr 12 18:51:59.800659 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Apr 12 18:51:59.800669 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 12 18:51:59.800679 kernel: ACPI: Interpreter enabled
Apr 12 18:51:59.800688 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 12 18:51:59.800697 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 12 18:51:59.800706 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 12 18:51:59.800715 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 12 18:51:59.800726 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 12 18:51:59.800945 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 12 18:51:59.800962 kernel: acpiphp: Slot [3] registered
Apr 12 18:51:59.800980 kernel: acpiphp: Slot [4] registered
Apr 12 18:51:59.800989 kernel: acpiphp: Slot [5] registered
Apr 12 18:51:59.800998 kernel: acpiphp: Slot [6] registered
Apr 12 18:51:59.801007 kernel: acpiphp: Slot [7] registered
Apr 12 18:51:59.801016 kernel: acpiphp: Slot [8] registered
Apr 12 18:51:59.801028 kernel: acpiphp: Slot [9] registered
Apr 12 18:51:59.801038 kernel: acpiphp: Slot [10] registered
Apr 12 18:51:59.801047 kernel: acpiphp: Slot [11] registered
Apr 12 18:51:59.801057 kernel: acpiphp: Slot [12] registered
Apr 12 18:51:59.801066 kernel: acpiphp: Slot [13] registered
Apr 12 18:51:59.801075 kernel: acpiphp: Slot [14] registered
Apr 12 18:51:59.801085 kernel: acpiphp: Slot [15] registered
Apr 12 18:51:59.801094 kernel: acpiphp: Slot [16] registered
Apr 12 18:51:59.801103 kernel: acpiphp: Slot [17] registered
Apr 12 18:51:59.801112 kernel: acpiphp: Slot [18] registered
Apr 12 18:51:59.801124 kernel: acpiphp: Slot [19] registered
Apr 12 18:51:59.801133 kernel: acpiphp: Slot [20] registered
Apr 12 18:51:59.801143 kernel: acpiphp: Slot [21] registered
Apr 12 18:51:59.801152 kernel: acpiphp: Slot [22] registered
Apr 12 18:51:59.801161 kernel: acpiphp: Slot [23] registered
Apr 12 18:51:59.801171 kernel: acpiphp: Slot [24] registered
Apr 12 18:51:59.801180 kernel: acpiphp: Slot [25] registered
Apr 12 18:51:59.801190 kernel: acpiphp: Slot [26] registered
Apr 12 18:51:59.801200 kernel: acpiphp: Slot [27] registered
Apr 12 18:51:59.801211 kernel: acpiphp: Slot [28] registered
Apr 12 18:51:59.801221 kernel: acpiphp: Slot [29] registered
Apr 12 18:51:59.801230 kernel: acpiphp: Slot [30] registered
Apr 12 18:51:59.801240 kernel: acpiphp: Slot [31] registered
Apr 12 18:51:59.801250 kernel: PCI host bridge to bus 0000:00
Apr 12 18:51:59.801366 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 12 18:51:59.801453 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 12 18:51:59.801536 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 12 18:51:59.801644 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Apr 12 18:51:59.801728 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Apr 12 18:51:59.801844 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 12 18:51:59.802007 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 12 18:51:59.802126 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 12 18:51:59.802236 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Apr 12 18:51:59.802339 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Apr 12 18:51:59.802434 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Apr 12 18:51:59.802528 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Apr 12 18:51:59.802623 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Apr 12 18:51:59.802717 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Apr 12 18:51:59.802872 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Apr 12 18:51:59.803013 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 12 18:51:59.803118 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Apr 12 18:51:59.803233 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Apr 12 18:51:59.803332 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 12 18:51:59.803427 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Apr 12 18:51:59.803521 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 12 18:51:59.803614 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Apr 12 18:51:59.803708 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 12 18:51:59.803844 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Apr 12 18:51:59.803949 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf]
Apr 12 18:51:59.804070 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 12 18:51:59.804168 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Apr 12 18:51:59.804281 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Apr 12 18:51:59.804378 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Apr 12 18:51:59.804472 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 12 18:51:59.804572 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Apr 12 18:51:59.804689 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Apr 12 18:51:59.804802 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Apr 12 18:51:59.804899 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Apr 12 18:51:59.805012 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Apr 12 18:51:59.805107 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 12 18:51:59.805120 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 12 18:51:59.805134 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 12 18:51:59.805144 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 12 18:51:59.805154 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 12 18:51:59.805164 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 12 18:51:59.805173 kernel: iommu: Default domain type: Translated
Apr 12 18:51:59.805183 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 12 18:51:59.805279 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Apr 12 18:51:59.805381 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 12 18:51:59.805491 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Apr 12 18:51:59.805509 kernel: vgaarb: loaded
Apr 12 18:51:59.805519 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 12 18:51:59.805529 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 12 18:51:59.805539 kernel: PTP clock support registered
Apr 12 18:51:59.805548 kernel: Registered efivars operations
Apr 12 18:51:59.805557 kernel: PCI: Using ACPI for IRQ routing
Apr 12 18:51:59.805567 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 12 18:51:59.805577 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 12 18:51:59.805586 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Apr 12 18:51:59.805598 kernel: e820: reserve RAM buffer [mem 0x9b1aa018-0x9bffffff]
Apr 12 18:51:59.805607 kernel: e820: reserve RAM buffer [mem 0x9b3f7018-0x9bffffff]
Apr 12 18:51:59.805616 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Apr 12 18:51:59.805625 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Apr 12 18:51:59.805634 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 12 18:51:59.805644 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 12 18:51:59.805653 kernel: clocksource: Switched to clocksource kvm-clock
Apr 12 18:51:59.805663 kernel: VFS: Disk quotas dquot_6.6.0
Apr 12 18:51:59.805675 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 12 18:51:59.805684 kernel: pnp: PnP ACPI init
Apr 12 18:51:59.805842 kernel: pnp 00:02: [dma 2]
Apr 12 18:51:59.805865 kernel: pnp: PnP ACPI: found 6 devices
Apr 12 18:51:59.805878 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 12 18:51:59.805891 kernel: NET: Registered PF_INET protocol family
Apr 12 18:51:59.805903 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 12 18:51:59.805915 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 12 18:51:59.805927 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 12 18:51:59.805944 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 12 18:51:59.805955 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Apr 12 18:51:59.805974 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 12 18:51:59.805984 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 12 18:51:59.805993 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 12 18:51:59.806002 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 12 18:51:59.806011 kernel: NET: Registered PF_XDP protocol family
Apr 12 18:51:59.806132 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 12 18:51:59.806281 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 12 18:51:59.806432 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 12 18:51:59.806579 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 12 18:51:59.806726 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 12 18:51:59.806897 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Apr 12 18:51:59.811134 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Apr 12 18:51:59.811294 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Apr 12 18:51:59.811440 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 12 18:51:59.811601 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Apr 12 18:51:59.811618 kernel: PCI: CLS 0 bytes, default 64
Apr 12 18:51:59.811629 kernel: Initialise system trusted keyrings
Apr 12 18:51:59.811640 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 12 18:51:59.811650 kernel: Key type asymmetric registered
Apr 12 18:51:59.811678 kernel: Asymmetric key parser 'x509' registered
Apr 12 18:51:59.811689 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Apr 12 18:51:59.811699 kernel: io scheduler mq-deadline registered
Apr 12 18:51:59.811718 kernel: io scheduler kyber registered
Apr 12 18:51:59.811741 kernel: io scheduler bfq registered
Apr 12 18:51:59.811770 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 12 18:51:59.811781 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Apr 12 18:51:59.811790 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Apr 12 18:51:59.811800 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Apr 12 18:51:59.811810 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 12 18:51:59.811820 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 12 18:51:59.811849 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 12 18:51:59.811866 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 12 18:51:59.811878 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 12 18:51:59.814041 kernel: rtc_cmos 00:05: RTC can wake from S4
Apr 12 18:51:59.814067 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 12 18:51:59.814162 kernel: rtc_cmos 00:05: registered as rtc0
Apr 12 18:51:59.814257 kernel: rtc_cmos 00:05: setting system clock to 2024-04-12T18:51:58 UTC (1712947918)
Apr 12 18:51:59.814388 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 12 18:51:59.814404 kernel: efifb: probing for efifb
Apr 12 18:51:59.814415 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 12 18:51:59.814439 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 12 18:51:59.814450 kernel: efifb: scrolling: redraw
Apr 12 18:51:59.814460 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 12 18:51:59.814470 kernel: Console: switching to colour frame buffer device 160x50
Apr 12 18:51:59.814484 kernel: fb0: EFI VGA frame buffer device
Apr 12 18:51:59.814497 kernel: pstore: Registered efi as persistent store backend
Apr 12 18:51:59.814521 kernel: NET: Registered PF_INET6 protocol family
Apr 12 18:51:59.814531 kernel: Segment Routing with IPv6
Apr 12 18:51:59.814541 kernel: In-situ OAM (IOAM) with IPv6
Apr 12 18:51:59.814552 kernel: NET: Registered PF_PACKET protocol family
Apr 12 18:51:59.814562 kernel: Key type dns_resolver registered
Apr 12 18:51:59.814572 kernel: IPI shorthand broadcast: enabled
Apr 12 18:51:59.814606 kernel: sched_clock: Marking stable (786368056, 138350151)->(1005694968, -80976761)
Apr 12 18:51:59.814620 kernel: registered taskstats version 1
Apr 12 18:51:59.814631 kernel: Loading compiled-in X.509 certificates
Apr 12 18:51:59.814640 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.154-flatcar: 1fa140a38fc6bd27c8b56127e4d1eb4f665c7ec4'
Apr 12 18:51:59.814664 kernel: Key type .fscrypt registered
Apr 12 18:51:59.814675 kernel: Key type fscrypt-provisioning registered
Apr 12 18:51:59.814686 kernel: pstore: Using crash dump compression: deflate
Apr 12 18:51:59.814696 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 12 18:51:59.814714 kernel: ima: Allocated hash algorithm: sha1 Apr 12 18:51:59.814730 kernel: ima: No architecture policies found Apr 12 18:51:59.814741 kernel: Freeing unused kernel image (initmem) memory: 47440K Apr 12 18:51:59.814800 kernel: Write protecting the kernel read-only data: 28672k Apr 12 18:51:59.814812 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Apr 12 18:51:59.814823 kernel: Freeing unused kernel image (rodata/data gap) memory: 628K Apr 12 18:51:59.814834 kernel: Run /init as init process Apr 12 18:51:59.814844 kernel: with arguments: Apr 12 18:51:59.814855 kernel: /init Apr 12 18:51:59.814865 kernel: with environment: Apr 12 18:51:59.814877 kernel: HOME=/ Apr 12 18:51:59.814887 kernel: TERM=linux Apr 12 18:51:59.814903 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 12 18:51:59.814916 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Apr 12 18:51:59.814930 systemd[1]: Detected virtualization kvm. Apr 12 18:51:59.814941 systemd[1]: Detected architecture x86-64. Apr 12 18:51:59.814952 systemd[1]: Running in initrd. Apr 12 18:51:59.814963 systemd[1]: No hostname configured, using default hostname. Apr 12 18:51:59.814985 systemd[1]: Hostname set to . Apr 12 18:51:59.814998 systemd[1]: Initializing machine ID from VM UUID. Apr 12 18:51:59.815009 systemd[1]: Queued start job for default target initrd.target. Apr 12 18:51:59.815020 systemd[1]: Started systemd-ask-password-console.path. Apr 12 18:51:59.815030 systemd[1]: Reached target cryptsetup.target. Apr 12 18:51:59.815040 systemd[1]: Reached target paths.target. Apr 12 18:51:59.815050 systemd[1]: Reached target slices.target. Apr 12 18:51:59.815061 systemd[1]: Reached target swap.target. 
Apr 12 18:51:59.815072 systemd[1]: Reached target timers.target. Apr 12 18:51:59.815085 systemd[1]: Listening on iscsid.socket. Apr 12 18:51:59.815096 systemd[1]: Listening on iscsiuio.socket. Apr 12 18:51:59.815106 systemd[1]: Listening on systemd-journald-audit.socket. Apr 12 18:51:59.815116 systemd[1]: Listening on systemd-journald-dev-log.socket. Apr 12 18:51:59.815127 systemd[1]: Listening on systemd-journald.socket. Apr 12 18:51:59.815138 systemd[1]: Listening on systemd-networkd.socket. Apr 12 18:51:59.815149 systemd[1]: Listening on systemd-udevd-control.socket. Apr 12 18:51:59.815160 systemd[1]: Listening on systemd-udevd-kernel.socket. Apr 12 18:51:59.815173 systemd[1]: Reached target sockets.target. Apr 12 18:51:59.815183 systemd[1]: Starting kmod-static-nodes.service... Apr 12 18:51:59.815193 systemd[1]: Finished network-cleanup.service. Apr 12 18:51:59.815204 systemd[1]: Starting systemd-fsck-usr.service... Apr 12 18:51:59.815214 systemd[1]: Starting systemd-journald.service... Apr 12 18:51:59.815225 systemd[1]: Starting systemd-modules-load.service... Apr 12 18:51:59.815237 systemd[1]: Starting systemd-resolved.service... Apr 12 18:51:59.815247 systemd[1]: Starting systemd-vconsole-setup.service... Apr 12 18:51:59.815259 systemd[1]: Finished kmod-static-nodes.service. Apr 12 18:51:59.815273 systemd[1]: Finished systemd-fsck-usr.service. Apr 12 18:51:59.815284 systemd[1]: Finished systemd-vconsole-setup.service. Apr 12 18:51:59.815295 systemd[1]: Starting dracut-cmdline-ask.service... Apr 12 18:51:59.815306 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Apr 12 18:51:59.815316 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Apr 12 18:51:59.815328 kernel: audit: type=1130 audit(1712947919.797:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:51:59.815339 systemd[1]: Finished dracut-cmdline-ask.service. Apr 12 18:51:59.815353 systemd-journald[197]: Journal started Apr 12 18:51:59.815429 systemd-journald[197]: Runtime Journal (/run/log/journal/609f72796f724ce7b35417c74a440cb7) is 6.0M, max 48.4M, 42.4M free. Apr 12 18:51:59.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:51:59.773838 systemd-modules-load[198]: Inserted module 'overlay' Apr 12 18:51:59.825791 systemd[1]: Starting dracut-cmdline.service... Apr 12 18:51:59.825822 kernel: audit: type=1130 audit(1712947919.813:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:51:59.825846 systemd[1]: Started systemd-journald.service. Apr 12 18:51:59.825860 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 12 18:51:59.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:51:59.826870 systemd-resolved[199]: Positive Trust Anchors: Apr 12 18:51:59.827571 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 12 18:51:59.834742 kernel: audit: type=1130 audit(1712947919.827:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:51:59.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Apr 12 18:51:59.827612 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Apr 12 18:51:59.836485 systemd-resolved[199]: Defaulting to hostname 'linux'. Apr 12 18:51:59.847452 kernel: audit: type=1130 audit(1712947919.841:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:51:59.847482 kernel: Bridge firewalling registered Apr 12 18:51:59.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:51:59.847548 dracut-cmdline[218]: dracut-dracut-053 Apr 12 18:51:59.847548 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf Apr 12 18:51:59.837528 systemd[1]: Started systemd-resolved.service. Apr 12 18:51:59.842889 systemd[1]: Reached target nss-lookup.target. Apr 12 18:51:59.846483 systemd-modules-load[198]: Inserted module 'br_netfilter' Apr 12 18:51:59.874922 kernel: SCSI subsystem initialized Apr 12 18:51:59.902478 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Apr 12 18:51:59.902560 kernel: device-mapper: uevent: version 1.0.3 Apr 12 18:51:59.902578 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Apr 12 18:51:59.920592 systemd-modules-load[198]: Inserted module 'dm_multipath' Apr 12 18:51:59.924795 systemd[1]: Finished systemd-modules-load.service. Apr 12 18:51:59.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:51:59.937171 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:51:59.943372 kernel: audit: type=1130 audit(1712947919.931:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:51:59.961791 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:51:59.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:51:59.967849 kernel: audit: type=1130 audit(1712947919.961:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:00.001161 kernel: Loading iSCSI transport class v2.0-870. Apr 12 18:52:00.035388 kernel: iscsi: registered transport (tcp) Apr 12 18:52:00.065800 kernel: iscsi: registered transport (qla4xxx) Apr 12 18:52:00.065880 kernel: QLogic iSCSI HBA Driver Apr 12 18:52:00.163063 systemd[1]: Finished dracut-cmdline.service. Apr 12 18:52:00.173470 kernel: audit: type=1130 audit(1712947920.165:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Apr 12 18:52:00.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:00.166973 systemd[1]: Starting dracut-pre-udev.service... Apr 12 18:52:00.264839 kernel: raid6: avx2x4 gen() 19882 MB/s Apr 12 18:52:00.281878 kernel: raid6: avx2x4 xor() 5209 MB/s Apr 12 18:52:00.298827 kernel: raid6: avx2x2 gen() 19658 MB/s Apr 12 18:52:00.315827 kernel: raid6: avx2x2 xor() 12427 MB/s Apr 12 18:52:00.332827 kernel: raid6: avx2x1 gen() 15930 MB/s Apr 12 18:52:00.349875 kernel: raid6: avx2x1 xor() 10971 MB/s Apr 12 18:52:00.366825 kernel: raid6: sse2x4 gen() 9628 MB/s Apr 12 18:52:00.384820 kernel: raid6: sse2x4 xor() 4313 MB/s Apr 12 18:52:00.403065 kernel: raid6: sse2x2 gen() 9563 MB/s Apr 12 18:52:00.419843 kernel: raid6: sse2x2 xor() 5994 MB/s Apr 12 18:52:00.435877 kernel: raid6: sse2x1 gen() 7092 MB/s Apr 12 18:52:00.458863 kernel: raid6: sse2x1 xor() 4883 MB/s Apr 12 18:52:00.458960 kernel: raid6: using algorithm avx2x4 gen() 19882 MB/s Apr 12 18:52:00.458976 kernel: raid6: .... xor() 5209 MB/s, rmw enabled Apr 12 18:52:00.458989 kernel: raid6: using avx2x2 recovery algorithm Apr 12 18:52:00.501026 kernel: xor: automatically using best checksumming function avx Apr 12 18:52:00.672804 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Apr 12 18:52:00.694156 systemd[1]: Finished dracut-pre-udev.service. Apr 12 18:52:00.712814 kernel: audit: type=1130 audit(1712947920.695:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:00.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:52:00.712000 audit: BPF prog-id=7 op=LOAD Apr 12 18:52:00.721831 kernel: audit: type=1334 audit(1712947920.712:10): prog-id=7 op=LOAD Apr 12 18:52:00.724000 audit: BPF prog-id=8 op=LOAD Apr 12 18:52:00.740204 systemd[1]: Starting systemd-udevd.service... Apr 12 18:52:00.771803 systemd-udevd[401]: Using default interface naming scheme 'v252'. Apr 12 18:52:00.779136 systemd[1]: Started systemd-udevd.service. Apr 12 18:52:00.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:00.790337 systemd[1]: Starting dracut-pre-trigger.service... Apr 12 18:52:00.821048 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Apr 12 18:52:00.892740 systemd[1]: Finished dracut-pre-trigger.service. Apr 12 18:52:00.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:00.906969 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 18:52:00.983689 systemd[1]: Finished systemd-udev-trigger.service. Apr 12 18:52:00.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:01.112288 kernel: cryptd: max_cpu_qlen set to 1000 Apr 12 18:52:01.118618 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 12 18:52:01.129809 kernel: libata version 3.00 loaded. 
Apr 12 18:52:01.134789 kernel: ata_piix 0000:00:01.1: version 2.13 Apr 12 18:52:01.135056 kernel: scsi host0: ata_piix Apr 12 18:52:01.139487 kernel: scsi host1: ata_piix Apr 12 18:52:01.139773 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Apr 12 18:52:01.141557 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Apr 12 18:52:01.146177 kernel: AVX2 version of gcm_enc/dec engaged. Apr 12 18:52:01.146240 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 12 18:52:01.148409 kernel: GPT:9289727 != 19775487 Apr 12 18:52:01.148460 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 12 18:52:01.152317 kernel: GPT:9289727 != 19775487 Apr 12 18:52:01.152365 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 12 18:52:01.152381 kernel: AES CTR mode by8 optimization enabled Apr 12 18:52:01.152394 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:52:01.305442 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 12 18:52:01.312642 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 12 18:52:01.380685 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 12 18:52:01.381427 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 12 18:52:01.381447 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (442) Apr 12 18:52:01.394409 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Apr 12 18:52:01.395148 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Apr 12 18:52:01.407985 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Apr 12 18:52:01.406952 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Apr 12 18:52:01.413888 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Apr 12 18:52:01.422900 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Apr 12 18:52:01.429412 systemd[1]: Starting disk-uuid.service... 
Apr 12 18:52:01.440740 disk-uuid[528]: Primary Header is updated. Apr 12 18:52:01.440740 disk-uuid[528]: Secondary Entries is updated. Apr 12 18:52:01.440740 disk-uuid[528]: Secondary Header is updated. Apr 12 18:52:01.448997 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:52:01.456111 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:52:02.458612 disk-uuid[529]: The operation has completed successfully. Apr 12 18:52:02.461326 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:52:02.576963 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 12 18:52:02.577814 systemd[1]: Finished disk-uuid.service. Apr 12 18:52:02.593944 systemd[1]: Starting verity-setup.service... Apr 12 18:52:02.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:02.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:02.639224 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 12 18:52:02.750271 systemd[1]: Found device dev-mapper-usr.device. Apr 12 18:52:02.775009 systemd[1]: Mounting sysusr-usr.mount... Apr 12 18:52:02.783521 systemd[1]: Finished verity-setup.service. Apr 12 18:52:02.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:02.956801 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Apr 12 18:52:02.957778 systemd[1]: Mounted sysusr-usr.mount. Apr 12 18:52:02.958320 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Apr 12 18:52:02.961084 systemd[1]: Starting ignition-setup.service... Apr 12 18:52:02.972971 systemd[1]: Starting parse-ip-for-networkd.service... Apr 12 18:52:02.993177 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 12 18:52:02.993626 kernel: BTRFS info (device vda6): using free space tree Apr 12 18:52:02.993646 kernel: BTRFS info (device vda6): has skinny extents Apr 12 18:52:03.029019 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 12 18:52:03.052816 kernel: kauditd_printk_skb: 7 callbacks suppressed Apr 12 18:52:03.052898 kernel: audit: type=1130 audit(1712947923.048:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.049365 systemd[1]: Finished ignition-setup.service. Apr 12 18:52:03.051658 systemd[1]: Starting ignition-fetch-offline.service... Apr 12 18:52:03.146584 systemd[1]: Finished parse-ip-for-networkd.service. Apr 12 18:52:03.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.158000 audit: BPF prog-id=9 op=LOAD Apr 12 18:52:03.160572 kernel: audit: type=1130 audit(1712947923.153:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.160660 kernel: audit: type=1334 audit(1712947923.158:20): prog-id=9 op=LOAD Apr 12 18:52:03.170917 systemd[1]: Starting systemd-networkd.service... 
Apr 12 18:52:03.187467 ignition[632]: Ignition 2.14.0 Apr 12 18:52:03.188179 ignition[632]: Stage: fetch-offline Apr 12 18:52:03.188249 ignition[632]: no configs at "/usr/lib/ignition/base.d" Apr 12 18:52:03.188264 ignition[632]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:52:03.188418 ignition[632]: parsed url from cmdline: "" Apr 12 18:52:03.188424 ignition[632]: no config URL provided Apr 12 18:52:03.188432 ignition[632]: reading system config file "/usr/lib/ignition/user.ign" Apr 12 18:52:03.188443 ignition[632]: no config at "/usr/lib/ignition/user.ign" Apr 12 18:52:03.188472 ignition[632]: op(1): [started] loading QEMU firmware config module Apr 12 18:52:03.188486 ignition[632]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 12 18:52:03.206260 ignition[632]: op(1): [finished] loading QEMU firmware config module Apr 12 18:52:03.249412 systemd-networkd[707]: lo: Link UP Apr 12 18:52:03.249428 systemd-networkd[707]: lo: Gained carrier Apr 12 18:52:03.251130 systemd-networkd[707]: Enumeration completed Apr 12 18:52:03.252374 systemd[1]: Started systemd-networkd.service. Apr 12 18:52:03.255129 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 18:52:03.256714 systemd-networkd[707]: eth0: Link UP Apr 12 18:52:03.256727 systemd-networkd[707]: eth0: Gained carrier Apr 12 18:52:03.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.274001 systemd[1]: Reached target network.target. Apr 12 18:52:03.279179 kernel: audit: type=1130 audit(1712947923.273:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.293956 systemd[1]: Starting iscsiuio.service... 
Apr 12 18:52:03.296115 systemd[1]: Started iscsiuio.service. Apr 12 18:52:03.307227 kernel: audit: type=1130 audit(1712947923.296:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.298395 systemd[1]: Starting iscsid.service... Apr 12 18:52:03.307718 iscsid[714]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:52:03.307718 iscsid[714]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Apr 12 18:52:03.307718 iscsid[714]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Apr 12 18:52:03.307718 iscsid[714]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Apr 12 18:52:03.307718 iscsid[714]: If using hardware iscsi like qla4xxx this message can be ignored. Apr 12 18:52:03.307718 iscsid[714]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:52:03.307718 iscsid[714]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Apr 12 18:52:03.327823 systemd[1]: Started iscsid.service. Apr 12 18:52:03.344930 kernel: audit: type=1130 audit(1712947923.333:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:52:03.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.335978 systemd[1]: Starting dracut-initqueue.service... Apr 12 18:52:03.355478 systemd[1]: Finished dracut-initqueue.service. Apr 12 18:52:03.370949 kernel: audit: type=1130 audit(1712947923.355:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.356904 systemd[1]: Reached target remote-fs-pre.target. Apr 12 18:52:03.358074 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 18:52:03.370914 systemd[1]: Reached target remote-fs.target. Apr 12 18:52:03.384552 systemd[1]: Starting dracut-pre-mount.service... Apr 12 18:52:03.387952 ignition[632]: parsing config with SHA512: 295f933745ee31f211e375eb73c7d79ef02ae5bb89995e69681ffa8474cf315e2f6af501cc51cbff7473071ded49e14a985b6c9a92ddec2a30fb0efc0aa2353b Apr 12 18:52:03.397248 systemd[1]: Finished dracut-pre-mount.service. Apr 12 18:52:03.408779 kernel: audit: type=1130 audit(1712947923.397:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:52:03.398966 systemd-networkd[707]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 12 18:52:03.464650 unknown[632]: fetched base config from "system" Apr 12 18:52:03.464681 unknown[632]: fetched user config from "qemu" Apr 12 18:52:03.467384 ignition[632]: fetch-offline: fetch-offline passed Apr 12 18:52:03.467516 ignition[632]: Ignition finished successfully Apr 12 18:52:03.472583 systemd[1]: Finished ignition-fetch-offline.service. Apr 12 18:52:03.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.478291 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 12 18:52:03.479543 systemd[1]: Starting ignition-kargs.service... Apr 12 18:52:03.483203 kernel: audit: type=1130 audit(1712947923.477:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.501325 ignition[729]: Ignition 2.14.0 Apr 12 18:52:03.501346 ignition[729]: Stage: kargs Apr 12 18:52:03.501505 ignition[729]: no configs at "/usr/lib/ignition/base.d" Apr 12 18:52:03.501520 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:52:03.505518 ignition[729]: kargs: kargs passed Apr 12 18:52:03.505607 ignition[729]: Ignition finished successfully Apr 12 18:52:03.516016 systemd[1]: Finished ignition-kargs.service. Apr 12 18:52:03.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:52:03.523480 kernel: audit: type=1130 audit(1712947923.515:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.523395 systemd[1]: Starting ignition-disks.service... Apr 12 18:52:03.538788 ignition[735]: Ignition 2.14.0 Apr 12 18:52:03.538803 ignition[735]: Stage: disks Apr 12 18:52:03.538961 ignition[735]: no configs at "/usr/lib/ignition/base.d" Apr 12 18:52:03.538973 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:52:03.540589 ignition[735]: disks: disks passed Apr 12 18:52:03.546313 systemd[1]: Finished ignition-disks.service. Apr 12 18:52:03.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.540653 ignition[735]: Ignition finished successfully Apr 12 18:52:03.547521 systemd[1]: Reached target initrd-root-device.target. Apr 12 18:52:03.549300 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:52:03.552082 systemd[1]: Reached target local-fs.target. Apr 12 18:52:03.552443 systemd[1]: Reached target sysinit.target. Apr 12 18:52:03.553056 systemd[1]: Reached target basic.target. Apr 12 18:52:03.560075 systemd[1]: Starting systemd-fsck-root.service... Apr 12 18:52:03.583842 systemd-fsck[744]: ROOT: clean, 612/553520 files, 56019/553472 blocks Apr 12 18:52:03.595446 systemd[1]: Finished systemd-fsck-root.service. Apr 12 18:52:03.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.600368 systemd[1]: Mounting sysroot.mount... Apr 12 18:52:03.616960 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Apr 12 18:52:03.612193 systemd[1]: Mounted sysroot.mount. Apr 12 18:52:03.619498 systemd[1]: Reached target initrd-root-fs.target. Apr 12 18:52:03.625474 systemd[1]: Mounting sysroot-usr.mount... Apr 12 18:52:03.628888 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Apr 12 18:52:03.628952 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 12 18:52:03.628994 systemd[1]: Reached target ignition-diskful.target. Apr 12 18:52:03.647704 systemd[1]: Mounted sysroot-usr.mount. Apr 12 18:52:03.651132 systemd[1]: Starting initrd-setup-root.service... Apr 12 18:52:03.664028 initrd-setup-root[754]: cut: /sysroot/etc/passwd: No such file or directory Apr 12 18:52:03.668996 initrd-setup-root[762]: cut: /sysroot/etc/group: No such file or directory Apr 12 18:52:03.677026 initrd-setup-root[770]: cut: /sysroot/etc/shadow: No such file or directory Apr 12 18:52:03.683464 initrd-setup-root[778]: cut: /sysroot/etc/gshadow: No such file or directory Apr 12 18:52:03.769238 systemd[1]: Finished initrd-setup-root.service. Apr 12 18:52:03.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.777207 systemd[1]: Starting ignition-mount.service... Apr 12 18:52:03.793598 bash[794]: umount: /sysroot/usr/share/oem: not mounted. Apr 12 18:52:03.796071 systemd[1]: Starting sysroot-boot.service... 
Apr 12 18:52:03.812488 ignition[796]: INFO : Ignition 2.14.0 Apr 12 18:52:03.813613 ignition[796]: INFO : Stage: mount Apr 12 18:52:03.814375 ignition[796]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 12 18:52:03.814375 ignition[796]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:52:03.816923 ignition[796]: INFO : mount: mount passed Apr 12 18:52:03.816923 ignition[796]: INFO : Ignition finished successfully Apr 12 18:52:03.820145 systemd[1]: Finished ignition-mount.service. Apr 12 18:52:03.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.844135 systemd[1]: Finished sysroot-boot.service. Apr 12 18:52:03.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:03.859959 systemd[1]: Mounting sysroot-usr-share-oem.mount... Apr 12 18:52:03.871828 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (807) Apr 12 18:52:03.875273 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 12 18:52:03.875325 kernel: BTRFS info (device vda6): using free space tree Apr 12 18:52:03.875339 kernel: BTRFS info (device vda6): has skinny extents Apr 12 18:52:03.883810 systemd[1]: Mounted sysroot-usr-share-oem.mount. Apr 12 18:52:03.888654 systemd[1]: Starting ignition-files.service... 
Apr 12 18:52:03.935017 ignition[827]: INFO : Ignition 2.14.0 Apr 12 18:52:03.935017 ignition[827]: INFO : Stage: files Apr 12 18:52:03.939221 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 12 18:52:03.939221 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:52:03.939221 ignition[827]: DEBUG : files: compiled without relabeling support, skipping Apr 12 18:52:03.951827 ignition[827]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 12 18:52:03.951827 ignition[827]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 12 18:52:03.963587 ignition[827]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 12 18:52:03.974019 ignition[827]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 12 18:52:03.984261 unknown[827]: wrote ssh authorized keys file for user: core Apr 12 18:52:03.989414 ignition[827]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 12 18:52:03.989414 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 12 18:52:03.989414 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 12 18:52:04.133004 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 12 18:52:04.269941 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 12 18:52:04.272544 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Apr 12 18:52:04.272544 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Apr 12 18:52:04.319030 systemd-networkd[707]: eth0: Gained IPv6LL Apr 12 18:52:04.660327 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 12 18:52:05.091642 ignition[827]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Apr 12 18:52:05.091642 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Apr 12 18:52:05.100805 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Apr 12 18:52:05.100805 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Apr 12 18:52:05.367070 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Apr 12 18:52:07.464443 ignition[827]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Apr 12 18:52:07.464443 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Apr 12 18:52:07.471783 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json" Apr 12 18:52:07.471783 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json" Apr 12 18:52:07.471783 ignition[827]: INFO : 
files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl" Apr 12 18:52:07.471783 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.29.2/bin/linux/amd64/kubectl: attempt #1 Apr 12 18:52:07.563676 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Apr 12 18:52:08.841344 ignition[827]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: a2de71807eb4c41f4d70e5c47fac72ecf3c74984be6c08be0597fc58621baeeddc1b5cc6431ab007eee9bd0a98f8628dd21512b06daaeccfac5837e9792a98a7 Apr 12 18:52:08.848450 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Apr 12 18:52:08.848450 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Apr 12 18:52:08.848450 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.29.2/bin/linux/amd64/kubelet: attempt #1 Apr 12 18:52:08.901405 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Apr 12 18:52:11.322295 ignition[827]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: d3fef1d4b99415179ecb94d4de953bddb74c0fb0f798265829b899bb031e2ab8c2b60037b79a66405a9b102d3db0d90e9257595f4b11660356de0e2e63744cd7 Apr 12 18:52:11.322295 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Apr 12 18:52:11.336962 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Apr 12 18:52:11.336962 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.29.2/bin/linux/amd64/kubeadm: attempt #1 Apr 12 18:52:11.388674 ignition[827]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): GET result: OK Apr 12 18:52:12.314522 ignition[827]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 4261cb0319688a0557b3052cce8df9d754abc38d5fc8e0eeeb63a85a2194895fdca5bad464f8516459ed7b1764d7bbb2304f5f434d42bb35f38764b4b00ce663 Apr 12 18:52:12.324886 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Apr 12 18:52:12.324886 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 12 18:52:12.324886 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 12 18:52:12.587496 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 12 18:52:12.880720 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 12 18:52:12.880720 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Apr 12 18:52:12.900029 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Apr 12 18:52:12.900029 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 12 18:52:12.900029 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 12 18:52:12.900029 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 12 18:52:12.900029 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 12 
18:52:12.900029 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 12 18:52:12.900029 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 12 18:52:12.900029 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 12 18:52:12.900029 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 12 18:52:12.900029 ignition[827]: INFO : files: op(10): [started] processing unit "prepare-cni-plugins.service" Apr 12 18:52:12.900029 ignition[827]: INFO : files: op(10): op(11): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Apr 12 18:52:12.900029 ignition[827]: INFO : files: op(10): op(11): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Apr 12 18:52:12.900029 ignition[827]: INFO : files: op(10): [finished] processing unit "prepare-cni-plugins.service" Apr 12 18:52:12.900029 ignition[827]: INFO : files: op(12): [started] processing unit "prepare-critools.service" Apr 12 18:52:12.900029 ignition[827]: INFO : files: op(12): op(13): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Apr 12 18:52:12.900029 ignition[827]: INFO : files: op(12): op(13): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Apr 12 18:52:12.900029 ignition[827]: INFO : files: op(12): [finished] processing unit "prepare-critools.service" Apr 12 18:52:12.999693 ignition[827]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Apr 12 18:52:12.999693 ignition[827]: INFO : files: op(14): op(15): [started] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 12 18:52:12.999693 ignition[827]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 12 18:52:12.999693 ignition[827]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Apr 12 18:52:12.999693 ignition[827]: INFO : files: op(16): [started] processing unit "coreos-metadata.service" Apr 12 18:52:12.999693 ignition[827]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 12 18:52:12.999693 ignition[827]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 12 18:52:12.999693 ignition[827]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service" Apr 12 18:52:12.999693 ignition[827]: INFO : files: op(18): [started] setting preset to enabled for "prepare-cni-plugins.service" Apr 12 18:52:12.999693 ignition[827]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-cni-plugins.service" Apr 12 18:52:12.999693 ignition[827]: INFO : files: op(19): [started] setting preset to enabled for "prepare-critools.service" Apr 12 18:52:12.999693 ignition[827]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-critools.service" Apr 12 18:52:12.999693 ignition[827]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Apr 12 18:52:12.999693 ignition[827]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Apr 12 18:52:12.999693 ignition[827]: INFO : files: op(1b): [started] setting preset to disabled for "coreos-metadata.service" Apr 12 18:52:12.999693 ignition[827]: INFO : files: op(1b): op(1c): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 12 18:52:13.329469 ignition[827]: 
INFO : files: op(1b): op(1c): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 12 18:52:13.332130 ignition[827]: INFO : files: op(1b): [finished] setting preset to disabled for "coreos-metadata.service" Apr 12 18:52:13.332130 ignition[827]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 12 18:52:13.332130 ignition[827]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 12 18:52:13.332130 ignition[827]: INFO : files: files passed Apr 12 18:52:13.332130 ignition[827]: INFO : Ignition finished successfully Apr 12 18:52:13.347024 systemd[1]: Finished ignition-files.service. Apr 12 18:52:13.351717 kernel: kauditd_printk_skb: 5 callbacks suppressed Apr 12 18:52:13.351751 kernel: audit: type=1130 audit(1712947933.346:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.353857 systemd[1]: Starting initrd-setup-root-after-ignition.service... Apr 12 18:52:13.363905 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Apr 12 18:52:13.367486 initrd-setup-root-after-ignition[850]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Apr 12 18:52:13.369677 initrd-setup-root-after-ignition[852]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 12 18:52:13.373423 systemd[1]: Starting ignition-quench.service... Apr 12 18:52:13.375846 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
Apr 12 18:52:13.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.385109 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 12 18:52:13.385247 systemd[1]: Finished ignition-quench.service. Apr 12 18:52:13.388657 systemd[1]: Reached target ignition-complete.target. Apr 12 18:52:13.411087 kernel: audit: type=1130 audit(1712947933.378:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.411128 kernel: audit: type=1130 audit(1712947933.387:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.411161 kernel: audit: type=1131 audit(1712947933.387:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.397740 systemd[1]: Starting initrd-parse-etc.service... Apr 12 18:52:13.429564 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 12 18:52:13.430858 systemd[1]: Finished initrd-parse-etc.service. Apr 12 18:52:13.434365 systemd[1]: Reached target initrd-fs.target. 
Apr 12 18:52:13.436269 systemd[1]: Reached target initrd.target. Apr 12 18:52:13.450675 kernel: audit: type=1130 audit(1712947933.431:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.450710 kernel: audit: type=1131 audit(1712947933.434:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.450800 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Apr 12 18:52:13.453685 systemd[1]: Starting dracut-pre-pivot.service... Apr 12 18:52:13.472120 systemd[1]: Finished dracut-pre-pivot.service. Apr 12 18:52:13.485055 kernel: audit: type=1130 audit(1712947933.471:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.485224 systemd[1]: Starting initrd-cleanup.service... Apr 12 18:52:13.501131 systemd[1]: Stopped target nss-lookup.target. Apr 12 18:52:13.501689 systemd[1]: Stopped target remote-cryptsetup.target. Apr 12 18:52:13.506049 systemd[1]: Stopped target timers.target. 
Apr 12 18:52:13.506501 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 12 18:52:13.516691 kernel: audit: type=1131 audit(1712947933.508:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.506705 systemd[1]: Stopped dracut-pre-pivot.service. Apr 12 18:52:13.509639 systemd[1]: Stopped target initrd.target. Apr 12 18:52:13.517399 systemd[1]: Stopped target basic.target. Apr 12 18:52:13.519146 systemd[1]: Stopped target ignition-complete.target. Apr 12 18:52:13.519531 systemd[1]: Stopped target ignition-diskful.target. Apr 12 18:52:13.520183 systemd[1]: Stopped target initrd-root-device.target. Apr 12 18:52:13.524347 systemd[1]: Stopped target remote-fs.target. Apr 12 18:52:13.526377 systemd[1]: Stopped target remote-fs-pre.target. Apr 12 18:52:13.528424 systemd[1]: Stopped target sysinit.target. Apr 12 18:52:13.530333 systemd[1]: Stopped target local-fs.target. Apr 12 18:52:13.532405 systemd[1]: Stopped target local-fs-pre.target. Apr 12 18:52:13.534390 systemd[1]: Stopped target swap.target. Apr 12 18:52:13.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.534742 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 12 18:52:13.542905 kernel: audit: type=1131 audit(1712947933.537:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.534971 systemd[1]: Stopped dracut-pre-mount.service. 
Apr 12 18:52:13.538278 systemd[1]: Stopped target cryptsetup.target. Apr 12 18:52:13.552442 kernel: audit: type=1131 audit(1712947933.546:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.545653 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 12 18:52:13.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.545871 systemd[1]: Stopped dracut-initqueue.service. Apr 12 18:52:13.547242 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 12 18:52:13.547402 systemd[1]: Stopped ignition-fetch-offline.service. Apr 12 18:52:13.553480 systemd[1]: Stopped target paths.target. Apr 12 18:52:13.554957 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 12 18:52:13.559853 systemd[1]: Stopped systemd-ask-password-console.path. Apr 12 18:52:13.562999 systemd[1]: Stopped target slices.target. Apr 12 18:52:13.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.564892 systemd[1]: Stopped target sockets.target. Apr 12 18:52:13.567300 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 12 18:52:13.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:52:13.567483 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Apr 12 18:52:13.569079 systemd[1]: ignition-files.service: Deactivated successfully. Apr 12 18:52:13.569227 systemd[1]: Stopped ignition-files.service. Apr 12 18:52:13.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.589468 iscsid[714]: iscsid shutting down. Apr 12 18:52:13.577628 systemd[1]: Stopping ignition-mount.service... Apr 12 18:52:13.578305 systemd[1]: Stopping iscsid.service... Apr 12 18:52:13.593862 ignition[867]: INFO : Ignition 2.14.0 Apr 12 18:52:13.593862 ignition[867]: INFO : Stage: umount Apr 12 18:52:13.593862 ignition[867]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 12 18:52:13.593862 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:52:13.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.580286 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 12 18:52:13.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.602435 ignition[867]: INFO : umount: umount passed Apr 12 18:52:13.602435 ignition[867]: INFO : Ignition finished successfully Apr 12 18:52:13.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.580526 systemd[1]: Stopped kmod-static-nodes.service. Apr 12 18:52:13.585552 systemd[1]: Stopping sysroot-boot.service... 
Apr 12 18:52:13.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.592708 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 12 18:52:13.593948 systemd[1]: Stopped systemd-udev-trigger.service. Apr 12 18:52:13.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.597815 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 12 18:52:13.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.598120 systemd[1]: Stopped dracut-pre-trigger.service. Apr 12 18:52:13.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.602702 systemd[1]: iscsid.service: Deactivated successfully. Apr 12 18:52:13.602860 systemd[1]: Stopped iscsid.service. Apr 12 18:52:13.605953 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 12 18:52:13.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.606079 systemd[1]: Stopped ignition-mount.service. Apr 12 18:52:13.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:52:13.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.608434 systemd[1]: iscsid.socket: Deactivated successfully. Apr 12 18:52:13.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.608568 systemd[1]: Closed iscsid.socket. Apr 12 18:52:13.609663 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 12 18:52:13.609836 systemd[1]: Stopped ignition-disks.service. Apr 12 18:52:13.611811 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 12 18:52:13.611956 systemd[1]: Stopped ignition-kargs.service. Apr 12 18:52:13.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.614285 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 12 18:52:13.614422 systemd[1]: Stopped ignition-setup.service. Apr 12 18:52:13.618656 systemd[1]: Stopping iscsiuio.service... Apr 12 18:52:13.624788 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 12 18:52:13.625464 systemd[1]: iscsiuio.service: Deactivated successfully. Apr 12 18:52:13.625612 systemd[1]: Stopped iscsiuio.service. Apr 12 18:52:13.626483 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 12 18:52:13.626593 systemd[1]: Finished initrd-cleanup.service. Apr 12 18:52:13.628609 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 12 18:52:13.628702 systemd[1]: Stopped sysroot-boot.service. 
Apr 12 18:52:13.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.631399 systemd[1]: Stopped target network.target. Apr 12 18:52:13.641462 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 12 18:52:13.641539 systemd[1]: Closed iscsiuio.socket. Apr 12 18:52:13.643322 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 12 18:52:13.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.643393 systemd[1]: Stopped initrd-setup-root.service. Apr 12 18:52:13.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.655829 systemd[1]: Stopping systemd-networkd.service... Apr 12 18:52:13.662317 systemd[1]: Stopping systemd-resolved.service... Apr 12 18:52:13.672291 systemd-networkd[707]: eth0: DHCPv6 lease lost Apr 12 18:52:13.701000 audit: BPF prog-id=9 op=UNLOAD Apr 12 18:52:13.675283 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 12 18:52:13.675437 systemd[1]: Stopped systemd-networkd.service. Apr 12 18:52:13.682106 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 12 18:52:13.682182 systemd[1]: Closed systemd-networkd.socket. Apr 12 18:52:13.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Apr 12 18:52:13.683838 systemd[1]: Stopping network-cleanup.service... Apr 12 18:52:13.709000 audit: BPF prog-id=6 op=UNLOAD Apr 12 18:52:13.686138 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 12 18:52:13.686245 systemd[1]: Stopped parse-ip-for-networkd.service. Apr 12 18:52:13.692497 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 18:52:13.692599 systemd[1]: Stopped systemd-sysctl.service. Apr 12 18:52:13.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.696838 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 12 18:52:13.696919 systemd[1]: Stopped systemd-modules-load.service. Apr 12 18:52:13.697467 systemd[1]: Stopping systemd-udevd.service... Apr 12 18:52:13.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.701028 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 12 18:52:13.701651 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 12 18:52:13.701811 systemd[1]: Stopped systemd-resolved.service. Apr 12 18:52:13.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.715925 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 12 18:52:13.716083 systemd[1]: Stopped network-cleanup.service. Apr 12 18:52:13.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:52:13.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.718597 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 12 18:52:13.718827 systemd[1]: Stopped systemd-udevd.service. Apr 12 18:52:13.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.728335 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 12 18:52:13.728406 systemd[1]: Closed systemd-udevd-control.socket. Apr 12 18:52:13.729426 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 12 18:52:13.729470 systemd[1]: Closed systemd-udevd-kernel.socket. Apr 12 18:52:13.744971 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 12 18:52:13.745060 systemd[1]: Stopped dracut-pre-udev.service. Apr 12 18:52:13.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:13.747411 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 12 18:52:13.747496 systemd[1]: Stopped dracut-cmdline.service. Apr 12 18:52:13.749420 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 12 18:52:13.749482 systemd[1]: Stopped dracut-cmdline-ask.service. Apr 12 18:52:13.752305 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
Apr 12 18:52:13.753961 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 12 18:52:13.754040 systemd[1]: Stopped systemd-vconsole-setup.service. Apr 12 18:52:13.763108 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 12 18:52:13.763265 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Apr 12 18:52:13.765088 systemd[1]: Reached target initrd-switch-root.target. Apr 12 18:52:13.768383 systemd[1]: Starting initrd-switch-root.service... Apr 12 18:52:13.787252 systemd[1]: Switching root. Apr 12 18:52:13.813232 systemd-journald[197]: Journal stopped Apr 12 18:52:24.768619 systemd-journald[197]: Received SIGTERM from PID 1 (n/a). Apr 12 18:52:24.768689 kernel: SELinux: Class mctp_socket not defined in policy. Apr 12 18:52:24.768707 kernel: SELinux: Class anon_inode not defined in policy. Apr 12 18:52:24.768722 kernel: SELinux: the above unknown classes and permissions will be allowed Apr 12 18:52:24.768741 kernel: SELinux: policy capability network_peer_controls=1 Apr 12 18:52:24.768772 kernel: SELinux: policy capability open_perms=1 Apr 12 18:52:24.768790 kernel: SELinux: policy capability extended_socket_class=1 Apr 12 18:52:24.768808 kernel: SELinux: policy capability always_check_network=0 Apr 12 18:52:24.768822 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 12 18:52:24.768841 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 12 18:52:24.768855 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 12 18:52:24.768869 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 12 18:52:24.768885 systemd[1]: Successfully loaded SELinux policy in 74.931ms. Apr 12 18:52:24.768911 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.826ms. 
Apr 12 18:52:24.768930 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Apr 12 18:52:24.768946 systemd[1]: Detected virtualization kvm. Apr 12 18:52:24.768965 systemd[1]: Detected architecture x86-64. Apr 12 18:52:24.768980 systemd[1]: Detected first boot. Apr 12 18:52:24.768996 systemd[1]: Initializing machine ID from VM UUID. Apr 12 18:52:24.769012 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Apr 12 18:52:24.769027 systemd[1]: Populated /etc with preset unit settings. Apr 12 18:52:24.769043 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:52:24.769063 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:52:24.769081 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 12 18:52:24.769096 kernel: kauditd_printk_skb: 47 callbacks suppressed Apr 12 18:52:24.769110 kernel: audit: type=1334 audit(1712947944.444:83): prog-id=12 op=LOAD Apr 12 18:52:24.769126 kernel: audit: type=1334 audit(1712947944.444:84): prog-id=3 op=UNLOAD Apr 12 18:52:24.769140 kernel: audit: type=1334 audit(1712947944.446:85): prog-id=13 op=LOAD Apr 12 18:52:24.769159 kernel: audit: type=1334 audit(1712947944.447:86): prog-id=14 op=LOAD Apr 12 18:52:24.769176 kernel: audit: type=1334 audit(1712947944.447:87): prog-id=4 op=UNLOAD Apr 12 18:52:24.769192 kernel: audit: type=1334 audit(1712947944.447:88): prog-id=5 op=UNLOAD Apr 12 18:52:24.769206 kernel: audit: type=1334 audit(1712947944.450:89): prog-id=15 op=LOAD Apr 12 18:52:24.769219 kernel: audit: type=1334 audit(1712947944.450:90): prog-id=12 op=UNLOAD Apr 12 18:52:24.769232 kernel: audit: type=1334 audit(1712947944.453:91): prog-id=16 op=LOAD Apr 12 18:52:24.769246 kernel: audit: type=1334 audit(1712947944.454:92): prog-id=17 op=LOAD Apr 12 18:52:24.769261 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 12 18:52:24.769288 systemd[1]: Stopped initrd-switch-root.service. Apr 12 18:52:24.769304 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 12 18:52:24.769319 systemd[1]: Created slice system-addon\x2dconfig.slice. Apr 12 18:52:24.769338 systemd[1]: Created slice system-addon\x2drun.slice. Apr 12 18:52:24.769354 systemd[1]: Created slice system-getty.slice. Apr 12 18:52:24.769370 systemd[1]: Created slice system-modprobe.slice. Apr 12 18:52:24.769387 systemd[1]: Created slice system-serial\x2dgetty.slice. Apr 12 18:52:24.769402 systemd[1]: Created slice system-system\x2dcloudinit.slice. Apr 12 18:52:24.769418 systemd[1]: Created slice system-systemd\x2dfsck.slice. Apr 12 18:52:24.769433 systemd[1]: Created slice user.slice. Apr 12 18:52:24.769451 systemd[1]: Started systemd-ask-password-console.path. 
Apr 12 18:52:24.769467 systemd[1]: Started systemd-ask-password-wall.path. Apr 12 18:52:24.769483 systemd[1]: Set up automount boot.automount. Apr 12 18:52:24.769500 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Apr 12 18:52:24.769515 systemd[1]: Stopped target initrd-switch-root.target. Apr 12 18:52:24.769531 systemd[1]: Stopped target initrd-fs.target. Apr 12 18:52:24.769546 systemd[1]: Stopped target initrd-root-fs.target. Apr 12 18:52:24.769562 systemd[1]: Reached target integritysetup.target. Apr 12 18:52:24.769577 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 18:52:24.769592 systemd[1]: Reached target remote-fs.target. Apr 12 18:52:24.769611 systemd[1]: Reached target slices.target. Apr 12 18:52:24.769626 systemd[1]: Reached target swap.target. Apr 12 18:52:24.769640 systemd[1]: Reached target torcx.target. Apr 12 18:52:24.769655 systemd[1]: Reached target veritysetup.target. Apr 12 18:52:24.769670 systemd[1]: Listening on systemd-coredump.socket. Apr 12 18:52:24.769685 systemd[1]: Listening on systemd-initctl.socket. Apr 12 18:52:24.769706 systemd[1]: Listening on systemd-networkd.socket. Apr 12 18:52:24.769723 systemd[1]: Listening on systemd-udevd-control.socket. Apr 12 18:52:24.769738 systemd[1]: Listening on systemd-udevd-kernel.socket. Apr 12 18:52:24.769772 systemd[1]: Listening on systemd-userdbd.socket. Apr 12 18:52:24.769788 systemd[1]: Mounting dev-hugepages.mount... Apr 12 18:52:24.769804 systemd[1]: Mounting dev-mqueue.mount... Apr 12 18:52:24.769819 systemd[1]: Mounting media.mount... Apr 12 18:52:24.769834 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 12 18:52:24.769855 systemd[1]: Mounting sys-kernel-debug.mount... Apr 12 18:52:24.769870 systemd[1]: Mounting sys-kernel-tracing.mount... Apr 12 18:52:24.769885 systemd[1]: Mounting tmp.mount... Apr 12 18:52:24.769901 systemd[1]: Starting flatcar-tmpfiles.service... 
Apr 12 18:52:24.769916 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Apr 12 18:52:24.769934 systemd[1]: Starting kmod-static-nodes.service... Apr 12 18:52:24.769949 systemd[1]: Starting modprobe@configfs.service... Apr 12 18:52:24.769964 systemd[1]: Starting modprobe@dm_mod.service... Apr 12 18:52:24.769978 systemd[1]: Starting modprobe@drm.service... Apr 12 18:52:24.769996 systemd[1]: Starting modprobe@efi_pstore.service... Apr 12 18:52:24.770011 systemd[1]: Starting modprobe@fuse.service... Apr 12 18:52:24.770026 systemd[1]: Starting modprobe@loop.service... Apr 12 18:52:24.770044 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 12 18:52:24.770061 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 12 18:52:24.770077 systemd[1]: Stopped systemd-fsck-root.service. Apr 12 18:52:24.770093 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 12 18:52:24.770111 systemd[1]: Stopped systemd-fsck-usr.service. Apr 12 18:52:24.770127 systemd[1]: Stopped systemd-journald.service. Apr 12 18:52:24.770146 kernel: loop: module loaded Apr 12 18:52:24.770163 kernel: fuse: init (API version 7.34) Apr 12 18:52:24.770180 systemd[1]: Starting systemd-journald.service... Apr 12 18:52:24.770200 systemd[1]: Starting systemd-modules-load.service... Apr 12 18:52:24.770216 systemd[1]: Starting systemd-network-generator.service... Apr 12 18:52:24.770233 systemd[1]: Starting systemd-remount-fs.service... Apr 12 18:52:24.770249 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 18:52:24.771330 systemd[1]: verity-setup.service: Deactivated successfully. Apr 12 18:52:24.771370 systemd[1]: Stopped verity-setup.service. Apr 12 18:52:24.771394 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 12 18:52:24.771410 systemd[1]: Mounted dev-hugepages.mount. 
Apr 12 18:52:24.771425 systemd[1]: Mounted dev-mqueue.mount. Apr 12 18:52:24.771441 systemd[1]: Mounted media.mount. Apr 12 18:52:24.771473 systemd[1]: Mounted sys-kernel-debug.mount. Apr 12 18:52:24.771496 systemd[1]: Mounted sys-kernel-tracing.mount. Apr 12 18:52:24.771511 systemd[1]: Mounted tmp.mount. Apr 12 18:52:24.771532 systemd-journald[980]: Journal started Apr 12 18:52:24.771623 systemd-journald[980]: Runtime Journal (/run/log/journal/609f72796f724ce7b35417c74a440cb7) is 6.0M, max 48.4M, 42.4M free. Apr 12 18:52:13.950000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 12 18:52:14.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Apr 12 18:52:14.810000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Apr 12 18:52:14.815000 audit: BPF prog-id=10 op=LOAD Apr 12 18:52:14.817000 audit: BPF prog-id=10 op=UNLOAD Apr 12 18:52:14.820000 audit: BPF prog-id=11 op=LOAD Apr 12 18:52:14.822000 audit: BPF prog-id=11 op=UNLOAD Apr 12 18:52:14.980000 audit[901]: AVC avc: denied { associate } for pid=901 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Apr 12 18:52:14.980000 audit[901]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001558b2 a1=c0000d8de0 a2=c0000e10c0 a3=32 items=0 ppid=884 pid=901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:52:14.980000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Apr 12 18:52:14.980000 audit[901]: AVC avc: denied { associate } for pid=901 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Apr 12 18:52:14.980000 audit[901]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000155989 a2=1ed a3=0 items=2 ppid=884 pid=901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:52:14.980000 audit: CWD cwd="/" Apr 12 18:52:14.980000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:14.980000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:14.980000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Apr 12 18:52:24.444000 audit: BPF prog-id=12 op=LOAD Apr 12 18:52:24.444000 audit: BPF prog-id=3 op=UNLOAD Apr 12 18:52:24.446000 audit: BPF prog-id=13 op=LOAD Apr 12 18:52:24.447000 audit: BPF prog-id=14 op=LOAD Apr 12 18:52:24.447000 audit: BPF prog-id=4 op=UNLOAD Apr 12 18:52:24.447000 audit: BPF prog-id=5 op=UNLOAD Apr 12 18:52:24.450000 audit: 
BPF prog-id=15 op=LOAD Apr 12 18:52:24.450000 audit: BPF prog-id=12 op=UNLOAD Apr 12 18:52:24.453000 audit: BPF prog-id=16 op=LOAD Apr 12 18:52:24.454000 audit: BPF prog-id=17 op=LOAD Apr 12 18:52:24.454000 audit: BPF prog-id=13 op=UNLOAD Apr 12 18:52:24.454000 audit: BPF prog-id=14 op=UNLOAD Apr 12 18:52:24.455000 audit: BPF prog-id=18 op=LOAD Apr 12 18:52:24.455000 audit: BPF prog-id=15 op=UNLOAD Apr 12 18:52:24.456000 audit: BPF prog-id=19 op=LOAD Apr 12 18:52:24.458000 audit: BPF prog-id=20 op=LOAD Apr 12 18:52:24.458000 audit: BPF prog-id=16 op=UNLOAD Apr 12 18:52:24.458000 audit: BPF prog-id=17 op=UNLOAD Apr 12 18:52:24.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.474000 audit: BPF prog-id=18 op=UNLOAD Apr 12 18:52:24.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:52:24.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.697000 audit: BPF prog-id=21 op=LOAD Apr 12 18:52:24.697000 audit: BPF prog-id=22 op=LOAD Apr 12 18:52:24.697000 audit: BPF prog-id=23 op=LOAD Apr 12 18:52:24.697000 audit: BPF prog-id=19 op=UNLOAD Apr 12 18:52:24.697000 audit: BPF prog-id=20 op=UNLOAD Apr 12 18:52:24.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.766000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Apr 12 18:52:24.766000 audit[980]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffddb844500 a2=4000 a3=7ffddb84459c items=0 ppid=1 pid=980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:52:24.766000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Apr 12 18:52:24.442027 systemd[1]: Queued start job for default target multi-user.target. 
Apr 12 18:52:14.977275 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:52:24.442044 systemd[1]: Unnecessary job was removed for dev-vda6.device. Apr 12 18:52:14.977615 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:14Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Apr 12 18:52:24.459850 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 12 18:52:14.977635 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:14Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Apr 12 18:52:14.977670 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:14Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Apr 12 18:52:14.977682 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:14Z" level=debug msg="skipped missing lower profile" missing profile=oem Apr 12 18:52:14.977724 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:14Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Apr 12 18:52:14.977739 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:14Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Apr 12 18:52:14.978056 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:14Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Apr 12 18:52:14.978120 /usr/lib/systemd/system-generators/torcx-generator[901]: 
time="2024-04-12T18:52:14Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Apr 12 18:52:14.978138 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:14Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Apr 12 18:52:14.981283 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:14Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Apr 12 18:52:14.981324 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:14Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Apr 12 18:52:24.775454 systemd[1]: Finished flatcar-tmpfiles.service. Apr 12 18:52:14.981346 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:14Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.3: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.3 Apr 12 18:52:14.981361 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:14Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Apr 12 18:52:14.981381 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:14Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.3: no such file or directory" path=/var/lib/torcx/store/3510.3.3 Apr 12 18:52:14.981396 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:14Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Apr 12 18:52:23.879357 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:23Z" level=debug 
msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 18:52:23.880246 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:23Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 18:52:23.880632 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:23Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 18:52:23.881362 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:23Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 18:52:23.881558 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:23Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Apr 12 18:52:23.881779 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-04-12T18:52:23Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Apr 12 18:52:24.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.781436 systemd[1]: Started systemd-journald.service. Apr 12 18:52:24.780468 systemd[1]: Finished kmod-static-nodes.service. Apr 12 18:52:24.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.784451 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 12 18:52:24.784818 systemd[1]: Finished modprobe@configfs.service. Apr 12 18:52:24.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.786352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 12 18:52:24.786602 systemd[1]: Finished modprobe@dm_mod.service. Apr 12 18:52:24.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:52:24.789566 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 12 18:52:24.789955 systemd[1]: Finished modprobe@drm.service. Apr 12 18:52:24.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.794910 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 12 18:52:24.795197 systemd[1]: Finished modprobe@efi_pstore.service. Apr 12 18:52:24.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.801122 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 12 18:52:24.801343 systemd[1]: Finished modprobe@fuse.service. Apr 12 18:52:24.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.802810 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Apr 12 18:52:24.802987 systemd[1]: Finished modprobe@loop.service. Apr 12 18:52:24.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.804358 systemd[1]: Finished systemd-modules-load.service. Apr 12 18:52:24.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.807103 systemd[1]: Finished systemd-network-generator.service. Apr 12 18:52:24.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.808910 systemd[1]: Finished systemd-remount-fs.service. Apr 12 18:52:24.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.811868 systemd[1]: Reached target network-pre.target. Apr 12 18:52:24.817045 systemd[1]: Mounting sys-fs-fuse-connections.mount... Apr 12 18:52:24.837229 systemd[1]: Mounting sys-kernel-config.mount... Apr 12 18:52:24.838841 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 12 18:52:24.848064 systemd[1]: Starting systemd-hwdb-update.service... 
Apr 12 18:52:24.861970 systemd[1]: Starting systemd-journal-flush.service... Apr 12 18:52:24.863349 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 12 18:52:24.866678 systemd[1]: Starting systemd-random-seed.service... Apr 12 18:52:24.871672 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Apr 12 18:52:24.874472 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:52:24.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.877276 systemd[1]: Starting systemd-sysusers.service... Apr 12 18:52:24.881497 systemd[1]: Finished systemd-udev-trigger.service. Apr 12 18:52:24.904113 systemd-journald[980]: Time spent on flushing to /var/log/journal/609f72796f724ce7b35417c74a440cb7 is 30.761ms for 1223 entries. Apr 12 18:52:24.904113 systemd-journald[980]: System Journal (/var/log/journal/609f72796f724ce7b35417c74a440cb7) is 8.0M, max 195.6M, 187.6M free. Apr 12 18:52:24.956927 systemd-journald[980]: Received client request to flush runtime journal. Apr 12 18:52:24.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:24.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:52:24.884955 systemd[1]: Mounted sys-fs-fuse-connections.mount. Apr 12 18:52:24.957977 udevadm[1006]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 12 18:52:24.886585 systemd[1]: Mounted sys-kernel-config.mount. Apr 12 18:52:24.889900 systemd[1]: Starting systemd-udev-settle.service... Apr 12 18:52:24.926063 systemd[1]: Finished systemd-random-seed.service. Apr 12 18:52:24.928902 systemd[1]: Reached target first-boot-complete.target. Apr 12 18:52:24.935619 systemd[1]: Finished systemd-sysusers.service. Apr 12 18:52:24.949027 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:52:24.958906 systemd[1]: Finished systemd-journal-flush.service. Apr 12 18:52:24.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:25.775543 systemd[1]: Finished systemd-hwdb-update.service. Apr 12 18:52:25.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:52:25.777000 audit: BPF prog-id=24 op=LOAD Apr 12 18:52:25.777000 audit: BPF prog-id=25 op=LOAD Apr 12 18:52:25.777000 audit: BPF prog-id=7 op=UNLOAD Apr 12 18:52:25.777000 audit: BPF prog-id=8 op=UNLOAD Apr 12 18:52:25.784733 systemd[1]: Starting systemd-udevd.service... Apr 12 18:52:25.827499 systemd-udevd[1009]: Using default interface naming scheme 'v252'. Apr 12 18:52:25.890926 systemd[1]: Started systemd-udevd.service. Apr 12 18:52:25.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Apr 12 18:52:25.893000 audit: BPF prog-id=26 op=LOAD
Apr 12 18:52:25.895419 systemd[1]: Starting systemd-networkd.service...
Apr 12 18:52:25.913000 audit: BPF prog-id=27 op=LOAD
Apr 12 18:52:25.913000 audit: BPF prog-id=28 op=LOAD
Apr 12 18:52:25.913000 audit: BPF prog-id=29 op=LOAD
Apr 12 18:52:25.915650 systemd[1]: Starting systemd-userdbd.service...
Apr 12 18:52:25.933119 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Apr 12 18:52:25.988886 systemd[1]: Started systemd-userdbd.service.
Apr 12 18:52:25.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:52:26.029792 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 12 18:52:26.029000 audit[1013]: AVC avc: denied { confidentiality } for pid=1013 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Apr 12 18:52:26.035790 kernel: ACPI: button: Power Button [PWRF]
Apr 12 18:52:26.029000 audit[1013]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ae3064a3c0 a1=32194 a2=7fba52d4bbc5 a3=5 items=108 ppid=1009 pid=1013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:52:26.029000 audit: CWD cwd="/"
Apr 12 18:52:26.029000 audit: PATH item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:52:26.029000 audit: PATH item=1 name=(null) inode=12765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0
cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=2 name=(null) inode=12765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=3 name=(null) inode=12766 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=4 name=(null) inode=12765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=5 name=(null) inode=12767 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=6 name=(null) inode=12765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=7 name=(null) inode=12768 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=8 name=(null) inode=12768 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=9 name=(null) inode=12769 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=10 name=(null) inode=12768 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH 
item=11 name=(null) inode=12770 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=12 name=(null) inode=12768 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=13 name=(null) inode=12771 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=14 name=(null) inode=12768 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=15 name=(null) inode=12772 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=16 name=(null) inode=12768 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=17 name=(null) inode=12773 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=18 name=(null) inode=12765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=19 name=(null) inode=12774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=20 name=(null) inode=12774 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=21 name=(null) inode=12775 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=22 name=(null) inode=12774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=23 name=(null) inode=12776 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=24 name=(null) inode=12774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=25 name=(null) inode=12777 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=26 name=(null) inode=12774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=27 name=(null) inode=12778 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=28 name=(null) inode=12774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=29 name=(null) inode=12779 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=30 name=(null) inode=12765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=31 name=(null) inode=12780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=32 name=(null) inode=12780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=33 name=(null) inode=12781 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=34 name=(null) inode=12780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=35 name=(null) inode=12782 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=36 name=(null) inode=12780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=37 name=(null) inode=12783 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=38 name=(null) inode=12780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=39 name=(null) inode=12784 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=40 name=(null) inode=12780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=41 name=(null) inode=12785 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=42 name=(null) inode=12765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=43 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=44 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=45 name=(null) inode=12787 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=46 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=47 name=(null) inode=12788 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=48 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=49 name=(null) inode=12789 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=50 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=51 name=(null) inode=12790 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=52 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=53 name=(null) inode=12791 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=54 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=55 name=(null) inode=12792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=56 name=(null) inode=12792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 
18:52:26.029000 audit: PATH item=57 name=(null) inode=12793 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=58 name=(null) inode=12792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=59 name=(null) inode=12794 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=60 name=(null) inode=12792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=61 name=(null) inode=12795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=62 name=(null) inode=12795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=63 name=(null) inode=12796 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=64 name=(null) inode=12795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=65 name=(null) inode=12797 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=66 
name=(null) inode=12795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=67 name=(null) inode=12798 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=68 name=(null) inode=12795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=69 name=(null) inode=12799 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=70 name=(null) inode=12795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=71 name=(null) inode=12800 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=72 name=(null) inode=12792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=73 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=74 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=75 name=(null) inode=12802 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=76 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=77 name=(null) inode=12803 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=78 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=79 name=(null) inode=12804 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=80 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=81 name=(null) inode=12805 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=82 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=83 name=(null) inode=12806 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=84 name=(null) inode=12792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=85 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=86 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=87 name=(null) inode=12808 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=88 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=89 name=(null) inode=12809 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=90 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=91 name=(null) inode=12810 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=92 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=93 name=(null) inode=12811 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=94 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=95 name=(null) inode=12812 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=96 name=(null) inode=12792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=97 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=98 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=99 name=(null) inode=12814 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=100 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=101 name=(null) inode=12815 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:52:26.029000 audit: PATH item=102 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:52:26.029000 audit: PATH item=103 name=(null) inode=12816 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:52:26.029000 audit: PATH item=104 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:52:26.029000 audit: PATH item=105 name=(null) inode=12817 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:52:26.029000 audit: PATH item=106 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:52:26.029000 audit: PATH item=107 name=(null) inode=12818 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:52:26.029000 audit: PROCTITLE proctitle="(udev-worker)"
Apr 12 18:52:26.079799 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 12 18:52:26.085801 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0
Apr 12 18:52:26.091769 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Apr 12 18:52:26.097788 systemd-networkd[1019]: lo: Link UP
Apr 12 18:52:26.097807 systemd-networkd[1019]: lo: Gained carrier
Apr 12 18:52:26.098392 systemd-networkd[1019]: Enumeration completed
Apr 12 18:52:26.098524 systemd[1]: Started systemd-networkd.service.
Apr 12 18:52:26.098564 systemd-networkd[1019]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 12 18:52:26.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:52:26.101412 systemd-networkd[1019]: eth0: Link UP
Apr 12 18:52:26.101430 systemd-networkd[1019]: eth0: Gained carrier
Apr 12 18:52:26.107856 kernel: mousedev: PS/2 mouse device common for all mice
Apr 12 18:52:26.116960 systemd-networkd[1019]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 12 18:52:26.357531 kernel: kvm: Nested Virtualization enabled
Apr 12 18:52:26.357732 kernel: SVM: kvm: Nested Paging enabled
Apr 12 18:52:26.357789 kernel: SVM: Virtual VMLOAD VMSAVE supported
Apr 12 18:52:26.358420 kernel: SVM: Virtual GIF supported
Apr 12 18:52:26.420817 kernel: EDAC MC: Ver: 3.0.0
Apr 12 18:52:26.444485 systemd[1]: Finished systemd-udev-settle.service.
Apr 12 18:52:26.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:52:26.450275 systemd[1]: Starting lvm2-activation-early.service...
Apr 12 18:52:26.466018 lvm[1045]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 12 18:52:26.500250 systemd[1]: Finished lvm2-activation-early.service.
Apr 12 18:52:26.504207 systemd[1]: Reached target cryptsetup.target.
Apr 12 18:52:26.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:52:26.506990 systemd[1]: Starting lvm2-activation.service...
Apr 12 18:52:26.516604 lvm[1046]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 12 18:52:26.553370 systemd[1]: Finished lvm2-activation.service.
Apr 12 18:52:26.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:52:26.556416 systemd[1]: Reached target local-fs-pre.target.
Apr 12 18:52:26.558010 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 12 18:52:26.558044 systemd[1]: Reached target local-fs.target.
Apr 12 18:52:26.559291 systemd[1]: Reached target machines.target.
Apr 12 18:52:26.562879 systemd[1]: Starting ldconfig.service...
Apr 12 18:52:26.567529 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Apr 12 18:52:26.567612 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Apr 12 18:52:26.570717 systemd[1]: Starting systemd-boot-update.service...
Apr 12 18:52:26.579943 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Apr 12 18:52:26.587492 systemd[1]: Starting systemd-machine-id-commit.service...
Apr 12 18:52:26.588989 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Apr 12 18:52:26.589055 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Apr 12 18:52:26.590973 systemd[1]: Starting systemd-tmpfiles-setup.service...
Apr 12 18:52:26.592820 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1048 (bootctl)
Apr 12 18:52:26.599487 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Apr 12 18:52:26.616801 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Apr 12 18:52:26.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:52:26.653478 systemd-tmpfiles[1051]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Apr 12 18:52:26.885069 systemd-tmpfiles[1051]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 12 18:52:26.907383 systemd-tmpfiles[1051]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 12 18:52:26.962694 systemd-fsck[1056]: fsck.fat 4.2 (2021-01-31)
Apr 12 18:52:26.962694 systemd-fsck[1056]: /dev/vda1: 790 files, 119263/258078 clusters
Apr 12 18:52:26.965189 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Apr 12 18:52:26.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:52:26.969267 systemd[1]: Mounting boot.mount...
Apr 12 18:52:27.213739 systemd[1]: Mounted boot.mount.
Apr 12 18:52:27.238028 systemd[1]: Finished systemd-boot-update.service.
Apr 12 18:52:27.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:52:27.304874 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 12 18:52:27.312848 systemd[1]: Finished systemd-machine-id-commit.service.
Apr 12 18:52:27.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:52:27.440391 systemd[1]: Finished systemd-tmpfiles-setup.service.
Apr 12 18:52:27.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:52:27.450921 systemd[1]: Starting audit-rules.service...
Apr 12 18:52:27.453870 systemd[1]: Starting clean-ca-certificates.service...
Apr 12 18:52:27.464236 systemd[1]: Starting systemd-journal-catalog-update.service...
Apr 12 18:52:27.471000 audit: BPF prog-id=30 op=LOAD
Apr 12 18:52:27.472919 systemd[1]: Starting systemd-resolved.service...
Apr 12 18:52:27.476000 audit: BPF prog-id=31 op=LOAD
Apr 12 18:52:27.478356 systemd[1]: Starting systemd-timesyncd.service...
Apr 12 18:52:27.490921 systemd[1]: Starting systemd-update-utmp.service...
Apr 12 18:52:27.493249 systemd[1]: Finished clean-ca-certificates.service.
Apr 12 18:52:27.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:52:27.498349 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 12 18:52:27.501000 audit[1070]: SYSTEM_BOOT pid=1070 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Apr 12 18:52:27.506998 systemd[1]: Finished systemd-update-utmp.service.
Apr 12 18:52:27.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:52:27.518236 systemd[1]: Finished systemd-journal-catalog-update.service.
Apr 12 18:52:27.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:52:27.548000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 12 18:52:27.548000 audit[1080]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdddd80290 a2=420 a3=0 items=0 ppid=1059 pid=1080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:52:27.548000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 12 18:52:27.549410 augenrules[1080]: No rules
Apr 12 18:52:27.550655 systemd[1]: Finished audit-rules.service.
Apr 12 18:52:27.594134 ldconfig[1047]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 12 18:52:27.610384 systemd-resolved[1063]: Positive Trust Anchors:
Apr 12 18:52:27.610409 systemd-resolved[1063]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 12 18:52:27.610449 systemd-resolved[1063]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Apr 12 18:52:27.612603 systemd[1]: Started systemd-timesyncd.service.
Apr 12 18:52:27.616340 systemd[1]: Finished ldconfig.service. Apr 12 18:52:27.617422 systemd[1]: Reached target time-set.target. Apr 12 18:52:28.074056 systemd-timesyncd[1064]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 12 18:52:28.074112 systemd-timesyncd[1064]: Initial clock synchronization to Fri 2024-04-12 18:52:28.073947 UTC. Apr 12 18:52:28.076265 systemd[1]: Starting systemd-update-done.service... Apr 12 18:52:28.090352 systemd[1]: Finished systemd-update-done.service. Apr 12 18:52:28.096958 systemd-resolved[1063]: Defaulting to hostname 'linux'. Apr 12 18:52:28.098940 systemd[1]: Started systemd-resolved.service. Apr 12 18:52:28.100288 systemd[1]: Reached target network.target. Apr 12 18:52:28.103680 systemd[1]: Reached target nss-lookup.target. Apr 12 18:52:28.111674 systemd[1]: Reached target sysinit.target. Apr 12 18:52:28.128719 systemd[1]: Started motdgen.path. Apr 12 18:52:28.134507 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Apr 12 18:52:28.136262 systemd[1]: Started logrotate.timer. Apr 12 18:52:28.137383 systemd[1]: Started mdadm.timer. Apr 12 18:52:28.138362 systemd[1]: Started systemd-tmpfiles-clean.timer. Apr 12 18:52:28.139559 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 12 18:52:28.139603 systemd[1]: Reached target paths.target. Apr 12 18:52:28.140650 systemd[1]: Reached target timers.target. Apr 12 18:52:28.142181 systemd[1]: Listening on dbus.socket. Apr 12 18:52:28.145496 systemd[1]: Starting docker.socket... Apr 12 18:52:28.162902 systemd[1]: Listening on sshd.socket. Apr 12 18:52:28.164504 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:52:28.166579 systemd[1]: Listening on docker.socket. Apr 12 18:52:28.174411 systemd[1]: Reached target sockets.target. 
Apr 12 18:52:28.177010 systemd[1]: Reached target basic.target. Apr 12 18:52:28.178268 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 18:52:28.178309 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 18:52:28.184070 systemd[1]: Starting containerd.service... Apr 12 18:52:28.187100 systemd[1]: Starting dbus.service... Apr 12 18:52:28.199984 systemd[1]: Starting enable-oem-cloudinit.service... Apr 12 18:52:28.203484 systemd[1]: Starting extend-filesystems.service... Apr 12 18:52:28.208437 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Apr 12 18:52:28.214449 systemd[1]: Starting motdgen.service... Apr 12 18:52:28.218392 systemd[1]: Starting prepare-cni-plugins.service... Apr 12 18:52:28.222329 jq[1091]: false Apr 12 18:52:28.232583 systemd[1]: Starting prepare-critools.service... Apr 12 18:52:28.240754 systemd[1]: Starting prepare-helm.service... Apr 12 18:52:28.246045 systemd[1]: Starting ssh-key-proc-cmdline.service... Apr 12 18:52:28.250118 extend-filesystems[1092]: Found sr0 Apr 12 18:52:28.252867 extend-filesystems[1092]: Found vda Apr 12 18:52:28.252867 extend-filesystems[1092]: Found vda1 Apr 12 18:52:28.252867 extend-filesystems[1092]: Found vda2 Apr 12 18:52:28.252867 extend-filesystems[1092]: Found vda3 Apr 12 18:52:28.252867 extend-filesystems[1092]: Found usr Apr 12 18:52:28.252867 extend-filesystems[1092]: Found vda4 Apr 12 18:52:28.252867 extend-filesystems[1092]: Found vda6 Apr 12 18:52:28.252867 extend-filesystems[1092]: Found vda7 Apr 12 18:52:28.252867 extend-filesystems[1092]: Found vda9 Apr 12 18:52:28.252867 extend-filesystems[1092]: Checking size of /dev/vda9 Apr 12 18:52:28.299524 extend-filesystems[1092]: Resized partition /dev/vda9 Apr 12 18:52:28.268531 systemd[1]: Starting sshd-keygen.service... 
Apr 12 18:52:28.255315 dbus-daemon[1090]: [system] SELinux support is enabled Apr 12 18:52:28.302204 extend-filesystems[1111]: resize2fs 1.46.5 (30-Dec-2021) Apr 12 18:52:28.309836 systemd[1]: Starting systemd-logind.service... Apr 12 18:52:28.315973 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:52:28.316060 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 12 18:52:28.316778 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 12 18:52:28.329603 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 12 18:52:28.322312 systemd[1]: Starting update-engine.service... Apr 12 18:52:28.332423 systemd[1]: Starting update-ssh-keys-after-ignition.service... Apr 12 18:52:28.334954 systemd[1]: Started dbus.service. Apr 12 18:52:28.339409 jq[1118]: true Apr 12 18:52:28.343093 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 12 18:52:28.344521 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Apr 12 18:52:28.344891 systemd[1]: motdgen.service: Deactivated successfully. Apr 12 18:52:28.348842 systemd[1]: Finished motdgen.service. Apr 12 18:52:28.363870 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 12 18:52:28.364082 systemd[1]: Finished ssh-key-proc-cmdline.service. Apr 12 18:52:28.388270 tar[1120]: ./ Apr 12 18:52:28.388270 tar[1120]: ./loopback Apr 12 18:52:28.392022 tar[1121]: crictl Apr 12 18:52:28.397020 tar[1122]: linux-amd64/helm Apr 12 18:52:28.392481 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Apr 12 18:52:28.392507 systemd[1]: Reached target system-config.target. Apr 12 18:52:28.393904 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 12 18:52:28.393923 systemd[1]: Reached target user-config.target. Apr 12 18:52:28.408254 jq[1123]: true Apr 12 18:52:28.433057 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 12 18:52:28.476638 update_engine[1117]: I0412 18:52:28.468348 1117 main.cc:92] Flatcar Update Engine starting Apr 12 18:52:28.462837 systemd-networkd[1019]: eth0: Gained IPv6LL Apr 12 18:52:28.479651 env[1124]: time="2024-04-12T18:52:28.477712685Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Apr 12 18:52:28.482153 update_engine[1117]: I0412 18:52:28.482033 1117 update_check_scheduler.cc:74] Next update check in 6m6s Apr 12 18:52:28.484542 systemd[1]: Started update-engine.service. Apr 12 18:52:28.530152 extend-filesystems[1111]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 12 18:52:28.530152 extend-filesystems[1111]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 12 18:52:28.530152 extend-filesystems[1111]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 12 18:52:28.540019 bash[1147]: Updated "/home/core/.ssh/authorized_keys" Apr 12 18:52:28.540138 extend-filesystems[1092]: Resized filesystem in /dev/vda9 Apr 12 18:52:28.551044 systemd[1]: Started locksmithd.service. Apr 12 18:52:28.552590 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 12 18:52:28.552849 systemd[1]: Finished extend-filesystems.service. Apr 12 18:52:28.557470 systemd[1]: Finished update-ssh-keys-after-ignition.service. Apr 12 18:52:28.560914 env[1124]: time="2024-04-12T18:52:28.560846371Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Apr 12 18:52:28.563673 env[1124]: time="2024-04-12T18:52:28.561074208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:52:28.568224 tar[1120]: ./bandwidth Apr 12 18:52:28.570547 systemd-logind[1116]: Watching system buttons on /dev/input/event1 (Power Button) Apr 12 18:52:28.574421 systemd-logind[1116]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 12 18:52:28.576799 systemd-logind[1116]: New seat seat0. Apr 12 18:52:28.584234 env[1124]: time="2024-04-12T18:52:28.584141780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.154-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:52:28.584234 env[1124]: time="2024-04-12T18:52:28.584225857Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:52:28.584900 env[1124]: time="2024-04-12T18:52:28.584534857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:52:28.584900 env[1124]: time="2024-04-12T18:52:28.584564693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 12 18:52:28.584900 env[1124]: time="2024-04-12T18:52:28.584589619Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Apr 12 18:52:28.584900 env[1124]: time="2024-04-12T18:52:28.584605098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Apr 12 18:52:28.584900 env[1124]: time="2024-04-12T18:52:28.584693254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:52:28.585060 env[1124]: time="2024-04-12T18:52:28.585020627Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:52:28.585257 env[1124]: time="2024-04-12T18:52:28.585160760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:52:28.585257 env[1124]: time="2024-04-12T18:52:28.585199593Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 12 18:52:28.585334 env[1124]: time="2024-04-12T18:52:28.585273612Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Apr 12 18:52:28.585334 env[1124]: time="2024-04-12T18:52:28.585290043Z" level=info msg="metadata content store policy set" policy=shared Apr 12 18:52:28.589038 systemd[1]: Started systemd-logind.service. Apr 12 18:52:28.597799 env[1124]: time="2024-04-12T18:52:28.597564768Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 12 18:52:28.597799 env[1124]: time="2024-04-12T18:52:28.597630030Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 12 18:52:28.597799 env[1124]: time="2024-04-12T18:52:28.597649937Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 12 18:52:28.597799 env[1124]: time="2024-04-12T18:52:28.597713006Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Apr 12 18:52:28.597799 env[1124]: time="2024-04-12T18:52:28.597734045Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 12 18:52:28.600515 env[1124]: time="2024-04-12T18:52:28.597752419Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 12 18:52:28.600515 env[1124]: time="2024-04-12T18:52:28.598121151Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 12 18:52:28.600515 env[1124]: time="2024-04-12T18:52:28.598145076Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 12 18:52:28.600515 env[1124]: time="2024-04-12T18:52:28.598163691Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Apr 12 18:52:28.600515 env[1124]: time="2024-04-12T18:52:28.598193597Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 12 18:52:28.600515 env[1124]: time="2024-04-12T18:52:28.598209947Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 12 18:52:28.600515 env[1124]: time="2024-04-12T18:52:28.598225617Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 12 18:52:28.600515 env[1124]: time="2024-04-12T18:52:28.598396617Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 12 18:52:28.600515 env[1124]: time="2024-04-12T18:52:28.598491646Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 12 18:52:28.600515 env[1124]: time="2024-04-12T18:52:28.598866438Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Apr 12 18:52:28.600515 env[1124]: time="2024-04-12T18:52:28.598904480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 12 18:52:28.600515 env[1124]: time="2024-04-12T18:52:28.598921522Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 12 18:52:28.600515 env[1124]: time="2024-04-12T18:52:28.598979801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 12 18:52:28.600515 env[1124]: time="2024-04-12T18:52:28.598998556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 12 18:52:28.601007 env[1124]: time="2024-04-12T18:52:28.599016359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 12 18:52:28.601007 env[1124]: time="2024-04-12T18:52:28.599031097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 12 18:52:28.601007 env[1124]: time="2024-04-12T18:52:28.599049572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 12 18:52:28.601007 env[1124]: time="2024-04-12T18:52:28.599065151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 12 18:52:28.601007 env[1124]: time="2024-04-12T18:52:28.599080940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 12 18:52:28.601007 env[1124]: time="2024-04-12T18:52:28.599095478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 12 18:52:28.601007 env[1124]: time="2024-04-12T18:52:28.599111137Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Apr 12 18:52:28.601007 env[1124]: time="2024-04-12T18:52:28.599259595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 12 18:52:28.601007 env[1124]: time="2024-04-12T18:52:28.599280655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 12 18:52:28.601007 env[1124]: time="2024-04-12T18:52:28.599297356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 12 18:52:28.601007 env[1124]: time="2024-04-12T18:52:28.599312064Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 12 18:52:28.601007 env[1124]: time="2024-04-12T18:52:28.599331430Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Apr 12 18:52:28.601007 env[1124]: time="2024-04-12T18:52:28.599346038Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 12 18:52:28.601007 env[1124]: time="2024-04-12T18:52:28.599374681Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Apr 12 18:52:28.601396 env[1124]: time="2024-04-12T18:52:28.599426899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 12 18:52:28.601437 env[1124]: time="2024-04-12T18:52:28.599746058Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 12 18:52:28.601437 env[1124]: time="2024-04-12T18:52:28.599843801Z" level=info msg="Connect containerd service" Apr 12 18:52:28.601437 env[1124]: time="2024-04-12T18:52:28.599923420Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 12 18:52:28.605939 env[1124]: time="2024-04-12T18:52:28.602729662Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 18:52:28.605939 env[1124]: time="2024-04-12T18:52:28.603234849Z" level=info msg="Start subscribing containerd event" Apr 12 18:52:28.605939 env[1124]: time="2024-04-12T18:52:28.603297557Z" level=info msg="Start recovering state" Apr 12 18:52:28.605939 env[1124]: time="2024-04-12T18:52:28.603466343Z" level=info msg="Start event monitor" Apr 12 18:52:28.605939 env[1124]: time="2024-04-12T18:52:28.603496560Z" level=info msg="Start snapshots syncer" Apr 12 18:52:28.605939 env[1124]: time="2024-04-12T18:52:28.603525925Z" level=info msg="Start cni network conf syncer for default" Apr 12 18:52:28.605939 env[1124]: time="2024-04-12T18:52:28.603537276Z" level=info msg="Start streaming server" Apr 12 18:52:28.605939 env[1124]: time="2024-04-12T18:52:28.603805960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 12 18:52:28.605939 env[1124]: time="2024-04-12T18:52:28.603875981Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 12 18:52:28.604082 systemd[1]: Started containerd.service. 
Apr 12 18:52:28.613488 env[1124]: time="2024-04-12T18:52:28.613409597Z" level=info msg="containerd successfully booted in 0.150764s" Apr 12 18:52:28.641837 tar[1120]: ./ptp Apr 12 18:52:28.686310 tar[1120]: ./vlan Apr 12 18:52:28.730412 tar[1120]: ./host-device Apr 12 18:52:28.772267 tar[1120]: ./tuning Apr 12 18:52:28.810087 tar[1120]: ./vrf Apr 12 18:52:28.848963 tar[1120]: ./sbr Apr 12 18:52:28.887564 tar[1120]: ./tap Apr 12 18:52:28.933708 tar[1120]: ./dhcp Apr 12 18:52:29.053602 tar[1122]: linux-amd64/LICENSE Apr 12 18:52:29.053723 tar[1122]: linux-amd64/README.md Apr 12 18:52:29.058710 tar[1120]: ./static Apr 12 18:52:29.059347 systemd[1]: Finished prepare-helm.service. Apr 12 18:52:29.062249 systemd[1]: Finished prepare-critools.service. Apr 12 18:52:29.090482 tar[1120]: ./firewall Apr 12 18:52:29.093361 locksmithd[1152]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 12 18:52:29.247195 tar[1120]: ./macvlan Apr 12 18:52:29.567062 tar[1120]: ./dummy Apr 12 18:52:29.623973 sshd_keygen[1115]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 12 18:52:29.693545 systemd[1]: Finished sshd-keygen.service. Apr 12 18:52:29.697041 systemd[1]: Starting issuegen.service... Apr 12 18:52:29.736705 systemd[1]: issuegen.service: Deactivated successfully. Apr 12 18:52:29.738214 systemd[1]: Finished issuegen.service. Apr 12 18:52:29.749594 systemd[1]: Starting systemd-user-sessions.service... Apr 12 18:52:29.764560 systemd[1]: Finished systemd-user-sessions.service. Apr 12 18:52:29.772107 systemd[1]: Started getty@tty1.service. Apr 12 18:52:29.776336 systemd[1]: Started serial-getty@ttyS0.service. Apr 12 18:52:29.778203 systemd[1]: Reached target getty.target. Apr 12 18:52:29.903026 tar[1120]: ./bridge Apr 12 18:52:30.332476 tar[1120]: ./ipvlan Apr 12 18:52:30.690978 tar[1120]: ./portmap Apr 12 18:52:31.272145 tar[1120]: ./host-local Apr 12 18:52:31.370420 systemd[1]: Finished prepare-cni-plugins.service. 
Apr 12 18:52:31.376241 systemd[1]: Reached target multi-user.target. Apr 12 18:52:31.385356 systemd[1]: Starting systemd-update-utmp-runlevel.service... Apr 12 18:52:31.391595 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Apr 12 18:52:31.391813 systemd[1]: Finished systemd-update-utmp-runlevel.service. Apr 12 18:52:31.393968 systemd[1]: Startup finished in 1.625s (kernel) + 14.467s (initrd) + 17.067s (userspace) = 33.160s. Apr 12 18:52:37.286739 systemd[1]: Created slice system-sshd.slice. Apr 12 18:52:37.299568 systemd[1]: Started sshd@0-10.0.0.118:22-10.0.0.1:57918.service. Apr 12 18:52:37.398620 sshd[1179]: Accepted publickey for core from 10.0.0.1 port 57918 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:52:37.401984 sshd[1179]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:52:37.422348 systemd[1]: Created slice user-500.slice. Apr 12 18:52:37.424784 systemd[1]: Starting user-runtime-dir@500.service... Apr 12 18:52:37.429948 systemd-logind[1116]: New session 1 of user core. Apr 12 18:52:37.443505 systemd[1]: Finished user-runtime-dir@500.service. Apr 12 18:52:37.445718 systemd[1]: Starting user@500.service... Apr 12 18:52:37.458326 (systemd)[1182]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:52:37.598081 systemd[1182]: Queued start job for default target default.target. Apr 12 18:52:37.598786 systemd[1182]: Reached target paths.target. Apr 12 18:52:37.598816 systemd[1182]: Reached target sockets.target. Apr 12 18:52:37.598834 systemd[1182]: Reached target timers.target. Apr 12 18:52:37.598850 systemd[1182]: Reached target basic.target. Apr 12 18:52:37.598916 systemd[1182]: Reached target default.target. Apr 12 18:52:37.598964 systemd[1182]: Startup finished in 131ms. Apr 12 18:52:37.599156 systemd[1]: Started user@500.service. Apr 12 18:52:37.607198 systemd[1]: Started session-1.scope. 
Apr 12 18:52:37.735457 systemd[1]: Started sshd@1-10.0.0.118:22-10.0.0.1:57922.service. Apr 12 18:52:37.805001 sshd[1192]: Accepted publickey for core from 10.0.0.1 port 57922 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:52:37.808832 sshd[1192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:52:37.827065 systemd[1]: Started session-2.scope. Apr 12 18:52:37.827879 systemd-logind[1116]: New session 2 of user core. Apr 12 18:52:37.959531 sshd[1192]: pam_unix(sshd:session): session closed for user core Apr 12 18:52:37.979699 systemd[1]: Started sshd@2-10.0.0.118:22-10.0.0.1:57930.service. Apr 12 18:52:37.980630 systemd[1]: sshd@1-10.0.0.118:22-10.0.0.1:57922.service: Deactivated successfully. Apr 12 18:52:38.012365 systemd[1]: session-2.scope: Deactivated successfully. Apr 12 18:52:38.030707 systemd-logind[1116]: Session 2 logged out. Waiting for processes to exit. Apr 12 18:52:38.054625 systemd-logind[1116]: Removed session 2. Apr 12 18:52:38.087700 sshd[1197]: Accepted publickey for core from 10.0.0.1 port 57930 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:52:38.089647 sshd[1197]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:52:38.111936 systemd-logind[1116]: New session 3 of user core. Apr 12 18:52:38.112984 systemd[1]: Started session-3.scope. Apr 12 18:52:38.201457 sshd[1197]: pam_unix(sshd:session): session closed for user core Apr 12 18:52:38.214831 systemd[1]: sshd@2-10.0.0.118:22-10.0.0.1:57930.service: Deactivated successfully. Apr 12 18:52:38.215559 systemd[1]: session-3.scope: Deactivated successfully. Apr 12 18:52:38.222616 systemd[1]: Started sshd@3-10.0.0.118:22-10.0.0.1:57946.service. Apr 12 18:52:38.226038 systemd-logind[1116]: Session 3 logged out. Waiting for processes to exit. Apr 12 18:52:38.228158 systemd-logind[1116]: Removed session 3. 
Apr 12 18:52:38.274119 sshd[1204]: Accepted publickey for core from 10.0.0.1 port 57946 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:52:38.275731 sshd[1204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:52:38.296955 systemd-logind[1116]: New session 4 of user core. Apr 12 18:52:38.298170 systemd[1]: Started session-4.scope. Apr 12 18:52:38.403319 sshd[1204]: pam_unix(sshd:session): session closed for user core Apr 12 18:52:38.412685 systemd[1]: sshd@3-10.0.0.118:22-10.0.0.1:57946.service: Deactivated successfully. Apr 12 18:52:38.417095 systemd[1]: session-4.scope: Deactivated successfully. Apr 12 18:52:38.420563 systemd-logind[1116]: Session 4 logged out. Waiting for processes to exit. Apr 12 18:52:38.421563 systemd[1]: Started sshd@4-10.0.0.118:22-10.0.0.1:57956.service. Apr 12 18:52:38.427253 systemd-logind[1116]: Removed session 4. Apr 12 18:52:38.498569 sshd[1210]: Accepted publickey for core from 10.0.0.1 port 57956 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:52:38.503523 sshd[1210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:52:38.512011 systemd-logind[1116]: New session 5 of user core. Apr 12 18:52:38.513178 systemd[1]: Started session-5.scope. Apr 12 18:52:38.598177 sudo[1213]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 12 18:52:38.598455 sudo[1213]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Apr 12 18:52:39.364641 systemd[1]: Starting systemd-networkd-wait-online.service... Apr 12 18:52:39.380156 systemd[1]: Finished systemd-networkd-wait-online.service. Apr 12 18:52:39.380627 systemd[1]: Reached target network-online.target. Apr 12 18:52:39.384340 systemd[1]: Starting docker.service... 
Apr 12 18:52:39.486305 env[1230]: time="2024-04-12T18:52:39.486194946Z" level=info msg="Starting up" Apr 12 18:52:39.501898 env[1230]: time="2024-04-12T18:52:39.501077792Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:52:39.501898 env[1230]: time="2024-04-12T18:52:39.501124940Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:52:39.501898 env[1230]: time="2024-04-12T18:52:39.501165286Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:52:39.501898 env[1230]: time="2024-04-12T18:52:39.501182438Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:52:39.508950 env[1230]: time="2024-04-12T18:52:39.506639750Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:52:39.508950 env[1230]: time="2024-04-12T18:52:39.506681959Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:52:39.508950 env[1230]: time="2024-04-12T18:52:39.506709431Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:52:39.508950 env[1230]: time="2024-04-12T18:52:39.506723628Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:52:39.594409 env[1230]: time="2024-04-12T18:52:39.594330991Z" level=info msg="Loading containers: start." Apr 12 18:52:40.479187 kernel: Initializing XFRM netlink socket Apr 12 18:52:40.603900 env[1230]: time="2024-04-12T18:52:40.601021366Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Apr 12 18:52:40.733605 systemd-networkd[1019]: docker0: Link UP Apr 12 18:52:40.758583 env[1230]: time="2024-04-12T18:52:40.758521749Z" level=info msg="Loading containers: done." 
Apr 12 18:52:40.800131 env[1230]: time="2024-04-12T18:52:40.800043742Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 12 18:52:40.800366 env[1230]: time="2024-04-12T18:52:40.800328496Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Apr 12 18:52:40.800677 env[1230]: time="2024-04-12T18:52:40.800474089Z" level=info msg="Daemon has completed initialization" Apr 12 18:52:40.849143 systemd[1]: Started docker.service. Apr 12 18:52:40.876958 env[1230]: time="2024-04-12T18:52:40.876860444Z" level=info msg="API listen on /run/docker.sock" Apr 12 18:52:40.918159 systemd[1]: Reloading. Apr 12 18:52:41.056859 /usr/lib/systemd/system-generators/torcx-generator[1370]: time="2024-04-12T18:52:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:52:41.056899 /usr/lib/systemd/system-generators/torcx-generator[1370]: time="2024-04-12T18:52:41Z" level=info msg="torcx already run" Apr 12 18:52:41.212080 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:52:41.212112 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:52:41.246377 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:52:41.406059 systemd[1]: Started kubelet.service. 
Apr 12 18:52:41.551476 kubelet[1411]: E0412 18:52:41.551386 1411 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 18:52:41.556204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:52:41.556376 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:52:42.002077 env[1124]: time="2024-04-12T18:52:42.001996423Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.3\"" Apr 12 18:52:42.762738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4234331500.mount: Deactivated successfully. Apr 12 18:52:45.383317 env[1124]: time="2024-04-12T18:52:45.383211441Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:45.389990 env[1124]: time="2024-04-12T18:52:45.389899170Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:45.394682 env[1124]: time="2024-04-12T18:52:45.394117239Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:45.399729 env[1124]: time="2024-04-12T18:52:45.397879934Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:45.399729 env[1124]: time="2024-04-12T18:52:45.398964999Z" 
level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.3\" returns image reference \"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533\"" Apr 12 18:52:45.432209 env[1124]: time="2024-04-12T18:52:45.432129963Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.3\"" Apr 12 18:52:48.911704 env[1124]: time="2024-04-12T18:52:48.911064134Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:48.915897 env[1124]: time="2024-04-12T18:52:48.915756592Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:48.928271 env[1124]: time="2024-04-12T18:52:48.927506864Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:48.936825 env[1124]: time="2024-04-12T18:52:48.932334415Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:48.936825 env[1124]: time="2024-04-12T18:52:48.933285759Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.3\" returns image reference \"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3\"" Apr 12 18:52:48.984006 env[1124]: time="2024-04-12T18:52:48.983951778Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.3\"" Apr 12 18:52:51.311020 env[1124]: time="2024-04-12T18:52:51.310919822Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:51.319202 env[1124]: time="2024-04-12T18:52:51.319107775Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:51.324140 env[1124]: time="2024-04-12T18:52:51.324052376Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:51.332630 env[1124]: time="2024-04-12T18:52:51.327018127Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:51.332630 env[1124]: time="2024-04-12T18:52:51.327964702Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.3\" returns image reference \"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b\"" Apr 12 18:52:51.361189 env[1124]: time="2024-04-12T18:52:51.361118986Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.3\"" Apr 12 18:52:51.819479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 12 18:52:51.821423 systemd[1]: Stopped kubelet.service. Apr 12 18:52:51.835205 systemd[1]: Started kubelet.service. 
Apr 12 18:52:51.954546 kubelet[1457]: E0412 18:52:51.953390 1457 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 18:52:51.962451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:52:51.962631 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:52:53.368440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4194182611.mount: Deactivated successfully. Apr 12 18:52:54.869926 env[1124]: time="2024-04-12T18:52:54.869834647Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:54.888224 env[1124]: time="2024-04-12T18:52:54.888016017Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:54.908850 env[1124]: time="2024-04-12T18:52:54.907389923Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:54.911592 env[1124]: time="2024-04-12T18:52:54.911380565Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:54.913139 env[1124]: time="2024-04-12T18:52:54.913083749Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.3\" returns image reference 
\"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392\"" Apr 12 18:52:54.950855 env[1124]: time="2024-04-12T18:52:54.950783335Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 12 18:52:55.555075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount103526367.mount: Deactivated successfully. Apr 12 18:52:57.701663 env[1124]: time="2024-04-12T18:52:57.700333261Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:57.707907 env[1124]: time="2024-04-12T18:52:57.706138957Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:57.707907 env[1124]: time="2024-04-12T18:52:57.707286869Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:57.715172 env[1124]: time="2024-04-12T18:52:57.715039184Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:57.715903 env[1124]: time="2024-04-12T18:52:57.715841098Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 12 18:52:57.739523 env[1124]: time="2024-04-12T18:52:57.739470573Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 12 18:52:58.251889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773988714.mount: Deactivated successfully. 
Apr 12 18:52:58.269459 env[1124]: time="2024-04-12T18:52:58.268408782Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:58.272700 env[1124]: time="2024-04-12T18:52:58.272584682Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:58.274803 env[1124]: time="2024-04-12T18:52:58.274718473Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:58.277003 env[1124]: time="2024-04-12T18:52:58.276948023Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:52:58.277407 env[1124]: time="2024-04-12T18:52:58.277329839Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 12 18:52:58.295748 env[1124]: time="2024-04-12T18:52:58.295695675Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Apr 12 18:52:59.052449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1981652512.mount: Deactivated successfully. Apr 12 18:53:02.090526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 12 18:53:02.090907 systemd[1]: Stopped kubelet.service. Apr 12 18:53:02.108504 systemd[1]: Started kubelet.service. 
Apr 12 18:53:02.334726 kubelet[1484]: E0412 18:53:02.334645 1484 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 18:53:02.338953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:53:02.339133 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:53:05.835442 env[1124]: time="2024-04-12T18:53:05.835289109Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:05.841449 env[1124]: time="2024-04-12T18:53:05.841361157Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:05.845629 env[1124]: time="2024-04-12T18:53:05.845528619Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:05.850369 env[1124]: time="2024-04-12T18:53:05.848583934Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:05.853692 env[1124]: time="2024-04-12T18:53:05.853606264Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Apr 12 18:53:08.984893 systemd[1]: Stopped kubelet.service. Apr 12 18:53:09.008357 systemd[1]: Reloading. 
Apr 12 18:53:09.073726 /usr/lib/systemd/system-generators/torcx-generator[1591]: time="2024-04-12T18:53:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:53:09.074159 /usr/lib/systemd/system-generators/torcx-generator[1591]: time="2024-04-12T18:53:09Z" level=info msg="torcx already run" Apr 12 18:53:09.212418 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:53:09.212446 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:53:09.240190 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:53:09.398929 systemd[1]: Started kubelet.service. Apr 12 18:53:09.510876 kubelet[1633]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:53:09.510876 kubelet[1633]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:53:09.510876 kubelet[1633]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 12 18:53:09.510876 kubelet[1633]: I0412 18:53:09.510746 1633 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:53:10.125150 kubelet[1633]: I0412 18:53:10.125055 1633 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Apr 12 18:53:10.125150 kubelet[1633]: I0412 18:53:10.125128 1633 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:53:10.133350 kubelet[1633]: I0412 18:53:10.125438 1633 server.go:919] "Client rotation is on, will bootstrap in background" Apr 12 18:53:10.133350 kubelet[1633]: E0412 18:53:10.131112 1633 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:10.133350 kubelet[1633]: I0412 18:53:10.131210 1633 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:53:10.145662 kubelet[1633]: I0412 18:53:10.145573 1633 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 12 18:53:10.146024 kubelet[1633]: I0412 18:53:10.145991 1633 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:53:10.148628 kubelet[1633]: I0412 18:53:10.146259 1633 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 12 18:53:10.148628 kubelet[1633]: I0412 18:53:10.147752 1633 topology_manager.go:138] "Creating topology manager with none policy" Apr 12 18:53:10.148628 kubelet[1633]: I0412 18:53:10.147797 1633 container_manager_linux.go:301] "Creating device plugin manager" Apr 12 18:53:10.148628 kubelet[1633]: I0412 
18:53:10.148141 1633 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:53:10.148628 kubelet[1633]: I0412 18:53:10.148319 1633 kubelet.go:396] "Attempting to sync node with API server" Apr 12 18:53:10.148628 kubelet[1633]: I0412 18:53:10.148343 1633 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:53:10.148628 kubelet[1633]: I0412 18:53:10.148408 1633 kubelet.go:312] "Adding apiserver pod source" Apr 12 18:53:10.149049 kubelet[1633]: I0412 18:53:10.148436 1633 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:53:10.150605 kubelet[1633]: I0412 18:53:10.150566 1633 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:53:10.152166 kubelet[1633]: I0412 18:53:10.150992 1633 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 12 18:53:10.152166 kubelet[1633]: W0412 18:53:10.151004 1633 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:10.152166 kubelet[1633]: E0412 18:53:10.151082 1633 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:10.152166 kubelet[1633]: W0412 18:53:10.151082 1633 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 12 18:53:10.153181 kubelet[1633]: I0412 18:53:10.153145 1633 server.go:1256] "Started kubelet" Apr 12 18:53:10.156493 kubelet[1633]: W0412 18:53:10.156404 1633 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:10.156493 kubelet[1633]: E0412 18:53:10.156509 1633 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:10.158842 kubelet[1633]: E0412 18:53:10.157504 1633 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.118:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.118:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17c59d1f17cba287 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-04-12 18:53:10.153110151 +0000 UTC m=+0.746852355,LastTimestamp:2024-04-12 18:53:10.153110151 +0000 UTC m=+0.746852355,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 12 18:53:10.158842 kubelet[1633]: I0412 18:53:10.157569 1633 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:53:10.158842 kubelet[1633]: I0412 18:53:10.158034 1633 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 12 18:53:10.158842 kubelet[1633]: I0412 18:53:10.158571 1633 server.go:233] "Starting to serve the podresources 
API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 12 18:53:10.162323 kubelet[1633]: I0412 18:53:10.159414 1633 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:53:10.166461 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Apr 12 18:53:10.168226 kubelet[1633]: I0412 18:53:10.167477 1633 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:53:10.170379 kubelet[1633]: I0412 18:53:10.168620 1633 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 12 18:53:10.170379 kubelet[1633]: I0412 18:53:10.169320 1633 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Apr 12 18:53:10.170379 kubelet[1633]: I0412 18:53:10.169418 1633 reconciler_new.go:29] "Reconciler: start to sync state" Apr 12 18:53:10.173125 kubelet[1633]: I0412 18:53:10.173070 1633 factory.go:221] Registration of the systemd container factory successfully Apr 12 18:53:10.173358 kubelet[1633]: I0412 18:53:10.173211 1633 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 12 18:53:10.175018 kubelet[1633]: I0412 18:53:10.174981 1633 factory.go:221] Registration of the containerd container factory successfully Apr 12 18:53:10.175784 kubelet[1633]: W0412 18:53:10.175650 1633 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:10.175784 kubelet[1633]: E0412 18:53:10.175758 1633 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection 
refused Apr 12 18:53:10.176022 kubelet[1633]: E0412 18:53:10.175988 1633 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="200ms" Apr 12 18:53:10.209559 kubelet[1633]: I0412 18:53:10.209511 1633 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:53:10.209833 kubelet[1633]: I0412 18:53:10.209815 1633 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:53:10.209969 kubelet[1633]: I0412 18:53:10.209953 1633 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:53:10.215523 kubelet[1633]: I0412 18:53:10.215461 1633 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 12 18:53:10.222672 kubelet[1633]: I0412 18:53:10.222611 1633 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 12 18:53:10.222874 kubelet[1633]: I0412 18:53:10.222710 1633 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 12 18:53:10.222874 kubelet[1633]: I0412 18:53:10.222746 1633 kubelet.go:2329] "Starting kubelet main sync loop" Apr 12 18:53:10.222874 kubelet[1633]: E0412 18:53:10.222855 1633 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:53:10.224481 kubelet[1633]: W0412 18:53:10.224415 1633 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:10.224587 kubelet[1633]: E0412 18:53:10.224507 1633 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:10.272241 kubelet[1633]: I0412 18:53:10.270538 1633 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 12 18:53:10.275110 kubelet[1633]: E0412 18:53:10.275054 1633 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Apr 12 18:53:10.275844 kubelet[1633]: E0412 18:53:10.275721 1633 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.118:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.118:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17c59d1f17cba287 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-04-12 18:53:10.153110151 +0000 UTC m=+0.746852355,LastTimestamp:2024-04-12 18:53:10.153110151 +0000 UTC m=+0.746852355,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 12 18:53:10.323169 kubelet[1633]: E0412 18:53:10.323015 1633 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 12 18:53:10.377571 kubelet[1633]: E0412 18:53:10.377373 1633 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="400ms" Apr 12 18:53:10.459748 kubelet[1633]: I0412 18:53:10.459666 1633 
policy_none.go:49] "None policy: Start" Apr 12 18:53:10.464342 kubelet[1633]: I0412 18:53:10.460933 1633 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 12 18:53:10.464342 kubelet[1633]: I0412 18:53:10.460979 1633 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:53:10.483085 kubelet[1633]: I0412 18:53:10.482025 1633 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 12 18:53:10.483085 kubelet[1633]: E0412 18:53:10.482464 1633 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Apr 12 18:53:10.491605 systemd[1]: Created slice kubepods.slice. Apr 12 18:53:10.515620 systemd[1]: Created slice kubepods-burstable.slice. Apr 12 18:53:10.521138 systemd[1]: Created slice kubepods-besteffort.slice. Apr 12 18:53:10.526161 kubelet[1633]: E0412 18:53:10.525508 1633 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 12 18:53:10.535394 kubelet[1633]: I0412 18:53:10.532116 1633 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:53:10.535394 kubelet[1633]: I0412 18:53:10.532479 1633 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:53:10.535394 kubelet[1633]: E0412 18:53:10.534379 1633 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 12 18:53:10.779188 kubelet[1633]: E0412 18:53:10.778955 1633 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="800ms" Apr 12 18:53:10.899020 kubelet[1633]: I0412 18:53:10.898214 1633 
kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 12 18:53:10.899020 kubelet[1633]: E0412 18:53:10.898891 1633 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Apr 12 18:53:10.926351 kubelet[1633]: I0412 18:53:10.926240 1633 topology_manager.go:215] "Topology Admit Handler" podUID="57f39bebe7d3a0d0d07d357156046409" podNamespace="kube-system" podName="kube-apiserver-localhost" Apr 12 18:53:10.928210 kubelet[1633]: I0412 18:53:10.928162 1633 topology_manager.go:215] "Topology Admit Handler" podUID="f4e8212a5db7e0401319814fa9ad65c9" podNamespace="kube-system" podName="kube-controller-manager-localhost" Apr 12 18:53:10.931319 kubelet[1633]: I0412 18:53:10.931282 1633 topology_manager.go:215] "Topology Admit Handler" podUID="5d5c5aff921df216fcba2c51c322ceb1" podNamespace="kube-system" podName="kube-scheduler-localhost" Apr 12 18:53:11.027526 kubelet[1633]: I0412 18:53:11.026649 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57f39bebe7d3a0d0d07d357156046409-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"57f39bebe7d3a0d0d07d357156046409\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:53:11.042853 systemd[1]: Created slice kubepods-burstable-pod57f39bebe7d3a0d0d07d357156046409.slice. Apr 12 18:53:11.109293 systemd[1]: Created slice kubepods-burstable-podf4e8212a5db7e0401319814fa9ad65c9.slice. 
Apr 12 18:53:11.127390 kubelet[1633]: I0412 18:53:11.127329 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:53:11.127683 kubelet[1633]: I0412 18:53:11.127662 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:53:11.127841 kubelet[1633]: I0412 18:53:11.127819 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57f39bebe7d3a0d0d07d357156046409-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"57f39bebe7d3a0d0d07d357156046409\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:53:11.127981 kubelet[1633]: I0412 18:53:11.127961 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57f39bebe7d3a0d0d07d357156046409-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"57f39bebe7d3a0d0d07d357156046409\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:53:11.128108 kubelet[1633]: I0412 18:53:11.128089 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " 
pod="kube-system/kube-controller-manager-localhost" Apr 12 18:53:11.128235 kubelet[1633]: I0412 18:53:11.128216 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:53:11.128371 kubelet[1633]: I0412 18:53:11.128351 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d5c5aff921df216fcba2c51c322ceb1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5d5c5aff921df216fcba2c51c322ceb1\") " pod="kube-system/kube-scheduler-localhost" Apr 12 18:53:11.128589 kubelet[1633]: I0412 18:53:11.128568 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:53:11.135658 systemd[1]: Created slice kubepods-burstable-pod5d5c5aff921df216fcba2c51c322ceb1.slice. 
Apr 12 18:53:11.341193 kubelet[1633]: W0412 18:53:11.341112 1633 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:11.341193 kubelet[1633]: E0412 18:53:11.341164 1633 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:11.355748 kubelet[1633]: E0412 18:53:11.355665 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:11.363090 env[1124]: time="2024-04-12T18:53:11.362560449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:57f39bebe7d3a0d0d07d357156046409,Namespace:kube-system,Attempt:0,}" Apr 12 18:53:11.409137 kubelet[1633]: W0412 18:53:11.409054 1633 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:11.409137 kubelet[1633]: E0412 18:53:11.409120 1633 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:11.417518 kubelet[1633]: E0412 18:53:11.417381 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:11.418183 
env[1124]: time="2024-04-12T18:53:11.418104214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f4e8212a5db7e0401319814fa9ad65c9,Namespace:kube-system,Attempt:0,}" Apr 12 18:53:11.442014 kubelet[1633]: E0412 18:53:11.441927 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:11.442802 env[1124]: time="2024-04-12T18:53:11.442721766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5d5c5aff921df216fcba2c51c322ceb1,Namespace:kube-system,Attempt:0,}" Apr 12 18:53:11.534190 kubelet[1633]: W0412 18:53:11.531468 1633 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:11.534190 kubelet[1633]: E0412 18:53:11.532790 1633 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:11.580251 kubelet[1633]: E0412 18:53:11.580116 1633 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="1.6s" Apr 12 18:53:11.590783 kubelet[1633]: W0412 18:53:11.590667 1633 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 
18:53:11.590783 kubelet[1633]: E0412 18:53:11.590752 1633 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:11.704131 kubelet[1633]: I0412 18:53:11.703082 1633 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 12 18:53:11.704131 kubelet[1633]: E0412 18:53:11.703509 1633 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Apr 12 18:53:12.086945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount76736437.mount: Deactivated successfully. Apr 12 18:53:12.110408 env[1124]: time="2024-04-12T18:53:12.110324213Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:12.127309 env[1124]: time="2024-04-12T18:53:12.127214948Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:12.132342 env[1124]: time="2024-04-12T18:53:12.129791676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:12.132342 env[1124]: time="2024-04-12T18:53:12.131906539Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:12.142396 env[1124]: time="2024-04-12T18:53:12.142219413Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:12.151410 env[1124]: time="2024-04-12T18:53:12.147827529Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:12.153615 env[1124]: time="2024-04-12T18:53:12.153543569Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:12.157387 env[1124]: time="2024-04-12T18:53:12.156203675Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:12.163719 env[1124]: time="2024-04-12T18:53:12.163205119Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:12.167433 env[1124]: time="2024-04-12T18:53:12.167365164Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:12.172518 env[1124]: time="2024-04-12T18:53:12.172435944Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:12.180463 env[1124]: time="2024-04-12T18:53:12.180004161Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:12.237802 kubelet[1633]: E0412 18:53:12.233947 1633 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:12.329642 env[1124]: time="2024-04-12T18:53:12.328571296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:53:12.329642 env[1124]: time="2024-04-12T18:53:12.328662900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:53:12.329642 env[1124]: time="2024-04-12T18:53:12.328680343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:53:12.329642 env[1124]: time="2024-04-12T18:53:12.328905549Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27f2de0fa97a9a5ebcd3176e63cfee82ea85e607bb90fdf97b62a6113b4760cf pid=1691 runtime=io.containerd.runc.v2 Apr 12 18:53:12.332475 env[1124]: time="2024-04-12T18:53:12.330961922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:53:12.332475 env[1124]: time="2024-04-12T18:53:12.331010824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:53:12.332475 env[1124]: time="2024-04-12T18:53:12.331026124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:53:12.332475 env[1124]: time="2024-04-12T18:53:12.331327414Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c0058729b35103c3ed151a99c7d78f07966c0a2cd442bf4b50d6bb44f1d55bc pid=1674 runtime=io.containerd.runc.v2 Apr 12 18:53:12.349467 env[1124]: time="2024-04-12T18:53:12.344660163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:53:12.349467 env[1124]: time="2024-04-12T18:53:12.344726780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:53:12.349467 env[1124]: time="2024-04-12T18:53:12.344743391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:53:12.349467 env[1124]: time="2024-04-12T18:53:12.345024072Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab10e6cdc420c19cd1d463afe9ec47f00cdbfa6b3962a6df3102c00387b3694a pid=1709 runtime=io.containerd.runc.v2 Apr 12 18:53:12.371652 systemd[1]: Started cri-containerd-5c0058729b35103c3ed151a99c7d78f07966c0a2cd442bf4b50d6bb44f1d55bc.scope. Apr 12 18:53:12.601673 systemd[1]: Started cri-containerd-ab10e6cdc420c19cd1d463afe9ec47f00cdbfa6b3962a6df3102c00387b3694a.scope. Apr 12 18:53:12.612908 systemd[1]: Started cri-containerd-27f2de0fa97a9a5ebcd3176e63cfee82ea85e607bb90fdf97b62a6113b4760cf.scope. 
Apr 12 18:53:12.832723 env[1124]: time="2024-04-12T18:53:12.832660465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5d5c5aff921df216fcba2c51c322ceb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"27f2de0fa97a9a5ebcd3176e63cfee82ea85e607bb90fdf97b62a6113b4760cf\"" Apr 12 18:53:12.835758 kubelet[1633]: E0412 18:53:12.835410 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:12.849212 env[1124]: time="2024-04-12T18:53:12.849102971Z" level=info msg="CreateContainer within sandbox \"27f2de0fa97a9a5ebcd3176e63cfee82ea85e607bb90fdf97b62a6113b4760cf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 12 18:53:12.979645 env[1124]: time="2024-04-12T18:53:12.979260106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:57f39bebe7d3a0d0d07d357156046409,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c0058729b35103c3ed151a99c7d78f07966c0a2cd442bf4b50d6bb44f1d55bc\"" Apr 12 18:53:12.984544 kubelet[1633]: E0412 18:53:12.984304 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:13.003620 env[1124]: time="2024-04-12T18:53:13.002820826Z" level=info msg="CreateContainer within sandbox \"27f2de0fa97a9a5ebcd3176e63cfee82ea85e607bb90fdf97b62a6113b4760cf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f16d1e3240990d777b38c6e84509fd3b16719fe5828a15e91d9f34e3e0b57129\"" Apr 12 18:53:13.003620 env[1124]: time="2024-04-12T18:53:13.003439837Z" level=info msg="CreateContainer within sandbox \"5c0058729b35103c3ed151a99c7d78f07966c0a2cd442bf4b50d6bb44f1d55bc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 12 18:53:13.007533 env[1124]: 
time="2024-04-12T18:53:13.004881575Z" level=info msg="StartContainer for \"f16d1e3240990d777b38c6e84509fd3b16719fe5828a15e91d9f34e3e0b57129\"" Apr 12 18:53:13.031873 env[1124]: time="2024-04-12T18:53:13.031811194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f4e8212a5db7e0401319814fa9ad65c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab10e6cdc420c19cd1d463afe9ec47f00cdbfa6b3962a6df3102c00387b3694a\"" Apr 12 18:53:13.035053 kubelet[1633]: E0412 18:53:13.034832 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:13.050335 env[1124]: time="2024-04-12T18:53:13.045571367Z" level=info msg="CreateContainer within sandbox \"ab10e6cdc420c19cd1d463afe9ec47f00cdbfa6b3962a6df3102c00387b3694a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 12 18:53:13.084153 env[1124]: time="2024-04-12T18:53:13.084083318Z" level=info msg="CreateContainer within sandbox \"5c0058729b35103c3ed151a99c7d78f07966c0a2cd442bf4b50d6bb44f1d55bc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5ed302f7ca2700738065dc47facf853a0581ea2b68af704b6260ff59d882c103\"" Apr 12 18:53:13.085269 env[1124]: time="2024-04-12T18:53:13.085188309Z" level=info msg="StartContainer for \"5ed302f7ca2700738065dc47facf853a0581ea2b68af704b6260ff59d882c103\"" Apr 12 18:53:13.103119 systemd[1]: Started cri-containerd-f16d1e3240990d777b38c6e84509fd3b16719fe5828a15e91d9f34e3e0b57129.scope. Apr 12 18:53:13.117136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3811907559.mount: Deactivated successfully. 
Apr 12 18:53:13.168265 env[1124]: time="2024-04-12T18:53:13.159343044Z" level=info msg="CreateContainer within sandbox \"ab10e6cdc420c19cd1d463afe9ec47f00cdbfa6b3962a6df3102c00387b3694a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c03b0d5baf67b31712276d2ad82f0b93e2a4c7fe744a23bfcbf409739a8efeac\"" Apr 12 18:53:13.169062 env[1124]: time="2024-04-12T18:53:13.169018130Z" level=info msg="StartContainer for \"c03b0d5baf67b31712276d2ad82f0b93e2a4c7fe744a23bfcbf409739a8efeac\"" Apr 12 18:53:13.166098 systemd[1]: Started cri-containerd-5ed302f7ca2700738065dc47facf853a0581ea2b68af704b6260ff59d882c103.scope. Apr 12 18:53:13.182495 kubelet[1633]: E0412 18:53:13.181783 1633 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="3.2s" Apr 12 18:53:13.245573 update_engine[1117]: I0412 18:53:13.234001 1117 update_attempter.cc:509] Updating boot flags... Apr 12 18:53:13.276896 systemd[1]: Started cri-containerd-c03b0d5baf67b31712276d2ad82f0b93e2a4c7fe744a23bfcbf409739a8efeac.scope. 
Apr 12 18:53:13.282810 kubelet[1633]: W0412 18:53:13.281777 1633 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:13.282810 kubelet[1633]: E0412 18:53:13.281892 1633 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:13.314860 kubelet[1633]: I0412 18:53:13.314820 1633 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 12 18:53:13.315240 kubelet[1633]: E0412 18:53:13.315213 1633 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Apr 12 18:53:13.377596 env[1124]: time="2024-04-12T18:53:13.377531821Z" level=info msg="StartContainer for \"f16d1e3240990d777b38c6e84509fd3b16719fe5828a15e91d9f34e3e0b57129\" returns successfully" Apr 12 18:53:13.554926 kubelet[1633]: W0412 18:53:13.554549 1633 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:13.554926 kubelet[1633]: E0412 18:53:13.554636 1633 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Apr 12 18:53:13.656597 env[1124]: time="2024-04-12T18:53:13.656483518Z" level=info msg="StartContainer for 
\"5ed302f7ca2700738065dc47facf853a0581ea2b68af704b6260ff59d882c103\" returns successfully" Apr 12 18:53:13.658129 env[1124]: time="2024-04-12T18:53:13.658007852Z" level=info msg="StartContainer for \"c03b0d5baf67b31712276d2ad82f0b93e2a4c7fe744a23bfcbf409739a8efeac\" returns successfully" Apr 12 18:53:14.266665 kubelet[1633]: E0412 18:53:14.266617 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:14.269356 kubelet[1633]: E0412 18:53:14.269326 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:14.279845 kubelet[1633]: E0412 18:53:14.279794 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:15.283409 kubelet[1633]: E0412 18:53:15.282319 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:15.283409 kubelet[1633]: E0412 18:53:15.282995 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:15.283914 kubelet[1633]: E0412 18:53:15.283584 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:16.283280 kubelet[1633]: E0412 18:53:16.283205 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:16.522280 kubelet[1633]: I0412 
18:53:16.520534 1633 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 12 18:53:16.713569 kubelet[1633]: E0412 18:53:16.713470 1633 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 12 18:53:16.759908 kubelet[1633]: I0412 18:53:16.759820 1633 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Apr 12 18:53:16.763498 kubelet[1633]: E0412 18:53:16.761398 1633 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Apr 12 18:53:16.912094 kubelet[1633]: E0412 18:53:16.912036 1633 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:53:17.013936 kubelet[1633]: E0412 18:53:17.013793 1633 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:53:17.124334 kubelet[1633]: E0412 18:53:17.123025 1633 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:53:17.226693 kubelet[1633]: E0412 18:53:17.225060 1633 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:53:17.325487 kubelet[1633]: E0412 18:53:17.325287 1633 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:53:17.425946 kubelet[1633]: E0412 18:53:17.425711 1633 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:53:17.527217 kubelet[1633]: E0412 18:53:17.526873 1633 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:53:18.163468 kubelet[1633]: I0412 18:53:18.163379 1633 apiserver.go:52] "Watching apiserver" Apr 12 18:53:18.170469 kubelet[1633]: I0412 
18:53:18.170394 1633 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Apr 12 18:53:20.016895 kubelet[1633]: E0412 18:53:20.016813 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:20.102002 kubelet[1633]: E0412 18:53:20.101952 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:20.333339 kubelet[1633]: E0412 18:53:20.330063 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:20.333339 kubelet[1633]: E0412 18:53:20.330235 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:20.342602 kubelet[1633]: I0412 18:53:20.342550 1633 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.342458316 podStartE2EDuration="342.458316ms" podCreationTimestamp="2024-04-12 18:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:53:20.326749182 +0000 UTC m=+10.920491376" watchObservedRunningTime="2024-04-12 18:53:20.342458316 +0000 UTC m=+10.936200531" Apr 12 18:53:20.673570 kubelet[1633]: E0412 18:53:20.673503 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:20.678606 kubelet[1633]: I0412 18:53:20.678564 1633 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.6784763 podStartE2EDuration="678.4763ms" podCreationTimestamp="2024-04-12 18:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:53:20.342976214 +0000 UTC m=+10.936718398" watchObservedRunningTime="2024-04-12 18:53:20.6784763 +0000 UTC m=+11.272218484" Apr 12 18:53:21.335387 kubelet[1633]: E0412 18:53:21.331604 1633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:21.714132 systemd[1]: Reloading. Apr 12 18:53:21.843677 /usr/lib/systemd/system-generators/torcx-generator[1941]: time="2024-04-12T18:53:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:53:21.844311 /usr/lib/systemd/system-generators/torcx-generator[1941]: time="2024-04-12T18:53:21Z" level=info msg="torcx already run" Apr 12 18:53:22.010692 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:53:22.010722 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:53:22.044144 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:53:22.345955 systemd[1]: Stopping kubelet.service... Apr 12 18:53:22.359401 systemd[1]: kubelet.service: Deactivated successfully. 
Apr 12 18:53:22.359667 systemd[1]: Stopped kubelet.service. Apr 12 18:53:22.359741 systemd[1]: kubelet.service: Consumed 1.991s CPU time. Apr 12 18:53:22.363437 systemd[1]: Started kubelet.service. Apr 12 18:53:22.465985 kubelet[1983]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:53:22.465985 kubelet[1983]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:53:22.465985 kubelet[1983]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:53:22.465985 kubelet[1983]: I0412 18:53:22.461355 1983 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:53:22.480873 kubelet[1983]: I0412 18:53:22.480193 1983 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Apr 12 18:53:22.480873 kubelet[1983]: I0412 18:53:22.480243 1983 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:53:22.480873 kubelet[1983]: I0412 18:53:22.480526 1983 server.go:919] "Client rotation is on, will bootstrap in background" Apr 12 18:53:22.485289 kubelet[1983]: I0412 18:53:22.482596 1983 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 12 18:53:22.485480 kubelet[1983]: I0412 18:53:22.485323 1983 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:53:22.517485 kubelet[1983]: I0412 18:53:22.517411 1983 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 12 18:53:22.517749 kubelet[1983]: I0412 18:53:22.517724 1983 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:53:22.521317 kubelet[1983]: I0412 18:53:22.518014 1983 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":
null} Apr 12 18:53:22.521317 kubelet[1983]: I0412 18:53:22.518059 1983 topology_manager.go:138] "Creating topology manager with none policy" Apr 12 18:53:22.521317 kubelet[1983]: I0412 18:53:22.518071 1983 container_manager_linux.go:301] "Creating device plugin manager" Apr 12 18:53:22.521317 kubelet[1983]: I0412 18:53:22.518128 1983 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:53:22.521317 kubelet[1983]: I0412 18:53:22.518251 1983 kubelet.go:396] "Attempting to sync node with API server" Apr 12 18:53:22.521317 kubelet[1983]: I0412 18:53:22.518270 1983 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:53:22.521317 kubelet[1983]: I0412 18:53:22.518300 1983 kubelet.go:312] "Adding apiserver pod source" Apr 12 18:53:22.521728 kubelet[1983]: I0412 18:53:22.518314 1983 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:53:22.523629 kubelet[1983]: I0412 18:53:22.523591 1983 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:53:22.524065 kubelet[1983]: I0412 18:53:22.524048 1983 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 12 18:53:22.525712 kubelet[1983]: I0412 18:53:22.525691 1983 server.go:1256] "Started kubelet" Apr 12 18:53:22.530867 kubelet[1983]: I0412 18:53:22.530446 1983 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:53:22.544818 kubelet[1983]: I0412 18:53:22.535949 1983 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:53:22.544818 kubelet[1983]: I0412 18:53:22.536141 1983 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 12 18:53:22.544818 kubelet[1983]: I0412 18:53:22.536582 1983 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 12 18:53:22.544818 kubelet[1983]: I0412 
18:53:22.537143 1983 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 12 18:53:22.544818 kubelet[1983]: I0412 18:53:22.537570 1983 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Apr 12 18:53:22.544818 kubelet[1983]: I0412 18:53:22.537755 1983 reconciler_new.go:29] "Reconciler: start to sync state" Apr 12 18:53:22.544818 kubelet[1983]: I0412 18:53:22.538935 1983 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:53:22.543102 sudo[1998]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 12 18:53:22.543509 sudo[1998]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Apr 12 18:53:22.553313 kubelet[1983]: I0412 18:53:22.552263 1983 factory.go:221] Registration of the systemd container factory successfully Apr 12 18:53:22.553313 kubelet[1983]: I0412 18:53:22.552473 1983 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 12 18:53:22.562831 kubelet[1983]: E0412 18:53:22.561938 1983 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:53:22.565672 kubelet[1983]: I0412 18:53:22.565633 1983 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 12 18:53:22.570153 kubelet[1983]: I0412 18:53:22.567521 1983 factory.go:221] Registration of the containerd container factory successfully Apr 12 18:53:22.576281 kubelet[1983]: I0412 18:53:22.573308 1983 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 12 18:53:22.576281 kubelet[1983]: I0412 18:53:22.573353 1983 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 12 18:53:22.576281 kubelet[1983]: I0412 18:53:22.573379 1983 kubelet.go:2329] "Starting kubelet main sync loop" Apr 12 18:53:22.576281 kubelet[1983]: E0412 18:53:22.573443 1983 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:53:22.661890 kubelet[1983]: I0412 18:53:22.660137 1983 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 12 18:53:22.676894 kubelet[1983]: E0412 18:53:22.676843 1983 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 12 18:53:22.689597 kubelet[1983]: I0412 18:53:22.689260 1983 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Apr 12 18:53:22.689597 kubelet[1983]: I0412 18:53:22.689404 1983 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Apr 12 18:53:22.709286 kubelet[1983]: I0412 18:53:22.709232 1983 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:53:22.709286 kubelet[1983]: I0412 18:53:22.709270 1983 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:53:22.709537 kubelet[1983]: I0412 18:53:22.709324 1983 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:53:22.709609 kubelet[1983]: I0412 18:53:22.709575 1983 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 12 18:53:22.709663 kubelet[1983]: I0412 18:53:22.709622 1983 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 12 18:53:22.709663 kubelet[1983]: I0412 18:53:22.709634 1983 policy_none.go:49] "None policy: Start" Apr 12 18:53:22.710370 kubelet[1983]: I0412 18:53:22.710324 1983 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 12 18:53:22.710429 kubelet[1983]: I0412 18:53:22.710376 1983 
state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:53:22.712650 kubelet[1983]: I0412 18:53:22.710578 1983 state_mem.go:75] "Updated machine memory state" Apr 12 18:53:22.729460 kubelet[1983]: I0412 18:53:22.729390 1983 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:53:22.730278 kubelet[1983]: I0412 18:53:22.730246 1983 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:53:22.880282 kubelet[1983]: I0412 18:53:22.878401 1983 topology_manager.go:215] "Topology Admit Handler" podUID="57f39bebe7d3a0d0d07d357156046409" podNamespace="kube-system" podName="kube-apiserver-localhost" Apr 12 18:53:22.880282 kubelet[1983]: I0412 18:53:22.878568 1983 topology_manager.go:215] "Topology Admit Handler" podUID="f4e8212a5db7e0401319814fa9ad65c9" podNamespace="kube-system" podName="kube-controller-manager-localhost" Apr 12 18:53:22.880282 kubelet[1983]: I0412 18:53:22.878658 1983 topology_manager.go:215] "Topology Admit Handler" podUID="5d5c5aff921df216fcba2c51c322ceb1" podNamespace="kube-system" podName="kube-scheduler-localhost" Apr 12 18:53:22.899371 kubelet[1983]: E0412 18:53:22.899320 1983 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 12 18:53:22.903556 kubelet[1983]: E0412 18:53:22.903502 1983 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 12 18:53:22.903713 kubelet[1983]: E0412 18:53:22.903630 1983 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 12 18:53:22.979674 kubelet[1983]: I0412 18:53:22.979463 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:53:22.979674 kubelet[1983]: I0412 18:53:22.979542 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:53:22.979674 kubelet[1983]: I0412 18:53:22.979579 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:53:22.979674 kubelet[1983]: I0412 18:53:22.979613 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d5c5aff921df216fcba2c51c322ceb1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5d5c5aff921df216fcba2c51c322ceb1\") " pod="kube-system/kube-scheduler-localhost" Apr 12 18:53:22.979674 kubelet[1983]: I0412 18:53:22.979639 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57f39bebe7d3a0d0d07d357156046409-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"57f39bebe7d3a0d0d07d357156046409\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:53:22.980065 kubelet[1983]: I0412 18:53:22.979662 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/57f39bebe7d3a0d0d07d357156046409-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"57f39bebe7d3a0d0d07d357156046409\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:53:22.980065 kubelet[1983]: I0412 18:53:22.979688 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:53:22.980065 kubelet[1983]: I0412 18:53:22.979718 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57f39bebe7d3a0d0d07d357156046409-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"57f39bebe7d3a0d0d07d357156046409\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:53:22.980065 kubelet[1983]: I0412 18:53:22.979794 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:53:23.201253 kubelet[1983]: E0412 18:53:23.201189 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:23.204321 kubelet[1983]: E0412 18:53:23.204273 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:23.204567 kubelet[1983]: E0412 18:53:23.204546 1983 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:23.523381 kubelet[1983]: I0412 18:53:23.522888 1983 apiserver.go:52] "Watching apiserver" Apr 12 18:53:23.539512 kubelet[1983]: I0412 18:53:23.538106 1983 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Apr 12 18:53:23.601757 kubelet[1983]: E0412 18:53:23.600932 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:23.605152 kubelet[1983]: E0412 18:53:23.604111 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:23.610562 kubelet[1983]: E0412 18:53:23.610439 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:23.658365 kubelet[1983]: I0412 18:53:23.650489 1983 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.650432547 podStartE2EDuration="3.650432547s" podCreationTimestamp="2024-04-12 18:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:53:23.650107234 +0000 UTC m=+1.272782355" watchObservedRunningTime="2024-04-12 18:53:23.650432547 +0000 UTC m=+1.273107658" Apr 12 18:53:24.674802 kubelet[1983]: E0412 18:53:24.666662 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:24.674802 kubelet[1983]: E0412 18:53:24.667897 1983 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:25.170820 sudo[1998]: pam_unix(sudo:session): session closed for user root Apr 12 18:53:25.682535 kubelet[1983]: E0412 18:53:25.682496 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:25.684735 kubelet[1983]: E0412 18:53:25.683978 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:26.546001 sudo[1213]: pam_unix(sudo:session): session closed for user root Apr 12 18:53:26.550915 sshd[1210]: pam_unix(sshd:session): session closed for user core Apr 12 18:53:26.559938 systemd[1]: sshd@4-10.0.0.118:22-10.0.0.1:57956.service: Deactivated successfully. Apr 12 18:53:26.560849 systemd[1]: session-5.scope: Deactivated successfully. Apr 12 18:53:26.561026 systemd[1]: session-5.scope: Consumed 6.991s CPU time. Apr 12 18:53:26.567627 systemd-logind[1116]: Session 5 logged out. Waiting for processes to exit. Apr 12 18:53:26.572788 systemd-logind[1116]: Removed session 5. 
Apr 12 18:53:26.685444 kubelet[1983]: E0412 18:53:26.684576 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:30.101993 kubelet[1983]: E0412 18:53:30.092522 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:30.703098 kubelet[1983]: E0412 18:53:30.703059 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:34.364964 kubelet[1983]: I0412 18:53:34.360107 1983 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 12 18:53:34.366300 env[1124]: time="2024-04-12T18:53:34.366218324Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 12 18:53:34.367339 kubelet[1983]: I0412 18:53:34.367291 1983 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 12 18:53:35.067861 kubelet[1983]: I0412 18:53:35.066081 1983 topology_manager.go:215] "Topology Admit Handler" podUID="e013ebdf-8734-4cd7-826a-183d39594872" podNamespace="kube-system" podName="cilium-2qnbb" Apr 12 18:53:35.098265 systemd[1]: Created slice kubepods-burstable-pode013ebdf_8734_4cd7_826a_183d39594872.slice. Apr 12 18:53:35.113030 kubelet[1983]: I0412 18:53:35.106063 1983 topology_manager.go:215] "Topology Admit Handler" podUID="1e360192-9c61-4ce9-a0a2-ab106fbbdf00" podNamespace="kube-system" podName="kube-proxy-jdct7" Apr 12 18:53:35.120995 systemd[1]: Created slice kubepods-besteffort-pod1e360192_9c61_4ce9_a0a2_ab106fbbdf00.slice. 
Apr 12 18:53:35.133395 kubelet[1983]: I0412 18:53:35.133323 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-lib-modules\") pod \"cilium-2qnbb\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " pod="kube-system/cilium-2qnbb" Apr 12 18:53:35.133596 kubelet[1983]: I0412 18:53:35.133412 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-xtables-lock\") pod \"cilium-2qnbb\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " pod="kube-system/cilium-2qnbb" Apr 12 18:53:35.133596 kubelet[1983]: I0412 18:53:35.133457 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-host-proc-sys-net\") pod \"cilium-2qnbb\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " pod="kube-system/cilium-2qnbb" Apr 12 18:53:35.133596 kubelet[1983]: I0412 18:53:35.133499 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-hostproc\") pod \"cilium-2qnbb\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " pod="kube-system/cilium-2qnbb" Apr 12 18:53:35.133596 kubelet[1983]: I0412 18:53:35.133535 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-cilium-run\") pod \"cilium-2qnbb\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " pod="kube-system/cilium-2qnbb" Apr 12 18:53:35.133596 kubelet[1983]: I0412 18:53:35.133569 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e013ebdf-8734-4cd7-826a-183d39594872-hubble-tls\") pod \"cilium-2qnbb\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " pod="kube-system/cilium-2qnbb" Apr 12 18:53:35.133596 kubelet[1983]: I0412 18:53:35.133598 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1e360192-9c61-4ce9-a0a2-ab106fbbdf00-kube-proxy\") pod \"kube-proxy-jdct7\" (UID: \"1e360192-9c61-4ce9-a0a2-ab106fbbdf00\") " pod="kube-system/kube-proxy-jdct7" Apr 12 18:53:35.133845 kubelet[1983]: I0412 18:53:35.133627 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-etc-cni-netd\") pod \"cilium-2qnbb\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " pod="kube-system/cilium-2qnbb" Apr 12 18:53:35.133845 kubelet[1983]: I0412 18:53:35.133657 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-host-proc-sys-kernel\") pod \"cilium-2qnbb\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " pod="kube-system/cilium-2qnbb" Apr 12 18:53:35.133845 kubelet[1983]: I0412 18:53:35.133692 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfrsn\" (UniqueName: \"kubernetes.io/projected/e013ebdf-8734-4cd7-826a-183d39594872-kube-api-access-sfrsn\") pod \"cilium-2qnbb\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " pod="kube-system/cilium-2qnbb" Apr 12 18:53:35.133845 kubelet[1983]: I0412 18:53:35.133720 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-cilium-cgroup\") pod \"cilium-2qnbb\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " pod="kube-system/cilium-2qnbb" Apr 12 18:53:35.133845 kubelet[1983]: I0412 18:53:35.133745 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-cni-path\") pod \"cilium-2qnbb\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " pod="kube-system/cilium-2qnbb" Apr 12 18:53:35.134033 kubelet[1983]: I0412 18:53:35.133796 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e013ebdf-8734-4cd7-826a-183d39594872-cilium-config-path\") pod \"cilium-2qnbb\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " pod="kube-system/cilium-2qnbb" Apr 12 18:53:35.134033 kubelet[1983]: I0412 18:53:35.133828 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-bpf-maps\") pod \"cilium-2qnbb\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " pod="kube-system/cilium-2qnbb" Apr 12 18:53:35.134033 kubelet[1983]: I0412 18:53:35.133865 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e013ebdf-8734-4cd7-826a-183d39594872-clustermesh-secrets\") pod \"cilium-2qnbb\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " pod="kube-system/cilium-2qnbb" Apr 12 18:53:35.134033 kubelet[1983]: I0412 18:53:35.133895 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bnwx\" (UniqueName: \"kubernetes.io/projected/1e360192-9c61-4ce9-a0a2-ab106fbbdf00-kube-api-access-6bnwx\") pod \"kube-proxy-jdct7\" (UID: 
\"1e360192-9c61-4ce9-a0a2-ab106fbbdf00\") " pod="kube-system/kube-proxy-jdct7" Apr 12 18:53:35.134180 kubelet[1983]: I0412 18:53:35.134099 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e360192-9c61-4ce9-a0a2-ab106fbbdf00-xtables-lock\") pod \"kube-proxy-jdct7\" (UID: \"1e360192-9c61-4ce9-a0a2-ab106fbbdf00\") " pod="kube-system/kube-proxy-jdct7" Apr 12 18:53:35.134180 kubelet[1983]: I0412 18:53:35.134151 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e360192-9c61-4ce9-a0a2-ab106fbbdf00-lib-modules\") pod \"kube-proxy-jdct7\" (UID: \"1e360192-9c61-4ce9-a0a2-ab106fbbdf00\") " pod="kube-system/kube-proxy-jdct7" Apr 12 18:53:35.413504 kubelet[1983]: E0412 18:53:35.413458 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:35.415326 env[1124]: time="2024-04-12T18:53:35.414819866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2qnbb,Uid:e013ebdf-8734-4cd7-826a-183d39594872,Namespace:kube-system,Attempt:0,}" Apr 12 18:53:35.448466 kubelet[1983]: E0412 18:53:35.448235 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:35.449352 env[1124]: time="2024-04-12T18:53:35.449298635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jdct7,Uid:1e360192-9c61-4ce9-a0a2-ab106fbbdf00,Namespace:kube-system,Attempt:0,}" Apr 12 18:53:35.490123 env[1124]: time="2024-04-12T18:53:35.490041655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:53:35.490377 env[1124]: time="2024-04-12T18:53:35.490344715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:53:35.490499 env[1124]: time="2024-04-12T18:53:35.490466142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:53:35.499069 env[1124]: time="2024-04-12T18:53:35.492663401Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5 pid=2077 runtime=io.containerd.runc.v2 Apr 12 18:53:35.507313 kubelet[1983]: I0412 18:53:35.507268 1983 topology_manager.go:215] "Topology Admit Handler" podUID="9f3f767e-c6bb-4e52-9d6f-20670f8e9421" podNamespace="kube-system" podName="cilium-operator-5cc964979-bssrk" Apr 12 18:53:35.527559 systemd[1]: Created slice kubepods-besteffort-pod9f3f767e_c6bb_4e52_9d6f_20670f8e9421.slice. 
Apr 12 18:53:35.548972 kubelet[1983]: I0412 18:53:35.545605 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hwnm\" (UniqueName: \"kubernetes.io/projected/9f3f767e-c6bb-4e52-9d6f-20670f8e9421-kube-api-access-6hwnm\") pod \"cilium-operator-5cc964979-bssrk\" (UID: \"9f3f767e-c6bb-4e52-9d6f-20670f8e9421\") " pod="kube-system/cilium-operator-5cc964979-bssrk" Apr 12 18:53:35.548972 kubelet[1983]: I0412 18:53:35.545660 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f3f767e-c6bb-4e52-9d6f-20670f8e9421-cilium-config-path\") pod \"cilium-operator-5cc964979-bssrk\" (UID: \"9f3f767e-c6bb-4e52-9d6f-20670f8e9421\") " pod="kube-system/cilium-operator-5cc964979-bssrk" Apr 12 18:53:35.702049 systemd[1]: Started cri-containerd-2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5.scope. Apr 12 18:53:35.714101 env[1124]: time="2024-04-12T18:53:35.713974685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:53:35.716809 env[1124]: time="2024-04-12T18:53:35.715324382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:53:35.716809 env[1124]: time="2024-04-12T18:53:35.715381599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:53:35.716809 env[1124]: time="2024-04-12T18:53:35.715731878Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/95bf0ddef9553674e75fe1217e28ea7a714baeaa16a44840c34b10e9932cfecb pid=2103 runtime=io.containerd.runc.v2 Apr 12 18:53:35.787128 systemd[1]: Started cri-containerd-95bf0ddef9553674e75fe1217e28ea7a714baeaa16a44840c34b10e9932cfecb.scope. Apr 12 18:53:35.810200 env[1124]: time="2024-04-12T18:53:35.810056853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2qnbb,Uid:e013ebdf-8734-4cd7-826a-183d39594872,Namespace:kube-system,Attempt:0,} returns sandbox id \"2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5\"" Apr 12 18:53:35.813915 kubelet[1983]: E0412 18:53:35.811382 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:35.815161 env[1124]: time="2024-04-12T18:53:35.815109036Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 12 18:53:35.843986 kubelet[1983]: E0412 18:53:35.843913 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:35.847199 env[1124]: time="2024-04-12T18:53:35.846721880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-bssrk,Uid:9f3f767e-c6bb-4e52-9d6f-20670f8e9421,Namespace:kube-system,Attempt:0,}" Apr 12 18:53:35.902426 env[1124]: time="2024-04-12T18:53:35.902347706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jdct7,Uid:1e360192-9c61-4ce9-a0a2-ab106fbbdf00,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"95bf0ddef9553674e75fe1217e28ea7a714baeaa16a44840c34b10e9932cfecb\"" Apr 12 18:53:35.903448 kubelet[1983]: E0412 18:53:35.903398 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:35.909565 env[1124]: time="2024-04-12T18:53:35.909490207Z" level=info msg="CreateContainer within sandbox \"95bf0ddef9553674e75fe1217e28ea7a714baeaa16a44840c34b10e9932cfecb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 12 18:53:35.919799 env[1124]: time="2024-04-12T18:53:35.919632124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:53:35.919799 env[1124]: time="2024-04-12T18:53:35.919705313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:53:35.920047 env[1124]: time="2024-04-12T18:53:35.919721062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:53:35.920200 env[1124]: time="2024-04-12T18:53:35.920056101Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53e4cdef93a2178dd801ce289cbafefc16bd3f65b171ed41113f219c14b37895 pid=2155 runtime=io.containerd.runc.v2 Apr 12 18:53:35.966892 systemd[1]: Started cri-containerd-53e4cdef93a2178dd801ce289cbafefc16bd3f65b171ed41113f219c14b37895.scope. 
Apr 12 18:53:35.997842 env[1124]: time="2024-04-12T18:53:35.997444041Z" level=info msg="CreateContainer within sandbox \"95bf0ddef9553674e75fe1217e28ea7a714baeaa16a44840c34b10e9932cfecb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2de58777733bbfb224c27a7fd282c1665c2a240f7903ffca208615502ee32a6e\"" Apr 12 18:53:35.999573 env[1124]: time="2024-04-12T18:53:35.999526213Z" level=info msg="StartContainer for \"2de58777733bbfb224c27a7fd282c1665c2a240f7903ffca208615502ee32a6e\"" Apr 12 18:53:36.075089 systemd[1]: Started cri-containerd-2de58777733bbfb224c27a7fd282c1665c2a240f7903ffca208615502ee32a6e.scope. Apr 12 18:53:36.151291 env[1124]: time="2024-04-12T18:53:36.151202930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-bssrk,Uid:9f3f767e-c6bb-4e52-9d6f-20670f8e9421,Namespace:kube-system,Attempt:0,} returns sandbox id \"53e4cdef93a2178dd801ce289cbafefc16bd3f65b171ed41113f219c14b37895\"" Apr 12 18:53:36.157721 kubelet[1983]: E0412 18:53:36.157166 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:36.206576 env[1124]: time="2024-04-12T18:53:36.206511084Z" level=info msg="StartContainer for \"2de58777733bbfb224c27a7fd282c1665c2a240f7903ffca208615502ee32a6e\" returns successfully" Apr 12 18:53:36.732751 kubelet[1983]: E0412 18:53:36.732373 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:44.777930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2245942772.mount: Deactivated successfully. 
Apr 12 18:53:50.461084 env[1124]: time="2024-04-12T18:53:50.460893418Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:50.470820 env[1124]: time="2024-04-12T18:53:50.470060731Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:50.484997 env[1124]: time="2024-04-12T18:53:50.483796908Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 12 18:53:50.484997 env[1124]: time="2024-04-12T18:53:50.484216836Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:50.492570 env[1124]: time="2024-04-12T18:53:50.492082747Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 12 18:53:50.493348 env[1124]: time="2024-04-12T18:53:50.493203811Z" level=info msg="CreateContainer within sandbox \"2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:53:50.541375 env[1124]: time="2024-04-12T18:53:50.541231126Z" level=info msg="CreateContainer within sandbox \"2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6\"" Apr 12 
18:53:50.549437 env[1124]: time="2024-04-12T18:53:50.546545649Z" level=info msg="StartContainer for \"7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6\"" Apr 12 18:53:50.604636 systemd[1]: Started cri-containerd-7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6.scope. Apr 12 18:53:50.718893 env[1124]: time="2024-04-12T18:53:50.718558411Z" level=info msg="StartContainer for \"7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6\" returns successfully" Apr 12 18:53:50.726578 systemd[1]: cri-containerd-7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6.scope: Deactivated successfully. Apr 12 18:53:50.773233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6-rootfs.mount: Deactivated successfully. Apr 12 18:53:50.839784 kubelet[1983]: E0412 18:53:50.839701 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:50.885741 kubelet[1983]: I0412 18:53:50.885194 1983 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jdct7" podStartSLOduration=15.885139819 podStartE2EDuration="15.885139819s" podCreationTimestamp="2024-04-12 18:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:53:36.779424895 +0000 UTC m=+14.402100006" watchObservedRunningTime="2024-04-12 18:53:50.885139819 +0000 UTC m=+28.507814931" Apr 12 18:53:51.394168 env[1124]: time="2024-04-12T18:53:51.393832521Z" level=info msg="shim disconnected" id=7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6 Apr 12 18:53:51.394168 env[1124]: time="2024-04-12T18:53:51.393920125Z" level=warning msg="cleaning up after shim disconnected" 
id=7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6 namespace=k8s.io Apr 12 18:53:51.394168 env[1124]: time="2024-04-12T18:53:51.393936636Z" level=info msg="cleaning up dead shim" Apr 12 18:53:51.415465 env[1124]: time="2024-04-12T18:53:51.415317075Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:53:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2397 runtime=io.containerd.runc.v2\n" Apr 12 18:53:51.897172 kubelet[1983]: E0412 18:53:51.896951 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:51.908976 env[1124]: time="2024-04-12T18:53:51.906388219Z" level=info msg="CreateContainer within sandbox \"2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 18:53:51.990129 env[1124]: time="2024-04-12T18:53:51.990049436Z" level=info msg="CreateContainer within sandbox \"2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502\"" Apr 12 18:53:51.993257 env[1124]: time="2024-04-12T18:53:51.993187363Z" level=info msg="StartContainer for \"1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502\"" Apr 12 18:53:52.046724 systemd[1]: run-containerd-runc-k8s.io-1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502-runc.nmkHny.mount: Deactivated successfully. Apr 12 18:53:52.053579 systemd[1]: Started cri-containerd-1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502.scope. 
Apr 12 18:53:52.121434 env[1124]: time="2024-04-12T18:53:52.121355515Z" level=info msg="StartContainer for \"1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502\" returns successfully" Apr 12 18:53:52.134116 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 18:53:52.134395 systemd[1]: Stopped systemd-sysctl.service. Apr 12 18:53:52.138997 systemd[1]: Stopping systemd-sysctl.service... Apr 12 18:53:52.141336 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:53:52.142842 systemd[1]: cri-containerd-1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502.scope: Deactivated successfully. Apr 12 18:53:52.155235 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:53:52.182683 env[1124]: time="2024-04-12T18:53:52.182579193Z" level=info msg="shim disconnected" id=1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502 Apr 12 18:53:52.182683 env[1124]: time="2024-04-12T18:53:52.182650838Z" level=warning msg="cleaning up after shim disconnected" id=1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502 namespace=k8s.io Apr 12 18:53:52.182683 env[1124]: time="2024-04-12T18:53:52.182666537Z" level=info msg="cleaning up dead shim" Apr 12 18:53:52.195968 env[1124]: time="2024-04-12T18:53:52.195890420Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:53:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2461 runtime=io.containerd.runc.v2\n" Apr 12 18:53:52.912804 kubelet[1983]: E0412 18:53:52.912660 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:52.934499 env[1124]: time="2024-04-12T18:53:52.930843602Z" level=info msg="CreateContainer within sandbox \"2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 18:53:52.974073 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502-rootfs.mount: Deactivated successfully. Apr 12 18:53:53.100916 env[1124]: time="2024-04-12T18:53:53.100849637Z" level=info msg="CreateContainer within sandbox \"2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e\"" Apr 12 18:53:53.101861 env[1124]: time="2024-04-12T18:53:53.101832682Z" level=info msg="StartContainer for \"0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e\"" Apr 12 18:53:53.182642 systemd[1]: Started cri-containerd-0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e.scope. Apr 12 18:53:53.265585 env[1124]: time="2024-04-12T18:53:53.265061725Z" level=info msg="StartContainer for \"0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e\" returns successfully" Apr 12 18:53:53.276546 systemd[1]: cri-containerd-0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e.scope: Deactivated successfully. 
Apr 12 18:53:53.501011 env[1124]: time="2024-04-12T18:53:53.500806588Z" level=info msg="shim disconnected" id=0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e Apr 12 18:53:53.501011 env[1124]: time="2024-04-12T18:53:53.500886809Z" level=warning msg="cleaning up after shim disconnected" id=0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e namespace=k8s.io Apr 12 18:53:53.501011 env[1124]: time="2024-04-12T18:53:53.500902137Z" level=info msg="cleaning up dead shim" Apr 12 18:53:53.551651 env[1124]: time="2024-04-12T18:53:53.549892989Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:53:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2518 runtime=io.containerd.runc.v2\ntime=\"2024-04-12T18:53:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Apr 12 18:53:53.960107 kubelet[1983]: E0412 18:53:53.957579 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:53.969561 env[1124]: time="2024-04-12T18:53:53.968078632Z" level=info msg="CreateContainer within sandbox \"2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 12 18:53:53.974285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e-rootfs.mount: Deactivated successfully. 
Apr 12 18:53:54.069672 env[1124]: time="2024-04-12T18:53:54.069573379Z" level=info msg="CreateContainer within sandbox \"2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9\"" Apr 12 18:53:54.073466 env[1124]: time="2024-04-12T18:53:54.073321701Z" level=info msg="StartContainer for \"4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9\"" Apr 12 18:53:54.123467 systemd[1]: Started cri-containerd-4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9.scope. Apr 12 18:53:54.180584 systemd[1]: cri-containerd-4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9.scope: Deactivated successfully. Apr 12 18:53:54.185817 env[1124]: time="2024-04-12T18:53:54.184657120Z" level=info msg="StartContainer for \"4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9\" returns successfully" Apr 12 18:53:54.216122 env[1124]: time="2024-04-12T18:53:54.215933420Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:54.223470 env[1124]: time="2024-04-12T18:53:54.221995223Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:54.226808 env[1124]: time="2024-04-12T18:53:54.226629879Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:53:54.228245 env[1124]: time="2024-04-12T18:53:54.227441261Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 12 18:53:54.230861 env[1124]: time="2024-04-12T18:53:54.230797678Z" level=info msg="CreateContainer within sandbox \"53e4cdef93a2178dd801ce289cbafefc16bd3f65b171ed41113f219c14b37895\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 12 18:53:54.398807 env[1124]: time="2024-04-12T18:53:54.394997249Z" level=info msg="shim disconnected" id=4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9 Apr 12 18:53:54.398807 env[1124]: time="2024-04-12T18:53:54.395072290Z" level=warning msg="cleaning up after shim disconnected" id=4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9 namespace=k8s.io Apr 12 18:53:54.398807 env[1124]: time="2024-04-12T18:53:54.395085115Z" level=info msg="cleaning up dead shim" Apr 12 18:53:54.423664 env[1124]: time="2024-04-12T18:53:54.423587391Z" level=info msg="CreateContainer within sandbox \"53e4cdef93a2178dd801ce289cbafefc16bd3f65b171ed41113f219c14b37895\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34\"" Apr 12 18:53:54.429611 env[1124]: time="2024-04-12T18:53:54.424975805Z" level=info msg="StartContainer for \"69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34\"" Apr 12 18:53:54.435932 env[1124]: time="2024-04-12T18:53:54.435855238Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:53:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2573 runtime=io.containerd.runc.v2\n" Apr 12 18:53:54.528945 systemd[1]: Started cri-containerd-69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34.scope. 
Apr 12 18:53:54.618601 env[1124]: time="2024-04-12T18:53:54.618504665Z" level=info msg="StartContainer for \"69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34\" returns successfully" Apr 12 18:53:54.967945 kubelet[1983]: E0412 18:53:54.965440 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:54.975899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9-rootfs.mount: Deactivated successfully. Apr 12 18:53:54.989799 kubelet[1983]: E0412 18:53:54.989223 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:54.997345 kubelet[1983]: I0412 18:53:54.997272 1983 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-bssrk" podStartSLOduration=1.9277656680000002 podStartE2EDuration="19.997214801s" podCreationTimestamp="2024-04-12 18:53:35 +0000 UTC" firstStartedPulling="2024-04-12 18:53:36.15912855 +0000 UTC m=+13.781803661" lastFinishedPulling="2024-04-12 18:53:54.228577683 +0000 UTC m=+31.851252794" observedRunningTime="2024-04-12 18:53:54.995816497 +0000 UTC m=+32.618491628" watchObservedRunningTime="2024-04-12 18:53:54.997214801 +0000 UTC m=+32.619889912" Apr 12 18:53:54.997932 env[1124]: time="2024-04-12T18:53:54.997885809Z" level=info msg="CreateContainer within sandbox \"2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 12 18:53:55.085181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1234959382.mount: Deactivated successfully. 
Apr 12 18:53:55.096180 env[1124]: time="2024-04-12T18:53:55.096092697Z" level=info msg="CreateContainer within sandbox \"2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855\"" Apr 12 18:53:55.098641 env[1124]: time="2024-04-12T18:53:55.098581788Z" level=info msg="StartContainer for \"45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855\"" Apr 12 18:53:55.212776 systemd[1]: Started cri-containerd-45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855.scope. Apr 12 18:53:55.409407 env[1124]: time="2024-04-12T18:53:55.408879755Z" level=info msg="StartContainer for \"45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855\" returns successfully" Apr 12 18:53:55.601671 kubelet[1983]: I0412 18:53:55.601609 1983 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 12 18:53:55.656904 kubelet[1983]: I0412 18:53:55.656735 1983 topology_manager.go:215] "Topology Admit Handler" podUID="1d06a39c-7b60-424b-9326-4d203182fa43" podNamespace="kube-system" podName="coredns-76f75df574-p6zww" Apr 12 18:53:55.670427 systemd[1]: Created slice kubepods-burstable-pod1d06a39c_7b60_424b_9326_4d203182fa43.slice. 
Apr 12 18:53:55.679859 kubelet[1983]: I0412 18:53:55.679788 1983 topology_manager.go:215] "Topology Admit Handler" podUID="d0bd0707-c3e4-482b-9912-d2bf50164964" podNamespace="kube-system" podName="coredns-76f75df574-th9kh" Apr 12 18:53:55.688406 kubelet[1983]: W0412 18:53:55.686076 1983 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 12 18:53:55.688406 kubelet[1983]: E0412 18:53:55.686136 1983 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 12 18:53:55.695331 systemd[1]: Created slice kubepods-burstable-podd0bd0707_c3e4_482b_9912_d2bf50164964.slice. 
Apr 12 18:53:55.803469 kubelet[1983]: I0412 18:53:55.803405 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d06a39c-7b60-424b-9326-4d203182fa43-config-volume\") pod \"coredns-76f75df574-p6zww\" (UID: \"1d06a39c-7b60-424b-9326-4d203182fa43\") " pod="kube-system/coredns-76f75df574-p6zww" Apr 12 18:53:55.804039 kubelet[1983]: I0412 18:53:55.804019 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0bd0707-c3e4-482b-9912-d2bf50164964-config-volume\") pod \"coredns-76f75df574-th9kh\" (UID: \"d0bd0707-c3e4-482b-9912-d2bf50164964\") " pod="kube-system/coredns-76f75df574-th9kh" Apr 12 18:53:55.804231 kubelet[1983]: I0412 18:53:55.804211 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx52g\" (UniqueName: \"kubernetes.io/projected/d0bd0707-c3e4-482b-9912-d2bf50164964-kube-api-access-tx52g\") pod \"coredns-76f75df574-th9kh\" (UID: \"d0bd0707-c3e4-482b-9912-d2bf50164964\") " pod="kube-system/coredns-76f75df574-th9kh" Apr 12 18:53:55.804418 kubelet[1983]: I0412 18:53:55.804401 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stnlf\" (UniqueName: \"kubernetes.io/projected/1d06a39c-7b60-424b-9326-4d203182fa43-kube-api-access-stnlf\") pod \"coredns-76f75df574-p6zww\" (UID: \"1d06a39c-7b60-424b-9326-4d203182fa43\") " pod="kube-system/coredns-76f75df574-p6zww" Apr 12 18:53:55.996995 kubelet[1983]: E0412 18:53:55.996822 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:56.000962 kubelet[1983]: E0412 18:53:56.000140 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:56.597361 kubelet[1983]: E0412 18:53:56.586621 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:56.597591 env[1124]: time="2024-04-12T18:53:56.588750999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p6zww,Uid:1d06a39c-7b60-424b-9326-4d203182fa43,Namespace:kube-system,Attempt:0,}" Apr 12 18:53:56.619622 kubelet[1983]: E0412 18:53:56.617203 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:56.619840 env[1124]: time="2024-04-12T18:53:56.617916215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-th9kh,Uid:d0bd0707-c3e4-482b-9912-d2bf50164964,Namespace:kube-system,Attempt:0,}" Apr 12 18:53:57.003441 kubelet[1983]: E0412 18:53:56.998714 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:58.000531 kubelet[1983]: E0412 18:53:58.000449 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:53:59.747226 systemd-networkd[1019]: cilium_host: Link UP Apr 12 18:53:59.747421 systemd-networkd[1019]: cilium_net: Link UP Apr 12 18:53:59.753839 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Apr 12 18:53:59.753935 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Apr 12 18:53:59.754816 systemd-networkd[1019]: cilium_net: Gained carrier Apr 12 18:53:59.763110 systemd-networkd[1019]: cilium_host: Gained carrier Apr 12 
18:54:00.051411 systemd-networkd[1019]: cilium_vxlan: Link UP Apr 12 18:54:00.051420 systemd-networkd[1019]: cilium_vxlan: Gained carrier Apr 12 18:54:00.159008 systemd-networkd[1019]: cilium_host: Gained IPv6LL Apr 12 18:54:00.551002 systemd-networkd[1019]: cilium_net: Gained IPv6LL Apr 12 18:54:00.589686 kernel: NET: Registered PF_ALG protocol family Apr 12 18:54:01.775620 systemd-networkd[1019]: cilium_vxlan: Gained IPv6LL Apr 12 18:54:02.259322 systemd-networkd[1019]: lxc_health: Link UP Apr 12 18:54:02.309345 systemd-networkd[1019]: lxc_health: Gained carrier Apr 12 18:54:02.315195 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Apr 12 18:54:02.811055 systemd-networkd[1019]: lxccc5cc49a1b3f: Link UP Apr 12 18:54:02.857056 kernel: eth0: renamed from tmp25eb3 Apr 12 18:54:02.897947 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccc5cc49a1b3f: link becomes ready Apr 12 18:54:02.897666 systemd-networkd[1019]: lxccc5cc49a1b3f: Gained carrier Apr 12 18:54:02.930562 systemd-networkd[1019]: lxc38128bb73f41: Link UP Apr 12 18:54:02.944914 kernel: eth0: renamed from tmp4a683 Apr 12 18:54:02.965810 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc38128bb73f41: link becomes ready Apr 12 18:54:02.965549 systemd-networkd[1019]: lxc38128bb73f41: Gained carrier Apr 12 18:54:03.427031 kubelet[1983]: E0412 18:54:03.426992 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:03.515135 kubelet[1983]: I0412 18:54:03.515082 1983 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2qnbb" podStartSLOduration=13.839443283 podStartE2EDuration="28.515016016s" podCreationTimestamp="2024-04-12 18:53:35 +0000 UTC" firstStartedPulling="2024-04-12 18:53:35.814516743 +0000 UTC m=+13.437191854" lastFinishedPulling="2024-04-12 18:53:50.490089476 +0000 UTC m=+28.112764587" observedRunningTime="2024-04-12 
18:53:56.039526014 +0000 UTC m=+33.662201145" watchObservedRunningTime="2024-04-12 18:54:03.515016016 +0000 UTC m=+41.137691127" Apr 12 18:54:04.017659 systemd-networkd[1019]: lxc_health: Gained IPv6LL Apr 12 18:54:04.053268 kubelet[1983]: E0412 18:54:04.053234 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:04.135053 systemd-networkd[1019]: lxccc5cc49a1b3f: Gained IPv6LL Apr 12 18:54:04.839141 systemd-networkd[1019]: lxc38128bb73f41: Gained IPv6LL Apr 12 18:54:05.051493 kubelet[1983]: E0412 18:54:05.051419 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:10.031873 env[1124]: time="2024-04-12T18:54:10.028747096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:54:10.031873 env[1124]: time="2024-04-12T18:54:10.028812192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:54:10.031873 env[1124]: time="2024-04-12T18:54:10.028826880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:54:10.031873 env[1124]: time="2024-04-12T18:54:10.028984907Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/25eb31453e1733a985dcdc93652be2983ceb38155ed43e30c7e5f257a289a411 pid=3217 runtime=io.containerd.runc.v2 Apr 12 18:54:10.074364 env[1124]: time="2024-04-12T18:54:10.069194907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:54:10.074364 env[1124]: time="2024-04-12T18:54:10.069244042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:54:10.074364 env[1124]: time="2024-04-12T18:54:10.069257418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:54:10.074364 env[1124]: time="2024-04-12T18:54:10.070669002Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a6836a57c8687773e1634b3137076b4d5a0be9d00616fa49443dc37c3c671b6 pid=3243 runtime=io.containerd.runc.v2 Apr 12 18:54:10.078639 systemd[1]: run-containerd-runc-k8s.io-25eb31453e1733a985dcdc93652be2983ceb38155ed43e30c7e5f257a289a411-runc.BkSIoi.mount: Deactivated successfully. Apr 12 18:54:10.104510 systemd[1]: Started cri-containerd-25eb31453e1733a985dcdc93652be2983ceb38155ed43e30c7e5f257a289a411.scope. Apr 12 18:54:10.148607 systemd[1]: Started cri-containerd-4a6836a57c8687773e1634b3137076b4d5a0be9d00616fa49443dc37c3c671b6.scope. 
Apr 12 18:54:10.163864 systemd-resolved[1063]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 12 18:54:10.186350 systemd-resolved[1063]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 12 18:54:10.254896 env[1124]: time="2024-04-12T18:54:10.254838877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p6zww,Uid:1d06a39c-7b60-424b-9326-4d203182fa43,Namespace:kube-system,Attempt:0,} returns sandbox id \"25eb31453e1733a985dcdc93652be2983ceb38155ed43e30c7e5f257a289a411\"" Apr 12 18:54:10.261825 kubelet[1983]: E0412 18:54:10.259512 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:10.269343 env[1124]: time="2024-04-12T18:54:10.266461127Z" level=info msg="CreateContainer within sandbox \"25eb31453e1733a985dcdc93652be2983ceb38155ed43e30c7e5f257a289a411\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 18:54:10.307682 env[1124]: time="2024-04-12T18:54:10.300370950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-th9kh,Uid:d0bd0707-c3e4-482b-9912-d2bf50164964,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a6836a57c8687773e1634b3137076b4d5a0be9d00616fa49443dc37c3c671b6\"" Apr 12 18:54:10.308049 kubelet[1983]: E0412 18:54:10.305286 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:10.313068 env[1124]: time="2024-04-12T18:54:10.312984339Z" level=info msg="CreateContainer within sandbox \"4a6836a57c8687773e1634b3137076b4d5a0be9d00616fa49443dc37c3c671b6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 18:54:10.365875 env[1124]: time="2024-04-12T18:54:10.365504743Z" level=info msg="CreateContainer 
within sandbox \"25eb31453e1733a985dcdc93652be2983ceb38155ed43e30c7e5f257a289a411\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ea59676027dd8d8018aea16cae30422a66af9f296b37bafd85f74956daa4ace\"" Apr 12 18:54:10.369370 env[1124]: time="2024-04-12T18:54:10.366655402Z" level=info msg="StartContainer for \"3ea59676027dd8d8018aea16cae30422a66af9f296b37bafd85f74956daa4ace\"" Apr 12 18:54:10.404709 env[1124]: time="2024-04-12T18:54:10.404602477Z" level=info msg="CreateContainer within sandbox \"4a6836a57c8687773e1634b3137076b4d5a0be9d00616fa49443dc37c3c671b6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5ac7c45e9ed30f0c78d4854747da6ff86418b5109a24520c730e7b9709170629\"" Apr 12 18:54:10.415225 env[1124]: time="2024-04-12T18:54:10.410096754Z" level=info msg="StartContainer for \"5ac7c45e9ed30f0c78d4854747da6ff86418b5109a24520c730e7b9709170629\"" Apr 12 18:54:10.428559 systemd[1]: Started cri-containerd-3ea59676027dd8d8018aea16cae30422a66af9f296b37bafd85f74956daa4ace.scope. Apr 12 18:54:10.485685 systemd[1]: Started cri-containerd-5ac7c45e9ed30f0c78d4854747da6ff86418b5109a24520c730e7b9709170629.scope. Apr 12 18:54:10.707588 env[1124]: time="2024-04-12T18:54:10.707500488Z" level=info msg="StartContainer for \"3ea59676027dd8d8018aea16cae30422a66af9f296b37bafd85f74956daa4ace\" returns successfully" Apr 12 18:54:10.711308 env[1124]: time="2024-04-12T18:54:10.711221047Z" level=info msg="StartContainer for \"5ac7c45e9ed30f0c78d4854747da6ff86418b5109a24520c730e7b9709170629\" returns successfully" Apr 12 18:54:11.059353 systemd[1]: run-containerd-runc-k8s.io-4a6836a57c8687773e1634b3137076b4d5a0be9d00616fa49443dc37c3c671b6-runc.dolIhB.mount: Deactivated successfully. 
Apr 12 18:54:11.133747 kubelet[1983]: E0412 18:54:11.130369 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:11.142535 kubelet[1983]: E0412 18:54:11.142132 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:11.222832 kubelet[1983]: I0412 18:54:11.218101 1983 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-th9kh" podStartSLOduration=36.218040796 podStartE2EDuration="36.218040796s" podCreationTimestamp="2024-04-12 18:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:54:11.162289498 +0000 UTC m=+48.784964629" watchObservedRunningTime="2024-04-12 18:54:11.218040796 +0000 UTC m=+48.840715907" Apr 12 18:54:11.289927 kubelet[1983]: I0412 18:54:11.289881 1983 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-p6zww" podStartSLOduration=36.289822026 podStartE2EDuration="36.289822026s" podCreationTimestamp="2024-04-12 18:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:54:11.218670224 +0000 UTC m=+48.841345345" watchObservedRunningTime="2024-04-12 18:54:11.289822026 +0000 UTC m=+48.912497147" Apr 12 18:54:12.154957 kubelet[1983]: E0412 18:54:12.154882 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:12.159387 kubelet[1983]: E0412 18:54:12.155712 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:13.178688 kubelet[1983]: E0412 18:54:13.178640 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:13.179900 kubelet[1983]: E0412 18:54:13.179879 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:16.534086 systemd[1]: Started sshd@5-10.0.0.118:22-10.0.0.1:44640.service. Apr 12 18:54:16.634851 sshd[3381]: Accepted publickey for core from 10.0.0.1 port 44640 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:54:16.643007 sshd[3381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:54:16.663881 systemd[1]: Started session-6.scope. Apr 12 18:54:16.664523 systemd-logind[1116]: New session 6 of user core. Apr 12 18:54:17.073233 sshd[3381]: pam_unix(sshd:session): session closed for user core Apr 12 18:54:17.081153 systemd[1]: sshd@5-10.0.0.118:22-10.0.0.1:44640.service: Deactivated successfully. Apr 12 18:54:17.082396 systemd[1]: session-6.scope: Deactivated successfully. Apr 12 18:54:17.089534 systemd-logind[1116]: Session 6 logged out. Waiting for processes to exit. Apr 12 18:54:17.102044 systemd-logind[1116]: Removed session 6. Apr 12 18:54:22.093961 systemd[1]: Started sshd@6-10.0.0.118:22-10.0.0.1:50350.service. Apr 12 18:54:22.186649 sshd[3395]: Accepted publickey for core from 10.0.0.1 port 50350 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:54:22.198536 sshd[3395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:54:22.215664 systemd[1]: Started session-7.scope. Apr 12 18:54:22.216814 systemd-logind[1116]: New session 7 of user core. 
Apr 12 18:54:22.550088 sshd[3395]: pam_unix(sshd:session): session closed for user core Apr 12 18:54:22.566246 systemd[1]: sshd@6-10.0.0.118:22-10.0.0.1:50350.service: Deactivated successfully. Apr 12 18:54:22.567229 systemd[1]: session-7.scope: Deactivated successfully. Apr 12 18:54:22.569661 systemd-logind[1116]: Session 7 logged out. Waiting for processes to exit. Apr 12 18:54:22.572163 systemd-logind[1116]: Removed session 7. Apr 12 18:54:27.556326 systemd[1]: Started sshd@7-10.0.0.118:22-10.0.0.1:41068.service. Apr 12 18:54:27.623003 sshd[3411]: Accepted publickey for core from 10.0.0.1 port 41068 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:54:27.625599 sshd[3411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:54:27.636159 systemd-logind[1116]: New session 8 of user core. Apr 12 18:54:27.638730 systemd[1]: Started session-8.scope. Apr 12 18:54:27.982604 sshd[3411]: pam_unix(sshd:session): session closed for user core Apr 12 18:54:27.993026 systemd[1]: sshd@7-10.0.0.118:22-10.0.0.1:41068.service: Deactivated successfully. Apr 12 18:54:27.994097 systemd[1]: session-8.scope: Deactivated successfully. Apr 12 18:54:27.998707 systemd-logind[1116]: Session 8 logged out. Waiting for processes to exit. Apr 12 18:54:28.023723 systemd-logind[1116]: Removed session 8. Apr 12 18:54:32.995391 systemd[1]: Started sshd@8-10.0.0.118:22-10.0.0.1:41082.service. Apr 12 18:54:33.067832 sshd[3425]: Accepted publickey for core from 10.0.0.1 port 41082 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:54:33.072841 sshd[3425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:54:33.105418 systemd[1]: Started session-9.scope. Apr 12 18:54:33.112410 systemd-logind[1116]: New session 9 of user core. 
Apr 12 18:54:33.447590 sshd[3425]: pam_unix(sshd:session): session closed for user core Apr 12 18:54:33.453297 systemd[1]: sshd@8-10.0.0.118:22-10.0.0.1:41082.service: Deactivated successfully. Apr 12 18:54:33.454323 systemd[1]: session-9.scope: Deactivated successfully. Apr 12 18:54:33.455327 systemd-logind[1116]: Session 9 logged out. Waiting for processes to exit. Apr 12 18:54:33.459404 systemd-logind[1116]: Removed session 9. Apr 12 18:54:34.578800 kubelet[1983]: E0412 18:54:34.578707 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:34.580641 kubelet[1983]: E0412 18:54:34.580612 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:38.465108 systemd[1]: Started sshd@9-10.0.0.118:22-10.0.0.1:46706.service. Apr 12 18:54:38.522499 sshd[3443]: Accepted publickey for core from 10.0.0.1 port 46706 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:54:38.526910 sshd[3443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:54:38.542258 systemd-logind[1116]: New session 10 of user core. Apr 12 18:54:38.562142 systemd[1]: Started session-10.scope. Apr 12 18:54:38.763539 sshd[3443]: pam_unix(sshd:session): session closed for user core Apr 12 18:54:38.767044 systemd[1]: sshd@9-10.0.0.118:22-10.0.0.1:46706.service: Deactivated successfully. Apr 12 18:54:38.768011 systemd[1]: session-10.scope: Deactivated successfully. Apr 12 18:54:38.770644 systemd-logind[1116]: Session 10 logged out. Waiting for processes to exit. Apr 12 18:54:38.771814 systemd-logind[1116]: Removed session 10. 
Apr 12 18:54:40.580554 kubelet[1983]: E0412 18:54:40.578506 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:42.356430 kernel: hrtimer: interrupt took 10841208 ns Apr 12 18:54:43.791178 systemd[1]: Started sshd@10-10.0.0.118:22-10.0.0.1:46708.service. Apr 12 18:54:43.858067 sshd[3458]: Accepted publickey for core from 10.0.0.1 port 46708 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:54:43.862959 sshd[3458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:54:43.874998 systemd[1]: Started session-11.scope. Apr 12 18:54:43.877035 systemd-logind[1116]: New session 11 of user core. Apr 12 18:54:44.098896 sshd[3458]: pam_unix(sshd:session): session closed for user core Apr 12 18:54:44.109494 systemd[1]: sshd@10-10.0.0.118:22-10.0.0.1:46708.service: Deactivated successfully. Apr 12 18:54:44.110613 systemd[1]: session-11.scope: Deactivated successfully. Apr 12 18:54:44.113816 systemd-logind[1116]: Session 11 logged out. Waiting for processes to exit. Apr 12 18:54:44.118570 systemd-logind[1116]: Removed session 11. Apr 12 18:54:49.110104 systemd[1]: Started sshd@11-10.0.0.118:22-10.0.0.1:57946.service. Apr 12 18:54:49.165197 sshd[3472]: Accepted publickey for core from 10.0.0.1 port 57946 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:54:49.169874 sshd[3472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:54:49.195130 systemd[1]: Started session-12.scope. Apr 12 18:54:49.197870 systemd-logind[1116]: New session 12 of user core. Apr 12 18:54:49.488022 sshd[3472]: pam_unix(sshd:session): session closed for user core Apr 12 18:54:49.493331 systemd[1]: Started sshd@12-10.0.0.118:22-10.0.0.1:57958.service. Apr 12 18:54:49.503112 systemd[1]: sshd@11-10.0.0.118:22-10.0.0.1:57946.service: Deactivated successfully. 
Apr 12 18:54:49.504070 systemd[1]: session-12.scope: Deactivated successfully. Apr 12 18:54:49.509001 systemd-logind[1116]: Session 12 logged out. Waiting for processes to exit. Apr 12 18:54:49.510809 systemd-logind[1116]: Removed session 12. Apr 12 18:54:49.551264 sshd[3485]: Accepted publickey for core from 10.0.0.1 port 57958 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:54:49.557394 sshd[3485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:54:49.581783 systemd-logind[1116]: New session 13 of user core. Apr 12 18:54:49.593405 systemd[1]: Started session-13.scope. Apr 12 18:54:50.118624 sshd[3485]: pam_unix(sshd:session): session closed for user core Apr 12 18:54:50.136456 systemd[1]: Started sshd@13-10.0.0.118:22-10.0.0.1:57970.service. Apr 12 18:54:50.139340 systemd-logind[1116]: Session 13 logged out. Waiting for processes to exit. Apr 12 18:54:50.146231 systemd[1]: sshd@12-10.0.0.118:22-10.0.0.1:57958.service: Deactivated successfully. Apr 12 18:54:50.147246 systemd[1]: session-13.scope: Deactivated successfully. Apr 12 18:54:50.162838 systemd-logind[1116]: Removed session 13. Apr 12 18:54:50.242137 sshd[3496]: Accepted publickey for core from 10.0.0.1 port 57970 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:54:50.246359 sshd[3496]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:54:50.276791 systemd-logind[1116]: New session 14 of user core. Apr 12 18:54:50.287556 systemd[1]: Started session-14.scope. Apr 12 18:54:50.613269 sshd[3496]: pam_unix(sshd:session): session closed for user core Apr 12 18:54:50.626398 systemd[1]: sshd@13-10.0.0.118:22-10.0.0.1:57970.service: Deactivated successfully. Apr 12 18:54:50.630638 systemd[1]: session-14.scope: Deactivated successfully. Apr 12 18:54:50.632204 systemd-logind[1116]: Session 14 logged out. Waiting for processes to exit. Apr 12 18:54:50.640068 systemd-logind[1116]: Removed session 14. 
Apr 12 18:54:52.576694 kubelet[1983]: E0412 18:54:52.576609 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:54:55.624510 systemd[1]: Started sshd@14-10.0.0.118:22-10.0.0.1:57986.service. Apr 12 18:54:55.679825 sshd[3510]: Accepted publickey for core from 10.0.0.1 port 57986 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:54:55.682496 sshd[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:54:55.701141 systemd[1]: Started session-15.scope. Apr 12 18:54:55.702726 systemd-logind[1116]: New session 15 of user core. Apr 12 18:54:55.910573 sshd[3510]: pam_unix(sshd:session): session closed for user core Apr 12 18:54:55.914578 systemd[1]: sshd@14-10.0.0.118:22-10.0.0.1:57986.service: Deactivated successfully. Apr 12 18:54:55.915625 systemd[1]: session-15.scope: Deactivated successfully. Apr 12 18:54:55.916618 systemd-logind[1116]: Session 15 logged out. Waiting for processes to exit. Apr 12 18:54:55.918511 systemd-logind[1116]: Removed session 15. Apr 12 18:55:00.920450 systemd[1]: Started sshd@15-10.0.0.118:22-10.0.0.1:60470.service. Apr 12 18:55:00.963012 sshd[3523]: Accepted publickey for core from 10.0.0.1 port 60470 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:55:00.965534 sshd[3523]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:55:00.980901 systemd[1]: Started session-16.scope. Apr 12 18:55:00.981737 systemd-logind[1116]: New session 16 of user core. Apr 12 18:55:01.201464 sshd[3523]: pam_unix(sshd:session): session closed for user core Apr 12 18:55:01.208830 systemd-logind[1116]: Session 16 logged out. Waiting for processes to exit. Apr 12 18:55:01.209161 systemd[1]: sshd@15-10.0.0.118:22-10.0.0.1:60470.service: Deactivated successfully. 
Apr 12 18:55:01.210168 systemd[1]: session-16.scope: Deactivated successfully. Apr 12 18:55:01.213626 systemd-logind[1116]: Removed session 16. Apr 12 18:55:03.578735 kubelet[1983]: E0412 18:55:03.578679 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:06.218457 systemd[1]: Started sshd@16-10.0.0.118:22-10.0.0.1:60472.service. Apr 12 18:55:06.270480 sshd[3537]: Accepted publickey for core from 10.0.0.1 port 60472 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:55:06.276746 sshd[3537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:55:06.316255 systemd-logind[1116]: New session 17 of user core. Apr 12 18:55:06.316258 systemd[1]: Started session-17.scope. Apr 12 18:55:06.593572 sshd[3537]: pam_unix(sshd:session): session closed for user core Apr 12 18:55:06.600203 systemd[1]: sshd@16-10.0.0.118:22-10.0.0.1:60472.service: Deactivated successfully. Apr 12 18:55:06.601264 systemd[1]: session-17.scope: Deactivated successfully. Apr 12 18:55:06.603195 systemd-logind[1116]: Session 17 logged out. Waiting for processes to exit. Apr 12 18:55:06.625745 systemd-logind[1116]: Removed session 17. Apr 12 18:55:11.615944 systemd[1]: Started sshd@17-10.0.0.118:22-10.0.0.1:37526.service. Apr 12 18:55:11.668476 sshd[3552]: Accepted publickey for core from 10.0.0.1 port 37526 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:55:11.670344 sshd[3552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:55:11.679962 systemd-logind[1116]: New session 18 of user core. Apr 12 18:55:11.681252 systemd[1]: Started session-18.scope. Apr 12 18:55:11.885598 sshd[3552]: pam_unix(sshd:session): session closed for user core Apr 12 18:55:11.892413 systemd[1]: Started sshd@18-10.0.0.118:22-10.0.0.1:37542.service. 
Apr 12 18:55:11.893964 systemd[1]: sshd@17-10.0.0.118:22-10.0.0.1:37526.service: Deactivated successfully. Apr 12 18:55:11.895829 systemd[1]: session-18.scope: Deactivated successfully. Apr 12 18:55:11.899881 systemd-logind[1116]: Session 18 logged out. Waiting for processes to exit. Apr 12 18:55:11.904315 systemd-logind[1116]: Removed session 18. Apr 12 18:55:11.939032 sshd[3565]: Accepted publickey for core from 10.0.0.1 port 37542 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:55:11.941223 sshd[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:55:11.951753 systemd-logind[1116]: New session 19 of user core. Apr 12 18:55:11.964990 systemd[1]: Started session-19.scope. Apr 12 18:55:12.772712 sshd[3565]: pam_unix(sshd:session): session closed for user core Apr 12 18:55:12.784568 systemd[1]: Started sshd@19-10.0.0.118:22-10.0.0.1:37548.service. Apr 12 18:55:12.788408 systemd[1]: sshd@18-10.0.0.118:22-10.0.0.1:37542.service: Deactivated successfully. Apr 12 18:55:12.789461 systemd[1]: session-19.scope: Deactivated successfully. Apr 12 18:55:12.790410 systemd-logind[1116]: Session 19 logged out. Waiting for processes to exit. Apr 12 18:55:12.792135 systemd-logind[1116]: Removed session 19. Apr 12 18:55:12.846335 sshd[3577]: Accepted publickey for core from 10.0.0.1 port 37548 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:55:12.850626 sshd[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:55:12.862068 systemd-logind[1116]: New session 20 of user core. Apr 12 18:55:12.865379 systemd[1]: Started session-20.scope. 
Apr 12 18:55:13.575664 kubelet[1983]: E0412 18:55:13.575618 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:15.112507 sshd[3577]: pam_unix(sshd:session): session closed for user core Apr 12 18:55:15.124608 systemd[1]: Started sshd@20-10.0.0.118:22-10.0.0.1:37560.service. Apr 12 18:55:15.125377 systemd[1]: sshd@19-10.0.0.118:22-10.0.0.1:37548.service: Deactivated successfully. Apr 12 18:55:15.126429 systemd[1]: session-20.scope: Deactivated successfully. Apr 12 18:55:15.141139 systemd-logind[1116]: Session 20 logged out. Waiting for processes to exit. Apr 12 18:55:15.142761 systemd-logind[1116]: Removed session 20. Apr 12 18:55:15.226741 sshd[3598]: Accepted publickey for core from 10.0.0.1 port 37560 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:55:15.227704 sshd[3598]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:55:15.251562 systemd[1]: Started session-21.scope. Apr 12 18:55:15.253727 systemd-logind[1116]: New session 21 of user core. Apr 12 18:55:15.810315 sshd[3598]: pam_unix(sshd:session): session closed for user core Apr 12 18:55:15.820472 systemd[1]: Started sshd@21-10.0.0.118:22-10.0.0.1:37576.service. Apr 12 18:55:15.821345 systemd[1]: sshd@20-10.0.0.118:22-10.0.0.1:37560.service: Deactivated successfully. Apr 12 18:55:15.822278 systemd[1]: session-21.scope: Deactivated successfully. Apr 12 18:55:15.829604 systemd-logind[1116]: Session 21 logged out. Waiting for processes to exit. Apr 12 18:55:15.836271 systemd-logind[1116]: Removed session 21. 
Apr 12 18:55:15.872049 sshd[3609]: Accepted publickey for core from 10.0.0.1 port 37576 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:55:15.874397 sshd[3609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:55:15.888901 systemd-logind[1116]: New session 22 of user core. Apr 12 18:55:15.890063 systemd[1]: Started session-22.scope. Apr 12 18:55:16.177833 sshd[3609]: pam_unix(sshd:session): session closed for user core Apr 12 18:55:16.180682 systemd[1]: sshd@21-10.0.0.118:22-10.0.0.1:37576.service: Deactivated successfully. Apr 12 18:55:16.181625 systemd[1]: session-22.scope: Deactivated successfully. Apr 12 18:55:16.186196 systemd-logind[1116]: Session 22 logged out. Waiting for processes to exit. Apr 12 18:55:16.187662 systemd-logind[1116]: Removed session 22. Apr 12 18:55:20.598966 kubelet[1983]: E0412 18:55:20.598904 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:21.186891 systemd[1]: Started sshd@22-10.0.0.118:22-10.0.0.1:52190.service. Apr 12 18:55:21.253570 sshd[3623]: Accepted publickey for core from 10.0.0.1 port 52190 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:55:21.269537 sshd[3623]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:55:21.295484 systemd-logind[1116]: New session 23 of user core. Apr 12 18:55:21.298786 systemd[1]: Started session-23.scope. Apr 12 18:55:21.590740 sshd[3623]: pam_unix(sshd:session): session closed for user core Apr 12 18:55:21.599930 systemd[1]: sshd@22-10.0.0.118:22-10.0.0.1:52190.service: Deactivated successfully. Apr 12 18:55:21.600888 systemd[1]: session-23.scope: Deactivated successfully. Apr 12 18:55:21.616870 systemd-logind[1116]: Session 23 logged out. Waiting for processes to exit. Apr 12 18:55:21.635606 systemd-logind[1116]: Removed session 23. 
Apr 12 18:55:26.606071 systemd[1]: Started sshd@23-10.0.0.118:22-10.0.0.1:52206.service. Apr 12 18:55:26.657356 sshd[3641]: Accepted publickey for core from 10.0.0.1 port 52206 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:55:26.660194 sshd[3641]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:55:26.682478 systemd[1]: Started session-24.scope. Apr 12 18:55:26.683605 systemd-logind[1116]: New session 24 of user core. Apr 12 18:55:26.902482 sshd[3641]: pam_unix(sshd:session): session closed for user core Apr 12 18:55:26.912611 systemd[1]: sshd@23-10.0.0.118:22-10.0.0.1:52206.service: Deactivated successfully. Apr 12 18:55:26.914793 systemd[1]: session-24.scope: Deactivated successfully. Apr 12 18:55:26.915843 systemd-logind[1116]: Session 24 logged out. Waiting for processes to exit. Apr 12 18:55:26.928019 systemd-logind[1116]: Removed session 24. Apr 12 18:55:31.919706 systemd[1]: Started sshd@24-10.0.0.118:22-10.0.0.1:45932.service. Apr 12 18:55:31.997123 sshd[3654]: Accepted publickey for core from 10.0.0.1 port 45932 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:55:31.999908 sshd[3654]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:55:32.014748 systemd-logind[1116]: New session 25 of user core. Apr 12 18:55:32.018902 systemd[1]: Started session-25.scope. Apr 12 18:55:32.207450 sshd[3654]: pam_unix(sshd:session): session closed for user core Apr 12 18:55:32.210915 systemd[1]: sshd@24-10.0.0.118:22-10.0.0.1:45932.service: Deactivated successfully. Apr 12 18:55:32.211979 systemd[1]: session-25.scope: Deactivated successfully. Apr 12 18:55:32.218900 systemd-logind[1116]: Session 25 logged out. Waiting for processes to exit. Apr 12 18:55:32.222677 systemd-logind[1116]: Removed session 25. Apr 12 18:55:37.234687 systemd[1]: Started sshd@25-10.0.0.118:22-10.0.0.1:36064.service. 
Apr 12 18:55:37.285642 sshd[3670]: Accepted publickey for core from 10.0.0.1 port 36064 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:55:37.293446 sshd[3670]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:55:37.304071 systemd-logind[1116]: New session 26 of user core. Apr 12 18:55:37.304680 systemd[1]: Started session-26.scope. Apr 12 18:55:37.535457 sshd[3670]: pam_unix(sshd:session): session closed for user core Apr 12 18:55:37.559675 systemd[1]: sshd@25-10.0.0.118:22-10.0.0.1:36064.service: Deactivated successfully. Apr 12 18:55:37.560714 systemd[1]: session-26.scope: Deactivated successfully. Apr 12 18:55:37.562359 systemd-logind[1116]: Session 26 logged out. Waiting for processes to exit. Apr 12 18:55:37.563388 systemd-logind[1116]: Removed session 26. Apr 12 18:55:42.558669 systemd[1]: Started sshd@26-10.0.0.118:22-10.0.0.1:36080.service. Apr 12 18:55:42.583858 kubelet[1983]: E0412 18:55:42.582588 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:42.617669 sshd[3683]: Accepted publickey for core from 10.0.0.1 port 36080 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:55:42.621539 sshd[3683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:55:42.628995 systemd-logind[1116]: New session 27 of user core. Apr 12 18:55:42.629709 systemd[1]: Started session-27.scope. Apr 12 18:55:42.866360 sshd[3683]: pam_unix(sshd:session): session closed for user core Apr 12 18:55:42.874652 systemd[1]: Started sshd@27-10.0.0.118:22-10.0.0.1:36088.service. Apr 12 18:55:42.880850 systemd[1]: sshd@26-10.0.0.118:22-10.0.0.1:36080.service: Deactivated successfully. Apr 12 18:55:42.881919 systemd[1]: session-27.scope: Deactivated successfully. Apr 12 18:55:42.895401 systemd-logind[1116]: Session 27 logged out. 
Waiting for processes to exit. Apr 12 18:55:42.899396 systemd-logind[1116]: Removed session 27. Apr 12 18:55:42.985530 sshd[3695]: Accepted publickey for core from 10.0.0.1 port 36088 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:55:42.993419 sshd[3695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:55:43.003847 systemd[1]: Started session-28.scope. Apr 12 18:55:43.004433 systemd-logind[1116]: New session 28 of user core. Apr 12 18:55:45.074429 systemd[1]: run-containerd-runc-k8s.io-45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855-runc.cvr1Q5.mount: Deactivated successfully. Apr 12 18:55:45.084120 env[1124]: time="2024-04-12T18:55:45.082492312Z" level=info msg="StopContainer for \"69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34\" with timeout 30 (s)" Apr 12 18:55:45.085085 env[1124]: time="2024-04-12T18:55:45.085029846Z" level=info msg="Stop container \"69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34\" with signal terminated" Apr 12 18:55:45.108634 systemd[1]: cri-containerd-69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34.scope: Deactivated successfully. Apr 12 18:55:45.129640 env[1124]: time="2024-04-12T18:55:45.129534959Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 18:55:45.160474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34-rootfs.mount: Deactivated successfully. 
Apr 12 18:55:45.165458 env[1124]: time="2024-04-12T18:55:45.162188233Z" level=info msg="StopContainer for \"45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855\" with timeout 2 (s)" Apr 12 18:55:45.165458 env[1124]: time="2024-04-12T18:55:45.162629915Z" level=info msg="Stop container \"45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855\" with signal terminated" Apr 12 18:55:45.175280 systemd-networkd[1019]: lxc_health: Link DOWN Apr 12 18:55:45.175292 systemd-networkd[1019]: lxc_health: Lost carrier Apr 12 18:55:45.180533 env[1124]: time="2024-04-12T18:55:45.180451822Z" level=info msg="shim disconnected" id=69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34 Apr 12 18:55:45.180533 env[1124]: time="2024-04-12T18:55:45.180512396Z" level=warning msg="cleaning up after shim disconnected" id=69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34 namespace=k8s.io Apr 12 18:55:45.180533 env[1124]: time="2024-04-12T18:55:45.180522475Z" level=info msg="cleaning up dead shim" Apr 12 18:55:45.208048 env[1124]: time="2024-04-12T18:55:45.207944824Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3748 runtime=io.containerd.runc.v2\n" Apr 12 18:55:45.222924 env[1124]: time="2024-04-12T18:55:45.222828931Z" level=info msg="StopContainer for \"69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34\" returns successfully" Apr 12 18:55:45.223945 env[1124]: time="2024-04-12T18:55:45.223876185Z" level=info msg="StopPodSandbox for \"53e4cdef93a2178dd801ce289cbafefc16bd3f65b171ed41113f219c14b37895\"" Apr 12 18:55:45.224235 env[1124]: time="2024-04-12T18:55:45.223977496Z" level=info msg="Container to stop \"69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:55:45.226444 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-53e4cdef93a2178dd801ce289cbafefc16bd3f65b171ed41113f219c14b37895-shm.mount: Deactivated successfully. Apr 12 18:55:45.233698 systemd[1]: cri-containerd-45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855.scope: Deactivated successfully. Apr 12 18:55:45.234079 systemd[1]: cri-containerd-45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855.scope: Consumed 12.145s CPU time. Apr 12 18:55:45.253547 systemd[1]: cri-containerd-53e4cdef93a2178dd801ce289cbafefc16bd3f65b171ed41113f219c14b37895.scope: Deactivated successfully. Apr 12 18:55:45.287881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855-rootfs.mount: Deactivated successfully. Apr 12 18:55:45.402305 env[1124]: time="2024-04-12T18:55:45.400811167Z" level=info msg="shim disconnected" id=45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855 Apr 12 18:55:45.402305 env[1124]: time="2024-04-12T18:55:45.401184721Z" level=warning msg="cleaning up after shim disconnected" id=45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855 namespace=k8s.io Apr 12 18:55:45.403783 env[1124]: time="2024-04-12T18:55:45.403374722Z" level=info msg="cleaning up dead shim" Apr 12 18:55:45.405833 env[1124]: time="2024-04-12T18:55:45.405741043Z" level=info msg="shim disconnected" id=53e4cdef93a2178dd801ce289cbafefc16bd3f65b171ed41113f219c14b37895 Apr 12 18:55:45.405943 env[1124]: time="2024-04-12T18:55:45.405849007Z" level=warning msg="cleaning up after shim disconnected" id=53e4cdef93a2178dd801ce289cbafefc16bd3f65b171ed41113f219c14b37895 namespace=k8s.io Apr 12 18:55:45.405943 env[1124]: time="2024-04-12T18:55:45.405864606Z" level=info msg="cleaning up dead shim" Apr 12 18:55:45.429911 env[1124]: time="2024-04-12T18:55:45.429715791Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3799 
runtime=io.containerd.runc.v2\n" Apr 12 18:55:45.430354 env[1124]: time="2024-04-12T18:55:45.430305244Z" level=info msg="TearDown network for sandbox \"53e4cdef93a2178dd801ce289cbafefc16bd3f65b171ed41113f219c14b37895\" successfully" Apr 12 18:55:45.430354 env[1124]: time="2024-04-12T18:55:45.430344547Z" level=info msg="StopPodSandbox for \"53e4cdef93a2178dd801ce289cbafefc16bd3f65b171ed41113f219c14b37895\" returns successfully" Apr 12 18:55:45.432610 env[1124]: time="2024-04-12T18:55:45.432519849Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3795 runtime=io.containerd.runc.v2\ntime=\"2024-04-12T18:55:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Apr 12 18:55:45.445631 env[1124]: time="2024-04-12T18:55:45.442799343Z" level=info msg="StopContainer for \"45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855\" returns successfully" Apr 12 18:55:45.445631 env[1124]: time="2024-04-12T18:55:45.444006119Z" level=info msg="StopPodSandbox for \"2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5\"" Apr 12 18:55:45.445631 env[1124]: time="2024-04-12T18:55:45.444108021Z" level=info msg="Container to stop \"1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:55:45.445631 env[1124]: time="2024-04-12T18:55:45.444180598Z" level=info msg="Container to stop \"0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:55:45.445631 env[1124]: time="2024-04-12T18:55:45.444196658Z" level=info msg="Container to stop \"4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:55:45.445631 env[1124]: 
time="2024-04-12T18:55:45.444210704Z" level=info msg="Container to stop \"45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:55:45.445631 env[1124]: time="2024-04-12T18:55:45.444223027Z" level=info msg="Container to stop \"7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:55:45.463490 systemd[1]: cri-containerd-2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5.scope: Deactivated successfully. Apr 12 18:55:45.535536 env[1124]: time="2024-04-12T18:55:45.535435505Z" level=info msg="shim disconnected" id=2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5 Apr 12 18:55:45.535536 env[1124]: time="2024-04-12T18:55:45.535520375Z" level=warning msg="cleaning up after shim disconnected" id=2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5 namespace=k8s.io Apr 12 18:55:45.535536 env[1124]: time="2024-04-12T18:55:45.535535183Z" level=info msg="cleaning up dead shim" Apr 12 18:55:45.552516 env[1124]: time="2024-04-12T18:55:45.551807647Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3838 runtime=io.containerd.runc.v2\n" Apr 12 18:55:45.552516 env[1124]: time="2024-04-12T18:55:45.552231587Z" level=info msg="TearDown network for sandbox \"2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5\" successfully" Apr 12 18:55:45.552516 env[1124]: time="2024-04-12T18:55:45.552263317Z" level=info msg="StopPodSandbox for \"2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5\" returns successfully" Apr 12 18:55:45.598590 kubelet[1983]: I0412 18:55:45.598521 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/9f3f767e-c6bb-4e52-9d6f-20670f8e9421-cilium-config-path\") pod \"9f3f767e-c6bb-4e52-9d6f-20670f8e9421\" (UID: \"9f3f767e-c6bb-4e52-9d6f-20670f8e9421\") " Apr 12 18:55:45.598590 kubelet[1983]: I0412 18:55:45.598594 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hwnm\" (UniqueName: \"kubernetes.io/projected/9f3f767e-c6bb-4e52-9d6f-20670f8e9421-kube-api-access-6hwnm\") pod \"9f3f767e-c6bb-4e52-9d6f-20670f8e9421\" (UID: \"9f3f767e-c6bb-4e52-9d6f-20670f8e9421\") " Apr 12 18:55:45.601782 kubelet[1983]: I0412 18:55:45.601700 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f3f767e-c6bb-4e52-9d6f-20670f8e9421-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9f3f767e-c6bb-4e52-9d6f-20670f8e9421" (UID: "9f3f767e-c6bb-4e52-9d6f-20670f8e9421"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:55:45.610059 kubelet[1983]: I0412 18:55:45.609936 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f3f767e-c6bb-4e52-9d6f-20670f8e9421-kube-api-access-6hwnm" (OuterVolumeSpecName: "kube-api-access-6hwnm") pod "9f3f767e-c6bb-4e52-9d6f-20670f8e9421" (UID: "9f3f767e-c6bb-4e52-9d6f-20670f8e9421"). InnerVolumeSpecName "kube-api-access-6hwnm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:55:45.699442 kubelet[1983]: I0412 18:55:45.699255 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-host-proc-sys-kernel\") pod \"e013ebdf-8734-4cd7-826a-183d39594872\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " Apr 12 18:55:45.699442 kubelet[1983]: I0412 18:55:45.699353 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfrsn\" (UniqueName: \"kubernetes.io/projected/e013ebdf-8734-4cd7-826a-183d39594872-kube-api-access-sfrsn\") pod \"e013ebdf-8734-4cd7-826a-183d39594872\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " Apr 12 18:55:45.699442 kubelet[1983]: I0412 18:55:45.699385 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-cilium-cgroup\") pod \"e013ebdf-8734-4cd7-826a-183d39594872\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " Apr 12 18:55:45.699442 kubelet[1983]: I0412 18:55:45.699416 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-cni-path\") pod \"e013ebdf-8734-4cd7-826a-183d39594872\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " Apr 12 18:55:45.699442 kubelet[1983]: I0412 18:55:45.699446 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-etc-cni-netd\") pod \"e013ebdf-8734-4cd7-826a-183d39594872\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " Apr 12 18:55:45.699863 kubelet[1983]: I0412 18:55:45.699426 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e013ebdf-8734-4cd7-826a-183d39594872" (UID: "e013ebdf-8734-4cd7-826a-183d39594872"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:45.699863 kubelet[1983]: I0412 18:55:45.699478 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e013ebdf-8734-4cd7-826a-183d39594872-clustermesh-secrets\") pod \"e013ebdf-8734-4cd7-826a-183d39594872\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " Apr 12 18:55:45.699863 kubelet[1983]: I0412 18:55:45.699584 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-xtables-lock\") pod \"e013ebdf-8734-4cd7-826a-183d39594872\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " Apr 12 18:55:45.699863 kubelet[1983]: I0412 18:55:45.699619 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-lib-modules\") pod \"e013ebdf-8734-4cd7-826a-183d39594872\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " Apr 12 18:55:45.699863 kubelet[1983]: I0412 18:55:45.699647 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-host-proc-sys-net\") pod \"e013ebdf-8734-4cd7-826a-183d39594872\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " Apr 12 18:55:45.699863 kubelet[1983]: I0412 18:55:45.699683 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e013ebdf-8734-4cd7-826a-183d39594872-hubble-tls\") pod 
\"e013ebdf-8734-4cd7-826a-183d39594872\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " Apr 12 18:55:45.700128 kubelet[1983]: I0412 18:55:45.699710 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-bpf-maps\") pod \"e013ebdf-8734-4cd7-826a-183d39594872\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " Apr 12 18:55:45.700128 kubelet[1983]: I0412 18:55:45.699736 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-hostproc\") pod \"e013ebdf-8734-4cd7-826a-183d39594872\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " Apr 12 18:55:45.700128 kubelet[1983]: I0412 18:55:45.699813 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-cilium-run\") pod \"e013ebdf-8734-4cd7-826a-183d39594872\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " Apr 12 18:55:45.700128 kubelet[1983]: I0412 18:55:45.699847 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e013ebdf-8734-4cd7-826a-183d39594872-cilium-config-path\") pod \"e013ebdf-8734-4cd7-826a-183d39594872\" (UID: \"e013ebdf-8734-4cd7-826a-183d39594872\") " Apr 12 18:55:45.700128 kubelet[1983]: I0412 18:55:45.699917 1983 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f3f767e-c6bb-4e52-9d6f-20670f8e9421-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:45.700128 kubelet[1983]: I0412 18:55:45.699936 1983 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6hwnm\" (UniqueName: \"kubernetes.io/projected/9f3f767e-c6bb-4e52-9d6f-20670f8e9421-kube-api-access-6hwnm\") on 
node \"localhost\" DevicePath \"\"" Apr 12 18:55:45.700128 kubelet[1983]: I0412 18:55:45.699950 1983 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:45.702681 kubelet[1983]: I0412 18:55:45.702623 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e013ebdf-8734-4cd7-826a-183d39594872-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e013ebdf-8734-4cd7-826a-183d39594872" (UID: "e013ebdf-8734-4cd7-826a-183d39594872"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:55:45.702798 kubelet[1983]: I0412 18:55:45.702682 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e013ebdf-8734-4cd7-826a-183d39594872" (UID: "e013ebdf-8734-4cd7-826a-183d39594872"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:45.702798 kubelet[1983]: I0412 18:55:45.702713 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e013ebdf-8734-4cd7-826a-183d39594872" (UID: "e013ebdf-8734-4cd7-826a-183d39594872"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:45.702798 kubelet[1983]: I0412 18:55:45.702737 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e013ebdf-8734-4cd7-826a-183d39594872" (UID: "e013ebdf-8734-4cd7-826a-183d39594872"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:45.702999 kubelet[1983]: I0412 18:55:45.702961 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-hostproc" (OuterVolumeSpecName: "hostproc") pod "e013ebdf-8734-4cd7-826a-183d39594872" (UID: "e013ebdf-8734-4cd7-826a-183d39594872"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:45.703146 kubelet[1983]: I0412 18:55:45.703125 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e013ebdf-8734-4cd7-826a-183d39594872" (UID: "e013ebdf-8734-4cd7-826a-183d39594872"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:45.703262 kubelet[1983]: I0412 18:55:45.703242 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e013ebdf-8734-4cd7-826a-183d39594872" (UID: "e013ebdf-8734-4cd7-826a-183d39594872"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:45.707005 kubelet[1983]: I0412 18:55:45.706471 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e013ebdf-8734-4cd7-826a-183d39594872" (UID: "e013ebdf-8734-4cd7-826a-183d39594872"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:45.707005 kubelet[1983]: I0412 18:55:45.706530 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-cni-path" (OuterVolumeSpecName: "cni-path") pod "e013ebdf-8734-4cd7-826a-183d39594872" (UID: "e013ebdf-8734-4cd7-826a-183d39594872"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:45.707005 kubelet[1983]: I0412 18:55:45.706559 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e013ebdf-8734-4cd7-826a-183d39594872" (UID: "e013ebdf-8734-4cd7-826a-183d39594872"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:55:45.710471 kubelet[1983]: I0412 18:55:45.709721 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e013ebdf-8734-4cd7-826a-183d39594872-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e013ebdf-8734-4cd7-826a-183d39594872" (UID: "e013ebdf-8734-4cd7-826a-183d39594872"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:55:45.720116 kubelet[1983]: I0412 18:55:45.720046 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e013ebdf-8734-4cd7-826a-183d39594872-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e013ebdf-8734-4cd7-826a-183d39594872" (UID: "e013ebdf-8734-4cd7-826a-183d39594872"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:55:45.722980 kubelet[1983]: I0412 18:55:45.722734 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e013ebdf-8734-4cd7-826a-183d39594872-kube-api-access-sfrsn" (OuterVolumeSpecName: "kube-api-access-sfrsn") pod "e013ebdf-8734-4cd7-826a-183d39594872" (UID: "e013ebdf-8734-4cd7-826a-183d39594872"). InnerVolumeSpecName "kube-api-access-sfrsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:55:45.800630 kubelet[1983]: I0412 18:55:45.800528 1983 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:45.800630 kubelet[1983]: I0412 18:55:45.800594 1983 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:45.800630 kubelet[1983]: I0412 18:55:45.800621 1983 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e013ebdf-8734-4cd7-826a-183d39594872-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:45.800630 kubelet[1983]: I0412 18:55:45.800640 1983 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:45.800630 kubelet[1983]: I0412 18:55:45.800656 1983 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:45.801089 kubelet[1983]: I0412 18:55:45.800685 1983 reconciler_common.go:300] "Volume detached for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e013ebdf-8734-4cd7-826a-183d39594872-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:45.801089 kubelet[1983]: I0412 18:55:45.800700 1983 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:45.801089 kubelet[1983]: I0412 18:55:45.800711 1983 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:45.801089 kubelet[1983]: I0412 18:55:45.800728 1983 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:45.801089 kubelet[1983]: I0412 18:55:45.800741 1983 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e013ebdf-8734-4cd7-826a-183d39594872-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:45.801089 kubelet[1983]: I0412 18:55:45.800756 1983 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sfrsn\" (UniqueName: \"kubernetes.io/projected/e013ebdf-8734-4cd7-826a-183d39594872-kube-api-access-sfrsn\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:45.801089 kubelet[1983]: I0412 18:55:45.800791 1983 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:45.801089 kubelet[1983]: I0412 18:55:45.800805 1983 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e013ebdf-8734-4cd7-826a-183d39594872-cni-path\") on node \"localhost\" 
DevicePath \"\"" Apr 12 18:55:45.942910 kubelet[1983]: I0412 18:55:45.942868 1983 scope.go:117] "RemoveContainer" containerID="69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34" Apr 12 18:55:45.948267 env[1124]: time="2024-04-12T18:55:45.947991330Z" level=info msg="RemoveContainer for \"69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34\"" Apr 12 18:55:45.948673 systemd[1]: Removed slice kubepods-besteffort-pod9f3f767e_c6bb_4e52_9d6f_20670f8e9421.slice. Apr 12 18:55:45.959600 systemd[1]: Removed slice kubepods-burstable-pode013ebdf_8734_4cd7_826a_183d39594872.slice. Apr 12 18:55:45.959708 systemd[1]: kubepods-burstable-pode013ebdf_8734_4cd7_826a_183d39594872.slice: Consumed 12.311s CPU time. Apr 12 18:55:45.971647 env[1124]: time="2024-04-12T18:55:45.971573979Z" level=info msg="RemoveContainer for \"69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34\" returns successfully" Apr 12 18:55:45.976419 kubelet[1983]: I0412 18:55:45.976348 1983 scope.go:117] "RemoveContainer" containerID="69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34" Apr 12 18:55:45.979539 env[1124]: time="2024-04-12T18:55:45.979414123Z" level=error msg="ContainerStatus for \"69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34\": not found" Apr 12 18:55:45.979921 kubelet[1983]: E0412 18:55:45.979895 1983 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34\": not found" containerID="69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34" Apr 12 18:55:45.980081 kubelet[1983]: I0412 18:55:45.980055 1983 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34"} err="failed to get container status \"69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34\": rpc error: code = NotFound desc = an error occurred when try to find container \"69b4fea9827c368299d0649c60d643dedb0781d78cd593b03f3a4b5089687d34\": not found" Apr 12 18:55:45.980081 kubelet[1983]: I0412 18:55:45.980083 1983 scope.go:117] "RemoveContainer" containerID="45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855" Apr 12 18:55:45.994448 env[1124]: time="2024-04-12T18:55:45.994356379Z" level=info msg="RemoveContainer for \"45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855\"" Apr 12 18:55:46.011397 env[1124]: time="2024-04-12T18:55:46.011167669Z" level=info msg="RemoveContainer for \"45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855\" returns successfully" Apr 12 18:55:46.011707 kubelet[1983]: I0412 18:55:46.011649 1983 scope.go:117] "RemoveContainer" containerID="4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9" Apr 12 18:55:46.022129 env[1124]: time="2024-04-12T18:55:46.021456078Z" level=info msg="RemoveContainer for \"4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9\"" Apr 12 18:55:46.044609 env[1124]: time="2024-04-12T18:55:46.044520318Z" level=info msg="RemoveContainer for \"4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9\" returns successfully" Apr 12 18:55:46.044958 kubelet[1983]: I0412 18:55:46.044904 1983 scope.go:117] "RemoveContainer" containerID="0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e" Apr 12 18:55:46.056340 env[1124]: time="2024-04-12T18:55:46.056272157Z" level=info msg="RemoveContainer for \"0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e\"" Apr 12 18:55:46.072614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53e4cdef93a2178dd801ce289cbafefc16bd3f65b171ed41113f219c14b37895-rootfs.mount: 
Deactivated successfully. Apr 12 18:55:46.073459 systemd[1]: var-lib-kubelet-pods-9f3f767e\x2dc6bb\x2d4e52\x2d9d6f\x2d20670f8e9421-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6hwnm.mount: Deactivated successfully. Apr 12 18:55:46.073583 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5-rootfs.mount: Deactivated successfully. Apr 12 18:55:46.073662 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2897425ba6ee7bcb8ed0c933423dfdeb0806d6e0493762d0847a8ca7027898a5-shm.mount: Deactivated successfully. Apr 12 18:55:46.073748 systemd[1]: var-lib-kubelet-pods-e013ebdf\x2d8734\x2d4cd7\x2d826a\x2d183d39594872-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsfrsn.mount: Deactivated successfully. Apr 12 18:55:46.073850 systemd[1]: var-lib-kubelet-pods-e013ebdf\x2d8734\x2d4cd7\x2d826a\x2d183d39594872-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:55:46.073934 systemd[1]: var-lib-kubelet-pods-e013ebdf\x2d8734\x2d4cd7\x2d826a\x2d183d39594872-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Apr 12 18:55:46.076205 env[1124]: time="2024-04-12T18:55:46.075459296Z" level=info msg="RemoveContainer for \"0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e\" returns successfully" Apr 12 18:55:46.076533 kubelet[1983]: I0412 18:55:46.076495 1983 scope.go:117] "RemoveContainer" containerID="1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502" Apr 12 18:55:46.078023 env[1124]: time="2024-04-12T18:55:46.077975311Z" level=info msg="RemoveContainer for \"1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502\"" Apr 12 18:55:46.094391 env[1124]: time="2024-04-12T18:55:46.089641267Z" level=info msg="RemoveContainer for \"1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502\" returns successfully" Apr 12 18:55:46.095721 kubelet[1983]: I0412 18:55:46.095680 1983 scope.go:117] "RemoveContainer" containerID="7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6" Apr 12 18:55:46.107740 env[1124]: time="2024-04-12T18:55:46.107438245Z" level=info msg="RemoveContainer for \"7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6\"" Apr 12 18:55:46.154907 env[1124]: time="2024-04-12T18:55:46.154826901Z" level=info msg="RemoveContainer for \"7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6\" returns successfully" Apr 12 18:55:46.164061 kubelet[1983]: I0412 18:55:46.160348 1983 scope.go:117] "RemoveContainer" containerID="45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855" Apr 12 18:55:46.172230 env[1124]: time="2024-04-12T18:55:46.169365024Z" level=error msg="ContainerStatus for \"45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855\": not found" Apr 12 18:55:46.172230 env[1124]: time="2024-04-12T18:55:46.170172777Z" level=error msg="ContainerStatus for 
\"4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9\": not found" Apr 12 18:55:46.172230 env[1124]: time="2024-04-12T18:55:46.170528948Z" level=error msg="ContainerStatus for \"0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e\": not found" Apr 12 18:55:46.172230 env[1124]: time="2024-04-12T18:55:46.170895489Z" level=error msg="ContainerStatus for \"1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502\": not found" Apr 12 18:55:46.172230 env[1124]: time="2024-04-12T18:55:46.171273442Z" level=error msg="ContainerStatus for \"7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6\": not found" Apr 12 18:55:46.172694 kubelet[1983]: E0412 18:55:46.169735 1983 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855\": not found" containerID="45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855" Apr 12 18:55:46.172694 kubelet[1983]: I0412 18:55:46.169833 1983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855"} err="failed to get container status 
\"45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855\": rpc error: code = NotFound desc = an error occurred when try to find container \"45e7353ef9a50927a1a52702880d2e028dd6f35356cb27e450b068989c9c5855\": not found" Apr 12 18:55:46.172694 kubelet[1983]: I0412 18:55:46.169854 1983 scope.go:117] "RemoveContainer" containerID="4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9" Apr 12 18:55:46.172694 kubelet[1983]: E0412 18:55:46.170310 1983 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9\": not found" containerID="4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9" Apr 12 18:55:46.172694 kubelet[1983]: I0412 18:55:46.170347 1983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9"} err="failed to get container status \"4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"4563592c2c403158d9a66ec95da03d2cdf7de9056136d82592a48f24960656a9\": not found" Apr 12 18:55:46.172694 kubelet[1983]: I0412 18:55:46.170361 1983 scope.go:117] "RemoveContainer" containerID="0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e" Apr 12 18:55:46.172993 kubelet[1983]: E0412 18:55:46.170661 1983 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e\": not found" containerID="0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e" Apr 12 18:55:46.172993 kubelet[1983]: I0412 18:55:46.170687 1983 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e"} err="failed to get container status \"0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a71050448f507b975132b679d49f0bea5c34295e263939760635c92f428085e\": not found" Apr 12 18:55:46.172993 kubelet[1983]: I0412 18:55:46.170697 1983 scope.go:117] "RemoveContainer" containerID="1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502" Apr 12 18:55:46.172993 kubelet[1983]: E0412 18:55:46.171045 1983 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502\": not found" containerID="1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502" Apr 12 18:55:46.172993 kubelet[1983]: I0412 18:55:46.171076 1983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502"} err="failed to get container status \"1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502\": rpc error: code = NotFound desc = an error occurred when try to find container \"1925d234b039a13e3d0b303f9542b6959460e05d8942033fddfea6c1e7f39502\": not found" Apr 12 18:55:46.172993 kubelet[1983]: I0412 18:55:46.171090 1983 scope.go:117] "RemoveContainer" containerID="7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6" Apr 12 18:55:46.173253 kubelet[1983]: E0412 18:55:46.171384 1983 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6\": not found" containerID="7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6" Apr 12 
18:55:46.173253 kubelet[1983]: I0412 18:55:46.171410 1983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6"} err="failed to get container status \"7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"7bfb08d7764fd9de760d5f143b2aff953e2a7ede4b0830ab4cc9330e7690b8c6\": not found" Apr 12 18:55:46.582326 kubelet[1983]: I0412 18:55:46.582270 1983 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9f3f767e-c6bb-4e52-9d6f-20670f8e9421" path="/var/lib/kubelet/pods/9f3f767e-c6bb-4e52-9d6f-20670f8e9421/volumes" Apr 12 18:55:46.582817 kubelet[1983]: I0412 18:55:46.582789 1983 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e013ebdf-8734-4cd7-826a-183d39594872" path="/var/lib/kubelet/pods/e013ebdf-8734-4cd7-826a-183d39594872/volumes" Apr 12 18:55:46.890337 sshd[3695]: pam_unix(sshd:session): session closed for user core Apr 12 18:55:46.899108 systemd[1]: Started sshd@28-10.0.0.118:22-10.0.0.1:36102.service. Apr 12 18:55:46.899964 systemd[1]: sshd@27-10.0.0.118:22-10.0.0.1:36088.service: Deactivated successfully. Apr 12 18:55:46.904518 systemd[1]: session-28.scope: Deactivated successfully. Apr 12 18:55:46.904708 systemd[1]: session-28.scope: Consumed 1.087s CPU time. Apr 12 18:55:46.906790 systemd-logind[1116]: Session 28 logged out. Waiting for processes to exit. Apr 12 18:55:46.916513 systemd-logind[1116]: Removed session 28. Apr 12 18:55:46.980451 sshd[3856]: Accepted publickey for core from 10.0.0.1 port 36102 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:55:46.983616 sshd[3856]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:55:46.992858 systemd-logind[1116]: New session 29 of user core. Apr 12 18:55:46.994553 systemd[1]: Started session-29.scope. 
Apr 12 18:55:47.848171 kubelet[1983]: E0412 18:55:47.848037 1983 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 18:55:48.311507 sshd[3856]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:48.320382 systemd[1]: Started sshd@29-10.0.0.118:22-10.0.0.1:59112.service.
Apr 12 18:55:48.325934 systemd[1]: sshd@28-10.0.0.118:22-10.0.0.1:36102.service: Deactivated successfully.
Apr 12 18:55:48.326783 systemd[1]: session-29.scope: Deactivated successfully.
Apr 12 18:55:48.334494 systemd-logind[1116]: Session 29 logged out. Waiting for processes to exit.
Apr 12 18:55:48.336166 systemd-logind[1116]: Removed session 29.
Apr 12 18:55:48.380355 sshd[3870]: Accepted publickey for core from 10.0.0.1 port 59112 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU
Apr 12 18:55:48.382538 sshd[3870]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:48.398384 systemd[1]: Started session-30.scope.
Apr 12 18:55:48.404480 systemd-logind[1116]: New session 30 of user core.
Apr 12 18:55:48.434821 kubelet[1983]: I0412 18:55:48.432717 1983 topology_manager.go:215] "Topology Admit Handler" podUID="1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" podNamespace="kube-system" podName="cilium-zzl2d"
Apr 12 18:55:48.434821 kubelet[1983]: E0412 18:55:48.432832 1983 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e013ebdf-8734-4cd7-826a-183d39594872" containerName="mount-cgroup"
Apr 12 18:55:48.434821 kubelet[1983]: E0412 18:55:48.432849 1983 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e013ebdf-8734-4cd7-826a-183d39594872" containerName="mount-bpf-fs"
Apr 12 18:55:48.434821 kubelet[1983]: E0412 18:55:48.432859 1983 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e013ebdf-8734-4cd7-826a-183d39594872" containerName="clean-cilium-state"
Apr 12 18:55:48.434821 kubelet[1983]: E0412 18:55:48.432867 1983 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f3f767e-c6bb-4e52-9d6f-20670f8e9421" containerName="cilium-operator"
Apr 12 18:55:48.434821 kubelet[1983]: E0412 18:55:48.432877 1983 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e013ebdf-8734-4cd7-826a-183d39594872" containerName="apply-sysctl-overwrites"
Apr 12 18:55:48.434821 kubelet[1983]: E0412 18:55:48.432887 1983 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e013ebdf-8734-4cd7-826a-183d39594872" containerName="cilium-agent"
Apr 12 18:55:48.434821 kubelet[1983]: I0412 18:55:48.432930 1983 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f3f767e-c6bb-4e52-9d6f-20670f8e9421" containerName="cilium-operator"
Apr 12 18:55:48.434821 kubelet[1983]: I0412 18:55:48.432942 1983 memory_manager.go:354] "RemoveStaleState removing state" podUID="e013ebdf-8734-4cd7-826a-183d39594872" containerName="cilium-agent"
Apr 12 18:55:48.453754 systemd[1]: Created slice kubepods-burstable-pod1f97f266_a4d2_4a7a_9e4f_77c8f288f9dc.slice.
Apr 12 18:55:48.555738 kubelet[1983]: I0412 18:55:48.555663 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cilium-ipsec-secrets\") pod \"cilium-zzl2d\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") " pod="kube-system/cilium-zzl2d"
Apr 12 18:55:48.556001 kubelet[1983]: I0412 18:55:48.555921 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-hubble-tls\") pod \"cilium-zzl2d\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") " pod="kube-system/cilium-zzl2d"
Apr 12 18:55:48.556054 kubelet[1983]: I0412 18:55:48.556031 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-hostproc\") pod \"cilium-zzl2d\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") " pod="kube-system/cilium-zzl2d"
Apr 12 18:55:48.556328 kubelet[1983]: I0412 18:55:48.556213 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cilium-cgroup\") pod \"cilium-zzl2d\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") " pod="kube-system/cilium-zzl2d"
Apr 12 18:55:48.556513 kubelet[1983]: I0412 18:55:48.556404 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-xtables-lock\") pod \"cilium-zzl2d\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") " pod="kube-system/cilium-zzl2d"
Apr 12 18:55:48.556702 kubelet[1983]: I0412 18:55:48.556590 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cni-path\") pod \"cilium-zzl2d\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") " pod="kube-system/cilium-zzl2d"
Apr 12 18:55:48.556910 kubelet[1983]: I0412 18:55:48.556876 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkwtg\" (UniqueName: \"kubernetes.io/projected/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-kube-api-access-gkwtg\") pod \"cilium-zzl2d\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") " pod="kube-system/cilium-zzl2d"
Apr 12 18:55:48.561131 kubelet[1983]: I0412 18:55:48.557078 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-clustermesh-secrets\") pod \"cilium-zzl2d\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") " pod="kube-system/cilium-zzl2d"
Apr 12 18:55:48.561131 kubelet[1983]: I0412 18:55:48.557179 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-host-proc-sys-net\") pod \"cilium-zzl2d\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") " pod="kube-system/cilium-zzl2d"
Apr 12 18:55:48.561131 kubelet[1983]: I0412 18:55:48.557361 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-host-proc-sys-kernel\") pod \"cilium-zzl2d\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") " pod="kube-system/cilium-zzl2d"
Apr 12 18:55:48.561131 kubelet[1983]: I0412 18:55:48.557537 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cilium-run\") pod \"cilium-zzl2d\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") " pod="kube-system/cilium-zzl2d"
Apr 12 18:55:48.561131 kubelet[1983]: I0412 18:55:48.557713 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-bpf-maps\") pod \"cilium-zzl2d\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") " pod="kube-system/cilium-zzl2d"
Apr 12 18:55:48.561131 kubelet[1983]: I0412 18:55:48.557916 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-etc-cni-netd\") pod \"cilium-zzl2d\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") " pod="kube-system/cilium-zzl2d"
Apr 12 18:55:48.561522 kubelet[1983]: I0412 18:55:48.558092 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-lib-modules\") pod \"cilium-zzl2d\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") " pod="kube-system/cilium-zzl2d"
Apr 12 18:55:48.561522 kubelet[1983]: I0412 18:55:48.558311 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cilium-config-path\") pod \"cilium-zzl2d\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") " pod="kube-system/cilium-zzl2d"
Apr 12 18:55:48.702476 sshd[3870]: pam_unix(sshd:session): session closed for user core
Apr 12 18:55:48.712985 systemd[1]: sshd@29-10.0.0.118:22-10.0.0.1:59112.service: Deactivated successfully.
Apr 12 18:55:48.713803 systemd[1]: session-30.scope: Deactivated successfully.
Apr 12 18:55:48.714992 systemd-logind[1116]: Session 30 logged out. Waiting for processes to exit.
Apr 12 18:55:48.726433 systemd[1]: Started sshd@30-10.0.0.118:22-10.0.0.1:59114.service.
Apr 12 18:55:48.741434 systemd-logind[1116]: Removed session 30.
Apr 12 18:55:48.745184 kubelet[1983]: E0412 18:55:48.744691 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:55:48.746047 env[1124]: time="2024-04-12T18:55:48.745281984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zzl2d,Uid:1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc,Namespace:kube-system,Attempt:0,}"
Apr 12 18:55:48.784642 sshd[3888]: Accepted publickey for core from 10.0.0.1 port 59114 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU
Apr 12 18:55:48.792486 sshd[3888]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:55:48.817992 systemd[1]: Started session-31.scope.
Apr 12 18:55:48.818693 systemd-logind[1116]: New session 31 of user core.
Apr 12 18:55:48.834998 env[1124]: time="2024-04-12T18:55:48.827194033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:55:48.834998 env[1124]: time="2024-04-12T18:55:48.827294563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:55:48.834998 env[1124]: time="2024-04-12T18:55:48.827324038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:55:48.834998 env[1124]: time="2024-04-12T18:55:48.827674729Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d00172c766677d2a95466c6b60760cf96cf6c5ef6b4cb70fa12daae09cf8f19 pid=3898 runtime=io.containerd.runc.v2
Apr 12 18:55:48.853901 systemd[1]: Started cri-containerd-2d00172c766677d2a95466c6b60760cf96cf6c5ef6b4cb70fa12daae09cf8f19.scope.
Apr 12 18:55:48.952099 env[1124]: time="2024-04-12T18:55:48.951932779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zzl2d,Uid:1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d00172c766677d2a95466c6b60760cf96cf6c5ef6b4cb70fa12daae09cf8f19\""
Apr 12 18:55:48.953361 kubelet[1983]: E0412 18:55:48.953220 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:55:48.956134 env[1124]: time="2024-04-12T18:55:48.956024494Z" level=info msg="CreateContainer within sandbox \"2d00172c766677d2a95466c6b60760cf96cf6c5ef6b4cb70fa12daae09cf8f19\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 12 18:55:49.002801 env[1124]: time="2024-04-12T18:55:49.002685359Z" level=info msg="CreateContainer within sandbox \"2d00172c766677d2a95466c6b60760cf96cf6c5ef6b4cb70fa12daae09cf8f19\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"81f3c0eb705cf8986ce249534594f9122ddce699676f8018f01d77fbc336d7f4\""
Apr 12 18:55:49.003913 env[1124]: time="2024-04-12T18:55:49.003874982Z" level=info msg="StartContainer for \"81f3c0eb705cf8986ce249534594f9122ddce699676f8018f01d77fbc336d7f4\""
Apr 12 18:55:49.047577 systemd[1]: Started cri-containerd-81f3c0eb705cf8986ce249534594f9122ddce699676f8018f01d77fbc336d7f4.scope.
Apr 12 18:55:49.071918 systemd[1]: cri-containerd-81f3c0eb705cf8986ce249534594f9122ddce699676f8018f01d77fbc336d7f4.scope: Deactivated successfully.
Apr 12 18:55:49.125573 env[1124]: time="2024-04-12T18:55:49.125234287Z" level=info msg="shim disconnected" id=81f3c0eb705cf8986ce249534594f9122ddce699676f8018f01d77fbc336d7f4
Apr 12 18:55:49.125573 env[1124]: time="2024-04-12T18:55:49.125304199Z" level=warning msg="cleaning up after shim disconnected" id=81f3c0eb705cf8986ce249534594f9122ddce699676f8018f01d77fbc336d7f4 namespace=k8s.io
Apr 12 18:55:49.125573 env[1124]: time="2024-04-12T18:55:49.125320720Z" level=info msg="cleaning up dead shim"
Apr 12 18:55:49.154087 env[1124]: time="2024-04-12T18:55:49.150603908Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3964 runtime=io.containerd.runc.v2\ntime=\"2024-04-12T18:55:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\ntime=\"2024-04-12T18:55:49Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/81f3c0eb705cf8986ce249534594f9122ddce699676f8018f01d77fbc336d7f4/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Apr 12 18:55:49.154087 env[1124]: time="2024-04-12T18:55:49.151087820Z" level=error msg="copy shim log" error="read /proc/self/fd/47: file already closed"
Apr 12 18:55:49.154087 env[1124]: time="2024-04-12T18:55:49.151738256Z" level=error msg="Failed to pipe stdout of container \"81f3c0eb705cf8986ce249534594f9122ddce699676f8018f01d77fbc336d7f4\"" error="reading from a closed fifo"
Apr 12 18:55:49.158238 env[1124]: time="2024-04-12T18:55:49.158130849Z" level=error msg="Failed to pipe stderr of container \"81f3c0eb705cf8986ce249534594f9122ddce699676f8018f01d77fbc336d7f4\"" error="reading from a closed fifo"
Apr 12 18:55:49.161398 env[1124]: time="2024-04-12T18:55:49.161262002Z" level=error msg="StartContainer for \"81f3c0eb705cf8986ce249534594f9122ddce699676f8018f01d77fbc336d7f4\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Apr 12 18:55:49.161872 kubelet[1983]: E0412 18:55:49.161827 1983 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="81f3c0eb705cf8986ce249534594f9122ddce699676f8018f01d77fbc336d7f4"
Apr 12 18:55:49.165284 kubelet[1983]: E0412 18:55:49.165102 1983 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Apr 12 18:55:49.165284 kubelet[1983]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Apr 12 18:55:49.165284 kubelet[1983]: rm /hostbin/cilium-mount
Apr 12 18:55:49.165573 kubelet[1983]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-gkwtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-zzl2d_kube-system(1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Apr 12 18:55:49.165573 kubelet[1983]: E0412 18:55:49.165189 1983 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zzl2d" podUID="1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc"
Apr 12 18:55:49.994515 env[1124]: time="2024-04-12T18:55:49.994404570Z" level=info msg="StopPodSandbox for \"2d00172c766677d2a95466c6b60760cf96cf6c5ef6b4cb70fa12daae09cf8f19\""
Apr 12 18:55:49.994515 env[1124]: time="2024-04-12T18:55:49.994502444Z" level=info msg="Container to stop \"81f3c0eb705cf8986ce249534594f9122ddce699676f8018f01d77fbc336d7f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:55:49.996465 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d00172c766677d2a95466c6b60760cf96cf6c5ef6b4cb70fa12daae09cf8f19-shm.mount: Deactivated successfully.
Apr 12 18:55:50.024611 systemd[1]: cri-containerd-2d00172c766677d2a95466c6b60760cf96cf6c5ef6b4cb70fa12daae09cf8f19.scope: Deactivated successfully.
Apr 12 18:55:50.098636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d00172c766677d2a95466c6b60760cf96cf6c5ef6b4cb70fa12daae09cf8f19-rootfs.mount: Deactivated successfully.
Apr 12 18:55:50.138990 env[1124]: time="2024-04-12T18:55:50.138908491Z" level=info msg="shim disconnected" id=2d00172c766677d2a95466c6b60760cf96cf6c5ef6b4cb70fa12daae09cf8f19
Apr 12 18:55:50.138990 env[1124]: time="2024-04-12T18:55:50.138988422Z" level=warning msg="cleaning up after shim disconnected" id=2d00172c766677d2a95466c6b60760cf96cf6c5ef6b4cb70fa12daae09cf8f19 namespace=k8s.io
Apr 12 18:55:50.139288 env[1124]: time="2024-04-12T18:55:50.139005053Z" level=info msg="cleaning up dead shim"
Apr 12 18:55:50.169227 env[1124]: time="2024-04-12T18:55:50.169160124Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3995 runtime=io.containerd.runc.v2\n"
Apr 12 18:55:50.169894 env[1124]: time="2024-04-12T18:55:50.169855645Z" level=info msg="TearDown network for sandbox \"2d00172c766677d2a95466c6b60760cf96cf6c5ef6b4cb70fa12daae09cf8f19\" successfully"
Apr 12 18:55:50.170043 env[1124]: time="2024-04-12T18:55:50.170016668Z" level=info msg="StopPodSandbox for \"2d00172c766677d2a95466c6b60760cf96cf6c5ef6b4cb70fa12daae09cf8f19\" returns successfully"
Apr 12 18:55:50.292240 kubelet[1983]: I0412 18:55:50.292074 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cni-path\") pod \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") "
Apr 12 18:55:50.292799 kubelet[1983]: I0412 18:55:50.292746 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-bpf-maps\") pod \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") "
Apr 12 18:55:50.292937 kubelet[1983]: I0412 18:55:50.292905 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cilium-cgroup\") pod \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") "
Apr 12 18:55:50.293083 kubelet[1983]: I0412 18:55:50.293057 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkwtg\" (UniqueName: \"kubernetes.io/projected/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-kube-api-access-gkwtg\") pod \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") "
Apr 12 18:55:50.293213 kubelet[1983]: I0412 18:55:50.293195 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-host-proc-sys-kernel\") pod \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") "
Apr 12 18:55:50.293337 kubelet[1983]: I0412 18:55:50.293318 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-etc-cni-netd\") pod \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") "
Apr 12 18:55:50.293459 kubelet[1983]: I0412 18:55:50.293440 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-host-proc-sys-net\") pod \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") "
Apr 12 18:55:50.293584 kubelet[1983]: I0412 18:55:50.293564 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-xtables-lock\") pod \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") "
Apr 12 18:55:50.293710 kubelet[1983]: I0412 18:55:50.293692 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-clustermesh-secrets\") pod \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") "
Apr 12 18:55:50.293860 kubelet[1983]: I0412 18:55:50.293841 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cilium-ipsec-secrets\") pod \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") "
Apr 12 18:55:50.298022 kubelet[1983]: I0412 18:55:50.297982 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-hubble-tls\") pod \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") "
Apr 12 18:55:50.298250 kubelet[1983]: I0412 18:55:50.298230 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-lib-modules\") pod \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") "
Apr 12 18:55:50.298391 kubelet[1983]: I0412 18:55:50.298370 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cilium-config-path\") pod \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") "
Apr 12 18:55:50.298523 kubelet[1983]: I0412 18:55:50.298503 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-hostproc\") pod \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") "
Apr 12 18:55:50.298658 kubelet[1983]: I0412 18:55:50.298637 1983 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cilium-run\") pod \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\" (UID: \"1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc\") "
Apr 12 18:55:50.298876 kubelet[1983]: I0412 18:55:50.298852 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" (UID: "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:50.301144 systemd[1]: var-lib-kubelet-pods-1f97f266\x2da4d2\x2d4a7a\x2d9e4f\x2d77c8f288f9dc-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Apr 12 18:55:50.309030 kubelet[1983]: I0412 18:55:50.302367 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cni-path" (OuterVolumeSpecName: "cni-path") pod "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" (UID: "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:50.309030 kubelet[1983]: I0412 18:55:50.302415 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" (UID: "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:50.309030 kubelet[1983]: I0412 18:55:50.302436 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" (UID: "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:50.309030 kubelet[1983]: I0412 18:55:50.305474 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" (UID: "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 12 18:55:50.309030 kubelet[1983]: I0412 18:55:50.305542 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" (UID: "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:50.309030 kubelet[1983]: I0412 18:55:50.305570 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" (UID: "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:50.309030 kubelet[1983]: I0412 18:55:50.305598 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-hostproc" (OuterVolumeSpecName: "hostproc") pod "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" (UID: "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:50.316583 kubelet[1983]: I0412 18:55:50.309712 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" (UID: "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:50.316583 kubelet[1983]: I0412 18:55:50.310024 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" (UID: "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 12 18:55:50.310993 systemd[1]: var-lib-kubelet-pods-1f97f266\x2da4d2\x2d4a7a\x2d9e4f\x2d77c8f288f9dc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgkwtg.mount: Deactivated successfully.
Apr 12 18:55:50.322656 kubelet[1983]: I0412 18:55:50.322274 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" (UID: "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 12 18:55:50.322656 kubelet[1983]: I0412 18:55:50.310025 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" (UID: "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:50.322656 kubelet[1983]: I0412 18:55:50.310059 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" (UID: "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:55:50.331895 kubelet[1983]: I0412 18:55:50.325347 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-kube-api-access-gkwtg" (OuterVolumeSpecName: "kube-api-access-gkwtg") pod "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" (UID: "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc"). InnerVolumeSpecName "kube-api-access-gkwtg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 18:55:50.340585 kubelet[1983]: I0412 18:55:50.339391 1983 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" (UID: "1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 18:55:50.399651 kubelet[1983]: I0412 18:55:50.399568 1983 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cilium-run\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:50.399651 kubelet[1983]: I0412 18:55:50.399626 1983 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:50.399651 kubelet[1983]: I0412 18:55:50.399642 1983 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-bpf-maps\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:50.399651 kubelet[1983]: I0412 18:55:50.399655 1983 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cni-path\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:50.399651 kubelet[1983]: I0412 18:55:50.399668 1983 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:50.400143 kubelet[1983]: I0412 18:55:50.399683 1983 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gkwtg\" (UniqueName: \"kubernetes.io/projected/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-kube-api-access-gkwtg\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:50.400143 kubelet[1983]: I0412 18:55:50.399699 1983 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:50.400143 kubelet[1983]: I0412 18:55:50.399713 1983 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:50.400143 kubelet[1983]: I0412 18:55:50.399726 1983 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:50.400143 kubelet[1983]: I0412 18:55:50.399739 1983 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:50.400143 kubelet[1983]: I0412 18:55:50.399750 1983 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-xtables-lock\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:50.400143 kubelet[1983]: I0412 18:55:50.399779 1983 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-lib-modules\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:50.400143 kubelet[1983]: I0412 18:55:50.399797 1983 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:50.400143 kubelet[1983]: I0412 18:55:50.399810 1983 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Apr 12 18:55:50.400143 kubelet[1983]: I0412 18:55:50.399825 1983 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName:
\"kubernetes.io/projected/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 12 18:55:50.576611 kubelet[1983]: E0412 18:55:50.576444 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:50.593531 systemd[1]: Removed slice kubepods-burstable-pod1f97f266_a4d2_4a7a_9e4f_77c8f288f9dc.slice. Apr 12 18:55:50.673062 systemd[1]: var-lib-kubelet-pods-1f97f266\x2da4d2\x2d4a7a\x2d9e4f\x2d77c8f288f9dc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:55:50.673201 systemd[1]: var-lib-kubelet-pods-1f97f266\x2da4d2\x2d4a7a\x2d9e4f\x2d77c8f288f9dc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 18:55:50.999044 kubelet[1983]: I0412 18:55:50.998192 1983 scope.go:117] "RemoveContainer" containerID="81f3c0eb705cf8986ce249534594f9122ddce699676f8018f01d77fbc336d7f4" Apr 12 18:55:51.003733 env[1124]: time="2024-04-12T18:55:51.002545909Z" level=info msg="RemoveContainer for \"81f3c0eb705cf8986ce249534594f9122ddce699676f8018f01d77fbc336d7f4\"" Apr 12 18:55:51.022397 env[1124]: time="2024-04-12T18:55:51.022258973Z" level=info msg="RemoveContainer for \"81f3c0eb705cf8986ce249534594f9122ddce699676f8018f01d77fbc336d7f4\" returns successfully" Apr 12 18:55:51.134140 kubelet[1983]: I0412 18:55:51.134089 1983 topology_manager.go:215] "Topology Admit Handler" podUID="6fd45872-73b4-4c48-84f4-c62999408dc8" podNamespace="kube-system" podName="cilium-m5qp9" Apr 12 18:55:51.134451 kubelet[1983]: E0412 18:55:51.134431 1983 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" containerName="mount-cgroup" Apr 12 18:55:51.134572 kubelet[1983]: I0412 18:55:51.134552 1983 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" 
containerName="mount-cgroup" Apr 12 18:55:51.153634 systemd[1]: Created slice kubepods-burstable-pod6fd45872_73b4_4c48_84f4_c62999408dc8.slice. Apr 12 18:55:51.310096 kubelet[1983]: I0412 18:55:51.306260 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6fd45872-73b4-4c48-84f4-c62999408dc8-clustermesh-secrets\") pod \"cilium-m5qp9\" (UID: \"6fd45872-73b4-4c48-84f4-c62999408dc8\") " pod="kube-system/cilium-m5qp9" Apr 12 18:55:51.310096 kubelet[1983]: I0412 18:55:51.306345 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6fd45872-73b4-4c48-84f4-c62999408dc8-cilium-config-path\") pod \"cilium-m5qp9\" (UID: \"6fd45872-73b4-4c48-84f4-c62999408dc8\") " pod="kube-system/cilium-m5qp9" Apr 12 18:55:51.310096 kubelet[1983]: I0412 18:55:51.306379 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfhz9\" (UniqueName: \"kubernetes.io/projected/6fd45872-73b4-4c48-84f4-c62999408dc8-kube-api-access-kfhz9\") pod \"cilium-m5qp9\" (UID: \"6fd45872-73b4-4c48-84f4-c62999408dc8\") " pod="kube-system/cilium-m5qp9" Apr 12 18:55:51.310096 kubelet[1983]: I0412 18:55:51.306413 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6fd45872-73b4-4c48-84f4-c62999408dc8-bpf-maps\") pod \"cilium-m5qp9\" (UID: \"6fd45872-73b4-4c48-84f4-c62999408dc8\") " pod="kube-system/cilium-m5qp9" Apr 12 18:55:51.310096 kubelet[1983]: I0412 18:55:51.306438 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6fd45872-73b4-4c48-84f4-c62999408dc8-cni-path\") pod \"cilium-m5qp9\" (UID: \"6fd45872-73b4-4c48-84f4-c62999408dc8\") " 
pod="kube-system/cilium-m5qp9" Apr 12 18:55:51.310096 kubelet[1983]: I0412 18:55:51.306463 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6fd45872-73b4-4c48-84f4-c62999408dc8-etc-cni-netd\") pod \"cilium-m5qp9\" (UID: \"6fd45872-73b4-4c48-84f4-c62999408dc8\") " pod="kube-system/cilium-m5qp9" Apr 12 18:55:51.310096 kubelet[1983]: I0412 18:55:51.306491 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6fd45872-73b4-4c48-84f4-c62999408dc8-host-proc-sys-kernel\") pod \"cilium-m5qp9\" (UID: \"6fd45872-73b4-4c48-84f4-c62999408dc8\") " pod="kube-system/cilium-m5qp9" Apr 12 18:55:51.310096 kubelet[1983]: I0412 18:55:51.306520 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6fd45872-73b4-4c48-84f4-c62999408dc8-cilium-cgroup\") pod \"cilium-m5qp9\" (UID: \"6fd45872-73b4-4c48-84f4-c62999408dc8\") " pod="kube-system/cilium-m5qp9" Apr 12 18:55:51.310096 kubelet[1983]: I0412 18:55:51.306551 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fd45872-73b4-4c48-84f4-c62999408dc8-lib-modules\") pod \"cilium-m5qp9\" (UID: \"6fd45872-73b4-4c48-84f4-c62999408dc8\") " pod="kube-system/cilium-m5qp9" Apr 12 18:55:51.310096 kubelet[1983]: I0412 18:55:51.306600 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fd45872-73b4-4c48-84f4-c62999408dc8-xtables-lock\") pod \"cilium-m5qp9\" (UID: \"6fd45872-73b4-4c48-84f4-c62999408dc8\") " pod="kube-system/cilium-m5qp9" Apr 12 18:55:51.310096 kubelet[1983]: I0412 18:55:51.306630 1983 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6fd45872-73b4-4c48-84f4-c62999408dc8-cilium-ipsec-secrets\") pod \"cilium-m5qp9\" (UID: \"6fd45872-73b4-4c48-84f4-c62999408dc8\") " pod="kube-system/cilium-m5qp9" Apr 12 18:55:51.310096 kubelet[1983]: I0412 18:55:51.306655 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6fd45872-73b4-4c48-84f4-c62999408dc8-host-proc-sys-net\") pod \"cilium-m5qp9\" (UID: \"6fd45872-73b4-4c48-84f4-c62999408dc8\") " pod="kube-system/cilium-m5qp9" Apr 12 18:55:51.310096 kubelet[1983]: I0412 18:55:51.306681 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6fd45872-73b4-4c48-84f4-c62999408dc8-cilium-run\") pod \"cilium-m5qp9\" (UID: \"6fd45872-73b4-4c48-84f4-c62999408dc8\") " pod="kube-system/cilium-m5qp9" Apr 12 18:55:51.310096 kubelet[1983]: I0412 18:55:51.306707 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6fd45872-73b4-4c48-84f4-c62999408dc8-hostproc\") pod \"cilium-m5qp9\" (UID: \"6fd45872-73b4-4c48-84f4-c62999408dc8\") " pod="kube-system/cilium-m5qp9" Apr 12 18:55:51.310096 kubelet[1983]: I0412 18:55:51.306736 1983 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6fd45872-73b4-4c48-84f4-c62999408dc8-hubble-tls\") pod \"cilium-m5qp9\" (UID: \"6fd45872-73b4-4c48-84f4-c62999408dc8\") " pod="kube-system/cilium-m5qp9" Apr 12 18:55:51.473418 kubelet[1983]: E0412 18:55:51.473342 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Apr 12 18:55:51.477425 env[1124]: time="2024-04-12T18:55:51.474316154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m5qp9,Uid:6fd45872-73b4-4c48-84f4-c62999408dc8,Namespace:kube-system,Attempt:0,}" Apr 12 18:55:51.535585 env[1124]: time="2024-04-12T18:55:51.535193049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:55:51.535585 env[1124]: time="2024-04-12T18:55:51.535247652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:55:51.535585 env[1124]: time="2024-04-12T18:55:51.535262400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:55:51.535585 env[1124]: time="2024-04-12T18:55:51.535473688Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e859123bc7cf920b977bb118a1af16fb2ee34dc5838ea1af0c8f828d2f508e95 pid=4024 runtime=io.containerd.runc.v2 Apr 12 18:55:51.556214 systemd[1]: Started cri-containerd-e859123bc7cf920b977bb118a1af16fb2ee34dc5838ea1af0c8f828d2f508e95.scope. 
Apr 12 18:55:51.657259 env[1124]: time="2024-04-12T18:55:51.657203875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m5qp9,Uid:6fd45872-73b4-4c48-84f4-c62999408dc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e859123bc7cf920b977bb118a1af16fb2ee34dc5838ea1af0c8f828d2f508e95\"" Apr 12 18:55:51.661449 kubelet[1983]: E0412 18:55:51.660466 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:51.664751 env[1124]: time="2024-04-12T18:55:51.664684116Z" level=info msg="CreateContainer within sandbox \"e859123bc7cf920b977bb118a1af16fb2ee34dc5838ea1af0c8f828d2f508e95\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:55:51.763241 env[1124]: time="2024-04-12T18:55:51.763035820Z" level=info msg="CreateContainer within sandbox \"e859123bc7cf920b977bb118a1af16fb2ee34dc5838ea1af0c8f828d2f508e95\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"86068e6d924613ac2da34daa234ec9c8610168603725e2ef8a7ccb693f928b17\"" Apr 12 18:55:51.765693 env[1124]: time="2024-04-12T18:55:51.764258485Z" level=info msg="StartContainer for \"86068e6d924613ac2da34daa234ec9c8610168603725e2ef8a7ccb693f928b17\"" Apr 12 18:55:51.815342 systemd[1]: Started cri-containerd-86068e6d924613ac2da34daa234ec9c8610168603725e2ef8a7ccb693f928b17.scope. Apr 12 18:55:51.915959 env[1124]: time="2024-04-12T18:55:51.915520075Z" level=info msg="StartContainer for \"86068e6d924613ac2da34daa234ec9c8610168603725e2ef8a7ccb693f928b17\" returns successfully" Apr 12 18:55:51.940168 systemd[1]: cri-containerd-86068e6d924613ac2da34daa234ec9c8610168603725e2ef8a7ccb693f928b17.scope: Deactivated successfully. 
Apr 12 18:55:52.010522 kubelet[1983]: E0412 18:55:52.010452 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:52.042920 env[1124]: time="2024-04-12T18:55:52.037441199Z" level=info msg="shim disconnected" id=86068e6d924613ac2da34daa234ec9c8610168603725e2ef8a7ccb693f928b17 Apr 12 18:55:52.042920 env[1124]: time="2024-04-12T18:55:52.037512904Z" level=warning msg="cleaning up after shim disconnected" id=86068e6d924613ac2da34daa234ec9c8610168603725e2ef8a7ccb693f928b17 namespace=k8s.io Apr 12 18:55:52.042920 env[1124]: time="2024-04-12T18:55:52.037526149Z" level=info msg="cleaning up dead shim" Apr 12 18:55:52.053856 env[1124]: time="2024-04-12T18:55:52.053728884Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4106 runtime=io.containerd.runc.v2\n" Apr 12 18:55:52.246372 kubelet[1983]: W0412 18:55:52.245103 1983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f97f266_a4d2_4a7a_9e4f_77c8f288f9dc.slice/cri-containerd-81f3c0eb705cf8986ce249534594f9122ddce699676f8018f01d77fbc336d7f4.scope WatchSource:0}: container "81f3c0eb705cf8986ce249534594f9122ddce699676f8018f01d77fbc336d7f4" in namespace "k8s.io": not found Apr 12 18:55:52.579695 kubelet[1983]: I0412 18:55:52.579180 1983 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc" path="/var/lib/kubelet/pods/1f97f266-a4d2-4a7a-9e4f-77c8f288f9dc/volumes" Apr 12 18:55:52.673403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86068e6d924613ac2da34daa234ec9c8610168603725e2ef8a7ccb693f928b17-rootfs.mount: Deactivated successfully. 
Apr 12 18:55:52.851387 kubelet[1983]: E0412 18:55:52.850363 1983 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 12 18:55:53.023378 kubelet[1983]: E0412 18:55:53.023320 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:53.030109 env[1124]: time="2024-04-12T18:55:53.026726769Z" level=info msg="CreateContainer within sandbox \"e859123bc7cf920b977bb118a1af16fb2ee34dc5838ea1af0c8f828d2f508e95\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 18:55:53.096581 env[1124]: time="2024-04-12T18:55:53.096440734Z" level=info msg="CreateContainer within sandbox \"e859123bc7cf920b977bb118a1af16fb2ee34dc5838ea1af0c8f828d2f508e95\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"04259e87bc136eddd66c15146145d26d3fa6467925e63dfcf202727304002107\"" Apr 12 18:55:53.101361 env[1124]: time="2024-04-12T18:55:53.097637300Z" level=info msg="StartContainer for \"04259e87bc136eddd66c15146145d26d3fa6467925e63dfcf202727304002107\"" Apr 12 18:55:53.150117 systemd[1]: Started cri-containerd-04259e87bc136eddd66c15146145d26d3fa6467925e63dfcf202727304002107.scope. Apr 12 18:55:53.213693 env[1124]: time="2024-04-12T18:55:53.213591462Z" level=info msg="StartContainer for \"04259e87bc136eddd66c15146145d26d3fa6467925e63dfcf202727304002107\" returns successfully" Apr 12 18:55:53.217810 systemd[1]: cri-containerd-04259e87bc136eddd66c15146145d26d3fa6467925e63dfcf202727304002107.scope: Deactivated successfully. 
Apr 12 18:55:53.280124 env[1124]: time="2024-04-12T18:55:53.280040062Z" level=info msg="shim disconnected" id=04259e87bc136eddd66c15146145d26d3fa6467925e63dfcf202727304002107 Apr 12 18:55:53.280124 env[1124]: time="2024-04-12T18:55:53.280112529Z" level=warning msg="cleaning up after shim disconnected" id=04259e87bc136eddd66c15146145d26d3fa6467925e63dfcf202727304002107 namespace=k8s.io Apr 12 18:55:53.280124 env[1124]: time="2024-04-12T18:55:53.280126425Z" level=info msg="cleaning up dead shim" Apr 12 18:55:53.300283 env[1124]: time="2024-04-12T18:55:53.300181300Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4169 runtime=io.containerd.runc.v2\n" Apr 12 18:55:53.673704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04259e87bc136eddd66c15146145d26d3fa6467925e63dfcf202727304002107-rootfs.mount: Deactivated successfully. Apr 12 18:55:54.035572 kubelet[1983]: E0412 18:55:54.035176 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:54.039473 env[1124]: time="2024-04-12T18:55:54.039427038Z" level=info msg="CreateContainer within sandbox \"e859123bc7cf920b977bb118a1af16fb2ee34dc5838ea1af0c8f828d2f508e95\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 18:55:54.103919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3176959320.mount: Deactivated successfully. 
Apr 12 18:55:54.118449 env[1124]: time="2024-04-12T18:55:54.118351923Z" level=info msg="CreateContainer within sandbox \"e859123bc7cf920b977bb118a1af16fb2ee34dc5838ea1af0c8f828d2f508e95\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0b2f39b360920191ff87d589cfffe9c305c390f9fcf91b4a4fbd124d8db8d116\"" Apr 12 18:55:54.119623 env[1124]: time="2024-04-12T18:55:54.119544561Z" level=info msg="StartContainer for \"0b2f39b360920191ff87d589cfffe9c305c390f9fcf91b4a4fbd124d8db8d116\"" Apr 12 18:55:54.178102 systemd[1]: Started cri-containerd-0b2f39b360920191ff87d589cfffe9c305c390f9fcf91b4a4fbd124d8db8d116.scope. Apr 12 18:55:54.261986 env[1124]: time="2024-04-12T18:55:54.261736784Z" level=info msg="StartContainer for \"0b2f39b360920191ff87d589cfffe9c305c390f9fcf91b4a4fbd124d8db8d116\" returns successfully" Apr 12 18:55:54.267203 systemd[1]: cri-containerd-0b2f39b360920191ff87d589cfffe9c305c390f9fcf91b4a4fbd124d8db8d116.scope: Deactivated successfully. Apr 12 18:55:54.359695 env[1124]: time="2024-04-12T18:55:54.359472727Z" level=info msg="shim disconnected" id=0b2f39b360920191ff87d589cfffe9c305c390f9fcf91b4a4fbd124d8db8d116 Apr 12 18:55:54.359695 env[1124]: time="2024-04-12T18:55:54.359667053Z" level=warning msg="cleaning up after shim disconnected" id=0b2f39b360920191ff87d589cfffe9c305c390f9fcf91b4a4fbd124d8db8d116 namespace=k8s.io Apr 12 18:55:54.359695 env[1124]: time="2024-04-12T18:55:54.359683083Z" level=info msg="cleaning up dead shim" Apr 12 18:55:54.395972 env[1124]: time="2024-04-12T18:55:54.395902178Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4222 runtime=io.containerd.runc.v2\n" Apr 12 18:55:54.676717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b2f39b360920191ff87d589cfffe9c305c390f9fcf91b4a4fbd124d8db8d116-rootfs.mount: Deactivated successfully. 
Apr 12 18:55:55.064310 kubelet[1983]: E0412 18:55:55.059285 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:55.066170 env[1124]: time="2024-04-12T18:55:55.064270753Z" level=info msg="CreateContainer within sandbox \"e859123bc7cf920b977bb118a1af16fb2ee34dc5838ea1af0c8f828d2f508e95\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 12 18:55:55.134800 env[1124]: time="2024-04-12T18:55:55.134685000Z" level=info msg="CreateContainer within sandbox \"e859123bc7cf920b977bb118a1af16fb2ee34dc5838ea1af0c8f828d2f508e95\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ed5ac4c3bfa1f3285fb99e2f952d45a48eb88d0aaa16f36e7cd1d04276011474\"" Apr 12 18:55:55.139122 env[1124]: time="2024-04-12T18:55:55.139038476Z" level=info msg="StartContainer for \"ed5ac4c3bfa1f3285fb99e2f952d45a48eb88d0aaa16f36e7cd1d04276011474\"" Apr 12 18:55:55.196416 systemd[1]: Started cri-containerd-ed5ac4c3bfa1f3285fb99e2f952d45a48eb88d0aaa16f36e7cd1d04276011474.scope. Apr 12 18:55:55.260067 systemd[1]: cri-containerd-ed5ac4c3bfa1f3285fb99e2f952d45a48eb88d0aaa16f36e7cd1d04276011474.scope: Deactivated successfully. 
Apr 12 18:55:55.272063 env[1124]: time="2024-04-12T18:55:55.271206546Z" level=info msg="StartContainer for \"ed5ac4c3bfa1f3285fb99e2f952d45a48eb88d0aaa16f36e7cd1d04276011474\" returns successfully" Apr 12 18:55:55.363356 env[1124]: time="2024-04-12T18:55:55.363256344Z" level=info msg="shim disconnected" id=ed5ac4c3bfa1f3285fb99e2f952d45a48eb88d0aaa16f36e7cd1d04276011474 Apr 12 18:55:55.363356 env[1124]: time="2024-04-12T18:55:55.363346304Z" level=warning msg="cleaning up after shim disconnected" id=ed5ac4c3bfa1f3285fb99e2f952d45a48eb88d0aaa16f36e7cd1d04276011474 namespace=k8s.io Apr 12 18:55:55.363356 env[1124]: time="2024-04-12T18:55:55.363360611Z" level=info msg="cleaning up dead shim" Apr 12 18:55:55.371043 kubelet[1983]: W0412 18:55:55.369552 1983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fd45872_73b4_4c48_84f4_c62999408dc8.slice/cri-containerd-86068e6d924613ac2da34daa234ec9c8610168603725e2ef8a7ccb693f928b17.scope WatchSource:0}: task 86068e6d924613ac2da34daa234ec9c8610168603725e2ef8a7ccb693f928b17 not found: not found Apr 12 18:55:55.394855 env[1124]: time="2024-04-12T18:55:55.394744747Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:55:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4278 runtime=io.containerd.runc.v2\n" Apr 12 18:55:55.674330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed5ac4c3bfa1f3285fb99e2f952d45a48eb88d0aaa16f36e7cd1d04276011474-rootfs.mount: Deactivated successfully. 
Apr 12 18:55:56.075235 kubelet[1983]: E0412 18:55:56.074360 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:56.080656 env[1124]: time="2024-04-12T18:55:56.080596976Z" level=info msg="CreateContainer within sandbox \"e859123bc7cf920b977bb118a1af16fb2ee34dc5838ea1af0c8f828d2f508e95\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 12 18:55:56.184693 env[1124]: time="2024-04-12T18:55:56.184578480Z" level=info msg="CreateContainer within sandbox \"e859123bc7cf920b977bb118a1af16fb2ee34dc5838ea1af0c8f828d2f508e95\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5b246d82046d139521cf39a1bc87a768bc1fb9715708cd79b8fea26c169285d1\"" Apr 12 18:55:56.187617 env[1124]: time="2024-04-12T18:55:56.187523741Z" level=info msg="StartContainer for \"5b246d82046d139521cf39a1bc87a768bc1fb9715708cd79b8fea26c169285d1\"" Apr 12 18:55:56.229436 systemd[1]: Started cri-containerd-5b246d82046d139521cf39a1bc87a768bc1fb9715708cd79b8fea26c169285d1.scope. 
Apr 12 18:55:56.353961 kubelet[1983]: I0412 18:55:56.351430 1983 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-04-12T18:55:56Z","lastTransitionTime":"2024-04-12T18:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 12 18:55:56.369480 env[1124]: time="2024-04-12T18:55:56.369373357Z" level=info msg="StartContainer for \"5b246d82046d139521cf39a1bc87a768bc1fb9715708cd79b8fea26c169285d1\" returns successfully" Apr 12 18:55:57.107177 kubelet[1983]: E0412 18:55:57.107138 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:57.144855 kubelet[1983]: I0412 18:55:57.144740 1983 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-m5qp9" podStartSLOduration=6.144662849 podStartE2EDuration="6.144662849s" podCreationTimestamp="2024-04-12 18:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:55:57.138046178 +0000 UTC m=+154.760721299" watchObservedRunningTime="2024-04-12 18:55:57.144662849 +0000 UTC m=+154.767337970" Apr 12 18:55:57.417995 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 12 18:55:58.122194 kubelet[1983]: E0412 18:55:58.121740 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:58.517219 kubelet[1983]: W0412 18:55:58.515718 1983 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fd45872_73b4_4c48_84f4_c62999408dc8.slice/cri-containerd-04259e87bc136eddd66c15146145d26d3fa6467925e63dfcf202727304002107.scope WatchSource:0}: task 04259e87bc136eddd66c15146145d26d3fa6467925e63dfcf202727304002107 not found: not found Apr 12 18:55:58.578826 kubelet[1983]: E0412 18:55:58.576881 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:58.578826 kubelet[1983]: E0412 18:55:58.578347 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:55:59.574727 kubelet[1983]: E0412 18:55:59.574639 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:56:01.655601 kubelet[1983]: W0412 18:56:01.655129 1983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fd45872_73b4_4c48_84f4_c62999408dc8.slice/cri-containerd-0b2f39b360920191ff87d589cfffe9c305c390f9fcf91b4a4fbd124d8db8d116.scope WatchSource:0}: task 0b2f39b360920191ff87d589cfffe9c305c390f9fcf91b4a4fbd124d8db8d116 not found: not found Apr 12 18:56:01.824323 systemd-networkd[1019]: lxc_health: Link UP Apr 12 18:56:01.899080 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Apr 12 18:56:01.898730 systemd-networkd[1019]: lxc_health: Gained carrier Apr 12 18:56:03.475275 kubelet[1983]: E0412 18:56:03.475234 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:56:03.694139 systemd-networkd[1019]: lxc_health: Gained IPv6LL Apr 12 
18:56:04.167513 kubelet[1983]: E0412 18:56:04.167179 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:56:04.524915 systemd[1]: run-containerd-runc-k8s.io-5b246d82046d139521cf39a1bc87a768bc1fb9715708cd79b8fea26c169285d1-runc.IMCZyG.mount: Deactivated successfully. Apr 12 18:56:04.765501 kubelet[1983]: W0412 18:56:04.765444 1983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fd45872_73b4_4c48_84f4_c62999408dc8.slice/cri-containerd-ed5ac4c3bfa1f3285fb99e2f952d45a48eb88d0aaa16f36e7cd1d04276011474.scope WatchSource:0}: task ed5ac4c3bfa1f3285fb99e2f952d45a48eb88d0aaa16f36e7cd1d04276011474 not found: not found Apr 12 18:56:05.180409 kubelet[1983]: E0412 18:56:05.180342 1983 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:56:06.997653 sshd[3888]: pam_unix(sshd:session): session closed for user core Apr 12 18:56:07.000537 systemd[1]: sshd@30-10.0.0.118:22-10.0.0.1:59114.service: Deactivated successfully. Apr 12 18:56:07.001492 systemd[1]: session-31.scope: Deactivated successfully. Apr 12 18:56:07.002367 systemd-logind[1116]: Session 31 logged out. Waiting for processes to exit. Apr 12 18:56:07.003459 systemd-logind[1116]: Removed session 31.