Apr 12 18:45:11.781248 kernel: Linux version 5.15.154-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Apr 12 17:19:00 -00 2024 Apr 12 18:45:11.781276 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf Apr 12 18:45:11.781287 kernel: BIOS-provided physical RAM map: Apr 12 18:45:11.781293 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 12 18:45:11.781298 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Apr 12 18:45:11.781303 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Apr 12 18:45:11.781310 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Apr 12 18:45:11.781316 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Apr 12 18:45:11.781321 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Apr 12 18:45:11.781328 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Apr 12 18:45:11.781333 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Apr 12 18:45:11.781339 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Apr 12 18:45:11.781344 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Apr 12 18:45:11.781350 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Apr 12 18:45:11.781357 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Apr 12 18:45:11.781364 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Apr 12 18:45:11.781370 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Apr 12 18:45:11.781375 kernel: NX (Execute Disable) protection: active Apr 12 18:45:11.781381 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable Apr 12 18:45:11.781387 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable Apr 12 18:45:11.781393 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable Apr 12 18:45:11.781399 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable Apr 12 18:45:11.781404 kernel: extended physical RAM map: Apr 12 18:45:11.781410 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 12 18:45:11.781416 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Apr 12 18:45:11.781424 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Apr 12 18:45:11.781430 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Apr 12 18:45:11.781435 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Apr 12 18:45:11.781441 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Apr 12 18:45:11.781447 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Apr 12 18:45:11.781453 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b1aa017] usable Apr 12 18:45:11.781459 kernel: reserve setup_data: [mem 0x000000009b1aa018-0x000000009b1e6e57] usable Apr 12 18:45:11.781464 kernel: reserve setup_data: [mem 0x000000009b1e6e58-0x000000009b3f7017] usable Apr 12 18:45:11.781470 kernel: reserve setup_data: [mem 
0x000000009b3f7018-0x000000009b400c57] usable Apr 12 18:45:11.781476 kernel: reserve setup_data: [mem 0x000000009b400c58-0x000000009c8eefff] usable Apr 12 18:45:11.781482 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Apr 12 18:45:11.781489 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Apr 12 18:45:11.781495 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Apr 12 18:45:11.781500 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Apr 12 18:45:11.781506 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Apr 12 18:45:11.781515 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Apr 12 18:45:11.781521 kernel: efi: EFI v2.70 by EDK II Apr 12 18:45:11.781528 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018 Apr 12 18:45:11.781535 kernel: random: crng init done Apr 12 18:45:11.781542 kernel: SMBIOS 2.8 present. Apr 12 18:45:11.781548 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Apr 12 18:45:11.781555 kernel: Hypervisor detected: KVM Apr 12 18:45:11.781561 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 12 18:45:11.781567 kernel: kvm-clock: cpu 0, msr d191001, primary cpu clock Apr 12 18:45:11.781574 kernel: kvm-clock: using sched offset of 4242173389 cycles Apr 12 18:45:11.781581 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 12 18:45:11.781587 kernel: tsc: Detected 2794.750 MHz processor Apr 12 18:45:11.781595 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 12 18:45:11.781602 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 12 18:45:11.781608 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Apr 12 18:45:11.781615 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 12 18:45:11.781621 kernel: Using GB pages for direct mapping Apr 12 18:45:11.781628 kernel: Secure boot disabled Apr 12 18:45:11.781634 kernel: ACPI: Early table checksum verification disabled Apr 12 18:45:11.781641 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Apr 12 18:45:11.781647 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Apr 12 18:45:11.781655 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 12 18:45:11.781662 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 12 18:45:11.781668 kernel: ACPI: FACS 0x000000009CBDD000 000040 Apr 12 18:45:11.781674 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 12 18:45:11.781681 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 12 18:45:11.781687 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 12 18:45:11.781694 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Apr 12 18:45:11.781700 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Apr 12 18:45:11.781707 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Apr 12 18:45:11.781714 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Apr 12 18:45:11.781721 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Apr 12 18:45:11.781727 kernel: ACPI: Reserving HPET table memory at [mem 
0x9cb78000-0x9cb78037] Apr 12 18:45:11.781733 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027] Apr 12 18:45:11.781740 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Apr 12 18:45:11.781746 kernel: No NUMA configuration found Apr 12 18:45:11.781753 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Apr 12 18:45:11.781759 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Apr 12 18:45:11.781766 kernel: Zone ranges: Apr 12 18:45:11.781774 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 12 18:45:11.781780 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Apr 12 18:45:11.781786 kernel: Normal empty Apr 12 18:45:11.781793 kernel: Movable zone start for each node Apr 12 18:45:11.781799 kernel: Early memory node ranges Apr 12 18:45:11.781806 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 12 18:45:11.781812 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Apr 12 18:45:11.781818 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Apr 12 18:45:11.781825 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Apr 12 18:45:11.781832 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Apr 12 18:45:11.781839 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Apr 12 18:45:11.781845 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Apr 12 18:45:11.781852 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 12 18:45:11.781858 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 12 18:45:11.781864 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Apr 12 18:45:11.781871 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 12 18:45:11.781877 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Apr 12 18:45:11.781884 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Apr 12 18:45:11.781891 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Apr 12 18:45:11.781898 kernel: ACPI: PM-Timer IO Port: 0xb008 Apr 12 18:45:11.781904 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 12 18:45:11.781911 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 12 18:45:11.781917 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 12 18:45:11.781923 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 12 18:45:11.781930 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 12 18:45:11.781952 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 12 18:45:11.781958 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 12 18:45:11.781966 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 12 18:45:11.781973 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 12 18:45:11.781979 kernel: TSC deadline timer available Apr 12 18:45:11.781985 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 12 18:45:11.781992 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 12 18:45:11.781998 kernel: kvm-guest: setup PV sched yield Apr 12 18:45:11.782004 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Apr 12 18:45:11.782011 kernel: Booting paravirtualized kernel on KVM Apr 12 18:45:11.782018 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 12 18:45:11.782024 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Apr 12 18:45:11.782032 kernel: percpu: 
Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Apr 12 18:45:11.782039 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Apr 12 18:45:11.782050 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 12 18:45:11.782057 kernel: kvm-guest: setup async PF for cpu 0 Apr 12 18:45:11.782064 kernel: kvm-guest: stealtime: cpu 0, msr 9ae1c0c0 Apr 12 18:45:11.782071 kernel: kvm-guest: PV spinlocks enabled Apr 12 18:45:11.782078 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 12 18:45:11.782085 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Apr 12 18:45:11.782091 kernel: Policy zone: DMA32 Apr 12 18:45:11.782099 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf Apr 12 18:45:11.782106 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 12 18:45:11.782114 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 12 18:45:11.782121 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 12 18:45:11.782128 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 12 18:45:11.782136 kernel: Memory: 2398372K/2567000K available (12294K kernel code, 2275K rwdata, 13708K rodata, 47440K init, 4148K bss, 168368K reserved, 0K cma-reserved) Apr 12 18:45:11.782143 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 12 18:45:11.782150 kernel: ftrace: allocating 34508 entries in 135 pages Apr 12 18:45:11.782157 kernel: ftrace: allocated 135 pages with 4 groups Apr 12 18:45:11.782164 kernel: rcu: Hierarchical RCU implementation. Apr 12 18:45:11.782171 kernel: rcu: RCU event tracing is enabled. Apr 12 18:45:11.782178 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 12 18:45:11.782185 kernel: Rude variant of Tasks RCU enabled. Apr 12 18:45:11.782192 kernel: Tracing variant of Tasks RCU enabled. Apr 12 18:45:11.782199 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 12 18:45:11.782206 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 12 18:45:11.782214 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 12 18:45:11.782220 kernel: Console: colour dummy device 80x25 Apr 12 18:45:11.782227 kernel: printk: console [ttyS0] enabled Apr 12 18:45:11.782234 kernel: ACPI: Core revision 20210730 Apr 12 18:45:11.782241 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 12 18:45:11.782248 kernel: APIC: Switch to symmetric I/O mode setup Apr 12 18:45:11.782255 kernel: x2apic enabled Apr 12 18:45:11.782269 kernel: Switched APIC routing to physical x2apic. Apr 12 18:45:11.782276 kernel: kvm-guest: setup PV IPIs Apr 12 18:45:11.782284 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 12 18:45:11.782291 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Apr 12 18:45:11.782297 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Apr 12 18:45:11.782304 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 12 18:45:11.782311 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Apr 12 18:45:11.782318 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Apr 12 18:45:11.782325 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 12 18:45:11.782332 kernel: Spectre V2 : Mitigation: Retpolines Apr 12 18:45:11.782339 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 12 18:45:11.782347 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Apr 12 18:45:11.782353 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Apr 12 18:45:11.782360 kernel: RETBleed: Mitigation: untrained return thunk Apr 12 18:45:11.782367 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 12 18:45:11.782374 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Apr 12 18:45:11.782381 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 12 18:45:11.782388 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 12 18:45:11.782395 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 12 18:45:11.782403 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 12 18:45:11.782410 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Apr 12 18:45:11.782417 kernel: Freeing SMP alternatives memory: 32K Apr 12 18:45:11.782423 kernel: pid_max: default: 32768 minimum: 301 Apr 12 18:45:11.782430 kernel: LSM: Security Framework initializing Apr 12 18:45:11.782437 kernel: SELinux: Initializing. Apr 12 18:45:11.782444 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 12 18:45:11.782450 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 12 18:45:11.782457 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Apr 12 18:45:11.782465 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Apr 12 18:45:11.782472 kernel: ... version: 0 Apr 12 18:45:11.782479 kernel: ... bit width: 48 Apr 12 18:45:11.782485 kernel: ... generic registers: 6 Apr 12 18:45:11.782492 kernel: ... value mask: 0000ffffffffffff Apr 12 18:45:11.782499 kernel: ... max period: 00007fffffffffff Apr 12 18:45:11.782505 kernel: ... fixed-purpose events: 0 Apr 12 18:45:11.782512 kernel: ... event mask: 000000000000003f Apr 12 18:45:11.782519 kernel: signal: max sigframe size: 1776 Apr 12 18:45:11.782526 kernel: rcu: Hierarchical SRCU implementation. Apr 12 18:45:11.782533 kernel: smp: Bringing up secondary CPUs ... Apr 12 18:45:11.782540 kernel: x86: Booting SMP configuration: Apr 12 18:45:11.782547 kernel: .... 
node #0, CPUs: #1 Apr 12 18:45:11.782554 kernel: kvm-clock: cpu 1, msr d191041, secondary cpu clock Apr 12 18:45:11.782560 kernel: kvm-guest: setup async PF for cpu 1 Apr 12 18:45:11.782567 kernel: kvm-guest: stealtime: cpu 1, msr 9ae9c0c0 Apr 12 18:45:11.782574 kernel: #2 Apr 12 18:45:11.782581 kernel: kvm-clock: cpu 2, msr d191081, secondary cpu clock Apr 12 18:45:11.782588 kernel: kvm-guest: setup async PF for cpu 2 Apr 12 18:45:11.782596 kernel: kvm-guest: stealtime: cpu 2, msr 9af1c0c0 Apr 12 18:45:11.782602 kernel: #3 Apr 12 18:45:11.782609 kernel: kvm-clock: cpu 3, msr d1910c1, secondary cpu clock Apr 12 18:45:11.782616 kernel: kvm-guest: setup async PF for cpu 3 Apr 12 18:45:11.782622 kernel: kvm-guest: stealtime: cpu 3, msr 9af9c0c0 Apr 12 18:45:11.782629 kernel: smp: Brought up 1 node, 4 CPUs Apr 12 18:45:11.782636 kernel: smpboot: Max logical packages: 1 Apr 12 18:45:11.782643 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Apr 12 18:45:11.782649 kernel: devtmpfs: initialized Apr 12 18:45:11.782657 kernel: x86/mm: Memory block size: 128MB Apr 12 18:45:11.782664 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Apr 12 18:45:11.782671 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Apr 12 18:45:11.782678 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Apr 12 18:45:11.782685 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Apr 12 18:45:11.782692 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Apr 12 18:45:11.782699 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 12 18:45:11.782706 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 12 18:45:11.782712 kernel: pinctrl core: initialized pinctrl subsystem Apr 12 18:45:11.782720 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 12 18:45:11.782727 kernel: audit: initializing netlink subsys (disabled) Apr 12 18:45:11.782734 kernel: audit: type=2000 audit(1712947511.612:1): state=initialized audit_enabled=0 res=1 Apr 12 18:45:11.782741 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 12 18:45:11.782748 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 12 18:45:11.782754 kernel: cpuidle: using governor menu Apr 12 18:45:11.782761 kernel: ACPI: bus type PCI registered Apr 12 18:45:11.782768 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 12 18:45:11.782775 kernel: dca service started, version 1.12.1 Apr 12 18:45:11.782783 kernel: PCI: Using configuration type 1 for base access Apr 12 18:45:11.782790 kernel: PCI: Using configuration type 1 for extended access Apr 12 18:45:11.782797 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
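The boot messages above record the kernel command line and the Spectre/RETBleed mitigation choices for this guest. As a minimal sketch (not part of the captured log), the same facts can be cross-checked on the booted system from standard procfs/sysfs paths:

#!/usr/bin/env python3
# Sketch: cross-check the command line and mitigation lines reported above.
# Reads standard Linux procfs/sysfs locations on the running guest.
from pathlib import Path

print("cmdline:", Path("/proc/cmdline").read_text().strip())

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):            # e.g. spectre_v2, retbleed
    print(f"{entry.name}: {entry.read_text().strip()}")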
Apr 12 18:45:11.782803 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Apr 12 18:45:11.782810 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Apr 12 18:45:11.782817 kernel: ACPI: Added _OSI(Module Device) Apr 12 18:45:11.782824 kernel: ACPI: Added _OSI(Processor Device) Apr 12 18:45:11.782830 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 12 18:45:11.782837 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 12 18:45:11.782845 kernel: ACPI: Added _OSI(Linux-Dell-Video) Apr 12 18:45:11.782852 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Apr 12 18:45:11.782859 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Apr 12 18:45:11.782866 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 12 18:45:11.782873 kernel: ACPI: Interpreter enabled Apr 12 18:45:11.782882 kernel: ACPI: PM: (supports S0 S3 S5) Apr 12 18:45:11.782889 kernel: ACPI: Using IOAPIC for interrupt routing Apr 12 18:45:11.782895 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 12 18:45:11.782902 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Apr 12 18:45:11.782910 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 12 18:45:11.783029 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 12 18:45:11.783041 kernel: acpiphp: Slot [3] registered Apr 12 18:45:11.783048 kernel: acpiphp: Slot [4] registered Apr 12 18:45:11.783054 kernel: acpiphp: Slot [5] registered Apr 12 18:45:11.783061 kernel: acpiphp: Slot [6] registered Apr 12 18:45:11.783068 kernel: acpiphp: Slot [7] registered Apr 12 18:45:11.783074 kernel: acpiphp: Slot [8] registered Apr 12 18:45:11.783081 kernel: acpiphp: Slot [9] registered Apr 12 18:45:11.783089 kernel: acpiphp: Slot [10] registered Apr 12 18:45:11.783096 kernel: acpiphp: Slot [11] registered Apr 12 18:45:11.783103 kernel: acpiphp: Slot [12] registered Apr 12 18:45:11.783109 kernel: acpiphp: Slot [13] registered Apr 12 18:45:11.783116 kernel: acpiphp: Slot [14] registered Apr 12 18:45:11.783123 kernel: acpiphp: Slot [15] registered Apr 12 18:45:11.783129 kernel: acpiphp: Slot [16] registered Apr 12 18:45:11.783136 kernel: acpiphp: Slot [17] registered Apr 12 18:45:11.783143 kernel: acpiphp: Slot [18] registered Apr 12 18:45:11.783151 kernel: acpiphp: Slot [19] registered Apr 12 18:45:11.783157 kernel: acpiphp: Slot [20] registered Apr 12 18:45:11.783164 kernel: acpiphp: Slot [21] registered Apr 12 18:45:11.783171 kernel: acpiphp: Slot [22] registered Apr 12 18:45:11.783177 kernel: acpiphp: Slot [23] registered Apr 12 18:45:11.783184 kernel: acpiphp: Slot [24] registered Apr 12 18:45:11.783191 kernel: acpiphp: Slot [25] registered Apr 12 18:45:11.783197 kernel: acpiphp: Slot [26] registered Apr 12 18:45:11.783204 kernel: acpiphp: Slot [27] registered Apr 12 18:45:11.783212 kernel: acpiphp: Slot [28] registered Apr 12 18:45:11.783218 kernel: acpiphp: Slot [29] registered Apr 12 18:45:11.783225 kernel: acpiphp: Slot [30] registered Apr 12 18:45:11.783231 kernel: acpiphp: Slot [31] registered Apr 12 18:45:11.783238 kernel: PCI host bridge to bus 0000:00 Apr 12 18:45:11.783317 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 12 18:45:11.783378 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 12 18:45:11.783436 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 12 18:45:11.783694 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff 
window] Apr 12 18:45:11.783760 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window] Apr 12 18:45:11.783817 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 12 18:45:11.783896 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Apr 12 18:45:11.784002 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Apr 12 18:45:11.784078 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Apr 12 18:45:11.784149 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Apr 12 18:45:11.784216 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Apr 12 18:45:11.784291 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Apr 12 18:45:11.784357 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Apr 12 18:45:11.784477 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Apr 12 18:45:11.784564 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Apr 12 18:45:11.784632 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Apr 12 18:45:11.784724 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Apr 12 18:45:11.784802 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Apr 12 18:45:11.784880 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Apr 12 18:45:11.784977 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Apr 12 18:45:11.785044 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Apr 12 18:45:11.785108 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Apr 12 18:45:11.785204 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 12 18:45:11.785321 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Apr 12 18:45:11.785404 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Apr 12 18:45:11.785478 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Apr 12 18:45:11.785545 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Apr 12 18:45:11.785620 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Apr 12 18:45:11.785687 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Apr 12 18:45:11.785755 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Apr 12 18:45:11.785824 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Apr 12 18:45:11.785897 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Apr 12 18:45:11.786026 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Apr 12 18:45:11.786123 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Apr 12 18:45:11.786190 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Apr 12 18:45:11.786256 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Apr 12 18:45:11.786274 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 12 18:45:11.786285 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 12 18:45:11.786292 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 12 18:45:11.786299 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 12 18:45:11.786306 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Apr 12 18:45:11.786313 kernel: iommu: Default domain type: Translated Apr 12 18:45:11.786320 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 12 18:45:11.786390 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Apr 12 18:45:11.786457 kernel: pci 0000:00:02.0: 
vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 12 18:45:11.786524 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Apr 12 18:45:11.786536 kernel: vgaarb: loaded Apr 12 18:45:11.786543 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 12 18:45:11.786550 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 12 18:45:11.786557 kernel: PTP clock support registered Apr 12 18:45:11.786564 kernel: Registered efivars operations Apr 12 18:45:11.786571 kernel: PCI: Using ACPI for IRQ routing Apr 12 18:45:11.786578 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 12 18:45:11.786584 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Apr 12 18:45:11.786591 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Apr 12 18:45:11.786599 kernel: e820: reserve RAM buffer [mem 0x9b1aa018-0x9bffffff] Apr 12 18:45:11.786606 kernel: e820: reserve RAM buffer [mem 0x9b3f7018-0x9bffffff] Apr 12 18:45:11.786614 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Apr 12 18:45:11.786623 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Apr 12 18:45:11.786633 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 12 18:45:11.786644 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 12 18:45:11.786653 kernel: clocksource: Switched to clocksource kvm-clock Apr 12 18:45:11.786662 kernel: VFS: Disk quotas dquot_6.6.0 Apr 12 18:45:11.786673 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 12 18:45:11.786683 kernel: pnp: PnP ACPI init Apr 12 18:45:11.786817 kernel: pnp 00:02: [dma 2] Apr 12 18:45:11.786835 kernel: pnp: PnP ACPI: found 6 devices Apr 12 18:45:11.786847 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 12 18:45:11.786859 kernel: NET: Registered PF_INET protocol family Apr 12 18:45:11.786871 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 12 18:45:11.786883 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 12 18:45:11.786897 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 12 18:45:11.786909 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 12 18:45:11.786918 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Apr 12 18:45:11.786925 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 12 18:45:11.786953 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 12 18:45:11.786960 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 12 18:45:11.786967 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 12 18:45:11.786974 kernel: NET: Registered PF_XDP protocol family Apr 12 18:45:11.787049 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Apr 12 18:45:11.787136 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Apr 12 18:45:11.787199 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 12 18:45:11.787272 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 12 18:45:11.787337 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 12 18:45:11.787396 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Apr 12 18:45:11.787454 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Apr 12 18:45:11.787522 kernel: pci 0000:00:01.0: PIIX3: 
Enabling Passive Release Apr 12 18:45:11.787588 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Apr 12 18:45:11.787659 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Apr 12 18:45:11.787668 kernel: PCI: CLS 0 bytes, default 64 Apr 12 18:45:11.787676 kernel: Initialise system trusted keyrings Apr 12 18:45:11.787683 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 12 18:45:11.787690 kernel: Key type asymmetric registered Apr 12 18:45:11.787697 kernel: Asymmetric key parser 'x509' registered Apr 12 18:45:11.787704 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Apr 12 18:45:11.787711 kernel: io scheduler mq-deadline registered Apr 12 18:45:11.787721 kernel: io scheduler kyber registered Apr 12 18:45:11.787728 kernel: io scheduler bfq registered Apr 12 18:45:11.787735 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 12 18:45:11.787743 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Apr 12 18:45:11.787750 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Apr 12 18:45:11.787757 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Apr 12 18:45:11.787764 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 12 18:45:11.787772 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 12 18:45:11.787779 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 12 18:45:11.787786 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 12 18:45:11.787794 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 12 18:45:11.787802 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 12 18:45:11.787873 kernel: rtc_cmos 00:05: RTC can wake from S4 Apr 12 18:45:11.787948 kernel: rtc_cmos 00:05: registered as rtc0 Apr 12 18:45:11.788013 kernel: rtc_cmos 00:05: setting system clock to 2024-04-12T18:45:11 UTC (1712947511) Apr 12 18:45:11.788075 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 12 18:45:11.788084 kernel: efifb: probing for efifb Apr 12 18:45:11.788092 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Apr 12 18:45:11.788099 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Apr 12 18:45:11.788106 kernel: efifb: scrolling: redraw Apr 12 18:45:11.788114 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 12 18:45:11.788121 kernel: Console: switching to colour frame buffer device 160x50 Apr 12 18:45:11.788128 kernel: fb0: EFI VGA frame buffer device Apr 12 18:45:11.788137 kernel: pstore: Registered efi as persistent store backend Apr 12 18:45:11.788144 kernel: NET: Registered PF_INET6 protocol family Apr 12 18:45:11.788151 kernel: Segment Routing with IPv6 Apr 12 18:45:11.788158 kernel: In-situ OAM (IOAM) with IPv6 Apr 12 18:45:11.788165 kernel: NET: Registered PF_PACKET protocol family Apr 12 18:45:11.788173 kernel: Key type dns_resolver registered Apr 12 18:45:11.788179 kernel: IPI shorthand broadcast: enabled Apr 12 18:45:11.788187 kernel: sched_clock: Marking stable (435571678, 123968721)->(571091527, -11551128) Apr 12 18:45:11.788194 kernel: registered taskstats version 1 Apr 12 18:45:11.788202 kernel: Loading compiled-in X.509 certificates Apr 12 18:45:11.788210 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.154-flatcar: 1fa140a38fc6bd27c8b56127e4d1eb4f665c7ec4' Apr 12 18:45:11.788217 kernel: Key type .fscrypt registered Apr 12 18:45:11.788225 kernel: Key type fscrypt-provisioning registered Apr 12 18:45:11.788232 kernel: pstore: 
Using crash dump compression: deflate Apr 12 18:45:11.788248 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 12 18:45:11.788256 kernel: ima: Allocated hash algorithm: sha1 Apr 12 18:45:11.788272 kernel: ima: No architecture policies found Apr 12 18:45:11.788279 kernel: Freeing unused kernel image (initmem) memory: 47440K Apr 12 18:45:11.788288 kernel: Write protecting the kernel read-only data: 28672k Apr 12 18:45:11.788295 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Apr 12 18:45:11.788302 kernel: Freeing unused kernel image (rodata/data gap) memory: 628K Apr 12 18:45:11.788311 kernel: Run /init as init process Apr 12 18:45:11.788318 kernel: with arguments: Apr 12 18:45:11.788325 kernel: /init Apr 12 18:45:11.788332 kernel: with environment: Apr 12 18:45:11.788339 kernel: HOME=/ Apr 12 18:45:11.788345 kernel: TERM=linux Apr 12 18:45:11.788353 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 12 18:45:11.788363 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Apr 12 18:45:11.788372 systemd[1]: Detected virtualization kvm. Apr 12 18:45:11.788380 systemd[1]: Detected architecture x86-64. Apr 12 18:45:11.788387 systemd[1]: Running in initrd. Apr 12 18:45:11.788395 systemd[1]: No hostname configured, using default hostname. Apr 12 18:45:11.788402 systemd[1]: Hostname set to . Apr 12 18:45:11.788412 systemd[1]: Initializing machine ID from VM UUID. Apr 12 18:45:11.788419 systemd[1]: Queued start job for default target initrd.target. Apr 12 18:45:11.788427 systemd[1]: Started systemd-ask-password-console.path. Apr 12 18:45:11.788441 systemd[1]: Reached target cryptsetup.target. Apr 12 18:45:11.788449 systemd[1]: Reached target paths.target. Apr 12 18:45:11.788457 systemd[1]: Reached target slices.target. Apr 12 18:45:11.788464 systemd[1]: Reached target swap.target. Apr 12 18:45:11.788471 systemd[1]: Reached target timers.target. Apr 12 18:45:11.788481 systemd[1]: Listening on iscsid.socket. Apr 12 18:45:11.788489 systemd[1]: Listening on iscsiuio.socket. Apr 12 18:45:11.788497 systemd[1]: Listening on systemd-journald-audit.socket. Apr 12 18:45:11.788504 systemd[1]: Listening on systemd-journald-dev-log.socket. Apr 12 18:45:11.788512 systemd[1]: Listening on systemd-journald.socket. Apr 12 18:45:11.788519 systemd[1]: Listening on systemd-networkd.socket. Apr 12 18:45:11.788527 systemd[1]: Listening on systemd-udevd-control.socket. Apr 12 18:45:11.788534 systemd[1]: Listening on systemd-udevd-kernel.socket. Apr 12 18:45:11.788542 systemd[1]: Reached target sockets.target. Apr 12 18:45:11.788551 systemd[1]: Starting kmod-static-nodes.service... Apr 12 18:45:11.788558 systemd[1]: Finished network-cleanup.service. Apr 12 18:45:11.788566 systemd[1]: Starting systemd-fsck-usr.service... Apr 12 18:45:11.788574 systemd[1]: Starting systemd-journald.service... Apr 12 18:45:11.788581 systemd[1]: Starting systemd-modules-load.service... Apr 12 18:45:11.788589 systemd[1]: Starting systemd-resolved.service... Apr 12 18:45:11.788596 systemd[1]: Starting systemd-vconsole-setup.service... Apr 12 18:45:11.788604 systemd[1]: Finished kmod-static-nodes.service. Apr 12 18:45:11.788611 systemd[1]: Finished systemd-fsck-usr.service. 
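At this point the kernel hands off to /init and systemd reports the detected virtualization and machine ID. A small illustrative sketch (assuming the standard systemd-detect-virt tool and /etc/machine-id file, which are not shown in the log) to confirm those values on the running system:

#!/usr/bin/env python3
# Sketch: confirm the virtualization type and machine ID that systemd logs above.
import subprocess
from pathlib import Path

virt = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True)
print("virtualization:", virt.stdout.strip())        # expected here: kvm

print("machine-id:", Path("/etc/machine-id").read_text().strip())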
Apr 12 18:45:11.788620 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Apr 12 18:45:11.788628 systemd[1]: Finished systemd-vconsole-setup.service. Apr 12 18:45:11.788636 kernel: audit: type=1130 audit(1712947511.779:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:11.788644 systemd[1]: Starting dracut-cmdline-ask.service... Apr 12 18:45:11.788651 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Apr 12 18:45:11.788662 systemd-journald[197]: Journal started Apr 12 18:45:11.788700 systemd-journald[197]: Runtime Journal (/run/log/journal/6f740b73705246deb5ab2fc8c1a35d82) is 6.0M, max 48.4M, 42.4M free. Apr 12 18:45:11.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:11.786089 systemd-modules-load[198]: Inserted module 'overlay' Apr 12 18:45:11.793434 systemd[1]: Started systemd-journald.service. Apr 12 18:45:11.793449 kernel: audit: type=1130 audit(1712947511.788:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:11.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:11.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:11.798175 kernel: audit: type=1130 audit(1712947511.793:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:11.804320 systemd[1]: Finished dracut-cmdline-ask.service. Apr 12 18:45:11.809588 kernel: audit: type=1130 audit(1712947511.804:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:11.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:11.805817 systemd[1]: Starting dracut-cmdline.service... Apr 12 18:45:11.809438 systemd-resolved[199]: Positive Trust Anchors: Apr 12 18:45:11.809445 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 12 18:45:11.809472 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Apr 12 18:45:11.811562 systemd-resolved[199]: Defaulting to hostname 'linux'. 
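The dracut-cmdline entries that follow re-parse the kernel command line logged earlier (root=LABEL=ROOT, flatcar.first_boot=detected, verity.usrhash=..., and so on). A generic sketch of that key/value lookup, using an abbreviated copy of the logged command line rather than dracut's actual parser:

#!/usr/bin/env python3
# Sketch: split a kernel command line (as logged above) into flags the way
# initrd-side tools look them up. Generic illustration, not dracut code.
import shlex

cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected")

params = {}
for token in shlex.split(cmdline):
    key, _, value = token.partition("=")
    params[key] = value                       # flag-only tokens map to ""

print(params["root"])                         # LABEL=ROOT
print(params.get("flatcar.first_boot"))       # detected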
Apr 12 18:45:11.823028 kernel: audit: type=1130 audit(1712947511.812:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:11.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:11.823072 dracut-cmdline[216]: dracut-dracut-053 Apr 12 18:45:11.823072 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf Apr 12 18:45:11.812223 systemd[1]: Started systemd-resolved.service. Apr 12 18:45:11.813389 systemd[1]: Reached target nss-lookup.target. Apr 12 18:45:11.845957 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 12 18:45:11.850775 systemd-modules-load[198]: Inserted module 'br_netfilter' Apr 12 18:45:11.851655 kernel: Bridge firewalling registered Apr 12 18:45:11.867964 kernel: SCSI subsystem initialized Apr 12 18:45:11.879029 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 12 18:45:11.879046 kernel: device-mapper: uevent: version 1.0.3 Apr 12 18:45:11.880311 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Apr 12 18:45:11.881952 kernel: Loading iSCSI transport class v2.0-870. Apr 12 18:45:11.882990 systemd-modules-load[198]: Inserted module 'dm_multipath' Apr 12 18:45:11.884427 systemd[1]: Finished systemd-modules-load.service. Apr 12 18:45:11.889639 kernel: audit: type=1130 audit(1712947511.884:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:11.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:11.885844 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:45:11.893431 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:45:11.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:11.897949 kernel: audit: type=1130 audit(1712947511.893:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:11.902959 kernel: iscsi: registered transport (tcp) Apr 12 18:45:11.924227 kernel: iscsi: registered transport (qla4xxx) Apr 12 18:45:11.924254 kernel: QLogic iSCSI HBA Driver Apr 12 18:45:11.943508 systemd[1]: Finished dracut-cmdline.service. Apr 12 18:45:11.948866 kernel: audit: type=1130 audit(1712947511.944:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Apr 12 18:45:11.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:11.945396 systemd[1]: Starting dracut-pre-udev.service... Apr 12 18:45:11.989959 kernel: raid6: avx2x4 gen() 29707 MB/s Apr 12 18:45:12.006960 kernel: raid6: avx2x4 xor() 7240 MB/s Apr 12 18:45:12.023954 kernel: raid6: avx2x2 gen() 31609 MB/s Apr 12 18:45:12.040953 kernel: raid6: avx2x2 xor() 18785 MB/s Apr 12 18:45:12.057953 kernel: raid6: avx2x1 gen() 25787 MB/s Apr 12 18:45:12.074952 kernel: raid6: avx2x1 xor() 15016 MB/s Apr 12 18:45:12.091955 kernel: raid6: sse2x4 gen() 14478 MB/s Apr 12 18:45:12.108956 kernel: raid6: sse2x4 xor() 6846 MB/s Apr 12 18:45:12.125954 kernel: raid6: sse2x2 gen() 15719 MB/s Apr 12 18:45:12.142965 kernel: raid6: sse2x2 xor() 9607 MB/s Apr 12 18:45:12.159966 kernel: raid6: sse2x1 gen() 11780 MB/s Apr 12 18:45:12.177358 kernel: raid6: sse2x1 xor() 7404 MB/s Apr 12 18:45:12.177376 kernel: raid6: using algorithm avx2x2 gen() 31609 MB/s Apr 12 18:45:12.177392 kernel: raid6: .... xor() 18785 MB/s, rmw enabled Apr 12 18:45:12.178079 kernel: raid6: using avx2x2 recovery algorithm Apr 12 18:45:12.189957 kernel: xor: automatically using best checksumming function avx Apr 12 18:45:12.278963 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Apr 12 18:45:12.286732 systemd[1]: Finished dracut-pre-udev.service. Apr 12 18:45:12.291455 kernel: audit: type=1130 audit(1712947512.286:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:12.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:12.290000 audit: BPF prog-id=7 op=LOAD Apr 12 18:45:12.290000 audit: BPF prog-id=8 op=LOAD Apr 12 18:45:12.291778 systemd[1]: Starting systemd-udevd.service... Apr 12 18:45:12.303273 systemd-udevd[401]: Using default interface naming scheme 'v252'. Apr 12 18:45:12.306922 systemd[1]: Started systemd-udevd.service. Apr 12 18:45:12.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:12.310188 systemd[1]: Starting dracut-pre-trigger.service... Apr 12 18:45:12.320493 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Apr 12 18:45:12.343333 systemd[1]: Finished dracut-pre-trigger.service. Apr 12 18:45:12.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:12.344374 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 18:45:12.375723 systemd[1]: Finished systemd-udev-trigger.service. Apr 12 18:45:12.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:45:12.400960 kernel: cryptd: max_cpu_qlen set to 1000 Apr 12 18:45:12.406079 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 12 18:45:12.418474 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 12 18:45:12.418503 kernel: GPT:9289727 != 19775487 Apr 12 18:45:12.418516 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 12 18:45:12.418525 kernel: GPT:9289727 != 19775487 Apr 12 18:45:12.419001 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 12 18:45:12.420544 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:45:12.421966 kernel: libata version 3.00 loaded. Apr 12 18:45:12.422006 kernel: AVX2 version of gcm_enc/dec engaged. Apr 12 18:45:12.422017 kernel: AES CTR mode by8 optimization enabled Apr 12 18:45:12.426956 kernel: ata_piix 0000:00:01.1: version 2.13 Apr 12 18:45:12.428963 kernel: scsi host0: ata_piix Apr 12 18:45:12.433252 kernel: scsi host1: ata_piix Apr 12 18:45:12.433434 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Apr 12 18:45:12.433450 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Apr 12 18:45:12.442023 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Apr 12 18:45:12.445831 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (446) Apr 12 18:45:12.442815 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Apr 12 18:45:12.448404 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Apr 12 18:45:12.454679 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Apr 12 18:45:12.458569 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Apr 12 18:45:12.460132 systemd[1]: Starting disk-uuid.service... Apr 12 18:45:12.466764 disk-uuid[516]: Primary Header is updated. Apr 12 18:45:12.466764 disk-uuid[516]: Secondary Entries is updated. Apr 12 18:45:12.466764 disk-uuid[516]: Secondary Header is updated. Apr 12 18:45:12.470258 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:45:12.472961 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:45:12.588014 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 12 18:45:12.589963 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 12 18:45:12.620302 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 12 18:45:12.620451 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 12 18:45:12.637962 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Apr 12 18:45:13.476618 disk-uuid[517]: The operation has completed successfully. Apr 12 18:45:13.477853 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:45:13.502139 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 12 18:45:13.502254 systemd[1]: Finished disk-uuid.service. Apr 12 18:45:13.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:13.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:13.511643 systemd[1]: Starting verity-setup.service... Apr 12 18:45:13.525967 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 12 18:45:13.549621 systemd[1]: Found device dev-mapper-usr.device. 
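The GPT warnings above ("GPT:9289727 != 19775487") come down to simple arithmetic: the backup GPT header should sit at the disk's last LBA, but it is found much earlier, which is typical after a raw image has been enlarged; disk-uuid.service then rewrites the headers. A sketch of that arithmetic using only the numbers from the log:

#!/usr/bin/env python3
# Sketch: why the kernel warns that the alternate GPT header is not at the
# end of the disk, using the figures logged above.
total_sectors = 19775488            # "19775488 512-byte logical blocks"
expected_alt_lba = total_sectors - 1
found_alt_lba = 9289727             # "GPT:9289727 != 19775487"

print(expected_alt_lba)                         # 19775487
print(found_alt_lba == expected_alt_lba)        # False -> header not at end of disk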
Apr 12 18:45:13.552304 systemd[1]: Mounting sysusr-usr.mount... Apr 12 18:45:13.556299 systemd[1]: Finished verity-setup.service. Apr 12 18:45:13.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:13.625851 systemd[1]: Mounted sysusr-usr.mount. Apr 12 18:45:13.627415 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Apr 12 18:45:13.627431 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Apr 12 18:45:13.629440 systemd[1]: Starting ignition-setup.service... Apr 12 18:45:13.631445 systemd[1]: Starting parse-ip-for-networkd.service... Apr 12 18:45:13.638984 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 12 18:45:13.639011 kernel: BTRFS info (device vda6): using free space tree Apr 12 18:45:13.639030 kernel: BTRFS info (device vda6): has skinny extents Apr 12 18:45:13.648654 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 12 18:45:13.657715 systemd[1]: Finished ignition-setup.service. Apr 12 18:45:13.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:13.658896 systemd[1]: Starting ignition-fetch-offline.service... Apr 12 18:45:13.698036 ignition[636]: Ignition 2.14.0 Apr 12 18:45:13.698049 ignition[636]: Stage: fetch-offline Apr 12 18:45:13.698145 ignition[636]: no configs at "/usr/lib/ignition/base.d" Apr 12 18:45:13.698157 ignition[636]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:45:13.698300 ignition[636]: parsed url from cmdline: "" Apr 12 18:45:13.698304 ignition[636]: no config URL provided Apr 12 18:45:13.698311 ignition[636]: reading system config file "/usr/lib/ignition/user.ign" Apr 12 18:45:13.698319 ignition[636]: no config at "/usr/lib/ignition/user.ign" Apr 12 18:45:13.698339 ignition[636]: op(1): [started] loading QEMU firmware config module Apr 12 18:45:13.698344 ignition[636]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 12 18:45:13.703218 ignition[636]: op(1): [finished] loading QEMU firmware config module Apr 12 18:45:13.703241 ignition[636]: QEMU firmware config was not found. Ignoring... Apr 12 18:45:13.713915 systemd[1]: Finished parse-ip-for-networkd.service. Apr 12 18:45:13.716789 systemd[1]: Starting systemd-networkd.service... Apr 12 18:45:13.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:13.716000 audit: BPF prog-id=9 op=LOAD Apr 12 18:45:13.786532 ignition[636]: parsing config with SHA512: d108f667b705903bbd66f00a70d456301cc6ad7ebd7f37a2b95189fec13c68dcb924a1a7f00d613cff9d80be16de7e727fc9aa06c94c516a0651486cf3173306 Apr 12 18:45:13.807195 systemd-networkd[712]: lo: Link UP Apr 12 18:45:13.807219 systemd-networkd[712]: lo: Gained carrier Apr 12 18:45:13.809080 systemd-networkd[712]: Enumeration completed Apr 12 18:45:13.809909 systemd[1]: Started systemd-networkd.service. Apr 12 18:45:13.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:45:13.811746 systemd[1]: Reached target network.target. Apr 12 18:45:13.813913 systemd[1]: Starting iscsiuio.service... Apr 12 18:45:13.815380 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 18:45:13.819160 systemd-networkd[712]: eth0: Link UP Apr 12 18:45:13.819170 systemd-networkd[712]: eth0: Gained carrier Apr 12 18:45:13.821825 systemd[1]: Started iscsiuio.service. Apr 12 18:45:13.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:13.824477 systemd[1]: Starting iscsid.service... Apr 12 18:45:13.827887 iscsid[717]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:45:13.827887 iscsid[717]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Apr 12 18:45:13.827887 iscsid[717]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Apr 12 18:45:13.827887 iscsid[717]: If using hardware iscsi like qla4xxx this message can be ignored. Apr 12 18:45:13.827887 iscsid[717]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:45:13.827887 iscsid[717]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Apr 12 18:45:13.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:13.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:13.830434 unknown[636]: fetched base config from "system" Apr 12 18:45:13.831274 ignition[636]: fetch-offline: fetch-offline passed Apr 12 18:45:13.830442 unknown[636]: fetched user config from "qemu" Apr 12 18:45:13.831343 ignition[636]: Ignition finished successfully Apr 12 18:45:13.833083 systemd[1]: Started iscsid.service. Apr 12 18:45:13.836031 systemd[1]: Finished ignition-fetch-offline.service. Apr 12 18:45:13.838036 systemd-networkd[712]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 12 18:45:13.850573 ignition[719]: Ignition 2.14.0 Apr 12 18:45:13.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:13.838854 systemd[1]: Starting dracut-initqueue.service... Apr 12 18:45:13.850581 ignition[719]: Stage: kargs Apr 12 18:45:13.840002 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 12 18:45:13.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:45:13.852759 ignition[719]: no configs at "/usr/lib/ignition/base.d" Apr 12 18:45:13.840692 systemd[1]: Starting ignition-kargs.service... Apr 12 18:45:13.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:13.852778 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:45:13.850868 systemd[1]: Finished dracut-initqueue.service. Apr 12 18:45:13.855492 ignition[719]: kargs: kargs passed Apr 12 18:45:13.853478 systemd[1]: Reached target remote-fs-pre.target. Apr 12 18:45:13.855546 ignition[719]: Ignition finished successfully Apr 12 18:45:13.854493 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 18:45:13.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:13.868620 ignition[733]: Ignition 2.14.0 Apr 12 18:45:13.855473 systemd[1]: Reached target remote-fs.target. Apr 12 18:45:13.868626 ignition[733]: Stage: disks Apr 12 18:45:13.857887 systemd[1]: Starting dracut-pre-mount.service... Apr 12 18:45:13.868701 ignition[733]: no configs at "/usr/lib/ignition/base.d" Apr 12 18:45:13.858962 systemd[1]: Finished ignition-kargs.service. Apr 12 18:45:13.868709 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:45:13.861177 systemd[1]: Starting ignition-disks.service... Apr 12 18:45:13.869820 ignition[733]: disks: disks passed Apr 12 18:45:13.865413 systemd[1]: Finished dracut-pre-mount.service. Apr 12 18:45:13.869850 ignition[733]: Ignition finished successfully Apr 12 18:45:13.870562 systemd[1]: Finished ignition-disks.service. Apr 12 18:45:13.872026 systemd[1]: Reached target initrd-root-device.target. Apr 12 18:45:13.873863 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:45:13.874825 systemd[1]: Reached target local-fs.target. Apr 12 18:45:13.875750 systemd[1]: Reached target sysinit.target. Apr 12 18:45:13.877298 systemd[1]: Reached target basic.target. Apr 12 18:45:13.893267 systemd-fsck[746]: ROOT: clean, 612/553520 files, 56019/553472 blocks Apr 12 18:45:13.879003 systemd[1]: Starting systemd-fsck-root.service... Apr 12 18:45:13.898243 systemd[1]: Finished systemd-fsck-root.service. Apr 12 18:45:13.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:13.899469 systemd[1]: Mounting sysroot.mount... Apr 12 18:45:13.905963 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Apr 12 18:45:13.906409 systemd[1]: Mounted sysroot.mount. Apr 12 18:45:13.906894 systemd[1]: Reached target initrd-root-fs.target. Apr 12 18:45:13.909300 systemd[1]: Mounting sysroot-usr.mount... Apr 12 18:45:13.910149 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Apr 12 18:45:13.910180 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 12 18:45:13.910200 systemd[1]: Reached target ignition-diskful.target. Apr 12 18:45:13.912348 systemd[1]: Mounted sysroot-usr.mount. Apr 12 18:45:13.915695 systemd[1]: Starting initrd-setup-root.service... 
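The iscsid warning logged above is harmless on this VM (no software-iSCSI sessions are attempted), and the fix it describes is a one-line file. A minimal sketch of that file, with a purely hypothetical IQN chosen for illustration:

    # /etc/iscsi/initiatorname.iscsi -- example only; the IQN value below is hypothetical
    InitiatorName=iqn.2024-04.io.example:node0

The same file is also where iscsid looks for InitiatorAlias=, which is why both lookups in the log point at the one path.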
Apr 12 18:45:13.922753 initrd-setup-root[756]: cut: /sysroot/etc/passwd: No such file or directory Apr 12 18:45:13.927218 initrd-setup-root[764]: cut: /sysroot/etc/group: No such file or directory Apr 12 18:45:13.931101 initrd-setup-root[772]: cut: /sysroot/etc/shadow: No such file or directory Apr 12 18:45:13.933872 initrd-setup-root[780]: cut: /sysroot/etc/gshadow: No such file or directory Apr 12 18:45:13.957750 systemd[1]: Finished initrd-setup-root.service. Apr 12 18:45:13.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:13.960209 systemd[1]: Starting ignition-mount.service... Apr 12 18:45:13.962402 systemd[1]: Starting sysroot-boot.service... Apr 12 18:45:13.965284 bash[797]: umount: /sysroot/usr/share/oem: not mounted. Apr 12 18:45:13.973763 ignition[798]: INFO : Ignition 2.14.0 Apr 12 18:45:13.973763 ignition[798]: INFO : Stage: mount Apr 12 18:45:13.976010 ignition[798]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 12 18:45:13.976010 ignition[798]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:45:13.976010 ignition[798]: INFO : mount: mount passed Apr 12 18:45:13.976010 ignition[798]: INFO : Ignition finished successfully Apr 12 18:45:13.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:13.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:13.976376 systemd[1]: Finished ignition-mount.service. Apr 12 18:45:13.979213 systemd[1]: Finished sysroot-boot.service. Apr 12 18:45:14.561905 systemd[1]: Mounting sysroot-usr-share-oem.mount... Apr 12 18:45:14.568953 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (807) Apr 12 18:45:14.568983 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 12 18:45:14.570689 kernel: BTRFS info (device vda6): using free space tree Apr 12 18:45:14.570706 kernel: BTRFS info (device vda6): has skinny extents Apr 12 18:45:14.574067 systemd[1]: Mounted sysroot-usr-share-oem.mount. Apr 12 18:45:14.575626 systemd[1]: Starting ignition-files.service... 
Apr 12 18:45:14.588642 ignition[827]: INFO : Ignition 2.14.0 Apr 12 18:45:14.588642 ignition[827]: INFO : Stage: files Apr 12 18:45:14.590731 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 12 18:45:14.590731 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:45:14.590731 ignition[827]: DEBUG : files: compiled without relabeling support, skipping Apr 12 18:45:14.590731 ignition[827]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 12 18:45:14.590731 ignition[827]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 12 18:45:14.597894 ignition[827]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 12 18:45:14.597894 ignition[827]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 12 18:45:14.597894 ignition[827]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 12 18:45:14.597894 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 12 18:45:14.597894 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 12 18:45:14.592530 unknown[827]: wrote ssh authorized keys file for user: core Apr 12 18:45:14.636632 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 12 18:45:14.712854 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 12 18:45:14.712854 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Apr 12 18:45:14.716977 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Apr 12 18:45:15.044079 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 12 18:45:15.172071 ignition[827]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Apr 12 18:45:15.175076 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Apr 12 18:45:15.175076 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Apr 12 18:45:15.175076 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Apr 12 18:45:15.446825 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Apr 12 18:45:15.529878 ignition[827]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Apr 12 18:45:15.532781 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Apr 12 18:45:15.532781 
ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 12 18:45:15.536262 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 12 18:45:15.537993 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Apr 12 18:45:15.539591 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Apr 12 18:45:15.562029 systemd-networkd[712]: eth0: Gained IPv6LL Apr 12 18:45:15.624190 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Apr 12 18:45:15.897636 ignition[827]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Apr 12 18:45:15.900725 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Apr 12 18:45:15.900725 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Apr 12 18:45:15.900725 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl: attempt #1 Apr 12 18:45:15.956513 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Apr 12 18:45:16.184304 ignition[827]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83 Apr 12 18:45:16.184304 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl" Apr 12 18:45:16.189278 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet" Apr 12 18:45:16.189278 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Apr 12 18:45:16.242968 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Apr 12 18:45:16.666450 ignition[827]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Apr 12 18:45:16.666450 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet" Apr 12 18:45:16.671847 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Apr 12 18:45:16.671847 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Apr 12 18:45:16.671847 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 12 18:45:16.671847 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 12 18:45:17.068744 ignition[827]: INFO : files: 
createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 12 18:45:17.146220 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 12 18:45:17.146220 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh" Apr 12 18:45:17.150706 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh" Apr 12 18:45:17.150706 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 12 18:45:17.150706 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 12 18:45:17.150706 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 12 18:45:17.150706 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 12 18:45:17.150706 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 12 18:45:17.150706 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 12 18:45:17.150706 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 12 18:45:17.150706 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 12 18:45:17.150706 ignition[827]: INFO : files: op(11): [started] processing unit "prepare-cni-plugins.service" Apr 12 18:45:17.150706 ignition[827]: INFO : files: op(11): op(12): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Apr 12 18:45:17.150706 ignition[827]: INFO : files: op(11): op(12): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Apr 12 18:45:17.150706 ignition[827]: INFO : files: op(11): [finished] processing unit "prepare-cni-plugins.service" Apr 12 18:45:17.150706 ignition[827]: INFO : files: op(13): [started] processing unit "prepare-critools.service" Apr 12 18:45:17.150706 ignition[827]: INFO : files: op(13): op(14): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Apr 12 18:45:17.150706 ignition[827]: INFO : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Apr 12 18:45:17.150706 ignition[827]: INFO : files: op(13): [finished] processing unit "prepare-critools.service" Apr 12 18:45:17.184152 ignition[827]: INFO : files: op(15): [started] processing unit "prepare-helm.service" Apr 12 18:45:17.184152 ignition[827]: INFO : files: op(15): op(16): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 12 18:45:17.184152 ignition[827]: INFO : files: op(15): op(16): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 12 18:45:17.184152 ignition[827]: INFO : files: op(15): [finished] processing unit "prepare-helm.service" Apr 12 18:45:17.184152 ignition[827]: INFO : files: op(17): [started] 
processing unit "coreos-metadata.service" Apr 12 18:45:17.184152 ignition[827]: INFO : files: op(17): op(18): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 12 18:45:17.184152 ignition[827]: INFO : files: op(17): op(18): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 12 18:45:17.184152 ignition[827]: INFO : files: op(17): [finished] processing unit "coreos-metadata.service" Apr 12 18:45:17.184152 ignition[827]: INFO : files: op(19): [started] processing unit "containerd.service" Apr 12 18:45:17.184152 ignition[827]: INFO : files: op(19): op(1a): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 12 18:45:17.184152 ignition[827]: INFO : files: op(19): op(1a): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 12 18:45:17.184152 ignition[827]: INFO : files: op(19): [finished] processing unit "containerd.service" Apr 12 18:45:17.184152 ignition[827]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-critools.service" Apr 12 18:45:17.184152 ignition[827]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-critools.service" Apr 12 18:45:17.184152 ignition[827]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service" Apr 12 18:45:17.184152 ignition[827]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service" Apr 12 18:45:17.184152 ignition[827]: INFO : files: op(1d): [started] setting preset to disabled for "coreos-metadata.service" Apr 12 18:45:17.184152 ignition[827]: INFO : files: op(1d): op(1e): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 12 18:45:17.214404 ignition[827]: INFO : files: op(1d): op(1e): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 12 18:45:17.214404 ignition[827]: INFO : files: op(1d): [finished] setting preset to disabled for "coreos-metadata.service" Apr 12 18:45:17.214404 ignition[827]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-cni-plugins.service" Apr 12 18:45:17.214404 ignition[827]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-cni-plugins.service" Apr 12 18:45:17.220401 ignition[827]: INFO : files: createResultFile: createFiles: op(20): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 12 18:45:17.222176 ignition[827]: INFO : files: createResultFile: createFiles: op(20): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 12 18:45:17.222176 ignition[827]: INFO : files: files passed Apr 12 18:45:17.224837 ignition[827]: INFO : Ignition finished successfully Apr 12 18:45:17.227338 systemd[1]: Finished ignition-files.service. Apr 12 18:45:17.234318 kernel: kauditd_printk_skb: 23 callbacks suppressed Apr 12 18:45:17.234353 kernel: audit: type=1130 audit(1712947517.226:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.228365 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Apr 12 18:45:17.234254 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Apr 12 18:45:17.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.240037 initrd-setup-root-after-ignition[851]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Apr 12 18:45:17.246436 kernel: audit: type=1130 audit(1712947517.239:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.246462 kernel: audit: type=1130 audit(1712947517.246:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.234846 systemd[1]: Starting ignition-quench.service... Apr 12 18:45:17.256202 kernel: audit: type=1131 audit(1712947517.246:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.256406 initrd-setup-root-after-ignition[853]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 12 18:45:17.236657 systemd[1]: Finished initrd-setup-root-after-ignition.service. Apr 12 18:45:17.240157 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 12 18:45:17.240225 systemd[1]: Finished ignition-quench.service. Apr 12 18:45:17.246520 systemd[1]: Reached target ignition-complete.target. Apr 12 18:45:17.254986 systemd[1]: Starting initrd-parse-etc.service... Apr 12 18:45:17.265720 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 12 18:45:17.265799 systemd[1]: Finished initrd-parse-etc.service. Apr 12 18:45:17.276250 kernel: audit: type=1130 audit(1712947517.267:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.277196 kernel: audit: type=1131 audit(1712947517.267:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.268036 systemd[1]: Reached target initrd-fs.target. Apr 12 18:45:17.276222 systemd[1]: Reached target initrd.target. 
Apr 12 18:45:17.277189 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Apr 12 18:45:17.277749 systemd[1]: Starting dracut-pre-pivot.service... Apr 12 18:45:17.286274 systemd[1]: Finished dracut-pre-pivot.service. Apr 12 18:45:17.292649 kernel: audit: type=1130 audit(1712947517.287:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.287971 systemd[1]: Starting initrd-cleanup.service... Apr 12 18:45:17.297483 systemd[1]: Stopped target network.target. Apr 12 18:45:17.298494 systemd[1]: Stopped target nss-lookup.target. Apr 12 18:45:17.300289 systemd[1]: Stopped target remote-cryptsetup.target. Apr 12 18:45:17.302382 systemd[1]: Stopped target timers.target. Apr 12 18:45:17.304344 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 12 18:45:17.311019 kernel: audit: type=1131 audit(1712947517.305:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.304469 systemd[1]: Stopped dracut-pre-pivot.service. Apr 12 18:45:17.306258 systemd[1]: Stopped target initrd.target. Apr 12 18:45:17.311072 systemd[1]: Stopped target basic.target. Apr 12 18:45:17.312067 systemd[1]: Stopped target ignition-complete.target. Apr 12 18:45:17.314019 systemd[1]: Stopped target ignition-diskful.target. Apr 12 18:45:17.315952 systemd[1]: Stopped target initrd-root-device.target. Apr 12 18:45:17.317846 systemd[1]: Stopped target remote-fs.target. Apr 12 18:45:17.319816 systemd[1]: Stopped target remote-fs-pre.target. Apr 12 18:45:17.321841 systemd[1]: Stopped target sysinit.target. Apr 12 18:45:17.323778 systemd[1]: Stopped target local-fs.target. Apr 12 18:45:17.325604 systemd[1]: Stopped target local-fs-pre.target. Apr 12 18:45:17.336158 kernel: audit: type=1131 audit(1712947517.330:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.327501 systemd[1]: Stopped target swap.target. Apr 12 18:45:17.329201 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 12 18:45:17.343258 kernel: audit: type=1131 audit(1712947517.338:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.329309 systemd[1]: Stopped dracut-pre-mount.service. 
Apr 12 18:45:17.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.331212 systemd[1]: Stopped target cryptsetup.target. Apr 12 18:45:17.336187 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 12 18:45:17.336298 systemd[1]: Stopped dracut-initqueue.service. Apr 12 18:45:17.338159 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 12 18:45:17.338242 systemd[1]: Stopped ignition-fetch-offline.service. Apr 12 18:45:17.343361 systemd[1]: Stopped target paths.target. Apr 12 18:45:17.345129 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 12 18:45:17.347999 systemd[1]: Stopped systemd-ask-password-console.path. Apr 12 18:45:17.349408 systemd[1]: Stopped target slices.target. Apr 12 18:45:17.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.351588 systemd[1]: Stopped target sockets.target. Apr 12 18:45:17.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.353482 systemd[1]: iscsid.socket: Deactivated successfully. Apr 12 18:45:17.353565 systemd[1]: Closed iscsid.socket. Apr 12 18:45:17.355095 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 12 18:45:17.355168 systemd[1]: Closed iscsiuio.socket. Apr 12 18:45:17.357258 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 12 18:45:17.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.371870 ignition[868]: INFO : Ignition 2.14.0 Apr 12 18:45:17.371870 ignition[868]: INFO : Stage: umount Apr 12 18:45:17.371870 ignition[868]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 12 18:45:17.371870 ignition[868]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:45:17.371870 ignition[868]: INFO : umount: umount passed Apr 12 18:45:17.371870 ignition[868]: INFO : Ignition finished successfully Apr 12 18:45:17.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.357349 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Apr 12 18:45:17.359506 systemd[1]: ignition-files.service: Deactivated successfully. Apr 12 18:45:17.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.359607 systemd[1]: Stopped ignition-files.service. Apr 12 18:45:17.361981 systemd[1]: Stopping ignition-mount.service... Apr 12 18:45:17.363017 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Apr 12 18:45:17.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.363120 systemd[1]: Stopped kmod-static-nodes.service. Apr 12 18:45:17.366019 systemd[1]: Stopping sysroot-boot.service... Apr 12 18:45:17.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.368613 systemd[1]: Stopping systemd-networkd.service... Apr 12 18:45:17.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.389000 audit: BPF prog-id=6 op=UNLOAD Apr 12 18:45:17.370019 systemd[1]: Stopping systemd-resolved.service... Apr 12 18:45:17.371861 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 12 18:45:17.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.372070 systemd[1]: Stopped systemd-udev-trigger.service. Apr 12 18:45:17.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.373982 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 12 18:45:17.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.374098 systemd[1]: Stopped dracut-pre-trigger.service. Apr 12 18:45:17.376987 systemd-networkd[712]: eth0: DHCPv6 lease lost Apr 12 18:45:17.400000 audit: BPF prog-id=9 op=UNLOAD Apr 12 18:45:17.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.379337 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 12 18:45:17.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.380279 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 12 18:45:17.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:45:17.380349 systemd[1]: Stopped systemd-resolved.service. Apr 12 18:45:17.383963 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 12 18:45:17.384029 systemd[1]: Stopped systemd-networkd.service. Apr 12 18:45:17.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.387479 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 12 18:45:17.387552 systemd[1]: Stopped ignition-mount.service. Apr 12 18:45:17.388645 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 12 18:45:17.388705 systemd[1]: Stopped sysroot-boot.service. Apr 12 18:45:17.390293 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 12 18:45:17.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.390355 systemd[1]: Closed systemd-networkd.socket. Apr 12 18:45:17.391570 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 12 18:45:17.391602 systemd[1]: Stopped ignition-disks.service. Apr 12 18:45:17.393224 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 12 18:45:17.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.393254 systemd[1]: Stopped ignition-kargs.service. Apr 12 18:45:17.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.395164 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 12 18:45:17.395195 systemd[1]: Stopped ignition-setup.service. Apr 12 18:45:17.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.396832 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 12 18:45:17.396861 systemd[1]: Stopped initrd-setup-root.service. Apr 12 18:45:17.399135 systemd[1]: Stopping network-cleanup.service... Apr 12 18:45:17.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:17.400207 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 12 18:45:17.400241 systemd[1]: Stopped parse-ip-for-networkd.service. Apr 12 18:45:17.401134 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 18:45:17.401165 systemd[1]: Stopped systemd-sysctl.service. Apr 12 18:45:17.402738 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Apr 12 18:45:17.402767 systemd[1]: Stopped systemd-modules-load.service. Apr 12 18:45:17.403327 systemd[1]: Stopping systemd-udevd.service... Apr 12 18:45:17.404269 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 12 18:45:17.440000 audit: BPF prog-id=8 op=UNLOAD Apr 12 18:45:17.440000 audit: BPF prog-id=7 op=UNLOAD Apr 12 18:45:17.404684 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 12 18:45:17.404759 systemd[1]: Finished initrd-cleanup.service. Apr 12 18:45:17.408230 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 12 18:45:17.408290 systemd[1]: Stopped network-cleanup.service. Apr 12 18:45:17.444000 audit: BPF prog-id=5 op=UNLOAD Apr 12 18:45:17.444000 audit: BPF prog-id=4 op=UNLOAD Apr 12 18:45:17.444000 audit: BPF prog-id=3 op=UNLOAD Apr 12 18:45:17.413504 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 12 18:45:17.413612 systemd[1]: Stopped systemd-udevd.service. Apr 12 18:45:17.416243 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 12 18:45:17.416275 systemd[1]: Closed systemd-udevd-control.socket. Apr 12 18:45:17.417768 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 12 18:45:17.417791 systemd[1]: Closed systemd-udevd-kernel.socket. Apr 12 18:45:17.419525 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 12 18:45:17.419558 systemd[1]: Stopped dracut-pre-udev.service. Apr 12 18:45:17.421152 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 12 18:45:17.421184 systemd[1]: Stopped dracut-cmdline.service. Apr 12 18:45:17.422904 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 12 18:45:17.422947 systemd[1]: Stopped dracut-cmdline-ask.service. Apr 12 18:45:17.425256 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Apr 12 18:45:17.426419 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 12 18:45:17.426454 systemd[1]: Stopped systemd-vconsole-setup.service. Apr 12 18:45:17.461835 iscsid[717]: iscsid shutting down. Apr 12 18:45:17.462580 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Apr 12 18:45:17.429969 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 12 18:45:17.430031 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Apr 12 18:45:17.431138 systemd[1]: Reached target initrd-switch-root.target. Apr 12 18:45:17.433390 systemd[1]: Starting initrd-switch-root.service... Apr 12 18:45:17.438887 systemd[1]: Switching root. Apr 12 18:45:17.467078 systemd-journald[197]: Journal stopped Apr 12 18:45:20.995017 kernel: SELinux: Class mctp_socket not defined in policy. Apr 12 18:45:20.995067 kernel: SELinux: Class anon_inode not defined in policy. 
Apr 12 18:45:20.995078 kernel: SELinux: the above unknown classes and permissions will be allowed Apr 12 18:45:20.995088 kernel: SELinux: policy capability network_peer_controls=1 Apr 12 18:45:20.995097 kernel: SELinux: policy capability open_perms=1 Apr 12 18:45:20.995106 kernel: SELinux: policy capability extended_socket_class=1 Apr 12 18:45:20.995116 kernel: SELinux: policy capability always_check_network=0 Apr 12 18:45:20.995132 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 12 18:45:20.995142 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 12 18:45:20.995153 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 12 18:45:20.995163 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 12 18:45:20.995173 systemd[1]: Successfully loaded SELinux policy in 45.676ms. Apr 12 18:45:20.995192 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.932ms. Apr 12 18:45:20.995205 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Apr 12 18:45:20.995216 systemd[1]: Detected virtualization kvm. Apr 12 18:45:20.995227 systemd[1]: Detected architecture x86-64. Apr 12 18:45:20.995238 systemd[1]: Detected first boot. Apr 12 18:45:20.995251 systemd[1]: Initializing machine ID from VM UUID. Apr 12 18:45:20.995261 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Apr 12 18:45:20.995271 systemd[1]: Populated /etc with preset unit settings. Apr 12 18:45:20.995281 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:45:20.995295 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:45:20.995306 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:45:20.995317 systemd[1]: Queued start job for default target multi-user.target. Apr 12 18:45:20.995327 systemd[1]: Unnecessary job was removed for dev-vda6.device. Apr 12 18:45:20.995338 systemd[1]: Created slice system-addon\x2dconfig.slice. Apr 12 18:45:20.995349 systemd[1]: Created slice system-addon\x2drun.slice. Apr 12 18:45:20.995359 systemd[1]: Created slice system-getty.slice. Apr 12 18:45:20.995369 systemd[1]: Created slice system-modprobe.slice. Apr 12 18:45:20.995379 systemd[1]: Created slice system-serial\x2dgetty.slice. Apr 12 18:45:20.995389 systemd[1]: Created slice system-system\x2dcloudinit.slice. Apr 12 18:45:20.995400 systemd[1]: Created slice system-systemd\x2dfsck.slice. Apr 12 18:45:20.995410 systemd[1]: Created slice user.slice. Apr 12 18:45:20.995424 systemd[1]: Started systemd-ask-password-console.path. Apr 12 18:45:20.995436 systemd[1]: Started systemd-ask-password-wall.path. Apr 12 18:45:20.995446 systemd[1]: Set up automount boot.automount. Apr 12 18:45:20.995456 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Apr 12 18:45:20.995466 systemd[1]: Reached target integritysetup.target. 
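The locksmithd.service deprecation notices above point at cgroup-v1 directives in the shipped unit. The usual way to move to the replacements systemd suggests, without editing anything under /usr/lib, is a drop-in; a minimal sketch follows, with hypothetical values since the log does not show what lines 8-9 of the unit actually set:

    # /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf -- hypothetical override
    [Service]
    # CPUWeight= (range 1-10000, default 100) is the suggested replacement for CPUShares=
    CPUWeight=100
    # MemoryMax= is the cgroup-v2 hard limit replacing MemoryLimit=
    MemoryMax=128M

Run systemctl daemon-reload afterwards so the drop-in is picked up.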
Apr 12 18:45:20.995476 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 18:45:20.995486 systemd[1]: Reached target remote-fs.target. Apr 12 18:45:20.995496 systemd[1]: Reached target slices.target. Apr 12 18:45:20.995509 systemd[1]: Reached target swap.target. Apr 12 18:45:20.995520 systemd[1]: Reached target torcx.target. Apr 12 18:45:20.995531 systemd[1]: Reached target veritysetup.target. Apr 12 18:45:20.995541 systemd[1]: Listening on systemd-coredump.socket. Apr 12 18:45:20.995551 systemd[1]: Listening on systemd-initctl.socket. Apr 12 18:45:20.995561 systemd[1]: Listening on systemd-journald-audit.socket. Apr 12 18:45:20.995571 systemd[1]: Listening on systemd-journald-dev-log.socket. Apr 12 18:45:20.995580 systemd[1]: Listening on systemd-journald.socket. Apr 12 18:45:20.995590 systemd[1]: Listening on systemd-networkd.socket. Apr 12 18:45:20.995600 systemd[1]: Listening on systemd-udevd-control.socket. Apr 12 18:45:20.995611 systemd[1]: Listening on systemd-udevd-kernel.socket. Apr 12 18:45:20.995621 systemd[1]: Listening on systemd-userdbd.socket. Apr 12 18:45:20.995630 systemd[1]: Mounting dev-hugepages.mount... Apr 12 18:45:20.995640 systemd[1]: Mounting dev-mqueue.mount... Apr 12 18:45:20.995651 systemd[1]: Mounting media.mount... Apr 12 18:45:20.995661 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 12 18:45:20.995671 systemd[1]: Mounting sys-kernel-debug.mount... Apr 12 18:45:20.995681 systemd[1]: Mounting sys-kernel-tracing.mount... Apr 12 18:45:20.995691 systemd[1]: Mounting tmp.mount... Apr 12 18:45:20.995703 systemd[1]: Starting flatcar-tmpfiles.service... Apr 12 18:45:20.995713 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Apr 12 18:45:20.995724 systemd[1]: Starting kmod-static-nodes.service... Apr 12 18:45:20.995734 systemd[1]: Starting modprobe@configfs.service... Apr 12 18:45:20.995744 systemd[1]: Starting modprobe@dm_mod.service... Apr 12 18:45:20.995753 systemd[1]: Starting modprobe@drm.service... Apr 12 18:45:20.995763 systemd[1]: Starting modprobe@efi_pstore.service... Apr 12 18:45:20.995774 systemd[1]: Starting modprobe@fuse.service... Apr 12 18:45:20.995791 systemd[1]: Starting modprobe@loop.service... Apr 12 18:45:20.995803 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 12 18:45:20.995814 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Apr 12 18:45:20.995824 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Apr 12 18:45:20.995834 systemd[1]: Starting systemd-journald.service... Apr 12 18:45:20.995843 kernel: loop: module loaded Apr 12 18:45:20.995855 kernel: fuse: init (API version 7.34) Apr 12 18:45:20.995865 systemd[1]: Starting systemd-modules-load.service... Apr 12 18:45:20.995876 systemd[1]: Starting systemd-network-generator.service... Apr 12 18:45:20.995886 systemd[1]: Starting systemd-remount-fs.service... Apr 12 18:45:20.995896 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 18:45:20.995906 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 12 18:45:20.995919 systemd-journald[1019]: Journal started Apr 12 18:45:20.995975 systemd-journald[1019]: Runtime Journal (/run/log/journal/6f740b73705246deb5ab2fc8c1a35d82) is 6.0M, max 48.4M, 42.4M free. 
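The docker.socket notice above is similar: systemd rewrites the legacy /var/run/docker.sock path at load time, and the lasting fix is to update ListenStream= in the unit or a drop-in. A minimal sketch, assuming the rest of the socket unit stays as shipped; list-valued settings such as ListenStream= must be cleared with an empty assignment before the new path is added:

    # /etc/systemd/system/docker.socket.d/10-run-path.conf -- hypothetical override
    [Socket]
    # Empty assignment resets the inherited listener list, then the /run path is re-added.
    ListenStream=
    ListenStream=/run/docker.sock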
Apr 12 18:45:20.916000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Apr 12 18:45:20.916000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Apr 12 18:45:20.993000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Apr 12 18:45:20.993000 audit[1019]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff3718c500 a2=4000 a3=7fff3718c59c items=0 ppid=1 pid=1019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:45:20.993000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Apr 12 18:45:21.001100 systemd[1]: Started systemd-journald.service. Apr 12 18:45:21.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.002185 systemd[1]: Mounted dev-hugepages.mount. Apr 12 18:45:21.003288 systemd[1]: Mounted dev-mqueue.mount. Apr 12 18:45:21.004421 systemd[1]: Mounted media.mount. Apr 12 18:45:21.005480 systemd[1]: Mounted sys-kernel-debug.mount. Apr 12 18:45:21.006455 systemd[1]: Mounted sys-kernel-tracing.mount. Apr 12 18:45:21.007397 systemd[1]: Mounted tmp.mount. Apr 12 18:45:21.008437 systemd[1]: Finished flatcar-tmpfiles.service. Apr 12 18:45:21.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.009603 systemd[1]: Finished kmod-static-nodes.service. Apr 12 18:45:21.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.010652 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 12 18:45:21.010815 systemd[1]: Finished modprobe@configfs.service. Apr 12 18:45:21.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.011901 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 12 18:45:21.012049 systemd[1]: Finished modprobe@dm_mod.service. Apr 12 18:45:21.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:45:21.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.013092 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 12 18:45:21.013245 systemd[1]: Finished modprobe@drm.service. Apr 12 18:45:21.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.014400 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 12 18:45:21.014579 systemd[1]: Finished modprobe@efi_pstore.service. Apr 12 18:45:21.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.015731 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 12 18:45:21.015872 systemd[1]: Finished modprobe@fuse.service. Apr 12 18:45:21.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.016876 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 12 18:45:21.017086 systemd[1]: Finished modprobe@loop.service. Apr 12 18:45:21.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.018319 systemd[1]: Finished systemd-modules-load.service. Apr 12 18:45:21.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.019656 systemd[1]: Finished systemd-network-generator.service. Apr 12 18:45:21.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.020879 systemd[1]: Finished systemd-remount-fs.service. 
Apr 12 18:45:21.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.022162 systemd[1]: Reached target network-pre.target. Apr 12 18:45:21.024046 systemd[1]: Mounting sys-fs-fuse-connections.mount... Apr 12 18:45:21.026043 systemd[1]: Mounting sys-kernel-config.mount... Apr 12 18:45:21.026991 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 12 18:45:21.028734 systemd[1]: Starting systemd-hwdb-update.service... Apr 12 18:45:21.031943 systemd[1]: Starting systemd-journal-flush.service... Apr 12 18:45:21.032898 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 12 18:45:21.035488 systemd-journald[1019]: Time spent on flushing to /var/log/journal/6f740b73705246deb5ab2fc8c1a35d82 is 31.463ms for 1122 entries. Apr 12 18:45:21.035488 systemd-journald[1019]: System Journal (/var/log/journal/6f740b73705246deb5ab2fc8c1a35d82) is 8.0M, max 195.6M, 187.6M free. Apr 12 18:45:21.080219 systemd-journald[1019]: Received client request to flush runtime journal. Apr 12 18:45:21.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.034133 systemd[1]: Starting systemd-random-seed.service... Apr 12 18:45:21.036855 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Apr 12 18:45:21.037951 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:45:21.040160 systemd[1]: Starting systemd-sysusers.service... Apr 12 18:45:21.081257 udevadm[1055]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 12 18:45:21.044715 systemd[1]: Finished systemd-udev-trigger.service. Apr 12 18:45:21.048238 systemd[1]: Mounted sys-fs-fuse-connections.mount. Apr 12 18:45:21.049368 systemd[1]: Mounted sys-kernel-config.mount. Apr 12 18:45:21.050698 systemd[1]: Finished systemd-random-seed.service. Apr 12 18:45:21.052033 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:45:21.053185 systemd[1]: Reached target first-boot-complete.target. Apr 12 18:45:21.055502 systemd[1]: Starting systemd-udev-settle.service... Apr 12 18:45:21.063588 systemd[1]: Finished systemd-sysusers.service. Apr 12 18:45:21.065837 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Apr 12 18:45:21.081061 systemd[1]: Finished systemd-journal-flush.service. 
Apr 12 18:45:21.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.086699 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Apr 12 18:45:21.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.480003 systemd[1]: Finished systemd-hwdb-update.service. Apr 12 18:45:21.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.482357 systemd[1]: Starting systemd-udevd.service... Apr 12 18:45:21.500123 systemd-udevd[1064]: Using default interface naming scheme 'v252'. Apr 12 18:45:21.511869 systemd[1]: Started systemd-udevd.service. Apr 12 18:45:21.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.514358 systemd[1]: Starting systemd-networkd.service... Apr 12 18:45:21.519901 systemd[1]: Starting systemd-userdbd.service... Apr 12 18:45:21.554076 systemd[1]: Started systemd-userdbd.service. Apr 12 18:45:21.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.555724 systemd[1]: Found device dev-ttyS0.device. Apr 12 18:45:21.581884 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Apr 12 18:45:21.584966 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 12 18:45:21.594965 kernel: ACPI: button: Power Button [PWRF] Apr 12 18:45:21.602000 audit[1078]: AVC avc: denied { confidentiality } for pid=1078 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Apr 12 18:45:21.602000 audit[1078]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5589ad78b300 a1=32194 a2=7f9212cdbbc5 a3=5 items=108 ppid=1064 pid=1078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:45:21.602000 audit: CWD cwd="/" Apr 12 18:45:21.602000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=1 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=2 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=3 name=(null) inode=12808 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=4 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=5 name=(null) inode=12809 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=6 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=7 name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=8 name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=9 name=(null) inode=12811 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=10 name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=11 name=(null) inode=12812 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=12 name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=13 name=(null) inode=12813 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=14 name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=15 name=(null) inode=12814 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=16 name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=17 name=(null) inode=12815 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=18 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=19 name=(null) inode=12816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=20 name=(null) inode=12816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=21 name=(null) inode=12817 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=22 name=(null) inode=12816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=23 name=(null) inode=12818 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=24 name=(null) inode=12816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=25 name=(null) inode=12819 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=26 name=(null) inode=12816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=27 name=(null) inode=12820 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=28 name=(null) inode=12816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 
18:45:21.602000 audit: PATH item=29 name=(null) inode=12821 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=30 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=31 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=32 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=33 name=(null) inode=12823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=34 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=35 name=(null) inode=12824 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=36 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=37 name=(null) inode=12825 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=38 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=39 name=(null) inode=12826 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=40 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=41 name=(null) inode=12827 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=42 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=43 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=44 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=45 name=(null) inode=12829 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=46 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=47 name=(null) inode=12830 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=48 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=49 name=(null) inode=12831 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=50 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=51 name=(null) inode=12832 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=52 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=53 name=(null) inode=12833 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=55 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=56 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=57 name=(null) inode=12835 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=58 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=59 name=(null) inode=12836 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=60 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=61 name=(null) inode=12837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=62 name=(null) inode=12837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=63 name=(null) inode=12838 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=64 name=(null) inode=12837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=65 name=(null) inode=12839 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=66 name=(null) inode=12837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=67 name=(null) inode=12840 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=68 name=(null) inode=12837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=69 name=(null) inode=12841 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=70 name=(null) inode=12837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=71 name=(null) inode=12842 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=72 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=73 name=(null) inode=12843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=74 name=(null) inode=12843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=75 name=(null) inode=12844 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=76 name=(null) inode=12843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=77 name=(null) inode=12845 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 
18:45:21.602000 audit: PATH item=78 name=(null) inode=12843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=79 name=(null) inode=12846 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=80 name=(null) inode=12843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=81 name=(null) inode=12847 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=82 name=(null) inode=12843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=83 name=(null) inode=12848 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=84 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=85 name=(null) inode=12849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=86 name=(null) inode=12849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=87 name=(null) inode=12850 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=88 name=(null) inode=12849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=89 name=(null) inode=12851 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=90 name=(null) inode=12849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=91 name=(null) inode=12852 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=92 name=(null) inode=12849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=93 name=(null) inode=12853 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=94 name=(null) inode=12849 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=95 name=(null) inode=12854 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=96 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=97 name=(null) inode=12855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=98 name=(null) inode=12855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=99 name=(null) inode=12856 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=100 name=(null) inode=12855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=101 name=(null) inode=12857 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=102 name=(null) inode=12855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=103 name=(null) inode=12858 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=104 name=(null) inode=12855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=105 name=(null) inode=12859 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=106 name=(null) inode=12855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PATH item=107 name=(null) inode=12860 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:45:21.602000 audit: PROCTITLE proctitle="(udev-worker)" Apr 12 18:45:21.618364 systemd-networkd[1071]: lo: Link UP Apr 12 18:45:21.618726 systemd-networkd[1071]: lo: Gained carrier Apr 12 18:45:21.619244 systemd-networkd[1071]: Enumeration completed Apr 12 18:45:21.619486 systemd[1]: Started systemd-networkd.service. Apr 12 18:45:21.619497 systemd-networkd[1071]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 12 18:45:21.620008 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Apr 12 18:45:21.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.621263 systemd-networkd[1071]: eth0: Link UP Apr 12 18:45:21.621500 systemd-networkd[1071]: eth0: Gained carrier Apr 12 18:45:21.634072 systemd-networkd[1071]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 12 18:45:21.645026 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 12 18:45:21.650953 kernel: mousedev: PS/2 mouse device common for all mice Apr 12 18:45:21.688421 kernel: kvm: Nested Virtualization enabled Apr 12 18:45:21.688494 kernel: SVM: kvm: Nested Paging enabled Apr 12 18:45:21.688522 kernel: SVM: Virtual VMLOAD VMSAVE supported Apr 12 18:45:21.688536 kernel: SVM: Virtual GIF supported Apr 12 18:45:21.706971 kernel: EDAC MC: Ver: 3.0.0 Apr 12 18:45:21.729374 systemd[1]: Finished systemd-udev-settle.service. Apr 12 18:45:21.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.731779 systemd[1]: Starting lvm2-activation-early.service... Apr 12 18:45:21.739677 lvm[1101]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 12 18:45:21.766947 systemd[1]: Finished lvm2-activation-early.service. Apr 12 18:45:21.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.768001 systemd[1]: Reached target cryptsetup.target. Apr 12 18:45:21.769790 systemd[1]: Starting lvm2-activation.service... Apr 12 18:45:21.773954 lvm[1103]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 12 18:45:21.802960 systemd[1]: Finished lvm2-activation.service. Apr 12 18:45:21.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.803890 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:45:21.804739 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 12 18:45:21.804756 systemd[1]: Reached target local-fs.target. Apr 12 18:45:21.805561 systemd[1]: Reached target machines.target. Apr 12 18:45:21.807278 systemd[1]: Starting ldconfig.service... Apr 12 18:45:21.808245 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Apr 12 18:45:21.808288 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:45:21.809177 systemd[1]: Starting systemd-boot-update.service... Apr 12 18:45:21.810842 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Apr 12 18:45:21.813002 systemd[1]: Starting systemd-machine-id-commit.service... 
Apr 12 18:45:21.814106 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Apr 12 18:45:21.814150 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Apr 12 18:45:21.815279 systemd[1]: Starting systemd-tmpfiles-setup.service... Apr 12 18:45:21.816381 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1106 (bootctl) Apr 12 18:45:21.817355 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Apr 12 18:45:21.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.823040 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Apr 12 18:45:21.831224 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Apr 12 18:45:21.831919 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 12 18:45:21.833082 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 12 18:45:21.847969 systemd-fsck[1115]: fsck.fat 4.2 (2021-01-31) Apr 12 18:45:21.847969 systemd-fsck[1115]: /dev/vda1: 790 files, 119263/258078 clusters Apr 12 18:45:21.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.849188 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Apr 12 18:45:21.851702 systemd[1]: Mounting boot.mount... Apr 12 18:45:21.884333 systemd[1]: Mounted boot.mount. Apr 12 18:45:21.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:21.894779 systemd[1]: Finished systemd-boot-update.service. Apr 12 18:45:21.908894 ldconfig[1105]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 12 18:45:22.553604 systemd[1]: Finished systemd-tmpfiles-setup.service. Apr 12 18:45:22.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:22.555463 kernel: kauditd_printk_skb: 196 callbacks suppressed Apr 12 18:45:22.555503 kernel: audit: type=1130 audit(1712947522.553:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:22.555754 systemd[1]: Starting audit-rules.service... Apr 12 18:45:22.560445 systemd[1]: Starting clean-ca-certificates.service... Apr 12 18:45:22.562177 systemd[1]: Starting systemd-journal-catalog-update.service... Apr 12 18:45:22.564212 systemd[1]: Starting systemd-resolved.service... Apr 12 18:45:22.566224 systemd[1]: Starting systemd-timesyncd.service... Apr 12 18:45:22.569018 systemd[1]: Starting systemd-update-utmp.service... 
Apr 12 18:45:22.570362 systemd[1]: Finished clean-ca-certificates.service. Apr 12 18:45:22.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:22.571537 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 12 18:45:22.575167 kernel: audit: type=1130 audit(1712947522.570:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:22.575000 audit[1139]: SYSTEM_BOOT pid=1139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 12 18:45:22.579951 kernel: audit: type=1127 audit(1712947522.575:122): pid=1139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 12 18:45:22.581922 systemd[1]: Finished systemd-journal-catalog-update.service. Apr 12 18:45:22.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:22.583000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 12 18:45:22.588831 augenrules[1146]: No rules Apr 12 18:45:22.586846 systemd[1]: Finished audit-rules.service. Apr 12 18:45:22.589169 kernel: audit: type=1130 audit(1712947522.582:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:45:22.589224 kernel: audit: type=1305 audit(1712947522.583:124): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 12 18:45:22.589247 kernel: audit: type=1300 audit(1712947522.583:124): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcf5bba7e0 a2=420 a3=0 items=0 ppid=1124 pid=1146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:45:22.583000 audit[1146]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcf5bba7e0 a2=420 a3=0 items=0 ppid=1124 pid=1146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:45:22.583000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 12 18:45:22.597305 kernel: audit: type=1327 audit(1712947522.583:124): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 12 18:45:22.598440 systemd[1]: Finished systemd-update-utmp.service. Apr 12 18:45:22.773265 systemd[1]: Finished ldconfig.service. 
Apr 12 18:45:22.774594 systemd[1]: Started systemd-timesyncd.service. Apr 12 18:45:23.341018 systemd-resolved[1134]: Positive Trust Anchors: Apr 12 18:45:23.341033 systemd-resolved[1134]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 12 18:45:23.341065 systemd-resolved[1134]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Apr 12 18:45:23.341075 systemd-timesyncd[1136]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 12 18:45:23.341111 systemd-timesyncd[1136]: Initial clock synchronization to Fri 2024-04-12 18:45:23.341003 UTC. Apr 12 18:45:23.342570 systemd[1]: Reached target time-set.target. Apr 12 18:45:23.345470 systemd[1]: Starting systemd-update-done.service... Apr 12 18:45:23.355627 systemd-resolved[1134]: Defaulting to hostname 'linux'. Apr 12 18:45:23.357054 systemd[1]: Finished systemd-update-done.service. Apr 12 18:45:23.358338 systemd[1]: Started systemd-resolved.service. Apr 12 18:45:23.359565 systemd[1]: Reached target network.target. Apr 12 18:45:23.360394 systemd[1]: Reached target nss-lookup.target. Apr 12 18:45:23.361607 systemd[1]: Reached target sysinit.target. Apr 12 18:45:23.362947 systemd[1]: Started motdgen.path. Apr 12 18:45:23.364012 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Apr 12 18:45:23.365615 systemd[1]: Started logrotate.timer. Apr 12 18:45:23.366475 systemd[1]: Started mdadm.timer. Apr 12 18:45:23.367146 systemd[1]: Started systemd-tmpfiles-clean.timer. Apr 12 18:45:23.368020 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 12 18:45:23.368046 systemd[1]: Reached target paths.target. Apr 12 18:45:23.368784 systemd[1]: Reached target timers.target. Apr 12 18:45:23.369808 systemd[1]: Listening on dbus.socket. Apr 12 18:45:23.371482 systemd[1]: Starting docker.socket... Apr 12 18:45:23.373109 systemd[1]: Listening on sshd.socket. Apr 12 18:45:23.374009 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:45:23.374307 systemd[1]: Listening on docker.socket. Apr 12 18:45:23.375203 systemd[1]: Reached target sockets.target. Apr 12 18:45:23.376053 systemd[1]: Reached target basic.target. Apr 12 18:45:23.376949 systemd[1]: System is tainted: cgroupsv1 Apr 12 18:45:23.376985 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 18:45:23.377003 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 18:45:23.377809 systemd[1]: Starting containerd.service... Apr 12 18:45:23.379606 systemd[1]: Starting dbus.service... Apr 12 18:45:23.381145 systemd[1]: Starting enable-oem-cloudinit.service... Apr 12 18:45:23.383061 systemd[1]: Starting extend-filesystems.service... 
Apr 12 18:45:23.400250 jq[1163]: false Apr 12 18:45:23.383969 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Apr 12 18:45:23.385120 systemd[1]: Starting motdgen.service... Apr 12 18:45:23.387248 systemd[1]: Starting prepare-cni-plugins.service... Apr 12 18:45:23.389379 systemd[1]: Starting prepare-critools.service... Apr 12 18:45:23.391173 systemd[1]: Starting prepare-helm.service... Apr 12 18:45:23.392899 systemd[1]: Starting ssh-key-proc-cmdline.service... Apr 12 18:45:23.394826 systemd[1]: Starting sshd-keygen.service... Apr 12 18:45:23.400573 systemd[1]: Starting systemd-logind.service... Apr 12 18:45:23.401508 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:45:23.401585 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 12 18:45:23.414999 jq[1188]: true Apr 12 18:45:23.403314 systemd[1]: Starting update-engine.service... Apr 12 18:45:23.406082 systemd[1]: Starting update-ssh-keys-after-ignition.service... Apr 12 18:45:23.409077 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 12 18:45:23.410212 systemd[1]: Finished systemd-machine-id-commit.service. Apr 12 18:45:23.413433 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 12 18:45:23.413692 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Apr 12 18:45:23.416655 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 12 18:45:23.416964 systemd[1]: Finished ssh-key-proc-cmdline.service. Apr 12 18:45:23.423054 systemd[1]: Started dbus.service. Apr 12 18:45:23.425604 tar[1195]: linux-amd64/helm Apr 12 18:45:23.422859 dbus-daemon[1162]: [system] SELinux support is enabled Apr 12 18:45:23.432445 extend-filesystems[1164]: Found sr0 Apr 12 18:45:23.432445 extend-filesystems[1164]: Found vda Apr 12 18:45:23.432445 extend-filesystems[1164]: Found vda1 Apr 12 18:45:23.432445 extend-filesystems[1164]: Found vda2 Apr 12 18:45:23.432445 extend-filesystems[1164]: Found vda3 Apr 12 18:45:23.432445 extend-filesystems[1164]: Found usr Apr 12 18:45:23.432445 extend-filesystems[1164]: Found vda4 Apr 12 18:45:23.432445 extend-filesystems[1164]: Found vda6 Apr 12 18:45:23.432445 extend-filesystems[1164]: Found vda7 Apr 12 18:45:23.432445 extend-filesystems[1164]: Found vda9 Apr 12 18:45:23.432445 extend-filesystems[1164]: Checking size of /dev/vda9 Apr 12 18:45:23.496232 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 12 18:45:23.496284 tar[1193]: ./ Apr 12 18:45:23.496284 tar[1193]: ./loopback Apr 12 18:45:23.425492 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 12 18:45:23.496591 jq[1198]: true Apr 12 18:45:23.496710 update_engine[1186]: I0412 18:45:23.462080 1186 main.cc:92] Flatcar Update Engine starting Apr 12 18:45:23.496710 update_engine[1186]: I0412 18:45:23.467418 1186 update_check_scheduler.cc:74] Next update check in 3m53s Apr 12 18:45:23.496878 tar[1194]: crictl Apr 12 18:45:23.497073 extend-filesystems[1164]: Resized partition /dev/vda9 Apr 12 18:45:23.425513 systemd[1]: Reached target system-config.target. 
Apr 12 18:45:23.500544 extend-filesystems[1224]: resize2fs 1.46.5 (30-Dec-2021) Apr 12 18:45:23.426637 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 12 18:45:23.501845 env[1199]: time="2024-04-12T18:45:23.500692153Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Apr 12 18:45:23.426650 systemd[1]: Reached target user-config.target. Apr 12 18:45:23.434900 systemd[1]: motdgen.service: Deactivated successfully. Apr 12 18:45:23.437417 systemd[1]: Finished motdgen.service. Apr 12 18:45:23.467373 systemd[1]: Started update-engine.service. Apr 12 18:45:23.471263 systemd[1]: Started locksmithd.service. Apr 12 18:45:23.473040 systemd-logind[1185]: Watching system buttons on /dev/input/event1 (Power Button) Apr 12 18:45:23.473062 systemd-logind[1185]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 12 18:45:23.473213 systemd-logind[1185]: New seat seat0. Apr 12 18:45:23.474296 systemd[1]: Started systemd-logind.service. Apr 12 18:45:23.506478 tar[1193]: ./bandwidth Apr 12 18:45:23.507934 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 12 18:45:23.533158 env[1199]: time="2024-04-12T18:45:23.520644684Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 12 18:45:23.533158 env[1199]: time="2024-04-12T18:45:23.526616611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:45:23.533158 env[1199]: time="2024-04-12T18:45:23.527895108Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.154-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:45:23.533158 env[1199]: time="2024-04-12T18:45:23.527946164Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:45:23.533158 env[1199]: time="2024-04-12T18:45:23.528203817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:45:23.533158 env[1199]: time="2024-04-12T18:45:23.528221911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 12 18:45:23.533158 env[1199]: time="2024-04-12T18:45:23.528233483Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Apr 12 18:45:23.533158 env[1199]: time="2024-04-12T18:45:23.528242680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 12 18:45:23.533158 env[1199]: time="2024-04-12T18:45:23.528300007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:45:23.533158 env[1199]: time="2024-04-12T18:45:23.528491737Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Apr 12 18:45:23.534614 bash[1229]: Updated "/home/core/.ssh/authorized_keys" Apr 12 18:45:23.534686 extend-filesystems[1224]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 12 18:45:23.534686 extend-filesystems[1224]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 12 18:45:23.534686 extend-filesystems[1224]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 12 18:45:23.530997 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 12 18:45:23.548530 env[1199]: time="2024-04-12T18:45:23.528626880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:45:23.548530 env[1199]: time="2024-04-12T18:45:23.528639895Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 12 18:45:23.548530 env[1199]: time="2024-04-12T18:45:23.528680130Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Apr 12 18:45:23.548530 env[1199]: time="2024-04-12T18:45:23.528690429Z" level=info msg="metadata content store policy set" policy=shared Apr 12 18:45:23.548530 env[1199]: time="2024-04-12T18:45:23.543596669Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 12 18:45:23.548530 env[1199]: time="2024-04-12T18:45:23.543647794Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 12 18:45:23.548530 env[1199]: time="2024-04-12T18:45:23.543659957Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 12 18:45:23.548530 env[1199]: time="2024-04-12T18:45:23.543693881Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 12 18:45:23.548530 env[1199]: time="2024-04-12T18:45:23.543707186Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 12 18:45:23.548530 env[1199]: time="2024-04-12T18:45:23.543719469Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 12 18:45:23.548530 env[1199]: time="2024-04-12T18:45:23.543730750Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 12 18:45:23.548530 env[1199]: time="2024-04-12T18:45:23.543745227Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 12 18:45:23.548530 env[1199]: time="2024-04-12T18:45:23.543756478Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Apr 12 18:45:23.548854 extend-filesystems[1164]: Resized filesystem in /dev/vda9 Apr 12 18:45:23.531240 systemd[1]: Finished extend-filesystems.service. Apr 12 18:45:23.553041 env[1199]: time="2024-04-12T18:45:23.543767379Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 12 18:45:23.553041 env[1199]: time="2024-04-12T18:45:23.543786575Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Apr 12 18:45:23.553041 env[1199]: time="2024-04-12T18:45:23.543800310Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 12 18:45:23.553041 env[1199]: time="2024-04-12T18:45:23.543996689Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 12 18:45:23.553041 env[1199]: time="2024-04-12T18:45:23.544092328Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 12 18:45:23.553041 env[1199]: time="2024-04-12T18:45:23.544397320Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 12 18:45:23.553041 env[1199]: time="2024-04-12T18:45:23.544418660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 12 18:45:23.553041 env[1199]: time="2024-04-12T18:45:23.544430793Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 12 18:45:23.553041 env[1199]: time="2024-04-12T18:45:23.544469154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 12 18:45:23.553041 env[1199]: time="2024-04-12T18:45:23.544480135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 12 18:45:23.553041 env[1199]: time="2024-04-12T18:45:23.544490504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 12 18:45:23.553041 env[1199]: time="2024-04-12T18:45:23.544501295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 12 18:45:23.553041 env[1199]: time="2024-04-12T18:45:23.544511975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 12 18:45:23.553041 env[1199]: time="2024-04-12T18:45:23.544522384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 12 18:45:23.533246 systemd[1]: Finished update-ssh-keys-after-ignition.service. Apr 12 18:45:23.553387 env[1199]: time="2024-04-12T18:45:23.544533024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 12 18:45:23.553387 env[1199]: time="2024-04-12T18:45:23.544542793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 12 18:45:23.553387 env[1199]: time="2024-04-12T18:45:23.544554454Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 12 18:45:23.553387 env[1199]: time="2024-04-12T18:45:23.544643501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 12 18:45:23.553387 env[1199]: time="2024-04-12T18:45:23.544659040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 12 18:45:23.553387 env[1199]: time="2024-04-12T18:45:23.544670592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 12 18:45:23.553387 env[1199]: time="2024-04-12T18:45:23.544681563Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 12 18:45:23.553387 env[1199]: time="2024-04-12T18:45:23.544694257Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Apr 12 18:45:23.553387 env[1199]: time="2024-04-12T18:45:23.544704215Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 12 18:45:23.553387 env[1199]: time="2024-04-12T18:45:23.544721177Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Apr 12 18:45:23.553387 env[1199]: time="2024-04-12T18:45:23.544753948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 12 18:45:23.546711 systemd[1]: Started containerd.service. Apr 12 18:45:23.553630 env[1199]: time="2024-04-12T18:45:23.545001853Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 12 18:45:23.553630 env[1199]: time="2024-04-12T18:45:23.545071043Z" level=info msg="Connect containerd service" Apr 12 18:45:23.553630 env[1199]: time="2024-04-12T18:45:23.545109054Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 12 18:45:23.553630 env[1199]: time="2024-04-12T18:45:23.545631514Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 18:45:23.553630 env[1199]: time="2024-04-12T18:45:23.545858920Z" level=info msg="Start subscribing containerd event" Apr 12 18:45:23.553630 env[1199]: 
time="2024-04-12T18:45:23.545938159Z" level=info msg="Start recovering state" Apr 12 18:45:23.553630 env[1199]: time="2024-04-12T18:45:23.546000796Z" level=info msg="Start event monitor" Apr 12 18:45:23.553630 env[1199]: time="2024-04-12T18:45:23.546017778Z" level=info msg="Start snapshots syncer" Apr 12 18:45:23.553630 env[1199]: time="2024-04-12T18:45:23.546027416Z" level=info msg="Start cni network conf syncer for default" Apr 12 18:45:23.553630 env[1199]: time="2024-04-12T18:45:23.546036002Z" level=info msg="Start streaming server" Apr 12 18:45:23.553630 env[1199]: time="2024-04-12T18:45:23.546278657Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 12 18:45:23.553630 env[1199]: time="2024-04-12T18:45:23.546588258Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 12 18:45:23.553630 env[1199]: time="2024-04-12T18:45:23.546659231Z" level=info msg="containerd successfully booted in 0.065424s" Apr 12 18:45:23.563262 tar[1193]: ./ptp Apr 12 18:45:23.598492 tar[1193]: ./vlan Apr 12 18:45:23.616073 systemd-networkd[1071]: eth0: Gained IPv6LL Apr 12 18:45:23.632789 tar[1193]: ./host-device Apr 12 18:45:23.665548 tar[1193]: ./tuning Apr 12 18:45:23.694147 tar[1193]: ./vrf Apr 12 18:45:23.723896 tar[1193]: ./sbr Apr 12 18:45:23.754022 tar[1193]: ./tap Apr 12 18:45:23.787759 tar[1193]: ./dhcp Apr 12 18:45:23.873352 tar[1193]: ./static Apr 12 18:45:23.897607 tar[1193]: ./firewall Apr 12 18:45:23.899573 tar[1195]: linux-amd64/LICENSE Apr 12 18:45:23.899820 tar[1195]: linux-amd64/README.md Apr 12 18:45:23.904102 systemd[1]: Finished prepare-helm.service. Apr 12 18:45:23.915469 locksmithd[1230]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 12 18:45:23.934780 tar[1193]: ./macvlan Apr 12 18:45:23.957778 systemd[1]: Finished prepare-critools.service. Apr 12 18:45:23.967195 tar[1193]: ./dummy Apr 12 18:45:23.996148 tar[1193]: ./bridge Apr 12 18:45:24.028250 tar[1193]: ./ipvlan Apr 12 18:45:24.030271 sshd_keygen[1190]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 12 18:45:24.049314 systemd[1]: Finished sshd-keygen.service. Apr 12 18:45:24.051566 systemd[1]: Starting issuegen.service... Apr 12 18:45:24.057640 systemd[1]: issuegen.service: Deactivated successfully. Apr 12 18:45:24.057985 systemd[1]: Finished issuegen.service. Apr 12 18:45:24.061013 systemd[1]: Starting systemd-user-sessions.service... Apr 12 18:45:24.062794 tar[1193]: ./portmap Apr 12 18:45:24.067622 systemd[1]: Finished systemd-user-sessions.service. Apr 12 18:45:24.070712 systemd[1]: Started getty@tty1.service. Apr 12 18:45:24.073647 systemd[1]: Started serial-getty@ttyS0.service. Apr 12 18:45:24.075002 systemd[1]: Reached target getty.target. Apr 12 18:45:24.097144 tar[1193]: ./host-local Apr 12 18:45:24.133000 systemd[1]: Finished prepare-cni-plugins.service. Apr 12 18:45:24.134272 systemd[1]: Reached target multi-user.target. Apr 12 18:45:24.136415 systemd[1]: Starting systemd-update-utmp-runlevel.service... Apr 12 18:45:24.143653 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Apr 12 18:45:24.143887 systemd[1]: Finished systemd-update-utmp-runlevel.service. Apr 12 18:45:24.145172 systemd[1]: Startup finished in 6.421s (kernel) + 6.074s (userspace) = 12.496s. Apr 12 18:45:27.238248 systemd[1]: Created slice system-sshd.slice. Apr 12 18:45:27.239299 systemd[1]: Started sshd@0-10.0.0.52:22-10.0.0.1:49756.service. 
Apr 12 18:45:27.283697 sshd[1275]: Accepted publickey for core from 10.0.0.1 port 49756 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:45:27.285161 sshd[1275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:45:27.292709 systemd[1]: Created slice user-500.slice. Apr 12 18:45:27.293665 systemd[1]: Starting user-runtime-dir@500.service... Apr 12 18:45:27.295316 systemd-logind[1185]: New session 1 of user core. Apr 12 18:45:27.302312 systemd[1]: Finished user-runtime-dir@500.service. Apr 12 18:45:27.303952 systemd[1]: Starting user@500.service... Apr 12 18:45:27.306280 (systemd)[1279]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:45:27.372220 systemd[1279]: Queued start job for default target default.target. Apr 12 18:45:27.372401 systemd[1279]: Reached target paths.target. Apr 12 18:45:27.372420 systemd[1279]: Reached target sockets.target. Apr 12 18:45:27.372435 systemd[1279]: Reached target timers.target. Apr 12 18:45:27.372449 systemd[1279]: Reached target basic.target. Apr 12 18:45:27.372493 systemd[1279]: Reached target default.target. Apr 12 18:45:27.372519 systemd[1279]: Startup finished in 61ms. Apr 12 18:45:27.372714 systemd[1]: Started user@500.service. Apr 12 18:45:27.373723 systemd[1]: Started session-1.scope. Apr 12 18:45:27.425516 systemd[1]: Started sshd@1-10.0.0.52:22-10.0.0.1:49760.service. Apr 12 18:45:27.467586 sshd[1289]: Accepted publickey for core from 10.0.0.1 port 49760 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:45:27.468880 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:45:27.472633 systemd-logind[1185]: New session 2 of user core. Apr 12 18:45:27.473331 systemd[1]: Started session-2.scope. Apr 12 18:45:27.530530 sshd[1289]: pam_unix(sshd:session): session closed for user core Apr 12 18:45:27.533043 systemd[1]: Started sshd@2-10.0.0.52:22-10.0.0.1:49762.service. Apr 12 18:45:27.534661 systemd[1]: sshd@1-10.0.0.52:22-10.0.0.1:49760.service: Deactivated successfully. Apr 12 18:45:27.535729 systemd-logind[1185]: Session 2 logged out. Waiting for processes to exit. Apr 12 18:45:27.535751 systemd[1]: session-2.scope: Deactivated successfully. Apr 12 18:45:27.537001 systemd-logind[1185]: Removed session 2. Apr 12 18:45:27.576305 sshd[1294]: Accepted publickey for core from 10.0.0.1 port 49762 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:45:27.577665 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:45:27.581569 systemd-logind[1185]: New session 3 of user core. Apr 12 18:45:27.582594 systemd[1]: Started session-3.scope. Apr 12 18:45:27.633257 sshd[1294]: pam_unix(sshd:session): session closed for user core Apr 12 18:45:27.636224 systemd[1]: Started sshd@3-10.0.0.52:22-10.0.0.1:49772.service. Apr 12 18:45:27.636733 systemd[1]: sshd@2-10.0.0.52:22-10.0.0.1:49762.service: Deactivated successfully. Apr 12 18:45:27.637891 systemd[1]: session-3.scope: Deactivated successfully. Apr 12 18:45:27.638068 systemd-logind[1185]: Session 3 logged out. Waiting for processes to exit. Apr 12 18:45:27.639049 systemd-logind[1185]: Removed session 3. 
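The sshd entries above log only the SHA-256 fingerprint of the accepted key (`RSA SHA256:oFTmhZVjs8...`), not the key itself. A minimal sketch for cross-checking such a fingerprint against an authorized_keys entry; the file path is hypothetical and the key for user "core" is not present in this log.

```python
import base64
import hashlib

def openssh_sha256_fingerprint(pubkey_line: str) -> str:
    """SHA-256 fingerprint of an OpenSSH public key line.

    Input is the usual "ssh-rsa AAAAB3... comment" form from authorized_keys;
    output matches what sshd logs, e.g. "SHA256:oFTmhZVjs8...".
    """
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode("ascii").rstrip("=")

if __name__ == "__main__":
    # Hypothetical path; adjust to wherever the core user's keys live.
    with open("/home/core/.ssh/authorized_keys") as fh:
        for line in fh:
            if line.strip() and not line.startswith("#"):
                print(openssh_sha256_fingerprint(line))
```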
Apr 12 18:45:27.678181 sshd[1301]: Accepted publickey for core from 10.0.0.1 port 49772 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:45:27.679476 sshd[1301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:45:27.683610 systemd-logind[1185]: New session 4 of user core. Apr 12 18:45:27.684659 systemd[1]: Started session-4.scope. Apr 12 18:45:27.737512 sshd[1301]: pam_unix(sshd:session): session closed for user core Apr 12 18:45:27.739537 systemd[1]: Started sshd@4-10.0.0.52:22-10.0.0.1:49782.service. Apr 12 18:45:27.740447 systemd[1]: sshd@3-10.0.0.52:22-10.0.0.1:49772.service: Deactivated successfully. Apr 12 18:45:27.741076 systemd[1]: session-4.scope: Deactivated successfully. Apr 12 18:45:27.741921 systemd-logind[1185]: Session 4 logged out. Waiting for processes to exit. Apr 12 18:45:27.742632 systemd-logind[1185]: Removed session 4. Apr 12 18:45:27.779366 sshd[1308]: Accepted publickey for core from 10.0.0.1 port 49782 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:45:27.780375 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:45:27.783498 systemd-logind[1185]: New session 5 of user core. Apr 12 18:45:27.784402 systemd[1]: Started session-5.scope. Apr 12 18:45:27.837629 sudo[1314]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 12 18:45:27.837816 sudo[1314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Apr 12 18:45:28.370849 systemd[1]: Starting systemd-networkd-wait-online.service... Apr 12 18:45:28.375719 systemd[1]: Finished systemd-networkd-wait-online.service. Apr 12 18:45:28.376025 systemd[1]: Reached target network-online.target. Apr 12 18:45:28.377281 systemd[1]: Starting docker.service... 
Apr 12 18:45:28.412761 env[1333]: time="2024-04-12T18:45:28.412706158Z" level=info msg="Starting up" Apr 12 18:45:28.414075 env[1333]: time="2024-04-12T18:45:28.414048585Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:45:28.414075 env[1333]: time="2024-04-12T18:45:28.414068362Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:45:28.414155 env[1333]: time="2024-04-12T18:45:28.414087728Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:45:28.414155 env[1333]: time="2024-04-12T18:45:28.414100642Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:45:28.415953 env[1333]: time="2024-04-12T18:45:28.415888514Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:45:28.416026 env[1333]: time="2024-04-12T18:45:28.416009251Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:45:28.416113 env[1333]: time="2024-04-12T18:45:28.416091014Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:45:28.416238 env[1333]: time="2024-04-12T18:45:28.416220938Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:45:29.269598 env[1333]: time="2024-04-12T18:45:29.269549929Z" level=warning msg="Your kernel does not support cgroup blkio weight" Apr 12 18:45:29.269598 env[1333]: time="2024-04-12T18:45:29.269582500Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Apr 12 18:45:29.269848 env[1333]: time="2024-04-12T18:45:29.269727552Z" level=info msg="Loading containers: start." Apr 12 18:45:29.651936 kernel: Initializing XFRM netlink socket Apr 12 18:45:29.677261 env[1333]: time="2024-04-12T18:45:29.677210854Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Apr 12 18:45:29.721229 systemd-networkd[1071]: docker0: Link UP Apr 12 18:45:29.760550 env[1333]: time="2024-04-12T18:45:29.760510721Z" level=info msg="Loading containers: done." Apr 12 18:45:29.771483 env[1333]: time="2024-04-12T18:45:29.771448699Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 12 18:45:29.771600 env[1333]: time="2024-04-12T18:45:29.771592128Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Apr 12 18:45:29.771676 env[1333]: time="2024-04-12T18:45:29.771656730Z" level=info msg="Daemon has completed initialization" Apr 12 18:45:29.788408 systemd[1]: Started docker.service. Apr 12 18:45:29.791849 env[1333]: time="2024-04-12T18:45:29.791797063Z" level=info msg="API listen on /run/docker.sock" Apr 12 18:45:29.807546 systemd[1]: Reloading. 
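Once dockerd logs `API listen on /run/docker.sock`, the Engine API is reachable over that Unix socket. A minimal stdlib-only sketch of querying it; `GET /version` is a standard Engine API endpoint, while the raw HTTP/1.0 framing is just a convenience for a short example and assumes root or docker-group access.

```python
import json
import socket

DOCKER_SOCK = "/run/docker.sock"  # the path dockerd reports above

def docker_version(sock_path: str = DOCKER_SOCK) -> dict:
    """GET /version from the Docker Engine API over its Unix socket.

    HTTP/1.0 keeps the response unchunked and makes the daemon close the
    connection when it is done, so no HTTP client library is needed.
    """
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
        raw = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            raw += chunk
    _, _, body = raw.partition(b"\r\n\r\n")
    return json.loads(body)

if __name__ == "__main__":
    info = docker_version()
    print(info.get("Version"), info.get("ApiVersion"))
```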
Apr 12 18:45:29.860954 /usr/lib/systemd/system-generators/torcx-generator[1476]: time="2024-04-12T18:45:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:45:29.861330 /usr/lib/systemd/system-generators/torcx-generator[1476]: time="2024-04-12T18:45:29Z" level=info msg="torcx already run" Apr 12 18:45:29.922516 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:45:29.922530 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:45:29.939221 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:45:30.006305 systemd[1]: Started kubelet.service. Apr 12 18:45:30.048380 kubelet[1523]: E0412 18:45:30.048334 1523 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Apr 12 18:45:30.050205 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:45:30.050404 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:45:30.390676 env[1199]: time="2024-04-12T18:45:30.390570623Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.12\"" Apr 12 18:45:31.172661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1811224143.mount: Deactivated successfully. 
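The kubelet failure above (`/var/lib/kubelet/config.yaml: no such file or directory`, exit status 1) is the expected state before the node has been configured; that file and its companions are typically written by `kubeadm init`/`join`, and systemd keeps restarting the unit until they appear. A small preflight sketch, using only paths the kubelet references elsewhere in this log; the script itself is illustrative and not part of the boot.

```python
from pathlib import Path

# Paths the kubelet references later in this log; the restart loop above
# resolves once they exist on disk.
EXPECTED = [
    Path("/var/lib/kubelet/config.yaml"),  # kubelet configuration
    Path("/etc/kubernetes/manifests"),     # static pod manifests
    Path("/etc/kubernetes/pki/ca.crt"),    # cluster CA (client-ca-bundle)
]

def preflight() -> bool:
    ok = True
    for path in EXPECTED:
        if not path.exists():
            print(f"missing: {path}")
            ok = False
    return ok

if __name__ == "__main__":
    raise SystemExit(0 if preflight() else 1)
```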
Apr 12 18:45:32.798853 env[1199]: time="2024-04-12T18:45:32.798787739Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:32.800616 env[1199]: time="2024-04-12T18:45:32.800582334Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:394383b7bc9634d67978b735802d4039f702efd9e5cc2499eac1a8ad78184809,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:32.802238 env[1199]: time="2024-04-12T18:45:32.802208423Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:32.803806 env[1199]: time="2024-04-12T18:45:32.803757418Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cf0c29f585316888225cf254949988bdbedc7ba6238bc9a24bf6f0c508c42b6c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:32.804387 env[1199]: time="2024-04-12T18:45:32.804350960Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.12\" returns image reference \"sha256:394383b7bc9634d67978b735802d4039f702efd9e5cc2499eac1a8ad78184809\"" Apr 12 18:45:32.812359 env[1199]: time="2024-04-12T18:45:32.812331744Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.12\"" Apr 12 18:45:35.037782 env[1199]: time="2024-04-12T18:45:35.037717242Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:35.039581 env[1199]: time="2024-04-12T18:45:35.039529820Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b68567f81c92edc7c53449e3958d8cf5ad474ac00bbbdfcd2bd47558a9bba5d7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:35.041089 env[1199]: time="2024-04-12T18:45:35.041053998Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:35.042597 env[1199]: time="2024-04-12T18:45:35.042565141Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6caa3a4278e87169371d031861e49db21742bcbd8df650d7fe519a1a7f6764af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:35.043162 env[1199]: time="2024-04-12T18:45:35.043134389Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.12\" returns image reference \"sha256:b68567f81c92edc7c53449e3958d8cf5ad474ac00bbbdfcd2bd47558a9bba5d7\"" Apr 12 18:45:35.051107 env[1199]: time="2024-04-12T18:45:35.051076340Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.12\"" Apr 12 18:45:37.117606 env[1199]: time="2024-04-12T18:45:37.117532775Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:37.119404 env[1199]: time="2024-04-12T18:45:37.119358007Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5fab684ed62aaef7130a9e5533c28699a5be380abc7cdbcd32502cca8b56e833,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:37.120917 env[1199]: 
time="2024-04-12T18:45:37.120874821Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:37.122372 env[1199]: time="2024-04-12T18:45:37.122333937Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:b8bb7b17a4f915419575ceb885e128d0bb5ea8e67cb88dbde257988b770a4dce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:37.122924 env[1199]: time="2024-04-12T18:45:37.122879911Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.12\" returns image reference \"sha256:5fab684ed62aaef7130a9e5533c28699a5be380abc7cdbcd32502cca8b56e833\"" Apr 12 18:45:37.131180 env[1199]: time="2024-04-12T18:45:37.131146391Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.12\"" Apr 12 18:45:38.290925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4182803373.mount: Deactivated successfully. Apr 12 18:45:38.918055 env[1199]: time="2024-04-12T18:45:38.917988088Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:38.919851 env[1199]: time="2024-04-12T18:45:38.919805696Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b5590cbba38a0f4f32cbe39a2d3a1a1348612e7550f8b68af937ba5b6e9ba3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:38.922244 env[1199]: time="2024-04-12T18:45:38.922213261Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:38.923380 env[1199]: time="2024-04-12T18:45:38.923355102Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:b0539f35b586abc54ca7660f9bb8a539d010b9e07d20e9e3d529cf0ca35d4ddf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:38.923724 env[1199]: time="2024-04-12T18:45:38.923689438Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.12\" returns image reference \"sha256:2b5590cbba38a0f4f32cbe39a2d3a1a1348612e7550f8b68af937ba5b6e9ba3d\"" Apr 12 18:45:38.932460 env[1199]: time="2024-04-12T18:45:38.932426070Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 12 18:45:39.490006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount905949623.mount: Deactivated successfully. 
Apr 12 18:45:39.495551 env[1199]: time="2024-04-12T18:45:39.495502999Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:39.497367 env[1199]: time="2024-04-12T18:45:39.497335835Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:39.498577 env[1199]: time="2024-04-12T18:45:39.498545022Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:39.499899 env[1199]: time="2024-04-12T18:45:39.499874626Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:39.500279 env[1199]: time="2024-04-12T18:45:39.500257814Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 12 18:45:39.509597 env[1199]: time="2024-04-12T18:45:39.509559635Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Apr 12 18:45:40.149257 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 12 18:45:40.149444 systemd[1]: Stopped kubelet.service. Apr 12 18:45:40.151171 systemd[1]: Started kubelet.service. Apr 12 18:45:40.211275 kubelet[1577]: E0412 18:45:40.211230 1577 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Apr 12 18:45:40.215388 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:45:40.215574 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:45:40.330792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount889588719.mount: Deactivated successfully. 
Apr 12 18:45:45.599156 env[1199]: time="2024-04-12T18:45:45.599086156Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:45.600932 env[1199]: time="2024-04-12T18:45:45.600881953Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:45.602506 env[1199]: time="2024-04-12T18:45:45.602480009Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:45.604240 env[1199]: time="2024-04-12T18:45:45.604209512Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:45.604802 env[1199]: time="2024-04-12T18:45:45.604776274Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\"" Apr 12 18:45:45.619667 env[1199]: time="2024-04-12T18:45:45.619623784Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Apr 12 18:45:46.224166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3527181292.mount: Deactivated successfully. Apr 12 18:45:47.610271 env[1199]: time="2024-04-12T18:45:47.610199533Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:47.612267 env[1199]: time="2024-04-12T18:45:47.612223197Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:47.613862 env[1199]: time="2024-04-12T18:45:47.613831282Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:47.615515 env[1199]: time="2024-04-12T18:45:47.615486185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:47.615902 env[1199]: time="2024-04-12T18:45:47.615873501Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Apr 12 18:45:49.730436 systemd[1]: Stopped kubelet.service. Apr 12 18:45:49.742316 systemd[1]: Reloading. 
Apr 12 18:45:49.797217 /usr/lib/systemd/system-generators/torcx-generator[1686]: time="2024-04-12T18:45:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:45:49.797247 /usr/lib/systemd/system-generators/torcx-generator[1686]: time="2024-04-12T18:45:49Z" level=info msg="torcx already run" Apr 12 18:45:49.855169 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:45:49.855183 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:45:49.871664 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:45:49.942278 systemd[1]: Started kubelet.service. Apr 12 18:45:49.977945 kubelet[1734]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:45:49.977945 kubelet[1734]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:45:49.977945 kubelet[1734]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:45:49.978322 kubelet[1734]: I0412 18:45:49.977976 1734 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:45:50.117742 kubelet[1734]: I0412 18:45:50.117653 1734 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Apr 12 18:45:50.117742 kubelet[1734]: I0412 18:45:50.117680 1734 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:45:50.117933 kubelet[1734]: I0412 18:45:50.117896 1734 server.go:837] "Client rotation is on, will bootstrap in background" Apr 12 18:45:50.121045 kubelet[1734]: I0412 18:45:50.121023 1734 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:45:50.121607 kubelet[1734]: E0412 18:45:50.121596 1734 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.52:6443: connect: connection refused Apr 12 18:45:50.125011 kubelet[1734]: I0412 18:45:50.124992 1734 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 12 18:45:50.125334 kubelet[1734]: I0412 18:45:50.125316 1734 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:45:50.125406 kubelet[1734]: I0412 18:45:50.125390 1734 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Apr 12 18:45:50.125489 kubelet[1734]: I0412 18:45:50.125415 1734 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Apr 12 18:45:50.125489 kubelet[1734]: I0412 18:45:50.125427 1734 container_manager_linux.go:302] "Creating device plugin manager" Apr 12 18:45:50.125535 kubelet[1734]: I0412 18:45:50.125519 1734 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:45:50.127675 kubelet[1734]: I0412 18:45:50.127660 1734 kubelet.go:405] "Attempting to sync node with API server" Apr 12 18:45:50.127730 kubelet[1734]: I0412 18:45:50.127681 1734 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:45:50.127730 kubelet[1734]: I0412 18:45:50.127698 1734 kubelet.go:309] "Adding apiserver pod source" Apr 12 18:45:50.127730 kubelet[1734]: I0412 18:45:50.127712 1734 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:45:50.128110 kubelet[1734]: W0412 18:45:50.128073 1734 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Apr 12 18:45:50.128148 kubelet[1734]: E0412 18:45:50.128121 1734 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Apr 12 18:45:50.128393 kubelet[1734]: W0412 18:45:50.128370 1734 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Apr 12 18:45:50.128479 kubelet[1734]: E0412 18:45:50.128462 1734 reflector.go:148] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Apr 12 18:45:50.132889 kubelet[1734]: I0412 18:45:50.132868 1734 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:45:50.133328 kubelet[1734]: W0412 18:45:50.133314 1734 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 12 18:45:50.133832 kubelet[1734]: I0412 18:45:50.133818 1734 server.go:1168] "Started kubelet" Apr 12 18:45:50.135857 kubelet[1734]: E0412 18:45:50.135735 1734 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17c59cb8a499ac0d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.April, 12, 18, 45, 50, 133791757, time.Local), LastTimestamp:time.Date(2024, time.April, 12, 18, 45, 50, 133791757, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.52:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.52:6443: connect: connection refused'(may retry after sleeping) Apr 12 18:45:50.136014 kubelet[1734]: E0412 18:45:50.135957 1734 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Apr 12 18:45:50.136014 kubelet[1734]: E0412 18:45:50.135980 1734 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:45:50.136519 kubelet[1734]: I0412 18:45:50.136502 1734 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:45:50.137489 kubelet[1734]: I0412 18:45:50.137473 1734 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:45:50.137918 kubelet[1734]: I0412 18:45:50.137890 1734 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Apr 12 18:45:50.139606 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
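The repeated `dial tcp 10.0.0.52:6443: connect: connection refused` errors above simply mean nothing is listening on the API server endpoint yet; the kubelet keeps retrying while it creates the control-plane sandboxes below. A quick reachability probe for that endpoint, with host and port taken from the log; the script is illustrative.

```python
import socket

API_HOST, API_PORT = "10.0.0.52", 6443  # endpoint from the kubelet errors above

def apiserver_reachable(host: str = API_HOST, port: int = API_PORT,
                        timeout: float = 2.0) -> bool:
    """True once something accepts TCP connections on the API server endpoint.

    While the kube-apiserver static pod is still being created this fails
    with "connection refused", which is exactly what the kubelet logs.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"{host}:{port} not reachable: {exc}")
        return False

if __name__ == "__main__":
    apiserver_reachable()
```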
Apr 12 18:45:50.139758 kubelet[1734]: I0412 18:45:50.139741 1734 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:45:50.140074 kubelet[1734]: I0412 18:45:50.140057 1734 volume_manager.go:284] "Starting Kubelet Volume Manager" Apr 12 18:45:50.140200 kubelet[1734]: I0412 18:45:50.140172 1734 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Apr 12 18:45:50.140772 kubelet[1734]: W0412 18:45:50.140545 1734 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Apr 12 18:45:50.140995 kubelet[1734]: E0412 18:45:50.140984 1734 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Apr 12 18:45:50.141331 kubelet[1734]: E0412 18:45:50.141312 1734 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="200ms" Apr 12 18:45:50.152690 kubelet[1734]: I0412 18:45:50.152651 1734 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Apr 12 18:45:50.153515 kubelet[1734]: I0412 18:45:50.153501 1734 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Apr 12 18:45:50.153626 kubelet[1734]: I0412 18:45:50.153607 1734 status_manager.go:207] "Starting to sync pod status with apiserver" Apr 12 18:45:50.153681 kubelet[1734]: I0412 18:45:50.153639 1734 kubelet.go:2257] "Starting kubelet main sync loop" Apr 12 18:45:50.153725 kubelet[1734]: E0412 18:45:50.153695 1734 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:45:50.158305 kubelet[1734]: W0412 18:45:50.158263 1734 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Apr 12 18:45:50.158439 kubelet[1734]: E0412 18:45:50.158423 1734 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Apr 12 18:45:50.173485 kubelet[1734]: I0412 18:45:50.173466 1734 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:45:50.173640 kubelet[1734]: I0412 18:45:50.173618 1734 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:45:50.173640 kubelet[1734]: I0412 18:45:50.173640 1734 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:45:50.176699 kubelet[1734]: I0412 18:45:50.176680 1734 policy_none.go:49] "None policy: Start" Apr 12 18:45:50.177056 kubelet[1734]: I0412 18:45:50.177043 1734 memory_manager.go:169] "Starting memorymanager" policy="None" Apr 12 18:45:50.177056 kubelet[1734]: I0412 18:45:50.177058 1734 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:45:50.181396 kubelet[1734]: I0412 18:45:50.181378 
1734 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:45:50.181696 kubelet[1734]: I0412 18:45:50.181674 1734 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:45:50.182283 kubelet[1734]: E0412 18:45:50.182267 1734 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 12 18:45:50.241325 kubelet[1734]: I0412 18:45:50.241291 1734 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Apr 12 18:45:50.241642 kubelet[1734]: E0412 18:45:50.241628 1734 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Apr 12 18:45:50.253770 kubelet[1734]: I0412 18:45:50.253745 1734 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:45:50.254481 kubelet[1734]: I0412 18:45:50.254467 1734 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:45:50.255410 kubelet[1734]: I0412 18:45:50.255378 1734 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:45:50.342033 kubelet[1734]: E0412 18:45:50.341988 1734 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="400ms" Apr 12 18:45:50.441635 kubelet[1734]: I0412 18:45:50.441462 1734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7dbac70c75130d160b50f346abab2c1d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7dbac70c75130d160b50f346abab2c1d\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:45:50.441635 kubelet[1734]: I0412 18:45:50.441556 1734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:45:50.441635 kubelet[1734]: I0412 18:45:50.441583 1734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:45:50.441863 kubelet[1734]: I0412 18:45:50.441653 1734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:45:50.441863 kubelet[1734]: I0412 18:45:50.441700 1734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7dbac70c75130d160b50f346abab2c1d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7dbac70c75130d160b50f346abab2c1d\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:45:50.441863 kubelet[1734]: I0412 
18:45:50.441728 1734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7dbac70c75130d160b50f346abab2c1d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7dbac70c75130d160b50f346abab2c1d\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:45:50.441863 kubelet[1734]: I0412 18:45:50.441763 1734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f7d78630cba827a770c684e2dbe6ce6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2f7d78630cba827a770c684e2dbe6ce6\") " pod="kube-system/kube-scheduler-localhost" Apr 12 18:45:50.441863 kubelet[1734]: I0412 18:45:50.441827 1734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:45:50.442058 kubelet[1734]: I0412 18:45:50.441855 1734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:45:50.443028 kubelet[1734]: I0412 18:45:50.443009 1734 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Apr 12 18:45:50.443382 kubelet[1734]: E0412 18:45:50.443352 1734 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Apr 12 18:45:50.559226 kubelet[1734]: E0412 18:45:50.559183 1734 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:50.559226 kubelet[1734]: E0412 18:45:50.559201 1734 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:50.560015 env[1199]: time="2024-04-12T18:45:50.559972085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7dbac70c75130d160b50f346abab2c1d,Namespace:kube-system,Attempt:0,}" Apr 12 18:45:50.560350 env[1199]: time="2024-04-12T18:45:50.560115945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b23ea803843027eb81926493bf073366,Namespace:kube-system,Attempt:0,}" Apr 12 18:45:50.561178 kubelet[1734]: E0412 18:45:50.561142 1734 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:50.561594 env[1199]: time="2024-04-12T18:45:50.561562046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2f7d78630cba827a770c684e2dbe6ce6,Namespace:kube-system,Attempt:0,}" Apr 12 18:45:50.742724 kubelet[1734]: E0412 18:45:50.742614 1734 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="800ms" Apr 12 18:45:50.845423 kubelet[1734]: I0412 18:45:50.845385 1734 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Apr 12 18:45:50.845806 kubelet[1734]: E0412 18:45:50.845788 1734 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Apr 12 18:45:50.981263 kubelet[1734]: W0412 18:45:50.981186 1734 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Apr 12 18:45:50.981263 kubelet[1734]: E0412 18:45:50.981255 1734 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Apr 12 18:45:51.266608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount334182024.mount: Deactivated successfully. Apr 12 18:45:51.272301 env[1199]: time="2024-04-12T18:45:51.272244155Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:51.274000 env[1199]: time="2024-04-12T18:45:51.273947860Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:51.275544 env[1199]: time="2024-04-12T18:45:51.275514637Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:51.280015 env[1199]: time="2024-04-12T18:45:51.279986943Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:51.294859 env[1199]: time="2024-04-12T18:45:51.294833440Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:51.296062 env[1199]: time="2024-04-12T18:45:51.296037828Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:51.297541 env[1199]: time="2024-04-12T18:45:51.297504619Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:51.298864 env[1199]: time="2024-04-12T18:45:51.298836005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:51.301737 env[1199]: time="2024-04-12T18:45:51.301706838Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:51.303780 env[1199]: time="2024-04-12T18:45:51.303739168Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:51.304356 env[1199]: time="2024-04-12T18:45:51.304325077Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:51.304841 env[1199]: time="2024-04-12T18:45:51.304820496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:45:51.336802 env[1199]: time="2024-04-12T18:45:51.336387815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:45:51.336802 env[1199]: time="2024-04-12T18:45:51.336441796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:45:51.336802 env[1199]: time="2024-04-12T18:45:51.336451795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:45:51.336802 env[1199]: time="2024-04-12T18:45:51.336648183Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c06bf53ae303b6a1dbf956b8b2d26d1db2c1efddb7bfa476b56dee44da0fa88 pid=1776 runtime=io.containerd.runc.v2 Apr 12 18:45:51.337324 env[1199]: time="2024-04-12T18:45:51.337261152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:45:51.337324 env[1199]: time="2024-04-12T18:45:51.337286910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:45:51.337324 env[1199]: time="2024-04-12T18:45:51.337295577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:45:51.340923 env[1199]: time="2024-04-12T18:45:51.340742740Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c369e06926cacf48c5dca46d75e94ec83e89f454f0f72140631da425b178dfc pid=1783 runtime=io.containerd.runc.v2 Apr 12 18:45:51.344102 env[1199]: time="2024-04-12T18:45:51.344025625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:45:51.344204 env[1199]: time="2024-04-12T18:45:51.344107178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:45:51.344204 env[1199]: time="2024-04-12T18:45:51.344129961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:45:51.344423 env[1199]: time="2024-04-12T18:45:51.344385560Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/18674eb7cf205215c89fbb80cd3f0211d9789aa1e09e2c9ce9e4edf7379056c9 pid=1807 runtime=io.containerd.runc.v2 Apr 12 18:45:51.457936 kubelet[1734]: W0412 18:45:51.457764 1734 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Apr 12 18:45:51.457936 kubelet[1734]: E0412 18:45:51.457828 1734 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Apr 12 18:45:51.496748 env[1199]: time="2024-04-12T18:45:51.496686805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7dbac70c75130d160b50f346abab2c1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"18674eb7cf205215c89fbb80cd3f0211d9789aa1e09e2c9ce9e4edf7379056c9\"" Apr 12 18:45:51.497549 kubelet[1734]: E0412 18:45:51.497526 1734 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:51.500671 env[1199]: time="2024-04-12T18:45:51.499959992Z" level=info msg="CreateContainer within sandbox \"18674eb7cf205215c89fbb80cd3f0211d9789aa1e09e2c9ce9e4edf7379056c9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 12 18:45:51.500671 env[1199]: time="2024-04-12T18:45:51.500214139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2f7d78630cba827a770c684e2dbe6ce6,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c06bf53ae303b6a1dbf956b8b2d26d1db2c1efddb7bfa476b56dee44da0fa88\"" Apr 12 18:45:51.501356 kubelet[1734]: E0412 18:45:51.501339 1734 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:51.503177 env[1199]: time="2024-04-12T18:45:51.503150996Z" level=info msg="CreateContainer within sandbox \"3c06bf53ae303b6a1dbf956b8b2d26d1db2c1efddb7bfa476b56dee44da0fa88\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 12 18:45:51.505704 env[1199]: time="2024-04-12T18:45:51.505669768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b23ea803843027eb81926493bf073366,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c369e06926cacf48c5dca46d75e94ec83e89f454f0f72140631da425b178dfc\"" Apr 12 18:45:51.506492 kubelet[1734]: E0412 18:45:51.506373 1734 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:51.508102 env[1199]: time="2024-04-12T18:45:51.508077753Z" level=info msg="CreateContainer within sandbox \"1c369e06926cacf48c5dca46d75e94ec83e89f454f0f72140631da425b178dfc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 12 18:45:51.528496 env[1199]: time="2024-04-12T18:45:51.528389187Z" level=info msg="CreateContainer within 
sandbox \"18674eb7cf205215c89fbb80cd3f0211d9789aa1e09e2c9ce9e4edf7379056c9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a321863ecef164b0469e1c922d1597a2d83c270d402fea91924f79a9998f10cd\"" Apr 12 18:45:51.529279 env[1199]: time="2024-04-12T18:45:51.529259269Z" level=info msg="StartContainer for \"a321863ecef164b0469e1c922d1597a2d83c270d402fea91924f79a9998f10cd\"" Apr 12 18:45:51.530450 env[1199]: time="2024-04-12T18:45:51.530417891Z" level=info msg="CreateContainer within sandbox \"3c06bf53ae303b6a1dbf956b8b2d26d1db2c1efddb7bfa476b56dee44da0fa88\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bf8a379a3df2087bc1fcc7222bbbbe291f47110b8b93ffb413769f4ae03e0ca2\"" Apr 12 18:45:51.531012 env[1199]: time="2024-04-12T18:45:51.530982099Z" level=info msg="StartContainer for \"bf8a379a3df2087bc1fcc7222bbbbe291f47110b8b93ffb413769f4ae03e0ca2\"" Apr 12 18:45:51.533005 env[1199]: time="2024-04-12T18:45:51.532977079Z" level=info msg="CreateContainer within sandbox \"1c369e06926cacf48c5dca46d75e94ec83e89f454f0f72140631da425b178dfc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ed41e168fc513c8f4c8179bd79f29c67d26528b4811feadaa3b4c13a13d1f6d5\"" Apr 12 18:45:51.533421 env[1199]: time="2024-04-12T18:45:51.533371579Z" level=info msg="StartContainer for \"ed41e168fc513c8f4c8179bd79f29c67d26528b4811feadaa3b4c13a13d1f6d5\"" Apr 12 18:45:51.541333 kubelet[1734]: W0412 18:45:51.541255 1734 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Apr 12 18:45:51.541333 kubelet[1734]: E0412 18:45:51.541313 1734 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Apr 12 18:45:51.542990 kubelet[1734]: E0412 18:45:51.542959 1734 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="1.6s" Apr 12 18:45:51.567740 kubelet[1734]: W0412 18:45:51.567651 1734 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Apr 12 18:45:51.567740 kubelet[1734]: E0412 18:45:51.567707 1734 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Apr 12 18:45:51.608731 env[1199]: time="2024-04-12T18:45:51.608686505Z" level=info msg="StartContainer for \"a321863ecef164b0469e1c922d1597a2d83c270d402fea91924f79a9998f10cd\" returns successfully" Apr 12 18:45:51.618399 env[1199]: time="2024-04-12T18:45:51.618351136Z" level=info msg="StartContainer for \"bf8a379a3df2087bc1fcc7222bbbbe291f47110b8b93ffb413769f4ae03e0ca2\" returns successfully" Apr 12 18:45:51.632932 env[1199]: time="2024-04-12T18:45:51.629771409Z" level=info msg="StartContainer for 
\"ed41e168fc513c8f4c8179bd79f29c67d26528b4811feadaa3b4c13a13d1f6d5\" returns successfully" Apr 12 18:45:51.648153 kubelet[1734]: I0412 18:45:51.648103 1734 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Apr 12 18:45:51.648453 kubelet[1734]: E0412 18:45:51.648431 1734 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Apr 12 18:45:52.165496 kubelet[1734]: E0412 18:45:52.165454 1734 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:52.166981 kubelet[1734]: E0412 18:45:52.166959 1734 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:52.168273 kubelet[1734]: E0412 18:45:52.168248 1734 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:53.170982 kubelet[1734]: E0412 18:45:53.170955 1734 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:53.171581 kubelet[1734]: E0412 18:45:53.171565 1734 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:53.211759 kubelet[1734]: E0412 18:45:53.211717 1734 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 12 18:45:53.249773 kubelet[1734]: I0412 18:45:53.249748 1734 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Apr 12 18:45:53.292865 kubelet[1734]: I0412 18:45:53.292813 1734 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Apr 12 18:45:53.301038 kubelet[1734]: E0412 18:45:53.301001 1734 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:45:53.401389 kubelet[1734]: E0412 18:45:53.401342 1734 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:45:53.501840 kubelet[1734]: E0412 18:45:53.501718 1734 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:45:54.129979 kubelet[1734]: I0412 18:45:54.129930 1734 apiserver.go:52] "Watching apiserver" Apr 12 18:45:54.140942 kubelet[1734]: I0412 18:45:54.140925 1734 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Apr 12 18:45:54.162922 kubelet[1734]: I0412 18:45:54.162883 1734 reconciler.go:41] "Reconciler: start to sync state" Apr 12 18:45:54.883092 kubelet[1734]: E0412 18:45:54.883028 1734 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:55.172402 kubelet[1734]: E0412 18:45:55.172300 1734 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:55.510344 
systemd[1]: Reloading. Apr 12 18:45:55.569292 /usr/lib/systemd/system-generators/torcx-generator[2029]: time="2024-04-12T18:45:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:45:55.569318 /usr/lib/systemd/system-generators/torcx-generator[2029]: time="2024-04-12T18:45:55Z" level=info msg="torcx already run" Apr 12 18:45:55.637652 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:45:55.637670 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:45:55.654496 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:45:55.733864 kubelet[1734]: I0412 18:45:55.733819 1734 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:45:55.733899 systemd[1]: Stopping kubelet.service... Apr 12 18:45:55.752573 systemd[1]: kubelet.service: Deactivated successfully. Apr 12 18:45:55.752941 systemd[1]: Stopped kubelet.service. Apr 12 18:45:55.754528 systemd[1]: Started kubelet.service. Apr 12 18:45:55.801389 kubelet[2077]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:45:55.801389 kubelet[2077]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:45:55.801389 kubelet[2077]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:45:55.801389 kubelet[2077]: I0412 18:45:55.801058 2077 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:45:55.805237 kubelet[2077]: I0412 18:45:55.805216 2077 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Apr 12 18:45:55.805237 kubelet[2077]: I0412 18:45:55.805236 2077 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:45:55.805433 kubelet[2077]: I0412 18:45:55.805416 2077 server.go:837] "Client rotation is on, will bootstrap in background" Apr 12 18:45:55.807297 kubelet[2077]: I0412 18:45:55.807282 2077 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 12 18:45:55.808991 kubelet[2077]: I0412 18:45:55.808621 2077 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:45:55.813350 kubelet[2077]: I0412 18:45:55.813328 2077 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 12 18:45:55.813695 kubelet[2077]: I0412 18:45:55.813669 2077 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:45:55.813751 kubelet[2077]: I0412 18:45:55.813740 2077 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Apr 12 18:45:55.813830 kubelet[2077]: I0412 18:45:55.813758 2077 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Apr 12 18:45:55.813830 kubelet[2077]: I0412 18:45:55.813767 2077 container_manager_linux.go:302] "Creating device plugin manager" Apr 12 18:45:55.813830 kubelet[2077]: I0412 18:45:55.813789 2077 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:45:55.816302 kubelet[2077]: I0412 18:45:55.816279 2077 kubelet.go:405] "Attempting to sync node with API server" Apr 12 18:45:55.816302 kubelet[2077]: I0412 18:45:55.816302 2077 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:45:55.816386 kubelet[2077]: I0412 18:45:55.816320 2077 kubelet.go:309] "Adding apiserver pod source" Apr 12 18:45:55.816386 kubelet[2077]: I0412 18:45:55.816333 2077 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:45:55.824491 kubelet[2077]: I0412 18:45:55.820860 2077 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:45:55.824491 kubelet[2077]: I0412 18:45:55.821796 2077 server.go:1168] "Started kubelet" Apr 12 18:45:55.824491 kubelet[2077]: I0412 18:45:55.823167 2077 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:45:55.824491 kubelet[2077]: I0412 18:45:55.823253 2077 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Apr 12 18:45:55.831729 kubelet[2077]: I0412 18:45:55.831703 2077 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:45:55.834643 kubelet[2077]: I0412 18:45:55.834624 2077 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:45:55.836874 kubelet[2077]: E0412 18:45:55.836848 2077 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" 
mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Apr 12 18:45:55.836995 kubelet[2077]: E0412 18:45:55.836979 2077 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:45:55.839654 kubelet[2077]: I0412 18:45:55.839630 2077 volume_manager.go:284] "Starting Kubelet Volume Manager" Apr 12 18:45:55.842413 kubelet[2077]: I0412 18:45:55.842037 2077 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Apr 12 18:45:55.847104 kubelet[2077]: I0412 18:45:55.847078 2077 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Apr 12 18:45:55.847763 kubelet[2077]: I0412 18:45:55.847742 2077 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Apr 12 18:45:55.847832 kubelet[2077]: I0412 18:45:55.847769 2077 status_manager.go:207] "Starting to sync pod status with apiserver" Apr 12 18:45:55.847832 kubelet[2077]: I0412 18:45:55.847791 2077 kubelet.go:2257] "Starting kubelet main sync loop" Apr 12 18:45:55.847931 kubelet[2077]: E0412 18:45:55.847866 2077 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:45:55.896840 kubelet[2077]: I0412 18:45:55.896808 2077 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:45:55.896840 kubelet[2077]: I0412 18:45:55.896830 2077 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:45:55.901278 kubelet[2077]: I0412 18:45:55.896846 2077 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:45:55.901278 kubelet[2077]: I0412 18:45:55.897049 2077 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 12 18:45:55.901278 kubelet[2077]: I0412 18:45:55.897061 2077 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Apr 12 18:45:55.901278 kubelet[2077]: I0412 18:45:55.897066 2077 policy_none.go:49] "None policy: Start" Apr 12 18:45:55.901278 kubelet[2077]: I0412 18:45:55.897845 2077 memory_manager.go:169] "Starting memorymanager" policy="None" Apr 12 18:45:55.901278 kubelet[2077]: I0412 18:45:55.897863 2077 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:45:55.901278 kubelet[2077]: I0412 18:45:55.897983 2077 state_mem.go:75] "Updated machine memory state" Apr 12 18:45:55.901278 kubelet[2077]: I0412 18:45:55.899028 2077 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:45:55.901278 kubelet[2077]: I0412 18:45:55.899208 2077 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:45:55.908652 sudo[2107]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 12 18:45:55.908818 sudo[2107]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Apr 12 18:45:55.943359 kubelet[2077]: I0412 18:45:55.943331 2077 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Apr 12 18:45:55.948317 kubelet[2077]: I0412 18:45:55.948290 2077 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:45:55.948376 kubelet[2077]: I0412 18:45:55.948370 2077 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:45:55.948403 kubelet[2077]: I0412 18:45:55.948399 2077 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:45:55.955770 kubelet[2077]: I0412 18:45:55.955744 2077 kubelet_node_status.go:108] "Node was previously registered" 
node="localhost" Apr 12 18:45:55.955862 kubelet[2077]: I0412 18:45:55.955818 2077 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Apr 12 18:45:55.957482 kubelet[2077]: E0412 18:45:55.957461 2077 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 12 18:45:56.043322 kubelet[2077]: I0412 18:45:56.043270 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7dbac70c75130d160b50f346abab2c1d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7dbac70c75130d160b50f346abab2c1d\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:45:56.043322 kubelet[2077]: I0412 18:45:56.043321 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:45:56.043507 kubelet[2077]: I0412 18:45:56.043345 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7dbac70c75130d160b50f346abab2c1d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7dbac70c75130d160b50f346abab2c1d\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:45:56.043507 kubelet[2077]: I0412 18:45:56.043393 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:45:56.043507 kubelet[2077]: I0412 18:45:56.043429 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:45:56.043507 kubelet[2077]: I0412 18:45:56.043477 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:45:56.043626 kubelet[2077]: I0412 18:45:56.043512 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:45:56.043626 kubelet[2077]: I0412 18:45:56.043535 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f7d78630cba827a770c684e2dbe6ce6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2f7d78630cba827a770c684e2dbe6ce6\") " 
pod="kube-system/kube-scheduler-localhost" Apr 12 18:45:56.043626 kubelet[2077]: I0412 18:45:56.043551 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7dbac70c75130d160b50f346abab2c1d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7dbac70c75130d160b50f346abab2c1d\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:45:56.253645 kubelet[2077]: E0412 18:45:56.253606 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:56.256423 kubelet[2077]: E0412 18:45:56.256386 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:56.258141 kubelet[2077]: E0412 18:45:56.258126 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:56.360296 sudo[2107]: pam_unix(sudo:session): session closed for user root Apr 12 18:45:56.820557 kubelet[2077]: I0412 18:45:56.820512 2077 apiserver.go:52] "Watching apiserver" Apr 12 18:45:56.842916 kubelet[2077]: I0412 18:45:56.842854 2077 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Apr 12 18:45:56.849015 kubelet[2077]: I0412 18:45:56.848990 2077 reconciler.go:41] "Reconciler: start to sync state" Apr 12 18:45:56.862547 kubelet[2077]: E0412 18:45:56.862509 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:56.862547 kubelet[2077]: E0412 18:45:56.862509 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:56.869949 kubelet[2077]: E0412 18:45:56.866491 2077 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 12 18:45:56.869949 kubelet[2077]: E0412 18:45:56.867530 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:56.880633 kubelet[2077]: I0412 18:45:56.880606 2077 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.880525015 podCreationTimestamp="2024-04-12 18:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:45:56.880520716 +0000 UTC m=+1.122182156" watchObservedRunningTime="2024-04-12 18:45:56.880525015 +0000 UTC m=+1.122186445" Apr 12 18:45:56.886082 kubelet[2077]: I0412 18:45:56.885548 2077 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.885528905 podCreationTimestamp="2024-04-12 18:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:45:56.885446166 +0000 UTC m=+1.127107606" watchObservedRunningTime="2024-04-12 
18:45:56.885528905 +0000 UTC m=+1.127190345" Apr 12 18:45:56.896939 kubelet[2077]: I0412 18:45:56.896875 2077 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.896843233 podCreationTimestamp="2024-04-12 18:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:45:56.890063883 +0000 UTC m=+1.131725333" watchObservedRunningTime="2024-04-12 18:45:56.896843233 +0000 UTC m=+1.138504673" Apr 12 18:45:57.480415 sudo[1314]: pam_unix(sudo:session): session closed for user root Apr 12 18:45:57.481601 sshd[1308]: pam_unix(sshd:session): session closed for user core Apr 12 18:45:57.483357 systemd[1]: sshd@4-10.0.0.52:22-10.0.0.1:49782.service: Deactivated successfully. Apr 12 18:45:57.484147 systemd[1]: session-5.scope: Deactivated successfully. Apr 12 18:45:57.485123 systemd-logind[1185]: Session 5 logged out. Waiting for processes to exit. Apr 12 18:45:57.485941 systemd-logind[1185]: Removed session 5. Apr 12 18:45:57.863540 kubelet[2077]: E0412 18:45:57.863518 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:45:59.497168 kubelet[2077]: E0412 18:45:59.497125 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:03.106599 kubelet[2077]: E0412 18:46:03.106563 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:03.872445 kubelet[2077]: E0412 18:46:03.872415 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:04.090883 kubelet[2077]: E0412 18:46:04.090857 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:04.875084 kubelet[2077]: E0412 18:46:04.874432 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:08.653687 update_engine[1186]: I0412 18:46:08.653609 1186 update_attempter.cc:509] Updating boot flags... Apr 12 18:46:09.044199 kubelet[2077]: I0412 18:46:09.044058 2077 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 12 18:46:09.045017 env[1199]: time="2024-04-12T18:46:09.044975760Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
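The recurring kubelet dns.go:158 errors above ("Nameserver limits exceeded") are emitted when the node's resolv.conf lists more nameservers than the limit of three that the resolver and kubelet will honour; kubelet then drops the extras and logs only the applied line (here 1.1.1.1 1.0.0.1 8.8.8.8). A minimal sketch of that check, with the path and the limit written as illustrative constants rather than values taken from this host:

# Illustrative check for the condition behind the dns.go:158 warnings above.
# RESOLV_CONF and MAX_NAMESERVERS are assumptions for the sketch, not read from this log.
RESOLV_CONF = "/etc/resolv.conf"
MAX_NAMESERVERS = 3

def nameservers(path: str) -> list[str]:
    servers = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
    return servers

if __name__ == "__main__":
    ns = nameservers(RESOLV_CONF)
    if len(ns) > MAX_NAMESERVERS:
        print("nameserver limit exceeded; applied line would be:", " ".join(ns[:MAX_NAMESERVERS]))
    else:
        print("within the limit:", " ".join(ns))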
Apr 12 18:46:09.045582 kubelet[2077]: I0412 18:46:09.045564 2077 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 12 18:46:09.502315 kubelet[2077]: E0412 18:46:09.502270 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:09.823486 kubelet[2077]: I0412 18:46:09.823338 2077 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:46:09.826744 kubelet[2077]: I0412 18:46:09.826704 2077 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:46:09.843895 kubelet[2077]: I0412 18:46:09.843857 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-cilium-config-path\") pod \"cilium-vksd5\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " pod="kube-system/cilium-vksd5" Apr 12 18:46:09.844101 kubelet[2077]: I0412 18:46:09.843926 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-host-proc-sys-kernel\") pod \"cilium-vksd5\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " pod="kube-system/cilium-vksd5" Apr 12 18:46:09.844101 kubelet[2077]: I0412 18:46:09.843958 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fe54a97-380d-467f-8128-46567a2b78f4-lib-modules\") pod \"kube-proxy-st8wp\" (UID: \"0fe54a97-380d-467f-8128-46567a2b78f4\") " pod="kube-system/kube-proxy-st8wp" Apr 12 18:46:09.844101 kubelet[2077]: I0412 18:46:09.843980 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-cilium-cgroup\") pod \"cilium-vksd5\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " pod="kube-system/cilium-vksd5" Apr 12 18:46:09.844101 kubelet[2077]: I0412 18:46:09.844004 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-xtables-lock\") pod \"cilium-vksd5\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " pod="kube-system/cilium-vksd5" Apr 12 18:46:09.844101 kubelet[2077]: I0412 18:46:09.844030 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmm4s\" (UniqueName: \"kubernetes.io/projected/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-kube-api-access-nmm4s\") pod \"cilium-vksd5\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " pod="kube-system/cilium-vksd5" Apr 12 18:46:09.844329 kubelet[2077]: I0412 18:46:09.844057 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-host-proc-sys-net\") pod \"cilium-vksd5\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " pod="kube-system/cilium-vksd5" Apr 12 18:46:09.844329 kubelet[2077]: I0412 18:46:09.844140 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0fe54a97-380d-467f-8128-46567a2b78f4-kube-proxy\") pod 
\"kube-proxy-st8wp\" (UID: \"0fe54a97-380d-467f-8128-46567a2b78f4\") " pod="kube-system/kube-proxy-st8wp" Apr 12 18:46:09.844329 kubelet[2077]: I0412 18:46:09.844177 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxcqp\" (UniqueName: \"kubernetes.io/projected/0fe54a97-380d-467f-8128-46567a2b78f4-kube-api-access-lxcqp\") pod \"kube-proxy-st8wp\" (UID: \"0fe54a97-380d-467f-8128-46567a2b78f4\") " pod="kube-system/kube-proxy-st8wp" Apr 12 18:46:09.844329 kubelet[2077]: I0412 18:46:09.844199 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-etc-cni-netd\") pod \"cilium-vksd5\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " pod="kube-system/cilium-vksd5" Apr 12 18:46:09.844329 kubelet[2077]: I0412 18:46:09.844218 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-clustermesh-secrets\") pod \"cilium-vksd5\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " pod="kube-system/cilium-vksd5" Apr 12 18:46:09.844508 kubelet[2077]: I0412 18:46:09.844237 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-hubble-tls\") pod \"cilium-vksd5\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " pod="kube-system/cilium-vksd5" Apr 12 18:46:09.844508 kubelet[2077]: I0412 18:46:09.844255 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fe54a97-380d-467f-8128-46567a2b78f4-xtables-lock\") pod \"kube-proxy-st8wp\" (UID: \"0fe54a97-380d-467f-8128-46567a2b78f4\") " pod="kube-system/kube-proxy-st8wp" Apr 12 18:46:09.844508 kubelet[2077]: I0412 18:46:09.844327 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-bpf-maps\") pod \"cilium-vksd5\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " pod="kube-system/cilium-vksd5" Apr 12 18:46:09.844508 kubelet[2077]: I0412 18:46:09.844384 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-hostproc\") pod \"cilium-vksd5\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " pod="kube-system/cilium-vksd5" Apr 12 18:46:09.844508 kubelet[2077]: I0412 18:46:09.844410 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-lib-modules\") pod \"cilium-vksd5\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " pod="kube-system/cilium-vksd5" Apr 12 18:46:09.844508 kubelet[2077]: I0412 18:46:09.844441 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-cilium-run\") pod \"cilium-vksd5\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " pod="kube-system/cilium-vksd5" Apr 12 18:46:09.844695 kubelet[2077]: I0412 18:46:09.844469 2077 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-cni-path\") pod \"cilium-vksd5\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " pod="kube-system/cilium-vksd5" Apr 12 18:46:10.032930 kubelet[2077]: I0412 18:46:10.032857 2077 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:46:10.046323 kubelet[2077]: I0412 18:46:10.046251 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wss8b\" (UniqueName: \"kubernetes.io/projected/61cfab05-6ca5-45ef-b0bc-89185e6c4993-kube-api-access-wss8b\") pod \"cilium-operator-574c4bb98d-tr2lx\" (UID: \"61cfab05-6ca5-45ef-b0bc-89185e6c4993\") " pod="kube-system/cilium-operator-574c4bb98d-tr2lx" Apr 12 18:46:10.046323 kubelet[2077]: I0412 18:46:10.046307 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61cfab05-6ca5-45ef-b0bc-89185e6c4993-cilium-config-path\") pod \"cilium-operator-574c4bb98d-tr2lx\" (UID: \"61cfab05-6ca5-45ef-b0bc-89185e6c4993\") " pod="kube-system/cilium-operator-574c4bb98d-tr2lx" Apr 12 18:46:10.129570 kubelet[2077]: E0412 18:46:10.129524 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:10.130448 env[1199]: time="2024-04-12T18:46:10.130392122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-st8wp,Uid:0fe54a97-380d-467f-8128-46567a2b78f4,Namespace:kube-system,Attempt:0,}" Apr 12 18:46:10.131076 kubelet[2077]: E0412 18:46:10.131056 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:10.131385 env[1199]: time="2024-04-12T18:46:10.131355057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vksd5,Uid:8d9044ef-7ed8-4235-ab98-3c161ec2ea2a,Namespace:kube-system,Attempt:0,}" Apr 12 18:46:10.337100 kubelet[2077]: E0412 18:46:10.337045 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:10.337897 env[1199]: time="2024-04-12T18:46:10.337835365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-tr2lx,Uid:61cfab05-6ca5-45ef-b0bc-89185e6c4993,Namespace:kube-system,Attempt:0,}" Apr 12 18:46:10.373064 env[1199]: time="2024-04-12T18:46:10.372984315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:46:10.373064 env[1199]: time="2024-04-12T18:46:10.373032797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:46:10.373064 env[1199]: time="2024-04-12T18:46:10.373042956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:46:10.373326 env[1199]: time="2024-04-12T18:46:10.373225382Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1e6a80cdcc4835ef94d68924045afb4a4b46e706e5e803ec38c378152c60850 pid=2186 runtime=io.containerd.runc.v2 Apr 12 18:46:10.382166 env[1199]: time="2024-04-12T18:46:10.381845774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:46:10.382166 env[1199]: time="2024-04-12T18:46:10.381881492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:46:10.382166 env[1199]: time="2024-04-12T18:46:10.381890439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:46:10.382166 env[1199]: time="2024-04-12T18:46:10.382041876Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f997260435e4a4fa7fc46c7e4dabc8caf3a658fe5dc46b256daac1a8d8a9d10b pid=2218 runtime=io.containerd.runc.v2 Apr 12 18:46:10.385502 env[1199]: time="2024-04-12T18:46:10.385426503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:46:10.385502 env[1199]: time="2024-04-12T18:46:10.385480015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:46:10.385717 env[1199]: time="2024-04-12T18:46:10.385495364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:46:10.386997 env[1199]: time="2024-04-12T18:46:10.385719929Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b pid=2210 runtime=io.containerd.runc.v2 Apr 12 18:46:10.413771 env[1199]: time="2024-04-12T18:46:10.413682404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-st8wp,Uid:0fe54a97-380d-467f-8128-46567a2b78f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1e6a80cdcc4835ef94d68924045afb4a4b46e706e5e803ec38c378152c60850\"" Apr 12 18:46:10.414544 kubelet[2077]: E0412 18:46:10.414512 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:10.417590 env[1199]: time="2024-04-12T18:46:10.417553924Z" level=info msg="CreateContainer within sandbox \"e1e6a80cdcc4835ef94d68924045afb4a4b46e706e5e803ec38c378152c60850\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 12 18:46:10.429642 env[1199]: time="2024-04-12T18:46:10.429580215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vksd5,Uid:8d9044ef-7ed8-4235-ab98-3c161ec2ea2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b\"" Apr 12 18:46:10.430428 kubelet[2077]: E0412 18:46:10.430407 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:10.432439 env[1199]: time="2024-04-12T18:46:10.432388138Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 12 18:46:10.440736 env[1199]: time="2024-04-12T18:46:10.440671984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-tr2lx,Uid:61cfab05-6ca5-45ef-b0bc-89185e6c4993,Namespace:kube-system,Attempt:0,} returns sandbox id \"f997260435e4a4fa7fc46c7e4dabc8caf3a658fe5dc46b256daac1a8d8a9d10b\"" Apr 12 18:46:10.442032 kubelet[2077]: E0412 18:46:10.441500 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:10.489206 env[1199]: time="2024-04-12T18:46:10.489106351Z" level=info msg="CreateContainer within sandbox \"e1e6a80cdcc4835ef94d68924045afb4a4b46e706e5e803ec38c378152c60850\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"99f295ae9d8be29c67bceb2af7bcf82a169fd955b935eb36ed3e85f4f964a9ae\"" Apr 12 18:46:10.490008 env[1199]: time="2024-04-12T18:46:10.489963957Z" level=info msg="StartContainer for \"99f295ae9d8be29c67bceb2af7bcf82a169fd955b935eb36ed3e85f4f964a9ae\"" Apr 12 18:46:10.584212 env[1199]: time="2024-04-12T18:46:10.584162979Z" level=info msg="StartContainer for \"99f295ae9d8be29c67bceb2af7bcf82a169fd955b935eb36ed3e85f4f964a9ae\" returns successfully" Apr 12 18:46:10.887418 kubelet[2077]: E0412 18:46:10.887279 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:10.897524 kubelet[2077]: I0412 18:46:10.897484 2077 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-proxy-st8wp" podStartSLOduration=1.897448991 podCreationTimestamp="2024-04-12 18:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:46:10.896009442 +0000 UTC m=+15.137670882" watchObservedRunningTime="2024-04-12 18:46:10.897448991 +0000 UTC m=+15.139110431" Apr 12 18:46:16.003830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1814768006.mount: Deactivated successfully. Apr 12 18:46:20.426330 env[1199]: time="2024-04-12T18:46:20.426280397Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:46:20.428186 env[1199]: time="2024-04-12T18:46:20.428138561Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:46:20.429745 env[1199]: time="2024-04-12T18:46:20.429711537Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:46:20.430390 env[1199]: time="2024-04-12T18:46:20.430358267Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 12 18:46:20.430862 env[1199]: time="2024-04-12T18:46:20.430842611Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 12 18:46:20.432135 env[1199]: time="2024-04-12T18:46:20.432106665Z" level=info msg="CreateContainer within sandbox \"72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:46:20.444027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount439500975.mount: Deactivated successfully. 
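The PullImage entries above use a pinned reference of the form name:tag@sha256:<manifest digest>, and containerd reports back what appears to be the separate local image ID (sha256:3e35…). A rough sketch of how such a reference splits into its parts; this is a simplification for illustration, not containerd's actual reference parser:

# Simplified split of a pinned image reference like the ones pulled above.
# Real parsers (e.g. containerd's reference handling) also cover registry ports, defaults,
# and untagged references, which this sketch does not.
def split_reference(ref: str) -> dict:
    name, _, digest = ref.partition("@")   # digest part after '@', if any
    repo, sep, tag = name.rpartition(":")  # caveat: misparses untagged refs on a port-qualified registry
    if not sep or "/" in tag:              # no tag present
        repo, tag = name, ""
    return {"repository": repo, "tag": tag or None, "digest": digest or None}

print(split_reference(
    "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
))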
Apr 12 18:46:20.446612 env[1199]: time="2024-04-12T18:46:20.446564151Z" level=info msg="CreateContainer within sandbox \"72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c60d4e5357f6fd9213474575c81ef292af99221b1ef5aedb54227db36d821f26\"" Apr 12 18:46:20.447086 env[1199]: time="2024-04-12T18:46:20.446985315Z" level=info msg="StartContainer for \"c60d4e5357f6fd9213474575c81ef292af99221b1ef5aedb54227db36d821f26\"" Apr 12 18:46:20.819482 env[1199]: time="2024-04-12T18:46:20.819365360Z" level=info msg="StartContainer for \"c60d4e5357f6fd9213474575c81ef292af99221b1ef5aedb54227db36d821f26\" returns successfully" Apr 12 18:46:20.905789 kubelet[2077]: E0412 18:46:20.905758 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:21.013527 env[1199]: time="2024-04-12T18:46:21.013480600Z" level=info msg="shim disconnected" id=c60d4e5357f6fd9213474575c81ef292af99221b1ef5aedb54227db36d821f26 Apr 12 18:46:21.013527 env[1199]: time="2024-04-12T18:46:21.013523391Z" level=warning msg="cleaning up after shim disconnected" id=c60d4e5357f6fd9213474575c81ef292af99221b1ef5aedb54227db36d821f26 namespace=k8s.io Apr 12 18:46:21.013527 env[1199]: time="2024-04-12T18:46:21.013531096Z" level=info msg="cleaning up dead shim" Apr 12 18:46:21.018714 env[1199]: time="2024-04-12T18:46:21.018681586Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:46:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2509 runtime=io.containerd.runc.v2\n" Apr 12 18:46:21.442178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c60d4e5357f6fd9213474575c81ef292af99221b1ef5aedb54227db36d821f26-rootfs.mount: Deactivated successfully. Apr 12 18:46:21.908698 kubelet[2077]: E0412 18:46:21.908674 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:21.910946 env[1199]: time="2024-04-12T18:46:21.910892723Z" level=info msg="CreateContainer within sandbox \"72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 18:46:22.063653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569789785.mount: Deactivated successfully. Apr 12 18:46:22.073921 env[1199]: time="2024-04-12T18:46:22.073855066Z" level=info msg="CreateContainer within sandbox \"72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6e4980760663b0710baf5d5b772ebc46355d2072c06b823188cf8e753244ca3f\"" Apr 12 18:46:22.074519 env[1199]: time="2024-04-12T18:46:22.074473682Z" level=info msg="StartContainer for \"6e4980760663b0710baf5d5b772ebc46355d2072c06b823188cf8e753244ca3f\"" Apr 12 18:46:22.113609 env[1199]: time="2024-04-12T18:46:22.113556649Z" level=info msg="StartContainer for \"6e4980760663b0710baf5d5b772ebc46355d2072c06b823188cf8e753244ca3f\" returns successfully" Apr 12 18:46:22.122954 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 18:46:22.123205 systemd[1]: Stopped systemd-sysctl.service. Apr 12 18:46:22.123383 systemd[1]: Stopping systemd-sysctl.service... Apr 12 18:46:22.125031 systemd[1]: Starting systemd-sysctl.service... 
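The mount unit names in the systemd lines above (for example var-lib-containerd-tmpmounts-containerd\x2dmount3569789785.mount) follow systemd's path escaping, where "/" becomes "-" and a literal "-" is encoded as \x2d. A simplified sketch of that encoding, covering the common cases only (systemd-escape itself applies a few more rules, such as escaping a leading dot):

# Simplified systemd-style path escaping, matching the mount unit names logged above.
# Not a full reimplementation of systemd-escape; shown for the common cases only.
def escape_path(path: str) -> str:
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))  # e.g. '-' -> \x2d
    return "".join(out)

print(escape_path("/var/lib/containerd/tmpmounts/containerd-mount3569789785") + ".mount")
# expected: var-lib-containerd-tmpmounts-containerd\x2dmount3569789785.mount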
Apr 12 18:46:22.136373 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:46:22.340055 env[1199]: time="2024-04-12T18:46:22.340003900Z" level=info msg="shim disconnected" id=6e4980760663b0710baf5d5b772ebc46355d2072c06b823188cf8e753244ca3f Apr 12 18:46:22.340240 env[1199]: time="2024-04-12T18:46:22.340052853Z" level=warning msg="cleaning up after shim disconnected" id=6e4980760663b0710baf5d5b772ebc46355d2072c06b823188cf8e753244ca3f namespace=k8s.io Apr 12 18:46:22.340240 env[1199]: time="2024-04-12T18:46:22.340068252Z" level=info msg="cleaning up dead shim" Apr 12 18:46:22.348052 env[1199]: time="2024-04-12T18:46:22.348008707Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:46:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2573 runtime=io.containerd.runc.v2\n" Apr 12 18:46:22.694988 env[1199]: time="2024-04-12T18:46:22.694861659Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:46:22.697265 env[1199]: time="2024-04-12T18:46:22.697217769Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:46:22.698940 env[1199]: time="2024-04-12T18:46:22.698892385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:46:22.699448 env[1199]: time="2024-04-12T18:46:22.699417164Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 12 18:46:22.701691 env[1199]: time="2024-04-12T18:46:22.701655884Z" level=info msg="CreateContainer within sandbox \"f997260435e4a4fa7fc46c7e4dabc8caf3a658fe5dc46b256daac1a8d8a9d10b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 12 18:46:22.711642 env[1199]: time="2024-04-12T18:46:22.711606487Z" level=info msg="CreateContainer within sandbox \"f997260435e4a4fa7fc46c7e4dabc8caf3a658fe5dc46b256daac1a8d8a9d10b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307\"" Apr 12 18:46:22.712076 env[1199]: time="2024-04-12T18:46:22.712000218Z" level=info msg="StartContainer for \"b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307\"" Apr 12 18:46:22.749182 env[1199]: time="2024-04-12T18:46:22.749109486Z" level=info msg="StartContainer for \"b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307\" returns successfully" Apr 12 18:46:22.911874 kubelet[2077]: E0412 18:46:22.911834 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:22.913771 env[1199]: time="2024-04-12T18:46:22.913692275Z" level=info msg="CreateContainer within sandbox \"72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 18:46:22.914429 kubelet[2077]: E0412 18:46:22.914401 2077 dns.go:158] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:22.931119 env[1199]: time="2024-04-12T18:46:22.931072420Z" level=info msg="CreateContainer within sandbox \"72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0f807ce89bafbf1184df06fd544ff5af560a7412edda3a5e964d077c7639d230\"" Apr 12 18:46:22.931603 env[1199]: time="2024-04-12T18:46:22.931565821Z" level=info msg="StartContainer for \"0f807ce89bafbf1184df06fd544ff5af560a7412edda3a5e964d077c7639d230\"" Apr 12 18:46:22.934413 kubelet[2077]: I0412 18:46:22.934376 2077 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-tr2lx" podStartSLOduration=0.677072252 podCreationTimestamp="2024-04-12 18:46:10 +0000 UTC" firstStartedPulling="2024-04-12 18:46:10.44237887 +0000 UTC m=+14.684040300" lastFinishedPulling="2024-04-12 18:46:22.699648069 +0000 UTC m=+26.941309519" observedRunningTime="2024-04-12 18:46:22.934159068 +0000 UTC m=+27.175820508" watchObservedRunningTime="2024-04-12 18:46:22.934341471 +0000 UTC m=+27.176002911" Apr 12 18:46:22.978082 env[1199]: time="2024-04-12T18:46:22.977990003Z" level=info msg="StartContainer for \"0f807ce89bafbf1184df06fd544ff5af560a7412edda3a5e964d077c7639d230\" returns successfully" Apr 12 18:46:23.258264 env[1199]: time="2024-04-12T18:46:23.258104111Z" level=info msg="shim disconnected" id=0f807ce89bafbf1184df06fd544ff5af560a7412edda3a5e964d077c7639d230 Apr 12 18:46:23.258264 env[1199]: time="2024-04-12T18:46:23.258173482Z" level=warning msg="cleaning up after shim disconnected" id=0f807ce89bafbf1184df06fd544ff5af560a7412edda3a5e964d077c7639d230 namespace=k8s.io Apr 12 18:46:23.258264 env[1199]: time="2024-04-12T18:46:23.258186106Z" level=info msg="cleaning up dead shim" Apr 12 18:46:23.286504 env[1199]: time="2024-04-12T18:46:23.285705243Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:46:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2669 runtime=io.containerd.runc.v2\n" Apr 12 18:46:23.442555 systemd[1]: run-containerd-runc-k8s.io-b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307-runc.u2ku5m.mount: Deactivated successfully. 
Apr 12 18:46:23.927986 kubelet[2077]: E0412 18:46:23.926155 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:23.927986 kubelet[2077]: E0412 18:46:23.926816 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:23.932577 env[1199]: time="2024-04-12T18:46:23.930953022Z" level=info msg="CreateContainer within sandbox \"72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 12 18:46:23.986073 env[1199]: time="2024-04-12T18:46:23.981401960Z" level=info msg="CreateContainer within sandbox \"72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"27cdd5ee52882dec884cb7ceba5999493b14aae5ce28f0e049b9b4e2d85bf2db\"" Apr 12 18:46:23.986073 env[1199]: time="2024-04-12T18:46:23.982405150Z" level=info msg="StartContainer for \"27cdd5ee52882dec884cb7ceba5999493b14aae5ce28f0e049b9b4e2d85bf2db\"" Apr 12 18:46:24.060387 env[1199]: time="2024-04-12T18:46:24.060266390Z" level=info msg="StartContainer for \"27cdd5ee52882dec884cb7ceba5999493b14aae5ce28f0e049b9b4e2d85bf2db\" returns successfully" Apr 12 18:46:24.100762 env[1199]: time="2024-04-12T18:46:24.100666302Z" level=info msg="shim disconnected" id=27cdd5ee52882dec884cb7ceba5999493b14aae5ce28f0e049b9b4e2d85bf2db Apr 12 18:46:24.100762 env[1199]: time="2024-04-12T18:46:24.100725794Z" level=warning msg="cleaning up after shim disconnected" id=27cdd5ee52882dec884cb7ceba5999493b14aae5ce28f0e049b9b4e2d85bf2db namespace=k8s.io Apr 12 18:46:24.100762 env[1199]: time="2024-04-12T18:46:24.100738138Z" level=info msg="cleaning up dead shim" Apr 12 18:46:24.122172 env[1199]: time="2024-04-12T18:46:24.122090594Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:46:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2720 runtime=io.containerd.runc.v2\ntime=\"2024-04-12T18:46:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Apr 12 18:46:24.442973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27cdd5ee52882dec884cb7ceba5999493b14aae5ce28f0e049b9b4e2d85bf2db-rootfs.mount: Deactivated successfully. Apr 12 18:46:24.940279 kubelet[2077]: E0412 18:46:24.938752 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:24.943311 env[1199]: time="2024-04-12T18:46:24.943124598Z" level=info msg="CreateContainer within sandbox \"72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 12 18:46:25.010697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3682703218.mount: Deactivated successfully. 
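The pod_startup_latency_tracker entry for cilium-operator above (podStartSLOduration=0.677072252) appears to report the time from pod creation until the pod is observed running, minus the time spent pulling images. A rough cross-check against the timestamps in that entry, truncated to microsecond precision; the formula here is an assumption about the tracker, not taken from kubelet source:

from datetime import datetime

# Rough cross-check of the cilium-operator podStartSLOduration logged above, assuming
# SLO duration = (observedRunningTime - podCreationTimestamp) - (lastFinishedPulling - firstStartedPulling).
# Timestamps are copied from the log entry and truncated to microseconds for strptime.
def ts(s: str) -> datetime:
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f")

created    = datetime.strptime("2024-04-12 18:46:10", "%Y-%m-%d %H:%M:%S")
first_pull = ts("2024-04-12 18:46:10.442378")
last_pull  = ts("2024-04-12 18:46:22.699648")
running    = ts("2024-04-12 18:46:22.934159")

slo = (running - created) - (last_pull - first_pull)
print(f"approx podStartSLOduration: {slo.total_seconds():.3f}s")  # log reports 0.677072252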
Apr 12 18:46:25.038695 env[1199]: time="2024-04-12T18:46:25.038100482Z" level=info msg="CreateContainer within sandbox \"72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2\"" Apr 12 18:46:25.046198 env[1199]: time="2024-04-12T18:46:25.046039210Z" level=info msg="StartContainer for \"8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2\"" Apr 12 18:46:25.136020 env[1199]: time="2024-04-12T18:46:25.131333864Z" level=info msg="StartContainer for \"8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2\" returns successfully" Apr 12 18:46:25.264946 kubelet[2077]: I0412 18:46:25.264785 2077 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Apr 12 18:46:25.315813 kubelet[2077]: I0412 18:46:25.315662 2077 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:46:25.326240 kubelet[2077]: I0412 18:46:25.324637 2077 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:46:25.372757 kubelet[2077]: I0412 18:46:25.372511 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42078f68-b321-4d79-8e65-e7bd7568cbf0-config-volume\") pod \"coredns-5d78c9869d-x55t9\" (UID: \"42078f68-b321-4d79-8e65-e7bd7568cbf0\") " pod="kube-system/coredns-5d78c9869d-x55t9" Apr 12 18:46:25.372757 kubelet[2077]: I0412 18:46:25.372584 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmzl4\" (UniqueName: \"kubernetes.io/projected/42078f68-b321-4d79-8e65-e7bd7568cbf0-kube-api-access-xmzl4\") pod \"coredns-5d78c9869d-x55t9\" (UID: \"42078f68-b321-4d79-8e65-e7bd7568cbf0\") " pod="kube-system/coredns-5d78c9869d-x55t9" Apr 12 18:46:25.372757 kubelet[2077]: I0412 18:46:25.372616 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqjh7\" (UniqueName: \"kubernetes.io/projected/27dbe561-797c-4139-b445-15ab5bd5508c-kube-api-access-rqjh7\") pod \"coredns-5d78c9869d-cq5mn\" (UID: \"27dbe561-797c-4139-b445-15ab5bd5508c\") " pod="kube-system/coredns-5d78c9869d-cq5mn" Apr 12 18:46:25.372757 kubelet[2077]: I0412 18:46:25.372644 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27dbe561-797c-4139-b445-15ab5bd5508c-config-volume\") pod \"coredns-5d78c9869d-cq5mn\" (UID: \"27dbe561-797c-4139-b445-15ab5bd5508c\") " pod="kube-system/coredns-5d78c9869d-cq5mn" Apr 12 18:46:25.627981 kubelet[2077]: E0412 18:46:25.621387 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:25.628213 env[1199]: time="2024-04-12T18:46:25.622352083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-cq5mn,Uid:27dbe561-797c-4139-b445-15ab5bd5508c,Namespace:kube-system,Attempt:0,}" Apr 12 18:46:25.628562 kubelet[2077]: E0412 18:46:25.628533 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:25.630823 env[1199]: time="2024-04-12T18:46:25.629251724Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5d78c9869d-x55t9,Uid:42078f68-b321-4d79-8e65-e7bd7568cbf0,Namespace:kube-system,Attempt:0,}" Apr 12 18:46:25.956028 kubelet[2077]: E0412 18:46:25.949589 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:26.007950 kubelet[2077]: I0412 18:46:26.006797 2077 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vksd5" podStartSLOduration=7.007447207 podCreationTimestamp="2024-04-12 18:46:09 +0000 UTC" firstStartedPulling="2024-04-12 18:46:10.431384936 +0000 UTC m=+14.673046377" lastFinishedPulling="2024-04-12 18:46:20.430690334 +0000 UTC m=+24.672351764" observedRunningTime="2024-04-12 18:46:26.006746262 +0000 UTC m=+30.248407702" watchObservedRunningTime="2024-04-12 18:46:26.006752594 +0000 UTC m=+30.248414034" Apr 12 18:46:26.955127 kubelet[2077]: E0412 18:46:26.955097 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:27.956715 kubelet[2077]: E0412 18:46:27.956660 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:28.170323 systemd-networkd[1071]: cilium_host: Link UP Apr 12 18:46:28.170506 systemd-networkd[1071]: cilium_net: Link UP Apr 12 18:46:28.170510 systemd-networkd[1071]: cilium_net: Gained carrier Apr 12 18:46:28.170708 systemd-networkd[1071]: cilium_host: Gained carrier Apr 12 18:46:28.180261 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Apr 12 18:46:28.180042 systemd-networkd[1071]: cilium_host: Gained IPv6LL Apr 12 18:46:28.361656 systemd-networkd[1071]: cilium_vxlan: Link UP Apr 12 18:46:28.361665 systemd-networkd[1071]: cilium_vxlan: Gained carrier Apr 12 18:46:28.680172 kernel: NET: Registered PF_ALG protocol family Apr 12 18:46:28.707367 systemd-networkd[1071]: cilium_net: Gained IPv6LL Apr 12 18:46:29.905505 systemd-networkd[1071]: lxc_health: Link UP Apr 12 18:46:29.912581 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Apr 12 18:46:29.911892 systemd-networkd[1071]: lxc_health: Gained carrier Apr 12 18:46:30.112087 systemd-networkd[1071]: cilium_vxlan: Gained IPv6LL Apr 12 18:46:30.134407 kubelet[2077]: E0412 18:46:30.134349 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:30.287330 systemd-networkd[1071]: lxc1088a4a38e61: Link UP Apr 12 18:46:30.319075 kernel: eth0: renamed from tmpfcd3b Apr 12 18:46:30.323087 systemd-networkd[1071]: lxc1088a4a38e61: Gained carrier Apr 12 18:46:30.326452 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1088a4a38e61: link becomes ready Apr 12 18:46:30.387282 systemd-networkd[1071]: lxcaabeeb443c54: Link UP Apr 12 18:46:30.407990 kernel: eth0: renamed from tmp95878 Apr 12 18:46:30.416676 systemd-networkd[1071]: lxcaabeeb443c54: Gained carrier Apr 12 18:46:30.416976 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcaabeeb443c54: link becomes ready Apr 12 18:46:30.963934 kubelet[2077]: E0412 18:46:30.962226 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 
18:46:31.712428 systemd-networkd[1071]: lxc1088a4a38e61: Gained IPv6LL Apr 12 18:46:31.905410 systemd-networkd[1071]: lxcaabeeb443c54: Gained IPv6LL Apr 12 18:46:31.964420 kubelet[2077]: E0412 18:46:31.964392 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:31.968202 systemd-networkd[1071]: lxc_health: Gained IPv6LL Apr 12 18:46:34.399576 systemd[1]: Started sshd@5-10.0.0.52:22-10.0.0.1:51556.service. Apr 12 18:46:34.476768 sshd[3293]: Accepted publickey for core from 10.0.0.1 port 51556 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:46:34.478470 sshd[3293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:34.487674 systemd[1]: Started session-6.scope. Apr 12 18:46:34.493474 systemd-logind[1185]: New session 6 of user core. Apr 12 18:46:34.714477 sshd[3293]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:34.717326 systemd[1]: sshd@5-10.0.0.52:22-10.0.0.1:51556.service: Deactivated successfully. Apr 12 18:46:34.718068 systemd[1]: session-6.scope: Deactivated successfully. Apr 12 18:46:34.718849 systemd-logind[1185]: Session 6 logged out. Waiting for processes to exit. Apr 12 18:46:34.719657 systemd-logind[1185]: Removed session 6. Apr 12 18:46:35.180579 env[1199]: time="2024-04-12T18:46:35.180399444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:46:35.180579 env[1199]: time="2024-04-12T18:46:35.180439650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:46:35.180579 env[1199]: time="2024-04-12T18:46:35.180448907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:46:35.180995 env[1199]: time="2024-04-12T18:46:35.180725968Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fcd3bdac805677238526ebc38650cba36705fdc8ba690fa64de2b75659f0b1a2 pid=3329 runtime=io.containerd.runc.v2 Apr 12 18:46:35.180995 env[1199]: time="2024-04-12T18:46:35.180725988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:46:35.180995 env[1199]: time="2024-04-12T18:46:35.180794557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:46:35.180995 env[1199]: time="2024-04-12T18:46:35.180805227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:46:35.181086 env[1199]: time="2024-04-12T18:46:35.181014481Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/95878d1734b0ad4a449d6a9fa836896cf8f03642f1611ccddcd7c24679c870a1 pid=3337 runtime=io.containerd.runc.v2 Apr 12 18:46:35.206664 systemd-resolved[1134]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 12 18:46:35.207956 systemd-resolved[1134]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 12 18:46:35.232829 env[1199]: time="2024-04-12T18:46:35.231335813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-x55t9,Uid:42078f68-b321-4d79-8e65-e7bd7568cbf0,Namespace:kube-system,Attempt:0,} returns sandbox id \"95878d1734b0ad4a449d6a9fa836896cf8f03642f1611ccddcd7c24679c870a1\"" Apr 12 18:46:35.234543 kubelet[2077]: E0412 18:46:35.232256 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:35.237272 env[1199]: time="2024-04-12T18:46:35.237217815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-cq5mn,Uid:27dbe561-797c-4139-b445-15ab5bd5508c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcd3bdac805677238526ebc38650cba36705fdc8ba690fa64de2b75659f0b1a2\"" Apr 12 18:46:35.237776 env[1199]: time="2024-04-12T18:46:35.237757880Z" level=info msg="CreateContainer within sandbox \"95878d1734b0ad4a449d6a9fa836896cf8f03642f1611ccddcd7c24679c870a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 18:46:35.238785 kubelet[2077]: E0412 18:46:35.238611 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:35.241769 env[1199]: time="2024-04-12T18:46:35.241727380Z" level=info msg="CreateContainer within sandbox \"fcd3bdac805677238526ebc38650cba36705fdc8ba690fa64de2b75659f0b1a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 18:46:35.256049 env[1199]: time="2024-04-12T18:46:35.256006523Z" level=info msg="CreateContainer within sandbox \"95878d1734b0ad4a449d6a9fa836896cf8f03642f1611ccddcd7c24679c870a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8639cd3f315ccc0a69ba905024b484d69bc07c1390a27b0b4884bfa276d025c2\"" Apr 12 18:46:35.256993 env[1199]: time="2024-04-12T18:46:35.256971887Z" level=info msg="StartContainer for \"8639cd3f315ccc0a69ba905024b484d69bc07c1390a27b0b4884bfa276d025c2\"" Apr 12 18:46:35.263012 env[1199]: time="2024-04-12T18:46:35.262949400Z" level=info msg="CreateContainer within sandbox \"fcd3bdac805677238526ebc38650cba36705fdc8ba690fa64de2b75659f0b1a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c09e4c29bd2cf57f6f440a44810cd086aafe5be6bc6f414131bf31094558441\"" Apr 12 18:46:35.263765 env[1199]: time="2024-04-12T18:46:35.263743031Z" level=info msg="StartContainer for \"7c09e4c29bd2cf57f6f440a44810cd086aafe5be6bc6f414131bf31094558441\"" Apr 12 18:46:35.311321 env[1199]: time="2024-04-12T18:46:35.311219868Z" level=info msg="StartContainer for \"8639cd3f315ccc0a69ba905024b484d69bc07c1390a27b0b4884bfa276d025c2\" returns successfully" Apr 12 18:46:35.318893 env[1199]: time="2024-04-12T18:46:35.318852631Z" level=info msg="StartContainer 
for \"7c09e4c29bd2cf57f6f440a44810cd086aafe5be6bc6f414131bf31094558441\" returns successfully" Apr 12 18:46:35.981276 kubelet[2077]: E0412 18:46:35.981252 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:35.983739 kubelet[2077]: E0412 18:46:35.983715 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:36.027959 kubelet[2077]: I0412 18:46:36.027032 2077 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-x55t9" podStartSLOduration=26.026994349 podCreationTimestamp="2024-04-12 18:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:46:36.000742043 +0000 UTC m=+40.242403483" watchObservedRunningTime="2024-04-12 18:46:36.026994349 +0000 UTC m=+40.268655789" Apr 12 18:46:36.041234 kubelet[2077]: I0412 18:46:36.040426 2077 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-cq5mn" podStartSLOduration=26.040377214 podCreationTimestamp="2024-04-12 18:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:46:36.027269205 +0000 UTC m=+40.268930635" watchObservedRunningTime="2024-04-12 18:46:36.040377214 +0000 UTC m=+40.282038654" Apr 12 18:46:36.185148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2388371292.mount: Deactivated successfully. Apr 12 18:46:36.985989 kubelet[2077]: E0412 18:46:36.985946 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:36.986443 kubelet[2077]: E0412 18:46:36.986042 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:37.988153 kubelet[2077]: E0412 18:46:37.988124 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:37.988524 kubelet[2077]: E0412 18:46:37.988256 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:46:39.716833 systemd[1]: Started sshd@6-10.0.0.52:22-10.0.0.1:42516.service. Apr 12 18:46:39.757657 sshd[3486]: Accepted publickey for core from 10.0.0.1 port 42516 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:46:39.758596 sshd[3486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:39.761816 systemd-logind[1185]: New session 7 of user core. Apr 12 18:46:39.762522 systemd[1]: Started session-7.scope. Apr 12 18:46:39.872187 sshd[3486]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:39.874126 systemd[1]: sshd@6-10.0.0.52:22-10.0.0.1:42516.service: Deactivated successfully. Apr 12 18:46:39.874969 systemd-logind[1185]: Session 7 logged out. Waiting for processes to exit. Apr 12 18:46:39.875006 systemd[1]: session-7.scope: Deactivated successfully. 
Apr 12 18:46:39.875695 systemd-logind[1185]: Removed session 7. Apr 12 18:46:44.875316 systemd[1]: Started sshd@7-10.0.0.52:22-10.0.0.1:42520.service. Apr 12 18:46:44.914559 sshd[3503]: Accepted publickey for core from 10.0.0.1 port 42520 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:46:44.915324 sshd[3503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:44.918117 systemd-logind[1185]: New session 8 of user core. Apr 12 18:46:44.918973 systemd[1]: Started session-8.scope. Apr 12 18:46:45.017468 sshd[3503]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:45.019351 systemd[1]: sshd@7-10.0.0.52:22-10.0.0.1:42520.service: Deactivated successfully. Apr 12 18:46:45.020390 systemd-logind[1185]: Session 8 logged out. Waiting for processes to exit. Apr 12 18:46:45.020458 systemd[1]: session-8.scope: Deactivated successfully. Apr 12 18:46:45.021161 systemd-logind[1185]: Removed session 8. Apr 12 18:46:50.020688 systemd[1]: Started sshd@8-10.0.0.52:22-10.0.0.1:60424.service. Apr 12 18:46:50.061647 sshd[3518]: Accepted publickey for core from 10.0.0.1 port 60424 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:46:50.062933 sshd[3518]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:50.066487 systemd-logind[1185]: New session 9 of user core. Apr 12 18:46:50.067163 systemd[1]: Started session-9.scope. Apr 12 18:46:50.174686 sshd[3518]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:50.176452 systemd[1]: sshd@8-10.0.0.52:22-10.0.0.1:60424.service: Deactivated successfully. Apr 12 18:46:50.177294 systemd-logind[1185]: Session 9 logged out. Waiting for processes to exit. Apr 12 18:46:50.177332 systemd[1]: session-9.scope: Deactivated successfully. Apr 12 18:46:50.178179 systemd-logind[1185]: Removed session 9. Apr 12 18:46:55.177762 systemd[1]: Started sshd@9-10.0.0.52:22-10.0.0.1:60432.service. Apr 12 18:46:55.217665 sshd[3534]: Accepted publickey for core from 10.0.0.1 port 60432 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:46:55.218595 sshd[3534]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:55.222127 systemd-logind[1185]: New session 10 of user core. Apr 12 18:46:55.223198 systemd[1]: Started session-10.scope. Apr 12 18:46:55.325294 sshd[3534]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:55.327532 systemd[1]: Started sshd@10-10.0.0.52:22-10.0.0.1:60440.service. Apr 12 18:46:55.328781 systemd[1]: sshd@9-10.0.0.52:22-10.0.0.1:60432.service: Deactivated successfully. Apr 12 18:46:55.329888 systemd[1]: session-10.scope: Deactivated successfully. Apr 12 18:46:55.330047 systemd-logind[1185]: Session 10 logged out. Waiting for processes to exit. Apr 12 18:46:55.331450 systemd-logind[1185]: Removed session 10. Apr 12 18:46:55.369308 sshd[3548]: Accepted publickey for core from 10.0.0.1 port 60440 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:46:55.370512 sshd[3548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:55.373997 systemd-logind[1185]: New session 11 of user core. Apr 12 18:46:55.374941 systemd[1]: Started session-11.scope. Apr 12 18:46:56.093699 sshd[3548]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:56.096258 systemd[1]: Started sshd@11-10.0.0.52:22-10.0.0.1:60448.service. 
Apr 12 18:46:56.101046 systemd[1]: sshd@10-10.0.0.52:22-10.0.0.1:60440.service: Deactivated successfully. Apr 12 18:46:56.103328 systemd[1]: session-11.scope: Deactivated successfully. Apr 12 18:46:56.104037 systemd-logind[1185]: Session 11 logged out. Waiting for processes to exit. Apr 12 18:46:56.104933 systemd-logind[1185]: Removed session 11. Apr 12 18:46:56.137635 sshd[3563]: Accepted publickey for core from 10.0.0.1 port 60448 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:46:56.138790 sshd[3563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:56.142267 systemd-logind[1185]: New session 12 of user core. Apr 12 18:46:56.143222 systemd[1]: Started session-12.scope. Apr 12 18:46:56.248399 sshd[3563]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:56.250321 systemd[1]: sshd@11-10.0.0.52:22-10.0.0.1:60448.service: Deactivated successfully. Apr 12 18:46:56.251274 systemd-logind[1185]: Session 12 logged out. Waiting for processes to exit. Apr 12 18:46:56.251312 systemd[1]: session-12.scope: Deactivated successfully. Apr 12 18:46:56.252062 systemd-logind[1185]: Removed session 12. Apr 12 18:47:01.250787 systemd[1]: Started sshd@12-10.0.0.52:22-10.0.0.1:60282.service. Apr 12 18:47:01.290228 sshd[3579]: Accepted publickey for core from 10.0.0.1 port 60282 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:47:01.291250 sshd[3579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:47:01.294420 systemd-logind[1185]: New session 13 of user core. Apr 12 18:47:01.295184 systemd[1]: Started session-13.scope. Apr 12 18:47:01.395953 sshd[3579]: pam_unix(sshd:session): session closed for user core Apr 12 18:47:01.397973 systemd[1]: sshd@12-10.0.0.52:22-10.0.0.1:60282.service: Deactivated successfully. Apr 12 18:47:01.398722 systemd[1]: session-13.scope: Deactivated successfully. Apr 12 18:47:01.399455 systemd-logind[1185]: Session 13 logged out. Waiting for processes to exit. Apr 12 18:47:01.400070 systemd-logind[1185]: Removed session 13. Apr 12 18:47:06.400034 systemd[1]: Started sshd@13-10.0.0.52:22-10.0.0.1:60290.service. Apr 12 18:47:06.439717 sshd[3593]: Accepted publickey for core from 10.0.0.1 port 60290 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:47:06.440693 sshd[3593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:47:06.443843 systemd-logind[1185]: New session 14 of user core. Apr 12 18:47:06.444839 systemd[1]: Started session-14.scope. Apr 12 18:47:06.543015 sshd[3593]: pam_unix(sshd:session): session closed for user core Apr 12 18:47:06.545866 systemd[1]: Started sshd@14-10.0.0.52:22-10.0.0.1:60298.service. Apr 12 18:47:06.546413 systemd[1]: sshd@13-10.0.0.52:22-10.0.0.1:60290.service: Deactivated successfully. Apr 12 18:47:06.547498 systemd[1]: session-14.scope: Deactivated successfully. Apr 12 18:47:06.549004 systemd-logind[1185]: Session 14 logged out. Waiting for processes to exit. Apr 12 18:47:06.549843 systemd-logind[1185]: Removed session 14. Apr 12 18:47:06.587010 sshd[3606]: Accepted publickey for core from 10.0.0.1 port 60298 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:47:06.587996 sshd[3606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:47:06.591427 systemd-logind[1185]: New session 15 of user core. Apr 12 18:47:06.592186 systemd[1]: Started session-15.scope. 
Apr 12 18:47:06.824448 sshd[3606]: pam_unix(sshd:session): session closed for user core Apr 12 18:47:06.826714 systemd[1]: Started sshd@15-10.0.0.52:22-10.0.0.1:60308.service. Apr 12 18:47:06.827640 systemd[1]: sshd@14-10.0.0.52:22-10.0.0.1:60298.service: Deactivated successfully. Apr 12 18:47:06.828501 systemd[1]: session-15.scope: Deactivated successfully. Apr 12 18:47:06.828970 systemd-logind[1185]: Session 15 logged out. Waiting for processes to exit. Apr 12 18:47:06.829842 systemd-logind[1185]: Removed session 15. Apr 12 18:47:06.868933 sshd[3617]: Accepted publickey for core from 10.0.0.1 port 60308 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:47:06.870097 sshd[3617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:47:06.873581 systemd-logind[1185]: New session 16 of user core. Apr 12 18:47:06.874360 systemd[1]: Started session-16.scope. Apr 12 18:47:07.664200 sshd[3617]: pam_unix(sshd:session): session closed for user core Apr 12 18:47:07.668433 systemd[1]: Started sshd@16-10.0.0.52:22-10.0.0.1:52082.service. Apr 12 18:47:07.671501 systemd[1]: sshd@15-10.0.0.52:22-10.0.0.1:60308.service: Deactivated successfully. Apr 12 18:47:07.672578 systemd[1]: session-16.scope: Deactivated successfully. Apr 12 18:47:07.673095 systemd-logind[1185]: Session 16 logged out. Waiting for processes to exit. Apr 12 18:47:07.674261 systemd-logind[1185]: Removed session 16. Apr 12 18:47:07.707267 sshd[3637]: Accepted publickey for core from 10.0.0.1 port 52082 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:47:07.708417 sshd[3637]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:47:07.711720 systemd-logind[1185]: New session 17 of user core. Apr 12 18:47:07.712466 systemd[1]: Started session-17.scope. Apr 12 18:47:08.041177 sshd[3637]: pam_unix(sshd:session): session closed for user core Apr 12 18:47:08.043301 systemd[1]: Started sshd@17-10.0.0.52:22-10.0.0.1:52094.service. Apr 12 18:47:08.046578 systemd[1]: sshd@16-10.0.0.52:22-10.0.0.1:52082.service: Deactivated successfully. Apr 12 18:47:08.047934 systemd[1]: session-17.scope: Deactivated successfully. Apr 12 18:47:08.048516 systemd-logind[1185]: Session 17 logged out. Waiting for processes to exit. Apr 12 18:47:08.049397 systemd-logind[1185]: Removed session 17. Apr 12 18:47:08.085403 sshd[3649]: Accepted publickey for core from 10.0.0.1 port 52094 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:47:08.086443 sshd[3649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:47:08.089659 systemd-logind[1185]: New session 18 of user core. Apr 12 18:47:08.090629 systemd[1]: Started session-18.scope. Apr 12 18:47:08.288423 sshd[3649]: pam_unix(sshd:session): session closed for user core Apr 12 18:47:08.291124 systemd[1]: sshd@17-10.0.0.52:22-10.0.0.1:52094.service: Deactivated successfully. Apr 12 18:47:08.292533 systemd-logind[1185]: Session 18 logged out. Waiting for processes to exit. Apr 12 18:47:08.292571 systemd[1]: session-18.scope: Deactivated successfully. Apr 12 18:47:08.293567 systemd-logind[1185]: Removed session 18. Apr 12 18:47:13.291510 systemd[1]: Started sshd@18-10.0.0.52:22-10.0.0.1:52098.service. 
Apr 12 18:47:13.333929 sshd[3667]: Accepted publickey for core from 10.0.0.1 port 52098 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:47:13.335173 sshd[3667]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:47:13.338286 systemd-logind[1185]: New session 19 of user core. Apr 12 18:47:13.339167 systemd[1]: Started session-19.scope. Apr 12 18:47:13.448144 sshd[3667]: pam_unix(sshd:session): session closed for user core Apr 12 18:47:13.450489 systemd[1]: sshd@18-10.0.0.52:22-10.0.0.1:52098.service: Deactivated successfully. Apr 12 18:47:13.451517 systemd[1]: session-19.scope: Deactivated successfully. Apr 12 18:47:13.452345 systemd-logind[1185]: Session 19 logged out. Waiting for processes to exit. Apr 12 18:47:13.453015 systemd-logind[1185]: Removed session 19. Apr 12 18:47:13.849528 kubelet[2077]: E0412 18:47:13.849492 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:47:18.451616 systemd[1]: Started sshd@19-10.0.0.52:22-10.0.0.1:50842.service. Apr 12 18:47:18.491136 sshd[3684]: Accepted publickey for core from 10.0.0.1 port 50842 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:47:18.492272 sshd[3684]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:47:18.495411 systemd-logind[1185]: New session 20 of user core. Apr 12 18:47:18.496123 systemd[1]: Started session-20.scope. Apr 12 18:47:18.596806 sshd[3684]: pam_unix(sshd:session): session closed for user core Apr 12 18:47:18.600502 systemd[1]: sshd@19-10.0.0.52:22-10.0.0.1:50842.service: Deactivated successfully. Apr 12 18:47:18.602236 systemd-logind[1185]: Session 20 logged out. Waiting for processes to exit. Apr 12 18:47:18.602318 systemd[1]: session-20.scope: Deactivated successfully. Apr 12 18:47:18.603373 systemd-logind[1185]: Removed session 20. Apr 12 18:47:18.848747 kubelet[2077]: E0412 18:47:18.848695 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:47:23.600156 systemd[1]: Started sshd@20-10.0.0.52:22-10.0.0.1:50854.service. Apr 12 18:47:23.639457 sshd[3698]: Accepted publickey for core from 10.0.0.1 port 50854 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:47:23.640455 sshd[3698]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:47:23.643632 systemd-logind[1185]: New session 21 of user core. Apr 12 18:47:23.644373 systemd[1]: Started session-21.scope. Apr 12 18:47:23.746980 sshd[3698]: pam_unix(sshd:session): session closed for user core Apr 12 18:47:23.749628 systemd[1]: sshd@20-10.0.0.52:22-10.0.0.1:50854.service: Deactivated successfully. Apr 12 18:47:23.750705 systemd-logind[1185]: Session 21 logged out. Waiting for processes to exit. Apr 12 18:47:23.750757 systemd[1]: session-21.scope: Deactivated successfully. Apr 12 18:47:23.751556 systemd-logind[1185]: Removed session 21. Apr 12 18:47:25.849121 kubelet[2077]: E0412 18:47:25.849096 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:47:28.751410 systemd[1]: Started sshd@21-10.0.0.52:22-10.0.0.1:35846.service. 
Apr 12 18:47:28.790225 sshd[3713]: Accepted publickey for core from 10.0.0.1 port 35846 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:47:28.791308 sshd[3713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:47:28.794950 systemd-logind[1185]: New session 22 of user core. Apr 12 18:47:28.795972 systemd[1]: Started session-22.scope. Apr 12 18:47:28.900610 sshd[3713]: pam_unix(sshd:session): session closed for user core Apr 12 18:47:28.903219 systemd[1]: Started sshd@22-10.0.0.52:22-10.0.0.1:35856.service. Apr 12 18:47:28.903993 systemd[1]: sshd@21-10.0.0.52:22-10.0.0.1:35846.service: Deactivated successfully. Apr 12 18:47:28.905459 systemd[1]: session-22.scope: Deactivated successfully. Apr 12 18:47:28.905469 systemd-logind[1185]: Session 22 logged out. Waiting for processes to exit. Apr 12 18:47:28.906419 systemd-logind[1185]: Removed session 22. Apr 12 18:47:28.947512 sshd[3725]: Accepted publickey for core from 10.0.0.1 port 35856 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:47:28.948760 sshd[3725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:47:28.952783 systemd-logind[1185]: New session 23 of user core. Apr 12 18:47:28.953832 systemd[1]: Started session-23.scope. Apr 12 18:47:30.384134 env[1199]: time="2024-04-12T18:47:30.384073619Z" level=info msg="StopContainer for \"b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307\" with timeout 30 (s)" Apr 12 18:47:30.385004 env[1199]: time="2024-04-12T18:47:30.384878281Z" level=info msg="Stop container \"b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307\" with signal terminated" Apr 12 18:47:30.406320 env[1199]: time="2024-04-12T18:47:30.406258705Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 18:47:30.412037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307-rootfs.mount: Deactivated successfully. 
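The containerd error above about failing to reload the CNI configuration after the fs change event for /etc/cni/net.d/05-cilium.conf (REMOVE) reflects a filesystem watcher on the CNI config directory: when the Cilium conf file is deleted there is no network config left to load, which is also why kubelet later reports "Container runtime network not ready ... cni plugin not initialized". The following is a rough, illustrative Go sketch of that kind of watcher, assuming the fsnotify library; it does not claim to mirror containerd's actual implementation.

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Watch the CNI configuration directory for changes.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for ev := range w.Events {
		if ev.Op&fsnotify.Remove != 0 {
			// With no remaining network config, a CRI implementation would now
			// report "cni plugin not initialized" until a new config appears.
			log.Printf("cni config %s removed, attempting reload", ev.Name)
		}
	}
}
```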
Apr 12 18:47:30.413655 env[1199]: time="2024-04-12T18:47:30.413628124Z" level=info msg="StopContainer for \"8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2\" with timeout 1 (s)" Apr 12 18:47:30.413933 env[1199]: time="2024-04-12T18:47:30.413862170Z" level=info msg="Stop container \"8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2\" with signal terminated" Apr 12 18:47:30.419983 systemd-networkd[1071]: lxc_health: Link DOWN Apr 12 18:47:30.419992 systemd-networkd[1071]: lxc_health: Lost carrier Apr 12 18:47:30.427568 env[1199]: time="2024-04-12T18:47:30.427504692Z" level=info msg="shim disconnected" id=b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307 Apr 12 18:47:30.427568 env[1199]: time="2024-04-12T18:47:30.427568464Z" level=warning msg="cleaning up after shim disconnected" id=b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307 namespace=k8s.io Apr 12 18:47:30.427568 env[1199]: time="2024-04-12T18:47:30.427582180Z" level=info msg="cleaning up dead shim" Apr 12 18:47:30.435670 env[1199]: time="2024-04-12T18:47:30.435625082Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:47:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3781 runtime=io.containerd.runc.v2\n" Apr 12 18:47:30.438261 env[1199]: time="2024-04-12T18:47:30.437714320Z" level=info msg="StopContainer for \"b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307\" returns successfully" Apr 12 18:47:30.438466 env[1199]: time="2024-04-12T18:47:30.438445552Z" level=info msg="StopPodSandbox for \"f997260435e4a4fa7fc46c7e4dabc8caf3a658fe5dc46b256daac1a8d8a9d10b\"" Apr 12 18:47:30.438521 env[1199]: time="2024-04-12T18:47:30.438509974Z" level=info msg="Container to stop \"b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:47:30.440795 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f997260435e4a4fa7fc46c7e4dabc8caf3a658fe5dc46b256daac1a8d8a9d10b-shm.mount: Deactivated successfully. Apr 12 18:47:30.461344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2-rootfs.mount: Deactivated successfully. Apr 12 18:47:30.464721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f997260435e4a4fa7fc46c7e4dabc8caf3a658fe5dc46b256daac1a8d8a9d10b-rootfs.mount: Deactivated successfully. 
Apr 12 18:47:30.470971 env[1199]: time="2024-04-12T18:47:30.470918884Z" level=info msg="shim disconnected" id=f997260435e4a4fa7fc46c7e4dabc8caf3a658fe5dc46b256daac1a8d8a9d10b Apr 12 18:47:30.470971 env[1199]: time="2024-04-12T18:47:30.470964872Z" level=warning msg="cleaning up after shim disconnected" id=f997260435e4a4fa7fc46c7e4dabc8caf3a658fe5dc46b256daac1a8d8a9d10b namespace=k8s.io Apr 12 18:47:30.470971 env[1199]: time="2024-04-12T18:47:30.470973638Z" level=info msg="cleaning up dead shim" Apr 12 18:47:30.471361 env[1199]: time="2024-04-12T18:47:30.471336389Z" level=info msg="shim disconnected" id=8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2 Apr 12 18:47:30.471420 env[1199]: time="2024-04-12T18:47:30.471371426Z" level=warning msg="cleaning up after shim disconnected" id=8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2 namespace=k8s.io Apr 12 18:47:30.471420 env[1199]: time="2024-04-12T18:47:30.471378339Z" level=info msg="cleaning up dead shim" Apr 12 18:47:30.477168 env[1199]: time="2024-04-12T18:47:30.477119859Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:47:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3829 runtime=io.containerd.runc.v2\n" Apr 12 18:47:30.477869 env[1199]: time="2024-04-12T18:47:30.477843557Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:47:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3828 runtime=io.containerd.runc.v2\n" Apr 12 18:47:30.478089 env[1199]: time="2024-04-12T18:47:30.478067734Z" level=info msg="TearDown network for sandbox \"f997260435e4a4fa7fc46c7e4dabc8caf3a658fe5dc46b256daac1a8d8a9d10b\" successfully" Apr 12 18:47:30.478130 env[1199]: time="2024-04-12T18:47:30.478089815Z" level=info msg="StopPodSandbox for \"f997260435e4a4fa7fc46c7e4dabc8caf3a658fe5dc46b256daac1a8d8a9d10b\" returns successfully" Apr 12 18:47:30.479434 env[1199]: time="2024-04-12T18:47:30.479409758Z" level=info msg="StopContainer for \"8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2\" returns successfully" Apr 12 18:47:30.479762 env[1199]: time="2024-04-12T18:47:30.479742442Z" level=info msg="StopPodSandbox for \"72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b\"" Apr 12 18:47:30.479817 env[1199]: time="2024-04-12T18:47:30.479783700Z" level=info msg="Container to stop \"27cdd5ee52882dec884cb7ceba5999493b14aae5ce28f0e049b9b4e2d85bf2db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:47:30.479817 env[1199]: time="2024-04-12T18:47:30.479794360Z" level=info msg="Container to stop \"8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:47:30.479817 env[1199]: time="2024-04-12T18:47:30.479805272Z" level=info msg="Container to stop \"c60d4e5357f6fd9213474575c81ef292af99221b1ef5aedb54227db36d821f26\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:47:30.479966 env[1199]: time="2024-04-12T18:47:30.479815421Z" level=info msg="Container to stop \"6e4980760663b0710baf5d5b772ebc46355d2072c06b823188cf8e753244ca3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:47:30.479966 env[1199]: time="2024-04-12T18:47:30.479824809Z" level=info msg="Container to stop \"0f807ce89bafbf1184df06fd544ff5af560a7412edda3a5e964d077c7639d230\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:47:30.505450 env[1199]: 
time="2024-04-12T18:47:30.505396783Z" level=info msg="shim disconnected" id=72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b Apr 12 18:47:30.505450 env[1199]: time="2024-04-12T18:47:30.505445455Z" level=warning msg="cleaning up after shim disconnected" id=72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b namespace=k8s.io Apr 12 18:47:30.505450 env[1199]: time="2024-04-12T18:47:30.505455424Z" level=info msg="cleaning up dead shim" Apr 12 18:47:30.511627 env[1199]: time="2024-04-12T18:47:30.511588860Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:47:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3875 runtime=io.containerd.runc.v2\n" Apr 12 18:47:30.511919 env[1199]: time="2024-04-12T18:47:30.511877120Z" level=info msg="TearDown network for sandbox \"72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b\" successfully" Apr 12 18:47:30.511973 env[1199]: time="2024-04-12T18:47:30.511920492Z" level=info msg="StopPodSandbox for \"72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b\" returns successfully" Apr 12 18:47:30.654216 kubelet[2077]: I0412 18:47:30.654098 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-cilium-config-path\") pod \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " Apr 12 18:47:30.654216 kubelet[2077]: I0412 18:47:30.654155 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-host-proc-sys-net\") pod \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " Apr 12 18:47:30.654216 kubelet[2077]: I0412 18:47:30.654182 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wss8b\" (UniqueName: \"kubernetes.io/projected/61cfab05-6ca5-45ef-b0bc-89185e6c4993-kube-api-access-wss8b\") pod \"61cfab05-6ca5-45ef-b0bc-89185e6c4993\" (UID: \"61cfab05-6ca5-45ef-b0bc-89185e6c4993\") " Apr 12 18:47:30.654216 kubelet[2077]: I0412 18:47:30.654199 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-cilium-cgroup\") pod \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " Apr 12 18:47:30.654216 kubelet[2077]: I0412 18:47:30.654218 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-cni-path\") pod \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " Apr 12 18:47:30.654776 kubelet[2077]: I0412 18:47:30.654233 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-hostproc\") pod \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " Apr 12 18:47:30.654776 kubelet[2077]: I0412 18:47:30.654249 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-xtables-lock\") pod \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " 
Apr 12 18:47:30.654776 kubelet[2077]: I0412 18:47:30.654270 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-clustermesh-secrets\") pod \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " Apr 12 18:47:30.654776 kubelet[2077]: W0412 18:47:30.654260 2077 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Apr 12 18:47:30.654776 kubelet[2077]: I0412 18:47:30.654313 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" (UID: "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:30.654776 kubelet[2077]: I0412 18:47:30.654348 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" (UID: "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:30.655019 kubelet[2077]: I0412 18:47:30.654677 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-hostproc" (OuterVolumeSpecName: "hostproc") pod "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" (UID: "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:30.655019 kubelet[2077]: I0412 18:47:30.654699 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" (UID: "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:30.655019 kubelet[2077]: I0412 18:47:30.654713 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-cni-path" (OuterVolumeSpecName: "cni-path") pod "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" (UID: "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:30.655019 kubelet[2077]: I0412 18:47:30.654727 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" (UID: "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:30.655019 kubelet[2077]: I0412 18:47:30.654285 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-etc-cni-netd\") pod \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " Apr 12 18:47:30.655196 kubelet[2077]: I0412 18:47:30.655010 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-bpf-maps\") pod \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " Apr 12 18:47:30.655196 kubelet[2077]: I0412 18:47:30.655030 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-lib-modules\") pod \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " Apr 12 18:47:30.655196 kubelet[2077]: I0412 18:47:30.655047 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-host-proc-sys-kernel\") pod \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " Apr 12 18:47:30.655196 kubelet[2077]: I0412 18:47:30.655073 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" (UID: "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:30.655196 kubelet[2077]: I0412 18:47:30.655098 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61cfab05-6ca5-45ef-b0bc-89185e6c4993-cilium-config-path\") pod \"61cfab05-6ca5-45ef-b0bc-89185e6c4993\" (UID: \"61cfab05-6ca5-45ef-b0bc-89185e6c4993\") " Apr 12 18:47:30.655196 kubelet[2077]: I0412 18:47:30.655109 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" (UID: "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:30.655412 kubelet[2077]: I0412 18:47:30.655122 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-hubble-tls\") pod \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " Apr 12 18:47:30.655412 kubelet[2077]: I0412 18:47:30.655162 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmm4s\" (UniqueName: \"kubernetes.io/projected/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-kube-api-access-nmm4s\") pod \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " Apr 12 18:47:30.655412 kubelet[2077]: I0412 18:47:30.655183 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-cilium-run\") pod \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\" (UID: \"8d9044ef-7ed8-4235-ab98-3c161ec2ea2a\") " Apr 12 18:47:30.655412 kubelet[2077]: I0412 18:47:30.655224 2077 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:30.655412 kubelet[2077]: I0412 18:47:30.655235 2077 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:30.655412 kubelet[2077]: I0412 18:47:30.655244 2077 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:30.655412 kubelet[2077]: I0412 18:47:30.655251 2077 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:30.655641 kubelet[2077]: I0412 18:47:30.655260 2077 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:30.655641 kubelet[2077]: I0412 18:47:30.655262 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" (UID: "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:30.655641 kubelet[2077]: I0412 18:47:30.655268 2077 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:30.655641 kubelet[2077]: I0412 18:47:30.655289 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" (UID: "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:30.655641 kubelet[2077]: I0412 18:47:30.655296 2077 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:30.655641 kubelet[2077]: I0412 18:47:30.655307 2077 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:30.655641 kubelet[2077]: W0412 18:47:30.655404 2077 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/61cfab05-6ca5-45ef-b0bc-89185e6c4993/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Apr 12 18:47:30.657981 kubelet[2077]: I0412 18:47:30.657956 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" (UID: "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:47:30.657981 kubelet[2077]: I0412 18:47:30.657981 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61cfab05-6ca5-45ef-b0bc-89185e6c4993-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "61cfab05-6ca5-45ef-b0bc-89185e6c4993" (UID: "61cfab05-6ca5-45ef-b0bc-89185e6c4993"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:47:30.658254 kubelet[2077]: I0412 18:47:30.658226 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" (UID: "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:47:30.658522 kubelet[2077]: I0412 18:47:30.658497 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61cfab05-6ca5-45ef-b0bc-89185e6c4993-kube-api-access-wss8b" (OuterVolumeSpecName: "kube-api-access-wss8b") pod "61cfab05-6ca5-45ef-b0bc-89185e6c4993" (UID: "61cfab05-6ca5-45ef-b0bc-89185e6c4993"). InnerVolumeSpecName "kube-api-access-wss8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:47:30.659723 kubelet[2077]: I0412 18:47:30.659697 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" (UID: "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:47:30.660403 kubelet[2077]: I0412 18:47:30.660373 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-kube-api-access-nmm4s" (OuterVolumeSpecName: "kube-api-access-nmm4s") pod "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" (UID: "8d9044ef-7ed8-4235-ab98-3c161ec2ea2a"). InnerVolumeSpecName "kube-api-access-nmm4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:47:30.755519 kubelet[2077]: I0412 18:47:30.755500 2077 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:30.755519 kubelet[2077]: I0412 18:47:30.755521 2077 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61cfab05-6ca5-45ef-b0bc-89185e6c4993-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:30.755703 kubelet[2077]: I0412 18:47:30.755540 2077 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:30.755703 kubelet[2077]: I0412 18:47:30.755550 2077 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nmm4s\" (UniqueName: \"kubernetes.io/projected/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-kube-api-access-nmm4s\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:30.755703 kubelet[2077]: I0412 18:47:30.755559 2077 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:30.755703 kubelet[2077]: I0412 18:47:30.755567 2077 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:30.755703 kubelet[2077]: I0412 18:47:30.755576 2077 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wss8b\" (UniqueName: \"kubernetes.io/projected/61cfab05-6ca5-45ef-b0bc-89185e6c4993-kube-api-access-wss8b\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:30.755703 kubelet[2077]: I0412 18:47:30.755584 2077 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:30.849113 kubelet[2077]: E0412 18:47:30.849080 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:47:30.918647 kubelet[2077]: E0412 18:47:30.918572 2077 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 12 18:47:31.086858 kubelet[2077]: I0412 18:47:31.086707 2077 scope.go:115] "RemoveContainer" containerID="b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307" Apr 12 18:47:31.088822 env[1199]: time="2024-04-12T18:47:31.088785902Z" level=info msg="RemoveContainer for \"b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307\"" Apr 12 18:47:31.097318 env[1199]: time="2024-04-12T18:47:31.097272813Z" level=info msg="RemoveContainer for \"b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307\" returns successfully" Apr 12 18:47:31.098222 env[1199]: time="2024-04-12T18:47:31.097700287Z" level=error msg="ContainerStatus for \"b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307\" failed" error="rpc error: code = NotFound desc = an error occurred 
when try to find container \"b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307\": not found" Apr 12 18:47:31.098297 kubelet[2077]: I0412 18:47:31.097516 2077 scope.go:115] "RemoveContainer" containerID="b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307" Apr 12 18:47:31.098297 kubelet[2077]: E0412 18:47:31.097897 2077 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307\": not found" containerID="b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307" Apr 12 18:47:31.098297 kubelet[2077]: I0412 18:47:31.097949 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307} err="failed to get container status \"b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9a4c802d0deeebbce878ceda815ba877b3f626ea92fa732b5edd102ac5ef307\": not found" Apr 12 18:47:31.098297 kubelet[2077]: I0412 18:47:31.097964 2077 scope.go:115] "RemoveContainer" containerID="8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2" Apr 12 18:47:31.100051 env[1199]: time="2024-04-12T18:47:31.099952953Z" level=info msg="RemoveContainer for \"8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2\"" Apr 12 18:47:31.103335 env[1199]: time="2024-04-12T18:47:31.103298430Z" level=info msg="RemoveContainer for \"8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2\" returns successfully" Apr 12 18:47:31.103691 kubelet[2077]: I0412 18:47:31.103663 2077 scope.go:115] "RemoveContainer" containerID="27cdd5ee52882dec884cb7ceba5999493b14aae5ce28f0e049b9b4e2d85bf2db" Apr 12 18:47:31.104742 env[1199]: time="2024-04-12T18:47:31.104706139Z" level=info msg="RemoveContainer for \"27cdd5ee52882dec884cb7ceba5999493b14aae5ce28f0e049b9b4e2d85bf2db\"" Apr 12 18:47:31.107956 env[1199]: time="2024-04-12T18:47:31.107924013Z" level=info msg="RemoveContainer for \"27cdd5ee52882dec884cb7ceba5999493b14aae5ce28f0e049b9b4e2d85bf2db\" returns successfully" Apr 12 18:47:31.108099 kubelet[2077]: I0412 18:47:31.108078 2077 scope.go:115] "RemoveContainer" containerID="0f807ce89bafbf1184df06fd544ff5af560a7412edda3a5e964d077c7639d230" Apr 12 18:47:31.109113 env[1199]: time="2024-04-12T18:47:31.109080414Z" level=info msg="RemoveContainer for \"0f807ce89bafbf1184df06fd544ff5af560a7412edda3a5e964d077c7639d230\"" Apr 12 18:47:31.111498 env[1199]: time="2024-04-12T18:47:31.111465042Z" level=info msg="RemoveContainer for \"0f807ce89bafbf1184df06fd544ff5af560a7412edda3a5e964d077c7639d230\" returns successfully" Apr 12 18:47:31.111639 kubelet[2077]: I0412 18:47:31.111616 2077 scope.go:115] "RemoveContainer" containerID="6e4980760663b0710baf5d5b772ebc46355d2072c06b823188cf8e753244ca3f" Apr 12 18:47:31.112500 env[1199]: time="2024-04-12T18:47:31.112468862Z" level=info msg="RemoveContainer for \"6e4980760663b0710baf5d5b772ebc46355d2072c06b823188cf8e753244ca3f\"" Apr 12 18:47:31.115312 env[1199]: time="2024-04-12T18:47:31.115277777Z" level=info msg="RemoveContainer for \"6e4980760663b0710baf5d5b772ebc46355d2072c06b823188cf8e753244ca3f\" returns successfully" Apr 12 18:47:31.115467 kubelet[2077]: I0412 18:47:31.115430 2077 scope.go:115] "RemoveContainer" containerID="c60d4e5357f6fd9213474575c81ef292af99221b1ef5aedb54227db36d821f26" Apr 12 
18:47:31.116177 env[1199]: time="2024-04-12T18:47:31.116147814Z" level=info msg="RemoveContainer for \"c60d4e5357f6fd9213474575c81ef292af99221b1ef5aedb54227db36d821f26\"" Apr 12 18:47:31.118523 env[1199]: time="2024-04-12T18:47:31.118483890Z" level=info msg="RemoveContainer for \"c60d4e5357f6fd9213474575c81ef292af99221b1ef5aedb54227db36d821f26\" returns successfully" Apr 12 18:47:31.118649 kubelet[2077]: I0412 18:47:31.118613 2077 scope.go:115] "RemoveContainer" containerID="8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2" Apr 12 18:47:31.118821 env[1199]: time="2024-04-12T18:47:31.118752120Z" level=error msg="ContainerStatus for \"8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2\": not found" Apr 12 18:47:31.118932 kubelet[2077]: E0412 18:47:31.118880 2077 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2\": not found" containerID="8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2" Apr 12 18:47:31.118932 kubelet[2077]: I0412 18:47:31.118926 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2} err="failed to get container status \"8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e4f6862dc96c377fa5df163980e86c7f7d996d0794d7d2f932f11a6cff05cb2\": not found" Apr 12 18:47:31.118932 kubelet[2077]: I0412 18:47:31.118936 2077 scope.go:115] "RemoveContainer" containerID="27cdd5ee52882dec884cb7ceba5999493b14aae5ce28f0e049b9b4e2d85bf2db" Apr 12 18:47:31.119192 env[1199]: time="2024-04-12T18:47:31.119115752Z" level=error msg="ContainerStatus for \"27cdd5ee52882dec884cb7ceba5999493b14aae5ce28f0e049b9b4e2d85bf2db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27cdd5ee52882dec884cb7ceba5999493b14aae5ce28f0e049b9b4e2d85bf2db\": not found" Apr 12 18:47:31.119316 kubelet[2077]: E0412 18:47:31.119300 2077 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27cdd5ee52882dec884cb7ceba5999493b14aae5ce28f0e049b9b4e2d85bf2db\": not found" containerID="27cdd5ee52882dec884cb7ceba5999493b14aae5ce28f0e049b9b4e2d85bf2db" Apr 12 18:47:31.119364 kubelet[2077]: I0412 18:47:31.119333 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:27cdd5ee52882dec884cb7ceba5999493b14aae5ce28f0e049b9b4e2d85bf2db} err="failed to get container status \"27cdd5ee52882dec884cb7ceba5999493b14aae5ce28f0e049b9b4e2d85bf2db\": rpc error: code = NotFound desc = an error occurred when try to find container \"27cdd5ee52882dec884cb7ceba5999493b14aae5ce28f0e049b9b4e2d85bf2db\": not found" Apr 12 18:47:31.119364 kubelet[2077]: I0412 18:47:31.119343 2077 scope.go:115] "RemoveContainer" containerID="0f807ce89bafbf1184df06fd544ff5af560a7412edda3a5e964d077c7639d230" Apr 12 18:47:31.119511 env[1199]: time="2024-04-12T18:47:31.119473012Z" level=error msg="ContainerStatus for \"0f807ce89bafbf1184df06fd544ff5af560a7412edda3a5e964d077c7639d230\" failed" error="rpc error: code = 
NotFound desc = an error occurred when try to find container \"0f807ce89bafbf1184df06fd544ff5af560a7412edda3a5e964d077c7639d230\": not found" Apr 12 18:47:31.119638 kubelet[2077]: E0412 18:47:31.119615 2077 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f807ce89bafbf1184df06fd544ff5af560a7412edda3a5e964d077c7639d230\": not found" containerID="0f807ce89bafbf1184df06fd544ff5af560a7412edda3a5e964d077c7639d230" Apr 12 18:47:31.119684 kubelet[2077]: I0412 18:47:31.119653 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0f807ce89bafbf1184df06fd544ff5af560a7412edda3a5e964d077c7639d230} err="failed to get container status \"0f807ce89bafbf1184df06fd544ff5af560a7412edda3a5e964d077c7639d230\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f807ce89bafbf1184df06fd544ff5af560a7412edda3a5e964d077c7639d230\": not found" Apr 12 18:47:31.119684 kubelet[2077]: I0412 18:47:31.119669 2077 scope.go:115] "RemoveContainer" containerID="6e4980760663b0710baf5d5b772ebc46355d2072c06b823188cf8e753244ca3f" Apr 12 18:47:31.119855 env[1199]: time="2024-04-12T18:47:31.119814802Z" level=error msg="ContainerStatus for \"6e4980760663b0710baf5d5b772ebc46355d2072c06b823188cf8e753244ca3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e4980760663b0710baf5d5b772ebc46355d2072c06b823188cf8e753244ca3f\": not found" Apr 12 18:47:31.120016 kubelet[2077]: E0412 18:47:31.119999 2077 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e4980760663b0710baf5d5b772ebc46355d2072c06b823188cf8e753244ca3f\": not found" containerID="6e4980760663b0710baf5d5b772ebc46355d2072c06b823188cf8e753244ca3f" Apr 12 18:47:31.120071 kubelet[2077]: I0412 18:47:31.120038 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:6e4980760663b0710baf5d5b772ebc46355d2072c06b823188cf8e753244ca3f} err="failed to get container status \"6e4980760663b0710baf5d5b772ebc46355d2072c06b823188cf8e753244ca3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e4980760663b0710baf5d5b772ebc46355d2072c06b823188cf8e753244ca3f\": not found" Apr 12 18:47:31.120071 kubelet[2077]: I0412 18:47:31.120049 2077 scope.go:115] "RemoveContainer" containerID="c60d4e5357f6fd9213474575c81ef292af99221b1ef5aedb54227db36d821f26" Apr 12 18:47:31.120218 env[1199]: time="2024-04-12T18:47:31.120177502Z" level=error msg="ContainerStatus for \"c60d4e5357f6fd9213474575c81ef292af99221b1ef5aedb54227db36d821f26\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c60d4e5357f6fd9213474575c81ef292af99221b1ef5aedb54227db36d821f26\": not found" Apr 12 18:47:31.120315 kubelet[2077]: E0412 18:47:31.120297 2077 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c60d4e5357f6fd9213474575c81ef292af99221b1ef5aedb54227db36d821f26\": not found" containerID="c60d4e5357f6fd9213474575c81ef292af99221b1ef5aedb54227db36d821f26" Apr 12 18:47:31.120363 kubelet[2077]: I0412 18:47:31.120322 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c60d4e5357f6fd9213474575c81ef292af99221b1ef5aedb54227db36d821f26} err="failed 
to get container status \"c60d4e5357f6fd9213474575c81ef292af99221b1ef5aedb54227db36d821f26\": rpc error: code = NotFound desc = an error occurred when try to find container \"c60d4e5357f6fd9213474575c81ef292af99221b1ef5aedb54227db36d821f26\": not found" Apr 12 18:47:31.391649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b-rootfs.mount: Deactivated successfully. Apr 12 18:47:31.391808 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72fb2d67bbcdb49c0d82730227042787f75768cf067e23f51c462194c65f836b-shm.mount: Deactivated successfully. Apr 12 18:47:31.391897 systemd[1]: var-lib-kubelet-pods-61cfab05\x2d6ca5\x2d45ef\x2db0bc\x2d89185e6c4993-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwss8b.mount: Deactivated successfully. Apr 12 18:47:31.391996 systemd[1]: var-lib-kubelet-pods-8d9044ef\x2d7ed8\x2d4235\x2dab98\x2d3c161ec2ea2a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnmm4s.mount: Deactivated successfully. Apr 12 18:47:31.392077 systemd[1]: var-lib-kubelet-pods-8d9044ef\x2d7ed8\x2d4235\x2dab98\x2d3c161ec2ea2a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:47:31.392155 systemd[1]: var-lib-kubelet-pods-8d9044ef\x2d7ed8\x2d4235\x2dab98\x2d3c161ec2ea2a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 18:47:31.850120 kubelet[2077]: I0412 18:47:31.850093 2077 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=61cfab05-6ca5-45ef-b0bc-89185e6c4993 path="/var/lib/kubelet/pods/61cfab05-6ca5-45ef-b0bc-89185e6c4993/volumes" Apr 12 18:47:31.850438 kubelet[2077]: I0412 18:47:31.850425 2077 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=8d9044ef-7ed8-4235-ab98-3c161ec2ea2a path="/var/lib/kubelet/pods/8d9044ef-7ed8-4235-ab98-3c161ec2ea2a/volumes" Apr 12 18:47:32.358593 sshd[3725]: pam_unix(sshd:session): session closed for user core Apr 12 18:47:32.360963 systemd[1]: Started sshd@23-10.0.0.52:22-10.0.0.1:35864.service. Apr 12 18:47:32.361670 systemd[1]: sshd@22-10.0.0.52:22-10.0.0.1:35856.service: Deactivated successfully. Apr 12 18:47:32.362862 systemd[1]: session-23.scope: Deactivated successfully. Apr 12 18:47:32.363365 systemd-logind[1185]: Session 23 logged out. Waiting for processes to exit. Apr 12 18:47:32.364131 systemd-logind[1185]: Removed session 23. Apr 12 18:47:32.402006 sshd[3893]: Accepted publickey for core from 10.0.0.1 port 35864 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:47:32.403138 sshd[3893]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:47:32.407080 systemd-logind[1185]: New session 24 of user core. Apr 12 18:47:32.407727 systemd[1]: Started session-24.scope. Apr 12 18:47:32.723145 sshd[3893]: pam_unix(sshd:session): session closed for user core Apr 12 18:47:32.726514 systemd[1]: Started sshd@24-10.0.0.52:22-10.0.0.1:35868.service. Apr 12 18:47:32.740818 systemd-logind[1185]: Session 24 logged out. Waiting for processes to exit. 
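The RemoveContainer / ContainerStatus exchange above follows a common pattern: kubelet deletes the old cilium containers, then asks the runtime for their status, and the runtime answers with gRPC NotFound, which kubelet records ("DeleteContainer returned error") but treats as the expected outcome of a successful delete. A minimal Go sketch of that check against the v1 CRI client, offered as an illustration rather than kubelet's actual code:

package crinote

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// containerAlreadyGone asks the runtime for a container's status and treats a
// gRPC NotFound answer as "already removed" instead of a hard failure, which
// is why the "not found" lines above are benign after RemoveContainer.
func containerAlreadyGone(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) (bool, error) {
	_, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	if err == nil {
		return false, nil // the runtime still knows the container
	}
	if status.Code(err) == codes.NotFound {
		return true, nil // matches the NotFound responses logged above
	}
	return false, err // any other runtime error is surfaced
}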
Apr 12 18:47:32.741205 kubelet[2077]: I0412 18:47:32.741170 2077 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:47:32.741268 kubelet[2077]: E0412 18:47:32.741234 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61cfab05-6ca5-45ef-b0bc-89185e6c4993" containerName="cilium-operator" Apr 12 18:47:32.741268 kubelet[2077]: E0412 18:47:32.741246 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" containerName="mount-bpf-fs" Apr 12 18:47:32.741268 kubelet[2077]: E0412 18:47:32.741255 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" containerName="apply-sysctl-overwrites" Apr 12 18:47:32.741268 kubelet[2077]: E0412 18:47:32.741263 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" containerName="clean-cilium-state" Apr 12 18:47:32.741268 kubelet[2077]: E0412 18:47:32.741272 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" containerName="cilium-agent" Apr 12 18:47:32.741381 kubelet[2077]: E0412 18:47:32.741283 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" containerName="mount-cgroup" Apr 12 18:47:32.741381 kubelet[2077]: I0412 18:47:32.741309 2077 memory_manager.go:346] "RemoveStaleState removing state" podUID="61cfab05-6ca5-45ef-b0bc-89185e6c4993" containerName="cilium-operator" Apr 12 18:47:32.741381 kubelet[2077]: I0412 18:47:32.741318 2077 memory_manager.go:346] "RemoveStaleState removing state" podUID="8d9044ef-7ed8-4235-ab98-3c161ec2ea2a" containerName="cilium-agent" Apr 12 18:47:32.744652 systemd[1]: sshd@23-10.0.0.52:22-10.0.0.1:35864.service: Deactivated successfully. Apr 12 18:47:32.745436 systemd[1]: session-24.scope: Deactivated successfully. Apr 12 18:47:32.746861 systemd-logind[1185]: Removed session 24. Apr 12 18:47:32.788791 sshd[3906]: Accepted publickey for core from 10.0.0.1 port 35868 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:47:32.790214 sshd[3906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:47:32.796435 systemd[1]: Started session-25.scope. Apr 12 18:47:32.797459 systemd-logind[1185]: New session 25 of user core. 
Apr 12 18:47:32.866256 kubelet[2077]: I0412 18:47:32.866224 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-etc-cni-netd\") pod \"cilium-skbcm\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " pod="kube-system/cilium-skbcm" Apr 12 18:47:32.866660 kubelet[2077]: I0412 18:47:32.866646 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cilium-cgroup\") pod \"cilium-skbcm\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " pod="kube-system/cilium-skbcm" Apr 12 18:47:32.866772 kubelet[2077]: I0412 18:47:32.866759 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-hostproc\") pod \"cilium-skbcm\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " pod="kube-system/cilium-skbcm" Apr 12 18:47:32.866882 kubelet[2077]: I0412 18:47:32.866869 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cilium-config-path\") pod \"cilium-skbcm\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " pod="kube-system/cilium-skbcm" Apr 12 18:47:32.867007 kubelet[2077]: I0412 18:47:32.866994 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cilium-ipsec-secrets\") pod \"cilium-skbcm\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " pod="kube-system/cilium-skbcm" Apr 12 18:47:32.867123 kubelet[2077]: I0412 18:47:32.867110 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-bpf-maps\") pod \"cilium-skbcm\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " pod="kube-system/cilium-skbcm" Apr 12 18:47:32.867237 kubelet[2077]: I0412 18:47:32.867222 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-lib-modules\") pod \"cilium-skbcm\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " pod="kube-system/cilium-skbcm" Apr 12 18:47:32.867347 kubelet[2077]: I0412 18:47:32.867335 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-xtables-lock\") pod \"cilium-skbcm\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " pod="kube-system/cilium-skbcm" Apr 12 18:47:32.867455 kubelet[2077]: I0412 18:47:32.867443 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk4kv\" (UniqueName: \"kubernetes.io/projected/75f8db33-88f5-4e3d-bfaf-934d93061ad8-kube-api-access-mk4kv\") pod \"cilium-skbcm\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " pod="kube-system/cilium-skbcm" Apr 12 18:47:32.867576 kubelet[2077]: I0412 18:47:32.867564 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cilium-run\") pod \"cilium-skbcm\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " pod="kube-system/cilium-skbcm" Apr 12 18:47:32.867682 kubelet[2077]: I0412 18:47:32.867667 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cni-path\") pod \"cilium-skbcm\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " pod="kube-system/cilium-skbcm" Apr 12 18:47:32.867792 kubelet[2077]: I0412 18:47:32.867779 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75f8db33-88f5-4e3d-bfaf-934d93061ad8-clustermesh-secrets\") pod \"cilium-skbcm\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " pod="kube-system/cilium-skbcm" Apr 12 18:47:32.867917 kubelet[2077]: I0412 18:47:32.867889 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-host-proc-sys-kernel\") pod \"cilium-skbcm\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " pod="kube-system/cilium-skbcm" Apr 12 18:47:32.868014 kubelet[2077]: I0412 18:47:32.868001 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-host-proc-sys-net\") pod \"cilium-skbcm\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " pod="kube-system/cilium-skbcm" Apr 12 18:47:32.868129 kubelet[2077]: I0412 18:47:32.868116 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75f8db33-88f5-4e3d-bfaf-934d93061ad8-hubble-tls\") pod \"cilium-skbcm\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " pod="kube-system/cilium-skbcm" Apr 12 18:47:32.906764 sshd[3906]: pam_unix(sshd:session): session closed for user core Apr 12 18:47:32.909024 systemd[1]: Started sshd@25-10.0.0.52:22-10.0.0.1:35870.service. Apr 12 18:47:32.913190 systemd[1]: sshd@24-10.0.0.52:22-10.0.0.1:35868.service: Deactivated successfully. Apr 12 18:47:32.917594 systemd[1]: session-25.scope: Deactivated successfully. Apr 12 18:47:32.917846 systemd-logind[1185]: Session 25 logged out. Waiting for processes to exit. Apr 12 18:47:32.919116 systemd-logind[1185]: Removed session 25. Apr 12 18:47:32.953422 sshd[3920]: Accepted publickey for core from 10.0.0.1 port 35870 ssh2: RSA SHA256:oFTmhZVjs8bjXH/lYpDQZ+WL9oh5tEY90V+L3H6oLsU Apr 12 18:47:32.956678 sshd[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:47:32.961558 systemd[1]: Started session-26.scope. Apr 12 18:47:32.961772 systemd-logind[1185]: New session 26 of user core. 
Apr 12 18:47:33.044264 kubelet[2077]: E0412 18:47:33.044175 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:47:33.044859 env[1199]: time="2024-04-12T18:47:33.044819561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-skbcm,Uid:75f8db33-88f5-4e3d-bfaf-934d93061ad8,Namespace:kube-system,Attempt:0,}" Apr 12 18:47:33.061668 env[1199]: time="2024-04-12T18:47:33.061510364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:47:33.061668 env[1199]: time="2024-04-12T18:47:33.061551562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:47:33.061668 env[1199]: time="2024-04-12T18:47:33.061561070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:47:33.061994 env[1199]: time="2024-04-12T18:47:33.061940561Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/98329f388dd899a41b2fddc61ae53f054c7d8860be7179abb88e8046091231d3 pid=3942 runtime=io.containerd.runc.v2 Apr 12 18:47:33.090401 env[1199]: time="2024-04-12T18:47:33.090332327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-skbcm,Uid:75f8db33-88f5-4e3d-bfaf-934d93061ad8,Namespace:kube-system,Attempt:0,} returns sandbox id \"98329f388dd899a41b2fddc61ae53f054c7d8860be7179abb88e8046091231d3\"" Apr 12 18:47:33.091373 kubelet[2077]: E0412 18:47:33.091344 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:47:33.094136 env[1199]: time="2024-04-12T18:47:33.094103048Z" level=info msg="CreateContainer within sandbox \"98329f388dd899a41b2fddc61ae53f054c7d8860be7179abb88e8046091231d3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:47:33.109379 env[1199]: time="2024-04-12T18:47:33.109335818Z" level=info msg="CreateContainer within sandbox \"98329f388dd899a41b2fddc61ae53f054c7d8860be7179abb88e8046091231d3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f3752fa6ca1bb7774636cd74493b3104033a77a9c963125eac3f7b4e71696566\"" Apr 12 18:47:33.111435 env[1199]: time="2024-04-12T18:47:33.111385967Z" level=info msg="StartContainer for \"f3752fa6ca1bb7774636cd74493b3104033a77a9c963125eac3f7b4e71696566\"" Apr 12 18:47:33.151552 env[1199]: time="2024-04-12T18:47:33.151501750Z" level=info msg="StartContainer for \"f3752fa6ca1bb7774636cd74493b3104033a77a9c963125eac3f7b4e71696566\" returns successfully" Apr 12 18:47:33.179896 env[1199]: time="2024-04-12T18:47:33.179837078Z" level=info msg="shim disconnected" id=f3752fa6ca1bb7774636cd74493b3104033a77a9c963125eac3f7b4e71696566 Apr 12 18:47:33.179896 env[1199]: time="2024-04-12T18:47:33.179891932Z" level=warning msg="cleaning up after shim disconnected" id=f3752fa6ca1bb7774636cd74493b3104033a77a9c963125eac3f7b4e71696566 namespace=k8s.io Apr 12 18:47:33.180102 env[1199]: time="2024-04-12T18:47:33.179917822Z" level=info msg="cleaning up dead shim" Apr 12 18:47:33.186179 env[1199]: time="2024-04-12T18:47:33.186134374Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:47:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io 
pid=4028 runtime=io.containerd.runc.v2\n" Apr 12 18:47:34.102450 env[1199]: time="2024-04-12T18:47:34.102406066Z" level=info msg="StopPodSandbox for \"98329f388dd899a41b2fddc61ae53f054c7d8860be7179abb88e8046091231d3\"" Apr 12 18:47:34.102993 env[1199]: time="2024-04-12T18:47:34.102965860Z" level=info msg="Container to stop \"f3752fa6ca1bb7774636cd74493b3104033a77a9c963125eac3f7b4e71696566\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:47:34.105285 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98329f388dd899a41b2fddc61ae53f054c7d8860be7179abb88e8046091231d3-shm.mount: Deactivated successfully. Apr 12 18:47:34.124511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98329f388dd899a41b2fddc61ae53f054c7d8860be7179abb88e8046091231d3-rootfs.mount: Deactivated successfully. Apr 12 18:47:34.137484 env[1199]: time="2024-04-12T18:47:34.137432297Z" level=info msg="shim disconnected" id=98329f388dd899a41b2fddc61ae53f054c7d8860be7179abb88e8046091231d3 Apr 12 18:47:34.137484 env[1199]: time="2024-04-12T18:47:34.137473756Z" level=warning msg="cleaning up after shim disconnected" id=98329f388dd899a41b2fddc61ae53f054c7d8860be7179abb88e8046091231d3 namespace=k8s.io Apr 12 18:47:34.137484 env[1199]: time="2024-04-12T18:47:34.137482462Z" level=info msg="cleaning up dead shim" Apr 12 18:47:34.143627 env[1199]: time="2024-04-12T18:47:34.143591357Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:47:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4060 runtime=io.containerd.runc.v2\n" Apr 12 18:47:34.143899 env[1199]: time="2024-04-12T18:47:34.143874214Z" level=info msg="TearDown network for sandbox \"98329f388dd899a41b2fddc61ae53f054c7d8860be7179abb88e8046091231d3\" successfully" Apr 12 18:47:34.143899 env[1199]: time="2024-04-12T18:47:34.143897058Z" level=info msg="StopPodSandbox for \"98329f388dd899a41b2fddc61ae53f054c7d8860be7179abb88e8046091231d3\" returns successfully" Apr 12 18:47:34.274479 kubelet[2077]: I0412 18:47:34.274437 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-lib-modules\") pod \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " Apr 12 18:47:34.274479 kubelet[2077]: I0412 18:47:34.274489 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-host-proc-sys-net\") pod \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " Apr 12 18:47:34.275034 kubelet[2077]: I0412 18:47:34.274521 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-bpf-maps\") pod \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " Apr 12 18:47:34.275034 kubelet[2077]: I0412 18:47:34.274520 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "75f8db33-88f5-4e3d-bfaf-934d93061ad8" (UID: "75f8db33-88f5-4e3d-bfaf-934d93061ad8"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:34.275034 kubelet[2077]: I0412 18:47:34.274566 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cilium-config-path\") pod \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " Apr 12 18:47:34.275034 kubelet[2077]: I0412 18:47:34.274566 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "75f8db33-88f5-4e3d-bfaf-934d93061ad8" (UID: "75f8db33-88f5-4e3d-bfaf-934d93061ad8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:34.275034 kubelet[2077]: I0412 18:47:34.274596 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cilium-ipsec-secrets\") pod \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " Apr 12 18:47:34.275034 kubelet[2077]: I0412 18:47:34.274621 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cilium-run\") pod \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " Apr 12 18:47:34.275233 kubelet[2077]: I0412 18:47:34.274647 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75f8db33-88f5-4e3d-bfaf-934d93061ad8-hubble-tls\") pod \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " Apr 12 18:47:34.275233 kubelet[2077]: I0412 18:47:34.274672 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75f8db33-88f5-4e3d-bfaf-934d93061ad8-clustermesh-secrets\") pod \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " Apr 12 18:47:34.275233 kubelet[2077]: I0412 18:47:34.274694 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cni-path\") pod \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " Apr 12 18:47:34.275233 kubelet[2077]: I0412 18:47:34.274719 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-etc-cni-netd\") pod \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " Apr 12 18:47:34.275233 kubelet[2077]: I0412 18:47:34.274746 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk4kv\" (UniqueName: \"kubernetes.io/projected/75f8db33-88f5-4e3d-bfaf-934d93061ad8-kube-api-access-mk4kv\") pod \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " Apr 12 18:47:34.275233 kubelet[2077]: I0412 18:47:34.274769 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-xtables-lock\") pod 
\"75f8db33-88f5-4e3d-bfaf-934d93061ad8\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " Apr 12 18:47:34.275423 kubelet[2077]: I0412 18:47:34.274792 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cilium-cgroup\") pod \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " Apr 12 18:47:34.275423 kubelet[2077]: I0412 18:47:34.274819 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-host-proc-sys-kernel\") pod \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " Apr 12 18:47:34.275423 kubelet[2077]: W0412 18:47:34.274811 2077 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/75f8db33-88f5-4e3d-bfaf-934d93061ad8/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Apr 12 18:47:34.275423 kubelet[2077]: I0412 18:47:34.274858 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-hostproc" (OuterVolumeSpecName: "hostproc") pod "75f8db33-88f5-4e3d-bfaf-934d93061ad8" (UID: "75f8db33-88f5-4e3d-bfaf-934d93061ad8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:34.275423 kubelet[2077]: I0412 18:47:34.274881 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "75f8db33-88f5-4e3d-bfaf-934d93061ad8" (UID: "75f8db33-88f5-4e3d-bfaf-934d93061ad8"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:34.275423 kubelet[2077]: I0412 18:47:34.274841 2077 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-hostproc\") pod \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\" (UID: \"75f8db33-88f5-4e3d-bfaf-934d93061ad8\") " Apr 12 18:47:34.275640 kubelet[2077]: I0412 18:47:34.274960 2077 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:34.275640 kubelet[2077]: I0412 18:47:34.274972 2077 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:34.275640 kubelet[2077]: I0412 18:47:34.274983 2077 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:34.275640 kubelet[2077]: I0412 18:47:34.274998 2077 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:34.275640 kubelet[2077]: I0412 18:47:34.275301 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cni-path" (OuterVolumeSpecName: "cni-path") pod "75f8db33-88f5-4e3d-bfaf-934d93061ad8" (UID: "75f8db33-88f5-4e3d-bfaf-934d93061ad8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:34.275640 kubelet[2077]: I0412 18:47:34.275321 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "75f8db33-88f5-4e3d-bfaf-934d93061ad8" (UID: "75f8db33-88f5-4e3d-bfaf-934d93061ad8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:34.275854 kubelet[2077]: I0412 18:47:34.275334 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "75f8db33-88f5-4e3d-bfaf-934d93061ad8" (UID: "75f8db33-88f5-4e3d-bfaf-934d93061ad8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:34.275854 kubelet[2077]: I0412 18:47:34.275364 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "75f8db33-88f5-4e3d-bfaf-934d93061ad8" (UID: "75f8db33-88f5-4e3d-bfaf-934d93061ad8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:34.275854 kubelet[2077]: I0412 18:47:34.275376 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "75f8db33-88f5-4e3d-bfaf-934d93061ad8" (UID: "75f8db33-88f5-4e3d-bfaf-934d93061ad8"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:34.276038 kubelet[2077]: I0412 18:47:34.276002 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "75f8db33-88f5-4e3d-bfaf-934d93061ad8" (UID: "75f8db33-88f5-4e3d-bfaf-934d93061ad8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:47:34.276729 kubelet[2077]: I0412 18:47:34.276690 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "75f8db33-88f5-4e3d-bfaf-934d93061ad8" (UID: "75f8db33-88f5-4e3d-bfaf-934d93061ad8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:47:34.277603 kubelet[2077]: I0412 18:47:34.277570 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75f8db33-88f5-4e3d-bfaf-934d93061ad8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "75f8db33-88f5-4e3d-bfaf-934d93061ad8" (UID: "75f8db33-88f5-4e3d-bfaf-934d93061ad8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:47:34.278706 systemd[1]: var-lib-kubelet-pods-75f8db33\x2d88f5\x2d4e3d\x2dbfaf\x2d934d93061ad8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 18:47:34.279678 kubelet[2077]: I0412 18:47:34.279651 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "75f8db33-88f5-4e3d-bfaf-934d93061ad8" (UID: "75f8db33-88f5-4e3d-bfaf-934d93061ad8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:47:34.279900 kubelet[2077]: I0412 18:47:34.279843 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75f8db33-88f5-4e3d-bfaf-934d93061ad8-kube-api-access-mk4kv" (OuterVolumeSpecName: "kube-api-access-mk4kv") pod "75f8db33-88f5-4e3d-bfaf-934d93061ad8" (UID: "75f8db33-88f5-4e3d-bfaf-934d93061ad8"). InnerVolumeSpecName "kube-api-access-mk4kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:47:34.280190 kubelet[2077]: I0412 18:47:34.280164 2077 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f8db33-88f5-4e3d-bfaf-934d93061ad8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "75f8db33-88f5-4e3d-bfaf-934d93061ad8" (UID: "75f8db33-88f5-4e3d-bfaf-934d93061ad8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:47:34.280407 systemd[1]: var-lib-kubelet-pods-75f8db33\x2d88f5\x2d4e3d\x2dbfaf\x2d934d93061ad8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmk4kv.mount: Deactivated successfully. Apr 12 18:47:34.280507 systemd[1]: var-lib-kubelet-pods-75f8db33\x2d88f5\x2d4e3d\x2dbfaf\x2d934d93061ad8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Apr 12 18:47:34.280600 systemd[1]: var-lib-kubelet-pods-75f8db33\x2d88f5\x2d4e3d\x2dbfaf\x2d934d93061ad8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Apr 12 18:47:34.375330 kubelet[2077]: I0412 18:47:34.375207 2077 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:34.375330 kubelet[2077]: I0412 18:47:34.375250 2077 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:34.375330 kubelet[2077]: I0412 18:47:34.375263 2077 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:34.375330 kubelet[2077]: I0412 18:47:34.375272 2077 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:34.375330 kubelet[2077]: I0412 18:47:34.375280 2077 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75f8db33-88f5-4e3d-bfaf-934d93061ad8-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:34.375330 kubelet[2077]: I0412 18:47:34.375289 2077 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:34.375330 kubelet[2077]: I0412 18:47:34.375298 2077 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75f8db33-88f5-4e3d-bfaf-934d93061ad8-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:34.375330 kubelet[2077]: I0412 18:47:34.375306 2077 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:34.375661 kubelet[2077]: I0412 18:47:34.375316 2077 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:34.375661 kubelet[2077]: I0412 18:47:34.375326 2077 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mk4kv\" (UniqueName: \"kubernetes.io/projected/75f8db33-88f5-4e3d-bfaf-934d93061ad8-kube-api-access-mk4kv\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:34.375661 kubelet[2077]: I0412 18:47:34.375334 2077 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75f8db33-88f5-4e3d-bfaf-934d93061ad8-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 12 18:47:35.105548 kubelet[2077]: I0412 18:47:35.105513 2077 scope.go:115] "RemoveContainer" containerID="f3752fa6ca1bb7774636cd74493b3104033a77a9c963125eac3f7b4e71696566" Apr 12 18:47:35.106585 env[1199]: time="2024-04-12T18:47:35.106543613Z" level=info msg="RemoveContainer for 
\"f3752fa6ca1bb7774636cd74493b3104033a77a9c963125eac3f7b4e71696566\"" Apr 12 18:47:35.182104 env[1199]: time="2024-04-12T18:47:35.182058616Z" level=info msg="RemoveContainer for \"f3752fa6ca1bb7774636cd74493b3104033a77a9c963125eac3f7b4e71696566\" returns successfully" Apr 12 18:47:35.432360 kubelet[2077]: I0412 18:47:35.432253 2077 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:47:35.432791 kubelet[2077]: E0412 18:47:35.432777 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="75f8db33-88f5-4e3d-bfaf-934d93061ad8" containerName="mount-cgroup" Apr 12 18:47:35.432890 kubelet[2077]: I0412 18:47:35.432877 2077 memory_manager.go:346] "RemoveStaleState removing state" podUID="75f8db33-88f5-4e3d-bfaf-934d93061ad8" containerName="mount-cgroup" Apr 12 18:47:35.581431 kubelet[2077]: I0412 18:47:35.581380 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6aaa31f-a919-4aa7-beed-c331801a729f-host-proc-sys-kernel\") pod \"cilium-dvmmb\" (UID: \"c6aaa31f-a919-4aa7-beed-c331801a729f\") " pod="kube-system/cilium-dvmmb" Apr 12 18:47:35.581431 kubelet[2077]: I0412 18:47:35.581444 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6aaa31f-a919-4aa7-beed-c331801a729f-hubble-tls\") pod \"cilium-dvmmb\" (UID: \"c6aaa31f-a919-4aa7-beed-c331801a729f\") " pod="kube-system/cilium-dvmmb" Apr 12 18:47:35.581643 kubelet[2077]: I0412 18:47:35.581520 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6aaa31f-a919-4aa7-beed-c331801a729f-lib-modules\") pod \"cilium-dvmmb\" (UID: \"c6aaa31f-a919-4aa7-beed-c331801a729f\") " pod="kube-system/cilium-dvmmb" Apr 12 18:47:35.581643 kubelet[2077]: I0412 18:47:35.581566 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c6aaa31f-a919-4aa7-beed-c331801a729f-cilium-ipsec-secrets\") pod \"cilium-dvmmb\" (UID: \"c6aaa31f-a919-4aa7-beed-c331801a729f\") " pod="kube-system/cilium-dvmmb" Apr 12 18:47:35.581728 kubelet[2077]: I0412 18:47:35.581695 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xrnw\" (UniqueName: \"kubernetes.io/projected/c6aaa31f-a919-4aa7-beed-c331801a729f-kube-api-access-6xrnw\") pod \"cilium-dvmmb\" (UID: \"c6aaa31f-a919-4aa7-beed-c331801a729f\") " pod="kube-system/cilium-dvmmb" Apr 12 18:47:35.581728 kubelet[2077]: I0412 18:47:35.581727 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6aaa31f-a919-4aa7-beed-c331801a729f-cni-path\") pod \"cilium-dvmmb\" (UID: \"c6aaa31f-a919-4aa7-beed-c331801a729f\") " pod="kube-system/cilium-dvmmb" Apr 12 18:47:35.581811 kubelet[2077]: I0412 18:47:35.581770 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6aaa31f-a919-4aa7-beed-c331801a729f-host-proc-sys-net\") pod \"cilium-dvmmb\" (UID: \"c6aaa31f-a919-4aa7-beed-c331801a729f\") " pod="kube-system/cilium-dvmmb" Apr 12 18:47:35.581811 kubelet[2077]: I0412 18:47:35.581818 2077 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6aaa31f-a919-4aa7-beed-c331801a729f-cilium-cgroup\") pod \"cilium-dvmmb\" (UID: \"c6aaa31f-a919-4aa7-beed-c331801a729f\") " pod="kube-system/cilium-dvmmb" Apr 12 18:47:35.582061 kubelet[2077]: I0412 18:47:35.581841 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6aaa31f-a919-4aa7-beed-c331801a729f-xtables-lock\") pod \"cilium-dvmmb\" (UID: \"c6aaa31f-a919-4aa7-beed-c331801a729f\") " pod="kube-system/cilium-dvmmb" Apr 12 18:47:35.582061 kubelet[2077]: I0412 18:47:35.581859 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6aaa31f-a919-4aa7-beed-c331801a729f-etc-cni-netd\") pod \"cilium-dvmmb\" (UID: \"c6aaa31f-a919-4aa7-beed-c331801a729f\") " pod="kube-system/cilium-dvmmb" Apr 12 18:47:35.582061 kubelet[2077]: I0412 18:47:35.581878 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6aaa31f-a919-4aa7-beed-c331801a729f-clustermesh-secrets\") pod \"cilium-dvmmb\" (UID: \"c6aaa31f-a919-4aa7-beed-c331801a729f\") " pod="kube-system/cilium-dvmmb" Apr 12 18:47:35.582061 kubelet[2077]: I0412 18:47:35.581897 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6aaa31f-a919-4aa7-beed-c331801a729f-bpf-maps\") pod \"cilium-dvmmb\" (UID: \"c6aaa31f-a919-4aa7-beed-c331801a729f\") " pod="kube-system/cilium-dvmmb" Apr 12 18:47:35.582061 kubelet[2077]: I0412 18:47:35.581963 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6aaa31f-a919-4aa7-beed-c331801a729f-hostproc\") pod \"cilium-dvmmb\" (UID: \"c6aaa31f-a919-4aa7-beed-c331801a729f\") " pod="kube-system/cilium-dvmmb" Apr 12 18:47:35.582061 kubelet[2077]: I0412 18:47:35.582023 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6aaa31f-a919-4aa7-beed-c331801a729f-cilium-config-path\") pod \"cilium-dvmmb\" (UID: \"c6aaa31f-a919-4aa7-beed-c331801a729f\") " pod="kube-system/cilium-dvmmb" Apr 12 18:47:35.582309 kubelet[2077]: I0412 18:47:35.582053 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6aaa31f-a919-4aa7-beed-c331801a729f-cilium-run\") pod \"cilium-dvmmb\" (UID: \"c6aaa31f-a919-4aa7-beed-c331801a729f\") " pod="kube-system/cilium-dvmmb" Apr 12 18:47:35.738116 kubelet[2077]: E0412 18:47:35.737987 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:47:35.738697 env[1199]: time="2024-04-12T18:47:35.738660710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dvmmb,Uid:c6aaa31f-a919-4aa7-beed-c331801a729f,Namespace:kube-system,Attempt:0,}" Apr 12 18:47:35.752270 env[1199]: time="2024-04-12T18:47:35.752203018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:47:35.752270 env[1199]: time="2024-04-12T18:47:35.752241802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:47:35.752270 env[1199]: time="2024-04-12T18:47:35.752255058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:47:35.752506 env[1199]: time="2024-04-12T18:47:35.752463443Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b07af88c7a01f6d5936856072f02570e35a9e39cf8bfd736b08ee35948190479 pid=4087 runtime=io.containerd.runc.v2 Apr 12 18:47:35.788709 env[1199]: time="2024-04-12T18:47:35.788647805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dvmmb,Uid:c6aaa31f-a919-4aa7-beed-c331801a729f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b07af88c7a01f6d5936856072f02570e35a9e39cf8bfd736b08ee35948190479\"" Apr 12 18:47:35.789657 kubelet[2077]: E0412 18:47:35.789638 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:47:35.796154 env[1199]: time="2024-04-12T18:47:35.796118824Z" level=info msg="CreateContainer within sandbox \"b07af88c7a01f6d5936856072f02570e35a9e39cf8bfd736b08ee35948190479\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:47:35.811483 env[1199]: time="2024-04-12T18:47:35.811442037Z" level=info msg="CreateContainer within sandbox \"b07af88c7a01f6d5936856072f02570e35a9e39cf8bfd736b08ee35948190479\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"93528bf179343a29e25b3ec5999bcac3863ef3c03c843aafecb807dd8237a500\"" Apr 12 18:47:35.811963 env[1199]: time="2024-04-12T18:47:35.811847819Z" level=info msg="StartContainer for \"93528bf179343a29e25b3ec5999bcac3863ef3c03c843aafecb807dd8237a500\"" Apr 12 18:47:35.851724 kubelet[2077]: I0412 18:47:35.851682 2077 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=75f8db33-88f5-4e3d-bfaf-934d93061ad8 path="/var/lib/kubelet/pods/75f8db33-88f5-4e3d-bfaf-934d93061ad8/volumes" Apr 12 18:47:35.856697 env[1199]: time="2024-04-12T18:47:35.856656823Z" level=info msg="StartContainer for \"93528bf179343a29e25b3ec5999bcac3863ef3c03c843aafecb807dd8237a500\" returns successfully" Apr 12 18:47:35.891186 env[1199]: time="2024-04-12T18:47:35.891116356Z" level=info msg="shim disconnected" id=93528bf179343a29e25b3ec5999bcac3863ef3c03c843aafecb807dd8237a500 Apr 12 18:47:35.891186 env[1199]: time="2024-04-12T18:47:35.891182983Z" level=warning msg="cleaning up after shim disconnected" id=93528bf179343a29e25b3ec5999bcac3863ef3c03c843aafecb807dd8237a500 namespace=k8s.io Apr 12 18:47:35.891372 env[1199]: time="2024-04-12T18:47:35.891196879Z" level=info msg="cleaning up dead shim" Apr 12 18:47:35.899029 env[1199]: time="2024-04-12T18:47:35.898883107Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:47:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4173 runtime=io.containerd.runc.v2\n" Apr 12 18:47:35.919738 kubelet[2077]: E0412 18:47:35.919689 2077 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 12 18:47:36.108229 kubelet[2077]: E0412 18:47:36.108197 2077 
dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:47:36.111351 env[1199]: time="2024-04-12T18:47:36.111298883Z" level=info msg="CreateContainer within sandbox \"b07af88c7a01f6d5936856072f02570e35a9e39cf8bfd736b08ee35948190479\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 18:47:36.491306 env[1199]: time="2024-04-12T18:47:36.491193568Z" level=info msg="CreateContainer within sandbox \"b07af88c7a01f6d5936856072f02570e35a9e39cf8bfd736b08ee35948190479\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"34ea1e87350bce6d690e1caab97abdbadd010d32f8ae4300642b45587c395987\"" Apr 12 18:47:36.491989 env[1199]: time="2024-04-12T18:47:36.491942501Z" level=info msg="StartContainer for \"34ea1e87350bce6d690e1caab97abdbadd010d32f8ae4300642b45587c395987\"" Apr 12 18:47:36.592899 env[1199]: time="2024-04-12T18:47:36.592862501Z" level=info msg="StartContainer for \"34ea1e87350bce6d690e1caab97abdbadd010d32f8ae4300642b45587c395987\" returns successfully" Apr 12 18:47:36.645771 env[1199]: time="2024-04-12T18:47:36.645726102Z" level=info msg="shim disconnected" id=34ea1e87350bce6d690e1caab97abdbadd010d32f8ae4300642b45587c395987 Apr 12 18:47:36.645771 env[1199]: time="2024-04-12T18:47:36.645767441Z" level=warning msg="cleaning up after shim disconnected" id=34ea1e87350bce6d690e1caab97abdbadd010d32f8ae4300642b45587c395987 namespace=k8s.io Apr 12 18:47:36.645771 env[1199]: time="2024-04-12T18:47:36.645775566Z" level=info msg="cleaning up dead shim" Apr 12 18:47:36.652602 env[1199]: time="2024-04-12T18:47:36.652557632Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:47:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4234 runtime=io.containerd.runc.v2\n" Apr 12 18:47:37.112697 kubelet[2077]: E0412 18:47:37.112668 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:47:37.114611 env[1199]: time="2024-04-12T18:47:37.114569750Z" level=info msg="CreateContainer within sandbox \"b07af88c7a01f6d5936856072f02570e35a9e39cf8bfd736b08ee35948190479\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 18:47:37.134846 env[1199]: time="2024-04-12T18:47:37.134793825Z" level=info msg="CreateContainer within sandbox \"b07af88c7a01f6d5936856072f02570e35a9e39cf8bfd736b08ee35948190479\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ab5d91e05cf83b38beae8720e06455ddf1d2b1fdffa922454fd779c0df606b6d\"" Apr 12 18:47:37.135303 env[1199]: time="2024-04-12T18:47:37.135272725Z" level=info msg="StartContainer for \"ab5d91e05cf83b38beae8720e06455ddf1d2b1fdffa922454fd779c0df606b6d\"" Apr 12 18:47:37.183231 env[1199]: time="2024-04-12T18:47:37.183187486Z" level=info msg="StartContainer for \"ab5d91e05cf83b38beae8720e06455ddf1d2b1fdffa922454fd779c0df606b6d\" returns successfully" Apr 12 18:47:37.200310 env[1199]: time="2024-04-12T18:47:37.200263382Z" level=info msg="shim disconnected" id=ab5d91e05cf83b38beae8720e06455ddf1d2b1fdffa922454fd779c0df606b6d Apr 12 18:47:37.200310 env[1199]: time="2024-04-12T18:47:37.200303839Z" level=warning msg="cleaning up after shim disconnected" id=ab5d91e05cf83b38beae8720e06455ddf1d2b1fdffa922454fd779c0df606b6d namespace=k8s.io Apr 12 18:47:37.200310 env[1199]: time="2024-04-12T18:47:37.200311974Z" 
level=info msg="cleaning up dead shim" Apr 12 18:47:37.205671 env[1199]: time="2024-04-12T18:47:37.205613604Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:47:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4290 runtime=io.containerd.runc.v2\n" Apr 12 18:47:37.687114 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab5d91e05cf83b38beae8720e06455ddf1d2b1fdffa922454fd779c0df606b6d-rootfs.mount: Deactivated successfully. Apr 12 18:47:38.115161 kubelet[2077]: E0412 18:47:38.115138 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:47:38.116972 env[1199]: time="2024-04-12T18:47:38.116933800Z" level=info msg="CreateContainer within sandbox \"b07af88c7a01f6d5936856072f02570e35a9e39cf8bfd736b08ee35948190479\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 12 18:47:38.130034 env[1199]: time="2024-04-12T18:47:38.129990453Z" level=info msg="CreateContainer within sandbox \"b07af88c7a01f6d5936856072f02570e35a9e39cf8bfd736b08ee35948190479\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"da71b19ba7bc0cd906ea6c29d97a075349bbf7b6aa3884423214b0b004727f42\"" Apr 12 18:47:38.130450 env[1199]: time="2024-04-12T18:47:38.130426672Z" level=info msg="StartContainer for \"da71b19ba7bc0cd906ea6c29d97a075349bbf7b6aa3884423214b0b004727f42\"" Apr 12 18:47:38.166232 env[1199]: time="2024-04-12T18:47:38.166172079Z" level=info msg="StartContainer for \"da71b19ba7bc0cd906ea6c29d97a075349bbf7b6aa3884423214b0b004727f42\" returns successfully" Apr 12 18:47:38.181430 env[1199]: time="2024-04-12T18:47:38.181373245Z" level=info msg="shim disconnected" id=da71b19ba7bc0cd906ea6c29d97a075349bbf7b6aa3884423214b0b004727f42 Apr 12 18:47:38.181430 env[1199]: time="2024-04-12T18:47:38.181412088Z" level=warning msg="cleaning up after shim disconnected" id=da71b19ba7bc0cd906ea6c29d97a075349bbf7b6aa3884423214b0b004727f42 namespace=k8s.io Apr 12 18:47:38.181430 env[1199]: time="2024-04-12T18:47:38.181420604Z" level=info msg="cleaning up dead shim" Apr 12 18:47:38.187286 env[1199]: time="2024-04-12T18:47:38.187237551Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:47:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4345 runtime=io.containerd.runc.v2\n" Apr 12 18:47:38.252821 kubelet[2077]: I0412 18:47:38.252796 2077 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-04-12 18:47:38.252749802 +0000 UTC m=+102.494411242 LastTransitionTime:2024-04-12 18:47:38.252749802 +0000 UTC m=+102.494411242 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Apr 12 18:47:38.687180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da71b19ba7bc0cd906ea6c29d97a075349bbf7b6aa3884423214b0b004727f42-rootfs.mount: Deactivated successfully. 
Apr 12 18:47:39.118247 kubelet[2077]: E0412 18:47:39.118206 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:47:39.120685 env[1199]: time="2024-04-12T18:47:39.120647167Z" level=info msg="CreateContainer within sandbox \"b07af88c7a01f6d5936856072f02570e35a9e39cf8bfd736b08ee35948190479\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 12 18:47:39.135230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount543129528.mount: Deactivated successfully. Apr 12 18:47:39.136999 env[1199]: time="2024-04-12T18:47:39.136947160Z" level=info msg="CreateContainer within sandbox \"b07af88c7a01f6d5936856072f02570e35a9e39cf8bfd736b08ee35948190479\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b53aca92654f443fae8f924b6447a17effbad44bf62d28c9fdccefbad56902be\"" Apr 12 18:47:39.137418 env[1199]: time="2024-04-12T18:47:39.137390151Z" level=info msg="StartContainer for \"b53aca92654f443fae8f924b6447a17effbad44bf62d28c9fdccefbad56902be\"" Apr 12 18:47:39.175129 env[1199]: time="2024-04-12T18:47:39.175073158Z" level=info msg="StartContainer for \"b53aca92654f443fae8f924b6447a17effbad44bf62d28c9fdccefbad56902be\" returns successfully" Apr 12 18:47:39.415930 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 12 18:47:40.123108 kubelet[2077]: E0412 18:47:40.123082 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:47:40.156304 kubelet[2077]: I0412 18:47:40.156263 2077 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dvmmb" podStartSLOduration=5.1562194980000005 podCreationTimestamp="2024-04-12 18:47:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:47:40.156015572 +0000 UTC m=+104.397677002" watchObservedRunningTime="2024-04-12 18:47:40.156219498 +0000 UTC m=+104.397880938" Apr 12 18:47:41.257848 systemd[1]: run-containerd-runc-k8s.io-b53aca92654f443fae8f924b6447a17effbad44bf62d28c9fdccefbad56902be-runc.3ekZhe.mount: Deactivated successfully. 
Apr 12 18:47:41.739637 kubelet[2077]: E0412 18:47:41.739609 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:47:41.871277 systemd-networkd[1071]: lxc_health: Link UP Apr 12 18:47:41.880096 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Apr 12 18:47:41.879970 systemd-networkd[1071]: lxc_health: Gained carrier Apr 12 18:47:43.381263 kubelet[2077]: E0412 18:47:43.381227 2077 upgradeaware.go:426] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40934->127.0.0.1:42653: write tcp 127.0.0.1:40934->127.0.0.1:42653: write: broken pipe Apr 12 18:47:43.520065 systemd-networkd[1071]: lxc_health: Gained IPv6LL Apr 12 18:47:43.740054 kubelet[2077]: E0412 18:47:43.739926 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:47:44.130491 kubelet[2077]: E0412 18:47:44.130458 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:47:45.132499 kubelet[2077]: E0412 18:47:45.132451 2077 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:47:45.458579 systemd[1]: run-containerd-runc-k8s.io-b53aca92654f443fae8f924b6447a17effbad44bf62d28c9fdccefbad56902be-runc.R4pgaD.mount: Deactivated successfully. Apr 12 18:47:49.646764 systemd[1]: run-containerd-runc-k8s.io-b53aca92654f443fae8f924b6447a17effbad44bf62d28c9fdccefbad56902be-runc.aKAFZG.mount: Deactivated successfully. Apr 12 18:47:49.689233 sshd[3920]: pam_unix(sshd:session): session closed for user core Apr 12 18:47:49.691633 systemd[1]: sshd@25-10.0.0.52:22-10.0.0.1:35870.service: Deactivated successfully. Apr 12 18:47:49.692800 systemd[1]: session-26.scope: Deactivated successfully. Apr 12 18:47:49.692865 systemd-logind[1185]: Session 26 logged out. Waiting for processes to exit. Apr 12 18:47:49.693670 systemd-logind[1185]: Removed session 26.