Jul 2 07:49:16.787614 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024 Jul 2 07:49:16.787631 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:49:16.787641 kernel: BIOS-provided physical RAM map: Jul 2 07:49:16.787646 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 2 07:49:16.787651 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 2 07:49:16.787657 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 2 07:49:16.787663 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 2 07:49:16.787669 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 2 07:49:16.787674 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jul 2 07:49:16.787681 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jul 2 07:49:16.787686 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jul 2 07:49:16.787692 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Jul 2 07:49:16.787697 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jul 2 07:49:16.787703 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 2 07:49:16.787710 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jul 2 07:49:16.787717 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jul 2 07:49:16.787723 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 2 07:49:16.787729 kernel: NX (Execute Disable) protection: active Jul 2 07:49:16.787734 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable Jul 2 07:49:16.787740 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable Jul 2 07:49:16.787746 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable Jul 2 07:49:16.787752 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable Jul 2 07:49:16.787757 kernel: extended physical RAM map: Jul 2 07:49:16.787763 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 2 07:49:16.787769 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 2 07:49:16.787776 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 2 07:49:16.787782 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jul 2 07:49:16.787788 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 2 07:49:16.787793 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Jul 2 07:49:16.787799 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jul 2 07:49:16.787805 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b1aa017] usable Jul 2 07:49:16.787811 kernel: reserve setup_data: [mem 0x000000009b1aa018-0x000000009b1e6e57] usable Jul 2 07:49:16.787816 kernel: reserve setup_data: [mem 0x000000009b1e6e58-0x000000009b3f7017] usable Jul 2 07:49:16.787822 kernel: reserve setup_data: [mem 0x000000009b3f7018-0x000000009b400c57] 
usable Jul 2 07:49:16.787837 kernel: reserve setup_data: [mem 0x000000009b400c58-0x000000009c8eefff] usable Jul 2 07:49:16.787843 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Jul 2 07:49:16.787850 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jul 2 07:49:16.787856 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 2 07:49:16.787862 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jul 2 07:49:16.787868 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jul 2 07:49:16.787876 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 2 07:49:16.787882 kernel: efi: EFI v2.70 by EDK II Jul 2 07:49:16.787889 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018 Jul 2 07:49:16.787896 kernel: random: crng init done Jul 2 07:49:16.787902 kernel: SMBIOS 2.8 present. Jul 2 07:49:16.787909 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Jul 2 07:49:16.787915 kernel: Hypervisor detected: KVM Jul 2 07:49:16.787921 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 07:49:16.787928 kernel: kvm-clock: cpu 0, msr 4c192001, primary cpu clock Jul 2 07:49:16.787934 kernel: kvm-clock: using sched offset of 4230189499 cycles Jul 2 07:49:16.787941 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 07:49:16.787948 kernel: tsc: Detected 2794.748 MHz processor Jul 2 07:49:16.787955 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 07:49:16.787962 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 07:49:16.787968 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jul 2 07:49:16.787975 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 07:49:16.787982 kernel: Using GB pages for direct mapping Jul 2 07:49:16.787988 kernel: Secure boot disabled Jul 2 07:49:16.787994 kernel: ACPI: Early table checksum verification disabled Jul 2 07:49:16.788001 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 2 07:49:16.788007 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Jul 2 07:49:16.788015 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:49:16.788021 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:49:16.788029 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 2 07:49:16.788035 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:49:16.788042 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:49:16.788048 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:49:16.788055 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 2 07:49:16.788061 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Jul 2 07:49:16.788067 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Jul 2 07:49:16.788075 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 2 07:49:16.788081 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Jul 2 07:49:16.788088 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Jul 2 07:49:16.788094 kernel: ACPI: Reserving WAET table memory at [mem 
0x9cb77000-0x9cb77027] Jul 2 07:49:16.788101 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Jul 2 07:49:16.788107 kernel: No NUMA configuration found Jul 2 07:49:16.788113 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jul 2 07:49:16.788120 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jul 2 07:49:16.788126 kernel: Zone ranges: Jul 2 07:49:16.788134 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 07:49:16.788140 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jul 2 07:49:16.788146 kernel: Normal empty Jul 2 07:49:16.788153 kernel: Movable zone start for each node Jul 2 07:49:16.788159 kernel: Early memory node ranges Jul 2 07:49:16.788166 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 2 07:49:16.788172 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 2 07:49:16.788178 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 2 07:49:16.788185 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jul 2 07:49:16.788192 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jul 2 07:49:16.788198 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jul 2 07:49:16.788205 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jul 2 07:49:16.788211 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 07:49:16.788218 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 2 07:49:16.788224 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 2 07:49:16.788230 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 07:49:16.788237 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jul 2 07:49:16.788243 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jul 2 07:49:16.788251 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jul 2 07:49:16.788257 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 2 07:49:16.788264 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 07:49:16.788270 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 2 07:49:16.788277 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 2 07:49:16.788283 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 07:49:16.788290 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 07:49:16.788296 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 07:49:16.788303 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 07:49:16.788321 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 07:49:16.788328 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 2 07:49:16.788334 kernel: TSC deadline timer available Jul 2 07:49:16.788341 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 2 07:49:16.788347 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 2 07:49:16.788353 kernel: kvm-guest: setup PV sched yield Jul 2 07:49:16.788360 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Jul 2 07:49:16.788366 kernel: Booting paravirtualized kernel on KVM Jul 2 07:49:16.788373 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 07:49:16.788380 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Jul 2 07:49:16.788388 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Jul 2 07:49:16.788395 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Jul 
2 07:49:16.788405 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 2 07:49:16.788413 kernel: kvm-guest: setup async PF for cpu 0 Jul 2 07:49:16.788419 kernel: kvm-guest: stealtime: cpu 0, msr 9ae1c0c0 Jul 2 07:49:16.788426 kernel: kvm-guest: PV spinlocks enabled Jul 2 07:49:16.788433 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 07:49:16.788440 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jul 2 07:49:16.788447 kernel: Policy zone: DMA32 Jul 2 07:49:16.788455 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:49:16.788462 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 07:49:16.788470 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 07:49:16.788476 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 07:49:16.788483 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 07:49:16.788490 kernel: Memory: 2398372K/2567000K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 168368K reserved, 0K cma-reserved) Jul 2 07:49:16.788497 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 2 07:49:16.788505 kernel: ftrace: allocating 34514 entries in 135 pages Jul 2 07:49:16.788512 kernel: ftrace: allocated 135 pages with 4 groups Jul 2 07:49:16.788519 kernel: rcu: Hierarchical RCU implementation. Jul 2 07:49:16.788526 kernel: rcu: RCU event tracing is enabled. Jul 2 07:49:16.788533 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 2 07:49:16.788540 kernel: Rude variant of Tasks RCU enabled. Jul 2 07:49:16.788547 kernel: Tracing variant of Tasks RCU enabled. Jul 2 07:49:16.788554 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 07:49:16.788561 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 2 07:49:16.788568 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 2 07:49:16.788575 kernel: Console: colour dummy device 80x25 Jul 2 07:49:16.788582 kernel: printk: console [ttyS0] enabled Jul 2 07:49:16.788589 kernel: ACPI: Core revision 20210730 Jul 2 07:49:16.788596 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 2 07:49:16.788603 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 07:49:16.788609 kernel: x2apic enabled Jul 2 07:49:16.788616 kernel: Switched APIC routing to physical x2apic. Jul 2 07:49:16.788623 kernel: kvm-guest: setup PV IPIs Jul 2 07:49:16.788631 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 2 07:49:16.788638 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 2 07:49:16.788645 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jul 2 07:49:16.788651 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 2 07:49:16.788658 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 2 07:49:16.788665 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 2 07:49:16.788672 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 07:49:16.788679 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 07:49:16.788686 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 07:49:16.788694 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 07:49:16.788701 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 2 07:49:16.788707 kernel: RETBleed: Mitigation: untrained return thunk Jul 2 07:49:16.788714 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 2 07:49:16.788721 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 2 07:49:16.788728 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 07:49:16.788735 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 07:49:16.788742 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 07:49:16.788749 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 07:49:16.788757 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 2 07:49:16.788764 kernel: Freeing SMP alternatives memory: 32K Jul 2 07:49:16.788770 kernel: pid_max: default: 32768 minimum: 301 Jul 2 07:49:16.788777 kernel: LSM: Security Framework initializing Jul 2 07:49:16.788784 kernel: SELinux: Initializing. Jul 2 07:49:16.788791 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 07:49:16.788798 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 07:49:16.788805 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 2 07:49:16.788813 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 2 07:49:16.788820 kernel: ... version: 0 Jul 2 07:49:16.788834 kernel: ... bit width: 48 Jul 2 07:49:16.788841 kernel: ... generic registers: 6 Jul 2 07:49:16.788848 kernel: ... value mask: 0000ffffffffffff Jul 2 07:49:16.788855 kernel: ... max period: 00007fffffffffff Jul 2 07:49:16.788862 kernel: ... fixed-purpose events: 0 Jul 2 07:49:16.788869 kernel: ... event mask: 000000000000003f Jul 2 07:49:16.788875 kernel: signal: max sigframe size: 1776 Jul 2 07:49:16.788882 kernel: rcu: Hierarchical SRCU implementation. Jul 2 07:49:16.788890 kernel: smp: Bringing up secondary CPUs ... Jul 2 07:49:16.788897 kernel: x86: Booting SMP configuration: Jul 2 07:49:16.788904 kernel: .... 
node #0, CPUs: #1 Jul 2 07:49:16.788911 kernel: kvm-clock: cpu 1, msr 4c192041, secondary cpu clock Jul 2 07:49:16.788918 kernel: kvm-guest: setup async PF for cpu 1 Jul 2 07:49:16.788924 kernel: kvm-guest: stealtime: cpu 1, msr 9ae9c0c0 Jul 2 07:49:16.788931 kernel: #2 Jul 2 07:49:16.788938 kernel: kvm-clock: cpu 2, msr 4c192081, secondary cpu clock Jul 2 07:49:16.788945 kernel: kvm-guest: setup async PF for cpu 2 Jul 2 07:49:16.788952 kernel: kvm-guest: stealtime: cpu 2, msr 9af1c0c0 Jul 2 07:49:16.788959 kernel: #3 Jul 2 07:49:16.788966 kernel: kvm-clock: cpu 3, msr 4c1920c1, secondary cpu clock Jul 2 07:49:16.788972 kernel: kvm-guest: setup async PF for cpu 3 Jul 2 07:49:16.788979 kernel: kvm-guest: stealtime: cpu 3, msr 9af9c0c0 Jul 2 07:49:16.788986 kernel: smp: Brought up 1 node, 4 CPUs Jul 2 07:49:16.788993 kernel: smpboot: Max logical packages: 1 Jul 2 07:49:16.789000 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jul 2 07:49:16.789007 kernel: devtmpfs: initialized Jul 2 07:49:16.789014 kernel: x86/mm: Memory block size: 128MB Jul 2 07:49:16.789021 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 2 07:49:16.789028 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 2 07:49:16.789035 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jul 2 07:49:16.789042 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 2 07:49:16.789049 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 2 07:49:16.789056 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 07:49:16.789063 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 2 07:49:16.789070 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 07:49:16.789078 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 07:49:16.789085 kernel: audit: initializing netlink subsys (disabled) Jul 2 07:49:16.789091 kernel: audit: type=2000 audit(1719906557.049:1): state=initialized audit_enabled=0 res=1 Jul 2 07:49:16.789098 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 07:49:16.789105 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 07:49:16.789112 kernel: cpuidle: using governor menu Jul 2 07:49:16.789119 kernel: ACPI: bus type PCI registered Jul 2 07:49:16.789125 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 07:49:16.789132 kernel: dca service started, version 1.12.1 Jul 2 07:49:16.789140 kernel: PCI: Using configuration type 1 for base access Jul 2 07:49:16.789147 kernel: PCI: Using configuration type 1 for extended access Jul 2 07:49:16.789154 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 2 07:49:16.789161 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 07:49:16.789168 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 07:49:16.789174 kernel: ACPI: Added _OSI(Module Device) Jul 2 07:49:16.789181 kernel: ACPI: Added _OSI(Processor Device) Jul 2 07:49:16.789188 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 07:49:16.789195 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 07:49:16.789202 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 2 07:49:16.789209 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 2 07:49:16.789216 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 2 07:49:16.789223 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 07:49:16.789230 kernel: ACPI: Interpreter enabled Jul 2 07:49:16.789236 kernel: ACPI: PM: (supports S0 S3 S5) Jul 2 07:49:16.789243 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 07:49:16.789250 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 07:49:16.789257 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 2 07:49:16.789265 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 07:49:16.789380 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 07:49:16.789392 kernel: acpiphp: Slot [3] registered Jul 2 07:49:16.789399 kernel: acpiphp: Slot [4] registered Jul 2 07:49:16.789406 kernel: acpiphp: Slot [5] registered Jul 2 07:49:16.789413 kernel: acpiphp: Slot [6] registered Jul 2 07:49:16.789419 kernel: acpiphp: Slot [7] registered Jul 2 07:49:16.789426 kernel: acpiphp: Slot [8] registered Jul 2 07:49:16.789433 kernel: acpiphp: Slot [9] registered Jul 2 07:49:16.789442 kernel: acpiphp: Slot [10] registered Jul 2 07:49:16.789449 kernel: acpiphp: Slot [11] registered Jul 2 07:49:16.789456 kernel: acpiphp: Slot [12] registered Jul 2 07:49:16.789462 kernel: acpiphp: Slot [13] registered Jul 2 07:49:16.789469 kernel: acpiphp: Slot [14] registered Jul 2 07:49:16.789475 kernel: acpiphp: Slot [15] registered Jul 2 07:49:16.789482 kernel: acpiphp: Slot [16] registered Jul 2 07:49:16.789489 kernel: acpiphp: Slot [17] registered Jul 2 07:49:16.789495 kernel: acpiphp: Slot [18] registered Jul 2 07:49:16.789503 kernel: acpiphp: Slot [19] registered Jul 2 07:49:16.789510 kernel: acpiphp: Slot [20] registered Jul 2 07:49:16.789516 kernel: acpiphp: Slot [21] registered Jul 2 07:49:16.789523 kernel: acpiphp: Slot [22] registered Jul 2 07:49:16.789529 kernel: acpiphp: Slot [23] registered Jul 2 07:49:16.789536 kernel: acpiphp: Slot [24] registered Jul 2 07:49:16.789543 kernel: acpiphp: Slot [25] registered Jul 2 07:49:16.789549 kernel: acpiphp: Slot [26] registered Jul 2 07:49:16.789556 kernel: acpiphp: Slot [27] registered Jul 2 07:49:16.789564 kernel: acpiphp: Slot [28] registered Jul 2 07:49:16.789570 kernel: acpiphp: Slot [29] registered Jul 2 07:49:16.789577 kernel: acpiphp: Slot [30] registered Jul 2 07:49:16.789584 kernel: acpiphp: Slot [31] registered Jul 2 07:49:16.789590 kernel: PCI host bridge to bus 0000:00 Jul 2 07:49:16.789667 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 07:49:16.789729 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 07:49:16.789794 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 07:49:16.789868 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Jul 2 07:49:16.789928 kernel: pci_bus 0000:00: 
root bus resource [mem 0x800000000-0x87fffffff window] Jul 2 07:49:16.789987 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 07:49:16.790067 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 07:49:16.790149 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jul 2 07:49:16.790224 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jul 2 07:49:16.790297 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Jul 2 07:49:16.790391 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 2 07:49:16.790511 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 2 07:49:16.790586 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 2 07:49:16.790653 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 2 07:49:16.790728 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 2 07:49:16.790796 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 2 07:49:16.790877 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jul 2 07:49:16.790950 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Jul 2 07:49:16.791017 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jul 2 07:49:16.791082 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Jul 2 07:49:16.791149 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jul 2 07:49:16.791215 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Jul 2 07:49:16.791282 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 07:49:16.791376 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Jul 2 07:49:16.791447 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Jul 2 07:49:16.791518 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jul 2 07:49:16.791587 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jul 2 07:49:16.791661 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jul 2 07:49:16.791729 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jul 2 07:49:16.791795 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jul 2 07:49:16.791878 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jul 2 07:49:16.791965 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Jul 2 07:49:16.792050 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jul 2 07:49:16.792119 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Jul 2 07:49:16.792185 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jul 2 07:49:16.792763 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jul 2 07:49:16.792777 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 07:49:16.792787 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 07:49:16.792794 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 07:49:16.792801 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 07:49:16.792807 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 07:49:16.792814 kernel: iommu: Default domain type: Translated Jul 2 07:49:16.792821 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 07:49:16.792904 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jul 2 07:49:16.792973 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 07:49:16.793040 kernel: pci 
0000:00:02.0: vgaarb: bridge control possible Jul 2 07:49:16.793051 kernel: vgaarb: loaded Jul 2 07:49:16.793058 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 07:49:16.793065 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 07:49:16.793072 kernel: PTP clock support registered Jul 2 07:49:16.793079 kernel: Registered efivars operations Jul 2 07:49:16.793086 kernel: PCI: Using ACPI for IRQ routing Jul 2 07:49:16.793093 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 07:49:16.793099 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 2 07:49:16.793106 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jul 2 07:49:16.793114 kernel: e820: reserve RAM buffer [mem 0x9b1aa018-0x9bffffff] Jul 2 07:49:16.793121 kernel: e820: reserve RAM buffer [mem 0x9b3f7018-0x9bffffff] Jul 2 07:49:16.793127 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jul 2 07:49:16.793134 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jul 2 07:49:16.793141 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 2 07:49:16.793148 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 2 07:49:16.793155 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 07:49:16.793162 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 07:49:16.793169 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 07:49:16.793177 kernel: pnp: PnP ACPI init Jul 2 07:49:16.793255 kernel: pnp 00:02: [dma 2] Jul 2 07:49:16.793265 kernel: pnp: PnP ACPI: found 6 devices Jul 2 07:49:16.793272 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 07:49:16.793279 kernel: NET: Registered PF_INET protocol family Jul 2 07:49:16.793286 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 07:49:16.793293 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 07:49:16.793300 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 07:49:16.793321 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 07:49:16.793328 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 2 07:49:16.793335 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 07:49:16.793342 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 07:49:16.793349 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 07:49:16.793356 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 07:49:16.793362 kernel: NET: Registered PF_XDP protocol family Jul 2 07:49:16.793435 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jul 2 07:49:16.793514 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jul 2 07:49:16.793576 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 07:49:16.793634 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 07:49:16.793695 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 07:49:16.793754 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Jul 2 07:49:16.793813 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Jul 2 07:49:16.793893 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jul 2 07:49:16.793961 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 07:49:16.794029 kernel: pci 
0000:00:01.0: Activating ISA DMA hang workarounds Jul 2 07:49:16.794039 kernel: PCI: CLS 0 bytes, default 64 Jul 2 07:49:16.794046 kernel: Initialise system trusted keyrings Jul 2 07:49:16.794053 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 07:49:16.794060 kernel: Key type asymmetric registered Jul 2 07:49:16.794068 kernel: Asymmetric key parser 'x509' registered Jul 2 07:49:16.794075 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 07:49:16.794082 kernel: io scheduler mq-deadline registered Jul 2 07:49:16.794089 kernel: io scheduler kyber registered Jul 2 07:49:16.794098 kernel: io scheduler bfq registered Jul 2 07:49:16.794105 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 07:49:16.794112 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 2 07:49:16.794120 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jul 2 07:49:16.794127 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 2 07:49:16.794134 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 07:49:16.794141 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 07:49:16.794148 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 07:49:16.794155 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 07:49:16.794163 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 07:49:16.794171 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 07:49:16.794252 kernel: rtc_cmos 00:05: RTC can wake from S4 Jul 2 07:49:16.794391 kernel: rtc_cmos 00:05: registered as rtc0 Jul 2 07:49:16.796378 kernel: rtc_cmos 00:05: setting system clock to 2024-07-02T07:49:16 UTC (1719906556) Jul 2 07:49:16.796449 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 2 07:49:16.796458 kernel: efifb: probing for efifb Jul 2 07:49:16.796466 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jul 2 07:49:16.796474 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jul 2 07:49:16.796481 kernel: efifb: scrolling: redraw Jul 2 07:49:16.796488 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 2 07:49:16.796495 kernel: Console: switching to colour frame buffer device 160x50 Jul 2 07:49:16.796695 kernel: fb0: EFI VGA frame buffer device Jul 2 07:49:16.796708 kernel: pstore: Registered efi as persistent store backend Jul 2 07:49:16.796715 kernel: NET: Registered PF_INET6 protocol family Jul 2 07:49:16.796722 kernel: Segment Routing with IPv6 Jul 2 07:49:16.796729 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 07:49:16.796736 kernel: NET: Registered PF_PACKET protocol family Jul 2 07:49:16.796743 kernel: Key type dns_resolver registered Jul 2 07:49:16.796750 kernel: IPI shorthand broadcast: enabled Jul 2 07:49:16.796758 kernel: sched_clock: Marking stable (473457408, 132279913)->(618070302, -12332981) Jul 2 07:49:16.796765 kernel: registered taskstats version 1 Jul 2 07:49:16.796773 kernel: Loading compiled-in X.509 certificates Jul 2 07:49:16.796781 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42' Jul 2 07:49:16.796787 kernel: Key type .fscrypt registered Jul 2 07:49:16.796794 kernel: Key type fscrypt-provisioning registered Jul 2 07:49:16.796803 kernel: pstore: Using crash dump compression: deflate Jul 2 07:49:16.796810 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 2 07:49:16.796817 kernel: ima: Allocated hash algorithm: sha1 Jul 2 07:49:16.796824 kernel: ima: No architecture policies found Jul 2 07:49:16.796840 kernel: clk: Disabling unused clocks Jul 2 07:49:16.796849 kernel: Freeing unused kernel image (initmem) memory: 47444K Jul 2 07:49:16.796856 kernel: Write protecting the kernel read-only data: 28672k Jul 2 07:49:16.796864 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 07:49:16.797061 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K Jul 2 07:49:16.797073 kernel: Run /init as init process Jul 2 07:49:16.797080 kernel: with arguments: Jul 2 07:49:16.797087 kernel: /init Jul 2 07:49:16.797094 kernel: with environment: Jul 2 07:49:16.797101 kernel: HOME=/ Jul 2 07:49:16.797109 kernel: TERM=linux Jul 2 07:49:16.797116 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 07:49:16.797125 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:49:16.797135 systemd[1]: Detected virtualization kvm. Jul 2 07:49:16.797143 systemd[1]: Detected architecture x86-64. Jul 2 07:49:16.797150 systemd[1]: Running in initrd. Jul 2 07:49:16.797158 systemd[1]: No hostname configured, using default hostname. Jul 2 07:49:16.797165 systemd[1]: Hostname set to . Jul 2 07:49:16.797174 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:49:16.797181 systemd[1]: Queued start job for default target initrd.target. Jul 2 07:49:16.797189 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:49:16.797197 systemd[1]: Reached target cryptsetup.target. Jul 2 07:49:16.797204 systemd[1]: Reached target paths.target. Jul 2 07:49:16.797211 systemd[1]: Reached target slices.target. Jul 2 07:49:16.797219 systemd[1]: Reached target swap.target. Jul 2 07:49:16.797226 systemd[1]: Reached target timers.target. Jul 2 07:49:16.797235 systemd[1]: Listening on iscsid.socket. Jul 2 07:49:16.797242 systemd[1]: Listening on iscsiuio.socket. Jul 2 07:49:16.797250 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 07:49:16.797258 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 07:49:16.797265 systemd[1]: Listening on systemd-journald.socket. Jul 2 07:49:16.797273 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:49:16.797280 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:49:16.797288 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:49:16.797297 systemd[1]: Reached target sockets.target. Jul 2 07:49:16.797304 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:49:16.797324 systemd[1]: Finished network-cleanup.service. Jul 2 07:49:16.797332 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 07:49:16.797339 systemd[1]: Starting systemd-journald.service... Jul 2 07:49:16.797347 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:49:16.797354 systemd[1]: Starting systemd-resolved.service... Jul 2 07:49:16.797362 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 07:49:16.797369 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:49:16.797378 systemd[1]: Finished systemd-fsck-usr.service. 
Jul 2 07:49:16.797386 kernel: audit: type=1130 audit(1719906556.787:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:16.797394 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:49:16.797401 systemd[1]: Finished systemd-vconsole-setup.service. Jul 2 07:49:16.797409 kernel: audit: type=1130 audit(1719906556.796:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:16.797419 systemd-journald[197]: Journal started Jul 2 07:49:16.797460 systemd-journald[197]: Runtime Journal (/run/log/journal/50bd3218ee354580856761468e094ef0) is 6.0M, max 48.4M, 42.4M free. Jul 2 07:49:16.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:16.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:16.795236 systemd-modules-load[198]: Inserted module 'overlay' Jul 2 07:49:16.801813 systemd[1]: Started systemd-journald.service. Jul 2 07:49:16.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:16.802885 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:49:16.810187 kernel: audit: type=1130 audit(1719906556.802:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:16.810205 kernel: audit: type=1130 audit(1719906556.805:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:16.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:16.806257 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 07:49:16.822406 systemd-resolved[199]: Positive Trust Anchors: Jul 2 07:49:16.822426 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:49:16.822454 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:49:16.823945 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 07:49:16.828356 kernel: audit: type=1130 audit(1719906556.823:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:16.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:16.824647 systemd[1]: Starting dracut-cmdline.service... Jul 2 07:49:16.824666 systemd-resolved[199]: Defaulting to hostname 'linux'. Jul 2 07:49:16.835183 kernel: audit: type=1130 audit(1719906556.830:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:16.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:16.835274 dracut-cmdline[214]: dracut-dracut-053 Jul 2 07:49:16.829205 systemd[1]: Started systemd-resolved.service. Jul 2 07:49:16.830721 systemd[1]: Reached target nss-lookup.target. Jul 2 07:49:16.837792 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:49:16.857340 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 07:49:16.862251 systemd-modules-load[198]: Inserted module 'br_netfilter' Jul 2 07:49:16.863218 kernel: Bridge firewalling registered Jul 2 07:49:16.880334 kernel: SCSI subsystem initialized Jul 2 07:49:16.890359 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 07:49:16.890421 kernel: device-mapper: uevent: version 1.0.3 Jul 2 07:49:16.892294 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 07:49:16.894334 kernel: Loading iSCSI transport class v2.0-870. Jul 2 07:49:16.895024 systemd-modules-load[198]: Inserted module 'dm_multipath' Jul 2 07:49:16.896589 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:49:16.901277 kernel: audit: type=1130 audit(1719906556.896:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:16.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:16.901304 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:49:16.909492 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:49:16.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:16.915335 kernel: audit: type=1130 audit(1719906556.910:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:16.921338 kernel: iscsi: registered transport (tcp) Jul 2 07:49:16.942331 kernel: iscsi: registered transport (qla4xxx) Jul 2 07:49:16.942354 kernel: QLogic iSCSI HBA Driver Jul 2 07:49:16.971145 systemd[1]: Finished dracut-cmdline.service. Jul 2 07:49:16.975745 kernel: audit: type=1130 audit(1719906556.970:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:16.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:16.975775 systemd[1]: Starting dracut-pre-udev.service... Jul 2 07:49:17.024347 kernel: raid6: avx2x4 gen() 24429 MB/s Jul 2 07:49:17.041341 kernel: raid6: avx2x4 xor() 6311 MB/s Jul 2 07:49:17.058342 kernel: raid6: avx2x2 gen() 25328 MB/s Jul 2 07:49:17.075341 kernel: raid6: avx2x2 xor() 16612 MB/s Jul 2 07:49:17.092345 kernel: raid6: avx2x1 gen() 20947 MB/s Jul 2 07:49:17.109342 kernel: raid6: avx2x1 xor() 13163 MB/s Jul 2 07:49:17.126341 kernel: raid6: sse2x4 gen() 12424 MB/s Jul 2 07:49:17.143333 kernel: raid6: sse2x4 xor() 5274 MB/s Jul 2 07:49:17.160332 kernel: raid6: sse2x2 gen() 13406 MB/s Jul 2 07:49:17.177338 kernel: raid6: sse2x2 xor() 8486 MB/s Jul 2 07:49:17.194334 kernel: raid6: sse2x1 gen() 10668 MB/s Jul 2 07:49:17.211863 kernel: raid6: sse2x1 xor() 6651 MB/s Jul 2 07:49:17.211882 kernel: raid6: using algorithm avx2x2 gen() 25328 MB/s Jul 2 07:49:17.211891 kernel: raid6: .... xor() 16612 MB/s, rmw enabled Jul 2 07:49:17.212642 kernel: raid6: using avx2x2 recovery algorithm Jul 2 07:49:17.225339 kernel: xor: automatically using best checksumming function avx Jul 2 07:49:17.312343 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 07:49:17.319196 systemd[1]: Finished dracut-pre-udev.service. Jul 2 07:49:17.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:17.320000 audit: BPF prog-id=7 op=LOAD Jul 2 07:49:17.320000 audit: BPF prog-id=8 op=LOAD Jul 2 07:49:17.321067 systemd[1]: Starting systemd-udevd.service... Jul 2 07:49:17.332943 systemd-udevd[400]: Using default interface naming scheme 'v252'. Jul 2 07:49:17.337492 systemd[1]: Started systemd-udevd.service. Jul 2 07:49:17.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:17.339087 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 07:49:17.348464 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Jul 2 07:49:17.372712 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 07:49:17.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:17.375226 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:49:17.405974 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:49:17.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:17.444340 kernel: libata version 3.00 loaded. Jul 2 07:49:17.446825 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 2 07:49:17.468919 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 07:49:17.468934 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 2 07:49:17.471655 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 07:49:17.471667 kernel: AES CTR mode by8 optimization enabled Jul 2 07:49:17.478341 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 07:49:17.478398 kernel: GPT:9289727 != 19775487 Jul 2 07:49:17.478410 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 07:49:17.478420 kernel: GPT:9289727 != 19775487 Jul 2 07:49:17.478430 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 07:49:17.478440 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:49:17.487333 kernel: scsi host0: ata_piix Jul 2 07:49:17.494338 kernel: scsi host1: ata_piix Jul 2 07:49:17.497330 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (446) Jul 2 07:49:17.497357 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jul 2 07:49:17.499271 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jul 2 07:49:17.504103 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 07:49:17.504215 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 07:49:17.511613 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 07:49:17.515266 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 07:49:17.518890 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:49:17.519780 systemd[1]: Starting disk-uuid.service... Jul 2 07:49:17.526921 disk-uuid[527]: Primary Header is updated. Jul 2 07:49:17.526921 disk-uuid[527]: Secondary Entries is updated. Jul 2 07:49:17.526921 disk-uuid[527]: Secondary Header is updated. Jul 2 07:49:17.530161 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:49:17.538335 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:49:17.654390 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 2 07:49:17.656348 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 2 07:49:17.683770 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 2 07:49:17.684060 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 07:49:17.701349 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jul 2 07:49:18.536823 disk-uuid[528]: The operation has completed successfully. Jul 2 07:49:18.538367 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:49:18.559866 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 07:49:18.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:18.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:18.559942 systemd[1]: Finished disk-uuid.service. Jul 2 07:49:18.564536 systemd[1]: Starting verity-setup.service... Jul 2 07:49:18.578348 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 2 07:49:18.596365 systemd[1]: Found device dev-mapper-usr.device. Jul 2 07:49:18.598998 systemd[1]: Mounting sysusr-usr.mount... 
Jul 2 07:49:18.602494 systemd[1]: Finished verity-setup.service. Jul 2 07:49:18.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:18.663335 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 07:49:18.663572 systemd[1]: Mounted sysusr-usr.mount. Jul 2 07:49:18.663768 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 07:49:18.664487 systemd[1]: Starting ignition-setup.service... Jul 2 07:49:18.666198 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 07:49:18.674362 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:49:18.674392 kernel: BTRFS info (device vda6): using free space tree Jul 2 07:49:18.674405 kernel: BTRFS info (device vda6): has skinny extents Jul 2 07:49:18.682202 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 07:49:18.689423 systemd[1]: Finished ignition-setup.service. Jul 2 07:49:18.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:18.691194 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 07:49:18.723856 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 07:49:18.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:18.726000 audit: BPF prog-id=9 op=LOAD Jul 2 07:49:18.726647 systemd[1]: Starting systemd-networkd.service... Jul 2 07:49:18.731363 ignition[636]: Ignition 2.14.0 Jul 2 07:49:18.731588 ignition[636]: Stage: fetch-offline Jul 2 07:49:18.731625 ignition[636]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:49:18.731633 ignition[636]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:49:18.731720 ignition[636]: parsed url from cmdline: "" Jul 2 07:49:18.731722 ignition[636]: no config URL provided Jul 2 07:49:18.731727 ignition[636]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:49:18.731732 ignition[636]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:49:18.731753 ignition[636]: op(1): [started] loading QEMU firmware config module Jul 2 07:49:18.731758 ignition[636]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 2 07:49:18.742305 ignition[636]: op(1): [finished] loading QEMU firmware config module Jul 2 07:49:18.751639 systemd-networkd[708]: lo: Link UP Jul 2 07:49:18.751648 systemd-networkd[708]: lo: Gained carrier Jul 2 07:49:18.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:18.752037 systemd-networkd[708]: Enumeration completed Jul 2 07:49:18.752284 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:49:18.752437 systemd[1]: Started systemd-networkd.service. 
Jul 2 07:49:18.753253 systemd-networkd[708]: eth0: Link UP Jul 2 07:49:18.753259 systemd-networkd[708]: eth0: Gained carrier Jul 2 07:49:18.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:18.753883 systemd[1]: Reached target network.target. Jul 2 07:49:18.756388 systemd[1]: Starting iscsiuio.service... Jul 2 07:49:18.760159 systemd[1]: Started iscsiuio.service. Jul 2 07:49:18.762326 systemd[1]: Starting iscsid.service... Jul 2 07:49:18.767265 iscsid[715]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:49:18.767265 iscsid[715]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 2 07:49:18.767265 iscsid[715]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 07:49:18.767265 iscsid[715]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 07:49:18.767265 iscsid[715]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:49:18.767265 iscsid[715]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 07:49:18.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:18.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:18.766043 systemd[1]: Started iscsid.service. Jul 2 07:49:18.767907 systemd[1]: Starting dracut-initqueue.service... Jul 2 07:49:18.776238 systemd[1]: Finished dracut-initqueue.service. Jul 2 07:49:18.777630 systemd[1]: Reached target remote-fs-pre.target. Jul 2 07:49:18.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:18.779766 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:49:18.782218 systemd[1]: Reached target remote-fs.target. Jul 2 07:49:18.782861 systemd[1]: Starting dracut-pre-mount.service... Jul 2 07:49:18.789114 systemd[1]: Finished dracut-pre-mount.service. Jul 2 07:49:18.814443 ignition[636]: parsing config with SHA512: a88aa34abe312c6c109dffe1ca383b5bcf15bc7c6db3779515748335515353970f35c3ffcf7cbdd75705f26e21048932f88db57d343bc3e5dce4f74149ab6204 Jul 2 07:49:18.820550 unknown[636]: fetched base config from "system" Jul 2 07:49:18.820560 unknown[636]: fetched user config from "qemu" Jul 2 07:49:18.822435 ignition[636]: fetch-offline: fetch-offline passed Jul 2 07:49:18.822485 ignition[636]: Ignition finished successfully Jul 2 07:49:18.824398 systemd-networkd[708]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 07:49:18.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 07:49:18.824426 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 07:49:18.824606 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 2 07:49:18.825474 systemd[1]: Starting ignition-kargs.service... Jul 2 07:49:18.835656 ignition[729]: Ignition 2.14.0 Jul 2 07:49:18.835664 ignition[729]: Stage: kargs Jul 2 07:49:18.835744 ignition[729]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:49:18.835752 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:49:18.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:18.838134 systemd[1]: Finished ignition-kargs.service. Jul 2 07:49:18.836826 ignition[729]: kargs: kargs passed Jul 2 07:49:18.839493 systemd[1]: Starting ignition-disks.service... Jul 2 07:49:18.836857 ignition[729]: Ignition finished successfully Jul 2 07:49:18.845571 ignition[735]: Ignition 2.14.0 Jul 2 07:49:18.845580 ignition[735]: Stage: disks Jul 2 07:49:18.845650 ignition[735]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:49:18.845658 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:49:18.847049 ignition[735]: disks: disks passed Jul 2 07:49:18.847076 ignition[735]: Ignition finished successfully Jul 2 07:49:18.850496 systemd[1]: Finished ignition-disks.service. Jul 2 07:49:18.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:18.852016 systemd[1]: Reached target initrd-root-device.target. Jul 2 07:49:18.852069 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:49:18.854377 systemd[1]: Reached target local-fs.target. Jul 2 07:49:18.855864 systemd[1]: Reached target sysinit.target. Jul 2 07:49:18.857224 systemd[1]: Reached target basic.target. Jul 2 07:49:18.858621 systemd[1]: Starting systemd-fsck-root.service... Jul 2 07:49:18.869553 systemd-fsck[743]: ROOT: clean, 614/553520 files, 56020/553472 blocks Jul 2 07:49:18.874739 systemd[1]: Finished systemd-fsck-root.service. Jul 2 07:49:18.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:18.876400 systemd[1]: Mounting sysroot.mount... Jul 2 07:49:18.882340 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 07:49:18.882507 systemd[1]: Mounted sysroot.mount. Jul 2 07:49:18.882618 systemd[1]: Reached target initrd-root-fs.target. Jul 2 07:49:18.885602 systemd[1]: Mounting sysroot-usr.mount... Jul 2 07:49:18.885975 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 2 07:49:18.886010 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 07:49:18.886032 systemd[1]: Reached target ignition-diskful.target. Jul 2 07:49:18.888418 systemd[1]: Mounted sysroot-usr.mount. Jul 2 07:49:18.890288 systemd[1]: Starting initrd-setup-root.service... 
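The iscsid warning a few lines up is about a missing /etc/iscsi/initiatorname.iscsi; nothing in this boot appears to log into an iSCSI target, so it is cosmetic here, but supplying an initiator name is a one-line file in exactly the format the message describes. A minimal sketch, with the date, reversed domain, and identifier chosen purely for illustration:

  # /etc/iscsi/initiatorname.iscsi
  InitiatorName=iqn.2004-10.com.example:worker-01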
Jul 2 07:49:18.895976 initrd-setup-root[753]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 07:49:18.899687 initrd-setup-root[761]: cut: /sysroot/etc/group: No such file or directory Jul 2 07:49:18.902972 initrd-setup-root[769]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 07:49:18.906039 initrd-setup-root[777]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 07:49:18.928439 systemd[1]: Finished initrd-setup-root.service. Jul 2 07:49:18.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:18.929261 systemd[1]: Starting ignition-mount.service... Jul 2 07:49:18.931508 systemd[1]: Starting sysroot-boot.service... Jul 2 07:49:18.935689 bash[794]: umount: /sysroot/usr/share/oem: not mounted. Jul 2 07:49:18.942481 ignition[795]: INFO : Ignition 2.14.0 Jul 2 07:49:18.942481 ignition[795]: INFO : Stage: mount Jul 2 07:49:18.944016 ignition[795]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:49:18.944016 ignition[795]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:49:18.944016 ignition[795]: INFO : mount: mount passed Jul 2 07:49:18.944016 ignition[795]: INFO : Ignition finished successfully Jul 2 07:49:18.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:18.944062 systemd[1]: Finished ignition-mount.service. Jul 2 07:49:18.952810 systemd[1]: Finished sysroot-boot.service. Jul 2 07:49:18.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:19.608924 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 07:49:19.616344 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (804) Jul 2 07:49:19.616378 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:49:19.619041 kernel: BTRFS info (device vda6): using free space tree Jul 2 07:49:19.619052 kernel: BTRFS info (device vda6): has skinny extents Jul 2 07:49:19.622037 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 07:49:19.623535 systemd[1]: Starting ignition-files.service... 
Jul 2 07:49:19.634889 ignition[824]: INFO : Ignition 2.14.0 Jul 2 07:49:19.634889 ignition[824]: INFO : Stage: files Jul 2 07:49:19.636818 ignition[824]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:49:19.636818 ignition[824]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:49:19.636818 ignition[824]: DEBUG : files: compiled without relabeling support, skipping Jul 2 07:49:19.640525 ignition[824]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 07:49:19.640525 ignition[824]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 07:49:19.640525 ignition[824]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 07:49:19.640525 ignition[824]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 07:49:19.640525 ignition[824]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 07:49:19.640525 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:49:19.640525 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 07:49:19.639292 unknown[824]: wrote ssh authorized keys file for user: core Jul 2 07:49:20.070182 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 07:49:20.268533 systemd-networkd[708]: eth0: Gained IPv6LL Jul 2 07:49:21.092950 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:49:21.095251 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 07:49:21.095251 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 2 07:49:21.460148 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 2 07:49:21.554339 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 07:49:21.556247 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 2 07:49:21.557918 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 07:49:21.559560 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:49:21.561280 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:49:21.562928 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:49:21.564634 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:49:21.566327 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:49:21.568026 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" 
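The GET and write operations in this files stage are driven by the user config fetched from the qemu provider earlier. For the first artifact above, the corresponding Ignition fragment would look roughly like this (the spec version and surrounding fields are assumptions; the path and URL are taken from the log, and the /sysroot prefix in the log is just the initramfs mount point of the real root, so the config path omits it):

  {
    "ignition": { "version": "3.3.0" },
    "storage": {
      "files": [
        {
          "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
          "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" }
        }
      ]
    }
  }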
Jul 2 07:49:21.569795 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:49:21.571521 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:49:21.573201 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:49:21.575602 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:49:21.578007 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:49:21.580041 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jul 2 07:49:21.849096 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 2 07:49:22.224000 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:49:22.224000 ignition[824]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 2 07:49:22.227633 ignition[824]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:49:22.227633 ignition[824]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:49:22.227633 ignition[824]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 2 07:49:22.227633 ignition[824]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 2 07:49:22.227633 ignition[824]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 07:49:22.227633 ignition[824]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 07:49:22.227633 ignition[824]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 2 07:49:22.227633 ignition[824]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jul 2 07:49:22.227633 ignition[824]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 07:49:22.227633 ignition[824]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Jul 2 07:49:22.227633 ignition[824]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 07:49:22.255608 ignition[824]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 07:49:22.257175 ignition[824]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Jul 2 07:49:22.258605 ignition[824]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:49:22.260297 ignition[824]: INFO : files: createResultFile: createFiles: 
op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:49:22.261961 ignition[824]: INFO : files: files passed Jul 2 07:49:22.261961 ignition[824]: INFO : Ignition finished successfully Jul 2 07:49:22.263982 systemd[1]: Finished ignition-files.service. Jul 2 07:49:22.270390 kernel: kauditd_printk_skb: 24 callbacks suppressed Jul 2 07:49:22.270413 kernel: audit: type=1130 audit(1719906562.264:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.265735 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 07:49:22.270413 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 07:49:22.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.275508 initrd-setup-root-after-ignition[848]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 2 07:49:22.280773 kernel: audit: type=1130 audit(1719906562.275:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.271083 systemd[1]: Starting ignition-quench.service... Jul 2 07:49:22.288126 kernel: audit: type=1130 audit(1719906562.280:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.288140 kernel: audit: type=1131 audit(1719906562.280:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.288216 initrd-setup-root-after-ignition[850]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:49:22.272608 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 07:49:22.275634 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 07:49:22.275723 systemd[1]: Finished ignition-quench.service. Jul 2 07:49:22.280878 systemd[1]: Reached target ignition-complete.target. Jul 2 07:49:22.288615 systemd[1]: Starting initrd-parse-etc.service... Jul 2 07:49:22.299978 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 07:49:22.300046 systemd[1]: Finished initrd-parse-etc.service. Jul 2 07:49:22.309222 kernel: audit: type=1130 audit(1719906562.301:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 07:49:22.309237 kernel: audit: type=1131 audit(1719906562.301:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.302110 systemd[1]: Reached target initrd-fs.target. Jul 2 07:49:22.309228 systemd[1]: Reached target initrd.target. Jul 2 07:49:22.310009 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 07:49:22.310552 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 07:49:22.319431 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 07:49:22.324689 kernel: audit: type=1130 audit(1719906562.319:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.320971 systemd[1]: Starting initrd-cleanup.service... Jul 2 07:49:22.329061 systemd[1]: Stopped target nss-lookup.target. Jul 2 07:49:22.329946 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 07:49:22.331649 systemd[1]: Stopped target timers.target. Jul 2 07:49:22.333251 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 07:49:22.339347 kernel: audit: type=1131 audit(1719906562.334:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.333347 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 07:49:22.334912 systemd[1]: Stopped target initrd.target. Jul 2 07:49:22.339505 systemd[1]: Stopped target basic.target. Jul 2 07:49:22.340997 systemd[1]: Stopped target ignition-complete.target. Jul 2 07:49:22.342692 systemd[1]: Stopped target ignition-diskful.target. Jul 2 07:49:22.344298 systemd[1]: Stopped target initrd-root-device.target. Jul 2 07:49:22.346107 systemd[1]: Stopped target remote-fs.target. Jul 2 07:49:22.347797 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 07:49:22.349577 systemd[1]: Stopped target sysinit.target. Jul 2 07:49:22.351172 systemd[1]: Stopped target local-fs.target. Jul 2 07:49:22.352824 systemd[1]: Stopped target local-fs-pre.target. Jul 2 07:49:22.354406 systemd[1]: Stopped target swap.target. Jul 2 07:49:22.361908 kernel: audit: type=1131 audit(1719906562.357:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:22.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.355902 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 07:49:22.355998 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 07:49:22.368328 kernel: audit: type=1131 audit(1719906562.363:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.357641 systemd[1]: Stopped target cryptsetup.target. Jul 2 07:49:22.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.361940 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 07:49:22.362021 systemd[1]: Stopped dracut-initqueue.service. Jul 2 07:49:22.363895 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 07:49:22.363978 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 07:49:22.368449 systemd[1]: Stopped target paths.target. Jul 2 07:49:22.369949 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 07:49:22.373364 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 07:49:22.374783 systemd[1]: Stopped target slices.target. Jul 2 07:49:22.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.376247 systemd[1]: Stopped target sockets.target. Jul 2 07:49:22.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.378112 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 07:49:22.384823 iscsid[715]: iscsid shutting down. Jul 2 07:49:22.378204 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 07:49:22.380005 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 07:49:22.380089 systemd[1]: Stopped ignition-files.service. Jul 2 07:49:22.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.390969 ignition[865]: INFO : Ignition 2.14.0 Jul 2 07:49:22.390969 ignition[865]: INFO : Stage: umount Jul 2 07:49:22.390969 ignition[865]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:49:22.390969 ignition[865]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:49:22.390969 ignition[865]: INFO : umount: umount passed Jul 2 07:49:22.390969 ignition[865]: INFO : Ignition finished successfully Jul 2 07:49:22.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:22.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.382142 systemd[1]: Stopping ignition-mount.service... Jul 2 07:49:22.383160 systemd[1]: Stopping iscsid.service... Jul 2 07:49:22.386054 systemd[1]: Stopping sysroot-boot.service... Jul 2 07:49:22.387373 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 07:49:22.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.387490 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 07:49:22.389340 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 07:49:22.389448 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 07:49:22.392351 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 07:49:22.392432 systemd[1]: Stopped iscsid.service. Jul 2 07:49:22.393726 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 07:49:22.393794 systemd[1]: Stopped ignition-mount.service. Jul 2 07:49:22.395646 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 07:49:22.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.395722 systemd[1]: Closed iscsid.socket. Jul 2 07:49:22.397291 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 07:49:22.397332 systemd[1]: Stopped ignition-disks.service. Jul 2 07:49:22.398947 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 07:49:22.398975 systemd[1]: Stopped ignition-kargs.service. Jul 2 07:49:22.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 2 07:49:22.463000 audit: BPF prog-id=6 op=UNLOAD Jul 2 07:49:22.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.400713 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 07:49:22.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.400741 systemd[1]: Stopped ignition-setup.service. Jul 2 07:49:22.401755 systemd[1]: Stopping iscsiuio.service... Jul 2 07:49:22.404025 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 07:49:22.404438 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 07:49:22.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.404502 systemd[1]: Finished initrd-cleanup.service. Jul 2 07:49:22.405831 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 07:49:22.405892 systemd[1]: Stopped iscsiuio.service. Jul 2 07:49:22.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.408133 systemd[1]: Stopped target network.target. Jul 2 07:49:22.409233 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 07:49:22.409261 systemd[1]: Closed iscsiuio.socket. Jul 2 07:49:22.410998 systemd[1]: Stopping systemd-networkd.service... Jul 2 07:49:22.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.412612 systemd[1]: Stopping systemd-resolved.service... Jul 2 07:49:22.438351 systemd-networkd[708]: eth0: DHCPv6 lease lost Jul 2 07:49:22.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.482000 audit: BPF prog-id=9 op=UNLOAD Jul 2 07:49:22.440001 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 07:49:22.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.440071 systemd[1]: Stopped systemd-networkd.service. Jul 2 07:49:22.453558 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 07:49:22.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:22.453664 systemd[1]: Stopped systemd-resolved.service. Jul 2 07:49:22.456960 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 07:49:22.456987 systemd[1]: Closed systemd-networkd.socket. Jul 2 07:49:22.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.459933 systemd[1]: Stopping network-cleanup.service... Jul 2 07:49:22.460733 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 07:49:22.460773 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 07:49:22.462735 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:49:22.462772 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:49:22.464308 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 07:49:22.464353 systemd[1]: Stopped systemd-modules-load.service. Jul 2 07:49:22.464483 systemd[1]: Stopping systemd-udevd.service... Jul 2 07:49:22.465347 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 07:49:22.468971 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 07:49:22.469048 systemd[1]: Stopped network-cleanup.service. Jul 2 07:49:22.473363 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 07:49:22.473460 systemd[1]: Stopped systemd-udevd.service. Jul 2 07:49:22.474837 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 07:49:22.474864 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 07:49:22.476496 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 07:49:22.476525 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 07:49:22.478160 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 07:49:22.478192 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 07:49:22.479633 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 07:49:22.479663 systemd[1]: Stopped dracut-cmdline.service. Jul 2 07:49:22.481242 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 07:49:22.481273 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 07:49:22.483471 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 07:49:22.484506 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 07:49:22.484560 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 2 07:49:22.486509 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 07:49:22.486550 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 07:49:22.488088 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 07:49:22.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:22.488130 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 07:49:22.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:22.490584 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 2 07:49:22.490944 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 07:49:22.491004 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 07:49:22.519148 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 07:49:22.519233 systemd[1]: Stopped sysroot-boot.service. Jul 2 07:49:22.520583 systemd[1]: Reached target initrd-switch-root.target. Jul 2 07:49:22.522437 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 07:49:22.522474 systemd[1]: Stopped initrd-setup-root.service. Jul 2 07:49:22.523167 systemd[1]: Starting initrd-switch-root.service... Jul 2 07:49:22.538506 systemd[1]: Switching root. Jul 2 07:49:22.555812 systemd-journald[197]: Journal stopped Jul 2 07:49:25.101776 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Jul 2 07:49:25.101837 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 07:49:25.101858 kernel: SELinux: Class anon_inode not defined in policy. Jul 2 07:49:25.101872 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 07:49:25.101885 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 07:49:25.101898 kernel: SELinux: policy capability open_perms=1 Jul 2 07:49:25.101912 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 07:49:25.101925 kernel: SELinux: policy capability always_check_network=0 Jul 2 07:49:25.101938 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 07:49:25.101951 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 07:49:25.101966 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 07:49:25.101982 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 07:49:25.101996 systemd[1]: Successfully loaded SELinux policy in 39.324ms. Jul 2 07:49:25.102019 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.484ms. Jul 2 07:49:25.102036 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:49:25.102050 systemd[1]: Detected virtualization kvm. Jul 2 07:49:25.102066 systemd[1]: Detected architecture x86-64. Jul 2 07:49:25.102081 systemd[1]: Detected first boot. Jul 2 07:49:25.102094 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:49:25.102108 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 07:49:25.102123 systemd[1]: Populated /etc with preset unit settings. Jul 2 07:49:25.102137 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:49:25.102159 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:49:25.102177 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:49:25.102191 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
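The two locksmithd.service warnings above flag cgroup-v1 directives that systemd 252 still honors but intends to drop. The suggested replacements can be carried in a drop-in like the sketch below (the weight and limit values are placeholders, not taken from the shipped unit); note that the parse warning itself only goes away once the unit file no longer contains the legacy lines:

  # /etc/systemd/system/locksmithd.service.d/10-cgroup.conf
  [Service]
  CPUWeight=100
  MemoryMax=512M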
Jul 2 07:49:25.102209 systemd[1]: Stopped initrd-switch-root.service. Jul 2 07:49:25.102223 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 07:49:25.102237 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 07:49:25.102251 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 07:49:25.102265 systemd[1]: Created slice system-getty.slice. Jul 2 07:49:25.102278 systemd[1]: Created slice system-modprobe.slice. Jul 2 07:49:25.102295 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 07:49:25.102308 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 07:49:25.102335 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 07:49:25.102349 systemd[1]: Created slice user.slice. Jul 2 07:49:25.102363 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:49:25.102377 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 07:49:25.102391 systemd[1]: Set up automount boot.automount. Jul 2 07:49:25.102405 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 07:49:25.102419 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 07:49:25.102435 systemd[1]: Stopped target initrd-fs.target. Jul 2 07:49:25.102450 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 07:49:25.102464 systemd[1]: Reached target integritysetup.target. Jul 2 07:49:25.102478 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:49:25.102492 systemd[1]: Reached target remote-fs.target. Jul 2 07:49:25.102506 systemd[1]: Reached target slices.target. Jul 2 07:49:25.102520 systemd[1]: Reached target swap.target. Jul 2 07:49:25.102533 systemd[1]: Reached target torcx.target. Jul 2 07:49:25.102549 systemd[1]: Reached target veritysetup.target. Jul 2 07:49:25.102563 systemd[1]: Listening on systemd-coredump.socket. Jul 2 07:49:25.102577 systemd[1]: Listening on systemd-initctl.socket. Jul 2 07:49:25.102590 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:49:25.102614 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:49:25.102628 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:49:25.102642 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 07:49:25.102659 systemd[1]: Mounting dev-hugepages.mount... Jul 2 07:49:25.102673 systemd[1]: Mounting dev-mqueue.mount... Jul 2 07:49:25.102686 systemd[1]: Mounting media.mount... Jul 2 07:49:25.102703 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:49:25.102718 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 07:49:25.102732 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 07:49:25.102746 systemd[1]: Mounting tmp.mount... Jul 2 07:49:25.102761 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 07:49:25.102776 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:49:25.102791 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:49:25.102805 systemd[1]: Starting modprobe@configfs.service... Jul 2 07:49:25.102820 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:49:25.102837 systemd[1]: Starting modprobe@drm.service... Jul 2 07:49:25.102851 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:49:25.102865 systemd[1]: Starting modprobe@fuse.service... Jul 2 07:49:25.102880 systemd[1]: Starting modprobe@loop.service... Jul 2 07:49:25.102895 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
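Both the "Populated /etc with preset unit settings" step above and the earlier Ignition ops that set prepare-helm.service to enabled and coreos-metadata.service to disabled work through systemd preset files, which are just enable/disable lines matched against unit names. An illustrative preset fragment (the file name and path follow the usual Ignition convention but are shown here as an assumption):

  # /etc/systemd/system-preset/20-ignition.preset
  enable prepare-helm.service
  disable coreos-metadata.service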
Jul 2 07:49:25.102910 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 07:49:25.102924 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 07:49:25.102938 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 07:49:25.102953 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 07:49:25.102969 kernel: fuse: init (API version 7.34) Jul 2 07:49:25.102983 systemd[1]: Stopped systemd-journald.service. Jul 2 07:49:25.102997 kernel: loop: module loaded Jul 2 07:49:25.103011 systemd[1]: Starting systemd-journald.service... Jul 2 07:49:25.103025 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:49:25.103039 systemd[1]: Starting systemd-network-generator.service... Jul 2 07:49:25.103054 systemd[1]: Starting systemd-remount-fs.service... Jul 2 07:49:25.103070 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:49:25.103085 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 07:49:25.103102 systemd[1]: Stopped verity-setup.service. Jul 2 07:49:25.103117 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:49:25.103133 systemd-journald[985]: Journal started Jul 2 07:49:25.103181 systemd-journald[985]: Runtime Journal (/run/log/journal/50bd3218ee354580856761468e094ef0) is 6.0M, max 48.4M, 42.4M free. Jul 2 07:49:22.613000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 07:49:22.939000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:49:22.939000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:49:22.939000 audit: BPF prog-id=10 op=LOAD Jul 2 07:49:22.939000 audit: BPF prog-id=10 op=UNLOAD Jul 2 07:49:22.939000 audit: BPF prog-id=11 op=LOAD Jul 2 07:49:22.939000 audit: BPF prog-id=11 op=UNLOAD Jul 2 07:49:22.971000 audit[900]: AVC avc: denied { associate } for pid=900 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 07:49:22.971000 audit[900]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001878e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:49:22.971000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:49:22.972000 audit[900]: AVC avc: denied { associate } for pid=900 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 07:49:22.972000 audit[900]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879b9 a2=1ed a3=0 items=2 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:49:22.972000 audit: CWD cwd="/" Jul 2 07:49:22.972000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:22.972000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:22.972000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:49:24.976000 audit: BPF prog-id=12 op=LOAD Jul 2 07:49:24.976000 audit: BPF prog-id=3 op=UNLOAD Jul 2 07:49:24.976000 audit: BPF prog-id=13 op=LOAD Jul 2 07:49:24.976000 audit: BPF prog-id=14 op=LOAD Jul 2 07:49:24.976000 audit: BPF prog-id=4 op=UNLOAD Jul 2 07:49:24.976000 audit: BPF prog-id=5 op=UNLOAD Jul 2 07:49:24.977000 audit: BPF prog-id=15 op=LOAD Jul 2 07:49:24.977000 audit: BPF prog-id=12 op=UNLOAD Jul 2 07:49:24.977000 audit: BPF prog-id=16 op=LOAD Jul 2 07:49:24.977000 audit: BPF prog-id=17 op=LOAD Jul 2 07:49:24.977000 audit: BPF prog-id=13 op=UNLOAD Jul 2 07:49:24.977000 audit: BPF prog-id=14 op=UNLOAD Jul 2 07:49:24.978000 audit: BPF prog-id=18 op=LOAD Jul 2 07:49:24.978000 audit: BPF prog-id=15 op=UNLOAD Jul 2 07:49:24.978000 audit: BPF prog-id=19 op=LOAD Jul 2 07:49:24.978000 audit: BPF prog-id=20 op=LOAD Jul 2 07:49:24.978000 audit: BPF prog-id=16 op=UNLOAD Jul 2 07:49:24.978000 audit: BPF prog-id=17 op=UNLOAD Jul 2 07:49:24.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:24.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:24.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:24.987000 audit: BPF prog-id=18 op=UNLOAD Jul 2 07:49:25.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 07:49:25.083000 audit: BPF prog-id=21 op=LOAD Jul 2 07:49:25.083000 audit: BPF prog-id=22 op=LOAD Jul 2 07:49:25.083000 audit: BPF prog-id=23 op=LOAD Jul 2 07:49:25.083000 audit: BPF prog-id=19 op=UNLOAD Jul 2 07:49:25.083000 audit: BPF prog-id=20 op=UNLOAD Jul 2 07:49:25.099000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 07:49:25.099000 audit[985]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd2bb63d70 a2=4000 a3=7ffd2bb63e0c items=0 ppid=1 pid=985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:49:25.099000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 07:49:25.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:24.975810 systemd[1]: Queued start job for default target multi-user.target. Jul 2 07:49:22.969935 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:49:24.975820 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 2 07:49:22.970099 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:22Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:49:24.979303 systemd[1]: systemd-journald.service: Deactivated successfully. 
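The PROCTITLE values in the torcx-generator audit records above are the generator's argv, hex-encoded with NUL separators; ausearch -i would render them where the audit userspace tools are installed, but the decoding is easy to do by hand. A small Python sketch using the (kernel-truncated) hex string from the record above:

  # Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
  hex_title = (
      "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72"  # /usr/lib/systemd/system-generators/torcx-generator
      "002F72756E2F73797374656D642F67656E657261746F72"              # NUL + /run/systemd/generator
      "002F72756E2F73797374656D642F67656E657261746F722E6561726C79"  # NUL + /run/systemd/generator.early
      "002F72756E2F73797374656D642F67656E657261746F722E6C61"        # NUL + /run/systemd/generator.la (cut off by the 128-byte proctitle cap)
  )
  print(bytes.fromhex(hex_title).replace(b"\x00", b" ").decode())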
Jul 2 07:49:22.970115 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:22Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:49:22.970140 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:22Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 07:49:22.970149 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:22Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 07:49:22.970173 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:22Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 07:49:22.970184 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:22Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 07:49:22.970378 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:22Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 07:49:22.970407 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:22Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:49:22.970418 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:22Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:49:22.970950 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:22Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 07:49:22.970985 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:22Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 07:49:22.971004 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:22Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 07:49:22.971017 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:22Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 07:49:22.971034 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:22Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 07:49:22.971046 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:22Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 07:49:24.729007 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:24Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:49:24.729254 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:24Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy 
/bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:49:24.729364 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:24Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:49:24.729505 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:24Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:49:24.729546 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:24Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 07:49:24.729595 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:49:24Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 07:49:25.107372 systemd[1]: Started systemd-journald.service. Jul 2 07:49:25.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.107895 systemd[1]: Mounted dev-hugepages.mount. Jul 2 07:49:25.108785 systemd[1]: Mounted dev-mqueue.mount. Jul 2 07:49:25.109642 systemd[1]: Mounted media.mount. Jul 2 07:49:25.110441 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 07:49:25.111336 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 07:49:25.112263 systemd[1]: Mounted tmp.mount. Jul 2 07:49:25.113200 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 07:49:25.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.114329 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:49:25.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.115407 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 07:49:25.115532 systemd[1]: Finished modprobe@configfs.service. Jul 2 07:49:25.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.116627 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:49:25.116741 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 2 07:49:25.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.117819 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:49:25.117924 systemd[1]: Finished modprobe@drm.service. Jul 2 07:49:25.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.118992 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:49:25.119103 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:49:25.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.120270 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 07:49:25.120514 systemd[1]: Finished modprobe@fuse.service. Jul 2 07:49:25.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.121529 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:49:25.121650 systemd[1]: Finished modprobe@loop.service. Jul 2 07:49:25.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.122767 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:49:25.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.123881 systemd[1]: Finished systemd-network-generator.service. 
Jul 2 07:49:25.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.125055 systemd[1]: Finished systemd-remount-fs.service. Jul 2 07:49:25.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.126266 systemd[1]: Reached target network-pre.target. Jul 2 07:49:25.127996 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 07:49:25.129628 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 07:49:25.130489 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 07:49:25.131494 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 07:49:25.133223 systemd[1]: Starting systemd-journal-flush.service... Jul 2 07:49:25.134212 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:49:25.134879 systemd[1]: Starting systemd-random-seed.service... Jul 2 07:49:25.135874 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:49:25.136584 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:49:25.138146 systemd[1]: Starting systemd-sysusers.service... Jul 2 07:49:25.141049 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 07:49:25.142093 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 07:49:25.145666 systemd-journald[985]: Time spent on flushing to /var/log/journal/50bd3218ee354580856761468e094ef0 is 15.559ms for 1174 entries. Jul 2 07:49:25.145666 systemd-journald[985]: System Journal (/var/log/journal/50bd3218ee354580856761468e094ef0) is 8.0M, max 195.6M, 187.6M free. Jul 2 07:49:25.166664 systemd-journald[985]: Received client request to flush runtime journal. Jul 2 07:49:25.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.146859 systemd[1]: Finished systemd-random-seed.service. Jul 2 07:49:25.148664 systemd[1]: Reached target first-boot-complete.target. Jul 2 07:49:25.152637 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:49:25.154919 systemd[1]: Starting systemd-udev-settle.service... Jul 2 07:49:25.155924 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:49:25.162028 systemd[1]: Finished systemd-sysusers.service. Jul 2 07:49:25.164113 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Jul 2 07:49:25.167256 systemd[1]: Finished systemd-journal-flush.service. Jul 2 07:49:25.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.168561 udevadm[1004]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 07:49:25.178746 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:49:25.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.551200 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 07:49:25.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.552000 audit: BPF prog-id=24 op=LOAD Jul 2 07:49:25.552000 audit: BPF prog-id=25 op=LOAD Jul 2 07:49:25.552000 audit: BPF prog-id=7 op=UNLOAD Jul 2 07:49:25.552000 audit: BPF prog-id=8 op=UNLOAD Jul 2 07:49:25.553260 systemd[1]: Starting systemd-udevd.service... Jul 2 07:49:25.567796 systemd-udevd[1008]: Using default interface naming scheme 'v252'. Jul 2 07:49:25.578833 systemd[1]: Started systemd-udevd.service. Jul 2 07:49:25.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.580000 audit: BPF prog-id=26 op=LOAD Jul 2 07:49:25.581196 systemd[1]: Starting systemd-networkd.service... Jul 2 07:49:25.588073 systemd[1]: Starting systemd-userdbd.service... Jul 2 07:49:25.586000 audit: BPF prog-id=27 op=LOAD Jul 2 07:49:25.587000 audit: BPF prog-id=28 op=LOAD Jul 2 07:49:25.587000 audit: BPF prog-id=29 op=LOAD Jul 2 07:49:25.601232 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 2 07:49:25.613427 systemd[1]: Started systemd-userdbd.service. Jul 2 07:49:25.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.628681 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:49:25.645333 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 07:49:25.652867 systemd-networkd[1016]: lo: Link UP Jul 2 07:49:25.652879 systemd-networkd[1016]: lo: Gained carrier Jul 2 07:49:25.653214 systemd-networkd[1016]: Enumeration completed Jul 2 07:49:25.653298 systemd[1]: Started systemd-networkd.service. Jul 2 07:49:25.653488 systemd-networkd[1016]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:49:25.654464 systemd-networkd[1016]: eth0: Link UP Jul 2 07:49:25.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:25.654475 systemd-networkd[1016]: eth0: Gained carrier Jul 2 07:49:25.650000 audit[1020]: AVC avc: denied { confidentiality } for pid=1020 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 07:49:25.655360 kernel: ACPI: button: Power Button [PWRF] Jul 2 07:49:25.665406 systemd-networkd[1016]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 07:49:25.650000 audit[1020]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55696583eeb0 a1=3207c a2=7f40743fabc5 a3=5 items=108 ppid=1008 pid=1020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:49:25.650000 audit: CWD cwd="/" Jul 2 07:49:25.650000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=1 name=(null) inode=908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=2 name=(null) inode=908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=3 name=(null) inode=909 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=4 name=(null) inode=908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=5 name=(null) inode=910 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=6 name=(null) inode=908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=7 name=(null) inode=911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=8 name=(null) inode=911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=9 name=(null) inode=912 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=10 name=(null) inode=911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=11 name=(null) inode=913 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=12 name=(null) inode=911 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=13 name=(null) inode=914 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=14 name=(null) inode=911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=15 name=(null) inode=915 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=16 name=(null) inode=911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=17 name=(null) inode=916 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=18 name=(null) inode=908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=19 name=(null) inode=917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=20 name=(null) inode=917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=21 name=(null) inode=918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=22 name=(null) inode=917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=23 name=(null) inode=919 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=24 name=(null) inode=917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=25 name=(null) inode=920 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=26 name=(null) inode=917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=27 name=(null) inode=921 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=28 name=(null) inode=917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=29 name=(null) inode=922 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=30 name=(null) inode=908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=31 name=(null) inode=923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=32 name=(null) inode=923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=33 name=(null) inode=924 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=34 name=(null) inode=923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=35 name=(null) inode=925 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=36 name=(null) inode=923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=37 name=(null) inode=926 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=38 name=(null) inode=923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=39 name=(null) inode=927 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=40 name=(null) inode=923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=41 name=(null) inode=928 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=42 name=(null) inode=908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=43 name=(null) inode=929 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=44 name=(null) inode=929 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=45 name=(null) inode=930 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=46 name=(null) inode=929 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=47 name=(null) inode=931 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=48 name=(null) inode=929 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=49 name=(null) inode=932 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=50 name=(null) inode=929 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=51 name=(null) inode=933 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=52 name=(null) inode=929 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=53 name=(null) inode=934 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=55 name=(null) inode=935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=56 name=(null) inode=935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=57 name=(null) inode=936 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=58 name=(null) inode=935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=59 name=(null) inode=937 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=60 name=(null) inode=935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=61 name=(null) inode=938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 
07:49:25.650000 audit: PATH item=62 name=(null) inode=938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=63 name=(null) inode=939 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=64 name=(null) inode=938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=65 name=(null) inode=940 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=66 name=(null) inode=938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=67 name=(null) inode=941 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=68 name=(null) inode=938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=69 name=(null) inode=942 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=70 name=(null) inode=938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=71 name=(null) inode=943 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=72 name=(null) inode=935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=73 name=(null) inode=944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=74 name=(null) inode=944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=75 name=(null) inode=945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=76 name=(null) inode=944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=77 name=(null) inode=946 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=78 name=(null) inode=944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=79 name=(null) inode=947 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=80 name=(null) inode=944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=81 name=(null) inode=948 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=82 name=(null) inode=944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=83 name=(null) inode=949 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=84 name=(null) inode=935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=85 name=(null) inode=950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=86 name=(null) inode=950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=87 name=(null) inode=951 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=88 name=(null) inode=950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=89 name=(null) inode=952 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=90 name=(null) inode=950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=91 name=(null) inode=953 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=92 name=(null) inode=950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=93 name=(null) inode=954 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=94 name=(null) inode=950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 
07:49:25.650000 audit: PATH item=95 name=(null) inode=955 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=96 name=(null) inode=935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=97 name=(null) inode=956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=98 name=(null) inode=956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=99 name=(null) inode=957 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=100 name=(null) inode=956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=101 name=(null) inode=958 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=102 name=(null) inode=956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=103 name=(null) inode=959 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=104 name=(null) inode=956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=105 name=(null) inode=960 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=106 name=(null) inode=956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PATH item=107 name=(null) inode=961 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:25.650000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 07:49:25.674339 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Jul 2 07:49:25.693342 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 2 07:49:25.698330 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 07:49:25.737757 kernel: kvm: Nested Virtualization enabled Jul 2 07:49:25.737836 kernel: SVM: kvm: Nested Paging enabled Jul 2 07:49:25.737868 kernel: SVM: Virtual VMLOAD VMSAVE supported Jul 2 07:49:25.737881 kernel: SVM: Virtual GIF supported Jul 2 07:49:25.751332 kernel: EDAC MC: Ver: 3.0.0 Jul 2 07:49:25.771614 systemd[1]: Finished systemd-udev-settle.service. 
Jul 2 07:49:25.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.773424 systemd[1]: Starting lvm2-activation-early.service... Jul 2 07:49:25.779876 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:49:25.805950 systemd[1]: Finished lvm2-activation-early.service. Jul 2 07:49:25.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.807007 systemd[1]: Reached target cryptsetup.target. Jul 2 07:49:25.808718 systemd[1]: Starting lvm2-activation.service... Jul 2 07:49:25.812250 lvm[1045]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:49:25.837401 systemd[1]: Finished lvm2-activation.service. Jul 2 07:49:25.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.838353 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:49:25.839241 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 07:49:25.839259 systemd[1]: Reached target local-fs.target. Jul 2 07:49:25.840116 systemd[1]: Reached target machines.target. Jul 2 07:49:25.841789 systemd[1]: Starting ldconfig.service... Jul 2 07:49:25.842784 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:49:25.842823 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:49:25.843573 systemd[1]: Starting systemd-boot-update.service... Jul 2 07:49:25.845102 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 07:49:25.846830 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 07:49:25.848564 systemd[1]: Starting systemd-sysext.service... Jul 2 07:49:25.849648 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1047 (bootctl) Jul 2 07:49:25.850457 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 07:49:25.858803 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 07:49:25.861355 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 07:49:25.861470 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 07:49:25.863196 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 07:49:25.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.873333 kernel: loop0: detected capacity change from 0 to 211296 Jul 2 07:49:25.880458 systemd-fsck[1055]: fsck.fat 4.2 (2021-01-31) Jul 2 07:49:25.880458 systemd-fsck[1055]: /dev/vda1: 790 files, 119261/258078 clusters Jul 2 07:49:25.881769 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Jul 2 07:49:25.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:25.884534 systemd[1]: Mounting boot.mount... Jul 2 07:49:25.898060 systemd[1]: Mounted boot.mount. Jul 2 07:49:26.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.065927 systemd[1]: Finished systemd-boot-update.service. Jul 2 07:49:26.067366 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 07:49:26.070352 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 07:49:26.085335 kernel: loop1: detected capacity change from 0 to 211296 Jul 2 07:49:26.088651 (sd-sysext)[1063]: Using extensions 'kubernetes'. Jul 2 07:49:26.089015 (sd-sysext)[1063]: Merged extensions into '/usr'. Jul 2 07:49:26.102562 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:49:26.103655 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:49:26.105473 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:49:26.107049 systemd[1]: Starting modprobe@loop.service... Jul 2 07:49:26.107956 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:49:26.108065 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:49:26.109200 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 07:49:26.109982 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:49:26.110111 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:49:26.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.111351 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:49:26.111475 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:49:26.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.112757 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:49:26.112876 systemd[1]: Finished modprobe@loop.service. 
Jul 2 07:49:26.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.114162 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:49:26.115452 systemd[1]: Mounting usr-share-oem.mount... Jul 2 07:49:26.116163 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:49:26.116306 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:49:26.116430 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:49:26.120439 systemd[1]: Mounted usr-share-oem.mount. Jul 2 07:49:26.122451 systemd[1]: Finished systemd-sysext.service. Jul 2 07:49:26.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.124617 systemd[1]: Starting ensure-sysext.service... Jul 2 07:49:26.126438 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 07:49:26.126675 ldconfig[1046]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 07:49:26.131489 systemd[1]: Reloading. Jul 2 07:49:26.135561 systemd-tmpfiles[1070]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 07:49:26.136175 systemd-tmpfiles[1070]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 07:49:26.137458 systemd-tmpfiles[1070]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 07:49:26.187764 /usr/lib/systemd/system-generators/torcx-generator[1089]: time="2024-07-02T07:49:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:49:26.187799 /usr/lib/systemd/system-generators/torcx-generator[1089]: time="2024-07-02T07:49:26Z" level=info msg="torcx already run" Jul 2 07:49:26.247842 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:49:26.247858 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:49:26.264703 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 07:49:26.314000 audit: BPF prog-id=30 op=LOAD Jul 2 07:49:26.314000 audit: BPF prog-id=26 op=UNLOAD Jul 2 07:49:26.314000 audit: BPF prog-id=31 op=LOAD Jul 2 07:49:26.314000 audit: BPF prog-id=32 op=LOAD Jul 2 07:49:26.314000 audit: BPF prog-id=24 op=UNLOAD Jul 2 07:49:26.314000 audit: BPF prog-id=25 op=UNLOAD Jul 2 07:49:26.315000 audit: BPF prog-id=33 op=LOAD Jul 2 07:49:26.315000 audit: BPF prog-id=27 op=UNLOAD Jul 2 07:49:26.315000 audit: BPF prog-id=34 op=LOAD Jul 2 07:49:26.315000 audit: BPF prog-id=35 op=LOAD Jul 2 07:49:26.316000 audit: BPF prog-id=28 op=UNLOAD Jul 2 07:49:26.316000 audit: BPF prog-id=29 op=UNLOAD Jul 2 07:49:26.317000 audit: BPF prog-id=36 op=LOAD Jul 2 07:49:26.317000 audit: BPF prog-id=21 op=UNLOAD Jul 2 07:49:26.317000 audit: BPF prog-id=37 op=LOAD Jul 2 07:49:26.317000 audit: BPF prog-id=38 op=LOAD Jul 2 07:49:26.317000 audit: BPF prog-id=22 op=UNLOAD Jul 2 07:49:26.317000 audit: BPF prog-id=23 op=UNLOAD Jul 2 07:49:26.319853 systemd[1]: Finished ldconfig.service. Jul 2 07:49:26.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.321614 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 07:49:26.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.324936 systemd[1]: Starting audit-rules.service... Jul 2 07:49:26.326493 systemd[1]: Starting clean-ca-certificates.service... Jul 2 07:49:26.328276 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 07:49:26.329000 audit: BPF prog-id=39 op=LOAD Jul 2 07:49:26.331000 audit: BPF prog-id=40 op=LOAD Jul 2 07:49:26.330523 systemd[1]: Starting systemd-resolved.service... Jul 2 07:49:26.332494 systemd[1]: Starting systemd-timesyncd.service... Jul 2 07:49:26.334008 systemd[1]: Starting systemd-update-utmp.service... Jul 2 07:49:26.336948 systemd[1]: Finished clean-ca-certificates.service. Jul 2 07:49:26.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.337000 audit[1141]: SYSTEM_BOOT pid=1141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.338247 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:49:26.341843 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:49:26.342122 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:49:26.343385 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:49:26.345074 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:49:26.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:26.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.346847 systemd[1]: Starting modprobe@loop.service... Jul 2 07:49:26.347621 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:49:26.347728 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:49:26.347821 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:49:26.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.347929 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:49:26.349050 systemd[1]: Finished systemd-update-utmp.service. Jul 2 07:49:26.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.350365 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 07:49:26.351854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:49:26.351984 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:49:26.353299 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:49:26.353449 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:49:26.354911 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:49:26.355039 systemd[1]: Finished modprobe@loop.service. Jul 2 07:49:26.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:26.358393 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:49:26.358575 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:49:26.359779 systemd[1]: Starting modprobe@dm_mod.service... 
Jul 2 07:49:26.360377 augenrules[1155]: No rules Jul 2 07:49:26.359000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 07:49:26.359000 audit[1155]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe52b98760 a2=420 a3=0 items=0 ppid=1132 pid=1155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:49:26.359000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 07:49:26.362145 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:49:26.363839 systemd[1]: Starting modprobe@loop.service... Jul 2 07:49:26.364576 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:49:26.364675 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:49:26.365737 systemd[1]: Starting systemd-update-done.service... Jul 2 07:49:26.370388 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:49:26.370488 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:49:26.371516 systemd[1]: Finished audit-rules.service. Jul 2 07:49:26.372616 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:49:26.372721 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:49:26.373820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:49:26.373918 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:49:26.375049 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:49:26.375149 systemd[1]: Finished modprobe@loop.service. Jul 2 07:49:26.376230 systemd[1]: Finished systemd-update-done.service. Jul 2 07:49:26.377424 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:49:26.377513 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:49:26.379464 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:49:26.379659 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:49:26.380656 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:49:26.382742 systemd[1]: Starting modprobe@drm.service... Jul 2 07:49:26.384670 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:49:26.386633 systemd[1]: Starting modprobe@loop.service... Jul 2 07:49:26.387484 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:49:26.387586 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:49:26.388524 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 07:49:26.389525 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jul 2 07:49:26.389612 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:49:26.390618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:49:26.390773 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:49:26.392051 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:49:26.392180 systemd[1]: Finished modprobe@drm.service. Jul 2 07:49:26.393587 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:49:26.393717 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:49:26.395036 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:49:26.395153 systemd[1]: Finished modprobe@loop.service. Jul 2 07:49:26.396623 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:49:26.396720 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:49:26.398779 systemd[1]: Finished ensure-sysext.service. Jul 2 07:49:26.405265 systemd[1]: Started systemd-timesyncd.service. Jul 2 07:49:27.452745 systemd-timesyncd[1140]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 07:49:27.452759 systemd[1]: Reached target time-set.target. Jul 2 07:49:27.452791 systemd-timesyncd[1140]: Initial clock synchronization to Tue 2024-07-02 07:49:27.452686 UTC. Jul 2 07:49:27.457652 systemd-resolved[1136]: Positive Trust Anchors: Jul 2 07:49:27.457665 systemd-resolved[1136]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:49:27.457698 systemd-resolved[1136]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:49:27.465424 systemd-resolved[1136]: Defaulting to hostname 'linux'. Jul 2 07:49:27.467131 systemd[1]: Started systemd-resolved.service. Jul 2 07:49:27.468058 systemd[1]: Reached target network.target. Jul 2 07:49:27.468888 systemd[1]: Reached target nss-lookup.target. Jul 2 07:49:27.469753 systemd[1]: Reached target sysinit.target. Jul 2 07:49:27.470646 systemd[1]: Started motdgen.path. Jul 2 07:49:27.471404 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 07:49:27.472669 systemd[1]: Started logrotate.timer. Jul 2 07:49:27.473517 systemd[1]: Started mdadm.timer. Jul 2 07:49:27.474265 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 07:49:27.475170 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 07:49:27.475195 systemd[1]: Reached target paths.target. Jul 2 07:49:27.476004 systemd[1]: Reached target timers.target. Jul 2 07:49:27.477038 systemd[1]: Listening on dbus.socket. Jul 2 07:49:27.478569 systemd[1]: Starting docker.socket... Jul 2 07:49:27.481251 systemd[1]: Listening on sshd.socket. Jul 2 07:49:27.482149 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:49:27.482459 systemd[1]: Listening on docker.socket. 
Jul 2 07:49:27.483326 systemd[1]: Reached target sockets.target. Jul 2 07:49:27.484168 systemd[1]: Reached target basic.target. Jul 2 07:49:27.485031 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:49:27.485054 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:49:27.485804 systemd[1]: Starting containerd.service... Jul 2 07:49:27.487356 systemd[1]: Starting dbus.service... Jul 2 07:49:27.488856 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 07:49:27.490643 systemd[1]: Starting extend-filesystems.service... Jul 2 07:49:27.491667 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 07:49:27.493965 jq[1175]: false Jul 2 07:49:27.492508 systemd[1]: Starting motdgen.service... Jul 2 07:49:27.494567 systemd[1]: Starting prepare-helm.service... Jul 2 07:49:27.497047 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 07:49:27.499616 systemd[1]: Starting sshd-keygen.service... Jul 2 07:49:27.502372 systemd[1]: Starting systemd-logind.service... Jul 2 07:49:27.505508 extend-filesystems[1176]: Found loop1 Jul 2 07:49:27.505508 extend-filesystems[1176]: Found sr0 Jul 2 07:49:27.505508 extend-filesystems[1176]: Found vda Jul 2 07:49:27.505508 extend-filesystems[1176]: Found vda1 Jul 2 07:49:27.505508 extend-filesystems[1176]: Found vda2 Jul 2 07:49:27.505508 extend-filesystems[1176]: Found vda3 Jul 2 07:49:27.505508 extend-filesystems[1176]: Found usr Jul 2 07:49:27.505508 extend-filesystems[1176]: Found vda4 Jul 2 07:49:27.505508 extend-filesystems[1176]: Found vda6 Jul 2 07:49:27.505508 extend-filesystems[1176]: Found vda7 Jul 2 07:49:27.505508 extend-filesystems[1176]: Found vda9 Jul 2 07:49:27.505508 extend-filesystems[1176]: Checking size of /dev/vda9 Jul 2 07:49:27.505045 dbus-daemon[1174]: [system] SELinux support is enabled Jul 2 07:49:27.504655 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:49:27.526722 extend-filesystems[1176]: Resized partition /dev/vda9 Jul 2 07:49:27.504705 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 07:49:27.505033 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 07:49:27.526951 jq[1193]: true Jul 2 07:49:27.505614 systemd[1]: Starting update-engine.service... Jul 2 07:49:27.507232 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 07:49:27.509263 systemd[1]: Started dbus.service. Jul 2 07:49:27.515869 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 07:49:27.516018 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 07:49:27.520725 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 07:49:27.520900 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 07:49:27.526065 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 07:49:27.526081 systemd[1]: Reached target system-config.target. 
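extend-filesystems has enumerated the block devices, checked /dev/vda9 and resized its partition; the entries that follow show resize2fs 1.46.5 growing the mounted ext4 root online from 553472 to 1864699 4k blocks. Done by hand, roughly the same operation would look like this (the device name comes from the log, everything else is illustrative):

# ext4 can grow while mounted; with no size argument it fills the partition
resize2fs /dev/vda9

# confirm the new size
df -h /
dumpe2fs -h /dev/vda9 | grep -i 'block count'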
Jul 2 07:49:27.527868 jq[1203]: true Jul 2 07:49:27.529083 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 07:49:27.529104 systemd[1]: Reached target user-config.target. Jul 2 07:49:27.530547 extend-filesystems[1200]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 07:49:27.531657 tar[1201]: linux-amd64/helm Jul 2 07:49:27.538606 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 07:49:27.539393 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 07:49:27.539516 systemd[1]: Finished motdgen.service. Jul 2 07:49:27.543956 update_engine[1192]: I0702 07:49:27.543808 1192 main.cc:92] Flatcar Update Engine starting Jul 2 07:49:27.545299 update_engine[1192]: I0702 07:49:27.545279 1192 update_check_scheduler.cc:74] Next update check in 10m14s Jul 2 07:49:27.545959 systemd[1]: Started update-engine.service. Jul 2 07:49:27.547991 systemd[1]: Started locksmithd.service. Jul 2 07:49:27.564264 systemd-logind[1187]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 07:49:27.564286 systemd-logind[1187]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 07:49:27.564479 systemd-logind[1187]: New seat seat0. Jul 2 07:49:27.565751 systemd[1]: Started systemd-logind.service. Jul 2 07:49:27.568592 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 07:49:27.592284 extend-filesystems[1200]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 07:49:27.592284 extend-filesystems[1200]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 07:49:27.592284 extend-filesystems[1200]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 07:49:27.598336 extend-filesystems[1176]: Resized filesystem in /dev/vda9 Jul 2 07:49:27.596962 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 07:49:27.599509 env[1204]: time="2024-07-02T07:49:27.592730418Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 07:49:27.597096 systemd[1]: Finished extend-filesystems.service. Jul 2 07:49:27.602260 bash[1225]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:49:27.600359 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 07:49:27.611835 env[1204]: time="2024-07-02T07:49:27.611785042Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 07:49:27.612033 env[1204]: time="2024-07-02T07:49:27.612016787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:49:27.613115 env[1204]: time="2024-07-02T07:49:27.613092764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:49:27.613187 env[1204]: time="2024-07-02T07:49:27.613169168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:49:27.613424 env[1204]: time="2024-07-02T07:49:27.613404068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:49:27.613497 env[1204]: time="2024-07-02T07:49:27.613478478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 07:49:27.613571 env[1204]: time="2024-07-02T07:49:27.613551825Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 07:49:27.613684 env[1204]: time="2024-07-02T07:49:27.613665008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 07:49:27.613810 env[1204]: time="2024-07-02T07:49:27.613792958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:49:27.614062 env[1204]: time="2024-07-02T07:49:27.614045361Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:49:27.614231 env[1204]: time="2024-07-02T07:49:27.614211653Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:49:27.614304 env[1204]: time="2024-07-02T07:49:27.614284479Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 07:49:27.614409 env[1204]: time="2024-07-02T07:49:27.614391310Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 07:49:27.614485 env[1204]: time="2024-07-02T07:49:27.614467743Z" level=info msg="metadata content store policy set" policy=shared Jul 2 07:49:27.618881 locksmithd[1221]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 07:49:27.620595 env[1204]: time="2024-07-02T07:49:27.619813809Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 07:49:27.620595 env[1204]: time="2024-07-02T07:49:27.619849786Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 07:49:27.620595 env[1204]: time="2024-07-02T07:49:27.619861588Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 07:49:27.620595 env[1204]: time="2024-07-02T07:49:27.619887627Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 07:49:27.620595 env[1204]: time="2024-07-02T07:49:27.619899540Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 07:49:27.620595 env[1204]: time="2024-07-02T07:49:27.619910901Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 07:49:27.620595 env[1204]: time="2024-07-02T07:49:27.619922793Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 07:49:27.620595 env[1204]: time="2024-07-02T07:49:27.619944814Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jul 2 07:49:27.620595 env[1204]: time="2024-07-02T07:49:27.619956486Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 07:49:27.620595 env[1204]: time="2024-07-02T07:49:27.619967908Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 07:49:27.620595 env[1204]: time="2024-07-02T07:49:27.619978648Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 07:49:27.620595 env[1204]: time="2024-07-02T07:49:27.619989679Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 07:49:27.620595 env[1204]: time="2024-07-02T07:49:27.620068476Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 07:49:27.620595 env[1204]: time="2024-07-02T07:49:27.620125654Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 07:49:27.620906 env[1204]: time="2024-07-02T07:49:27.620304259Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 07:49:27.620906 env[1204]: time="2024-07-02T07:49:27.620324486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 07:49:27.620906 env[1204]: time="2024-07-02T07:49:27.620335708Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 07:49:27.620906 env[1204]: time="2024-07-02T07:49:27.620374651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 07:49:27.620906 env[1204]: time="2024-07-02T07:49:27.620386042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 07:49:27.620906 env[1204]: time="2024-07-02T07:49:27.620396281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 07:49:27.620906 env[1204]: time="2024-07-02T07:49:27.620406260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 07:49:27.620906 env[1204]: time="2024-07-02T07:49:27.620416970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 07:49:27.620906 env[1204]: time="2024-07-02T07:49:27.620427399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 07:49:27.620906 env[1204]: time="2024-07-02T07:49:27.620437609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 07:49:27.620906 env[1204]: time="2024-07-02T07:49:27.620448048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 07:49:27.620906 env[1204]: time="2024-07-02T07:49:27.620459129Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 07:49:27.620906 env[1204]: time="2024-07-02T07:49:27.620546994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 07:49:27.620906 env[1204]: time="2024-07-02T07:49:27.620559808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jul 2 07:49:27.622191 env[1204]: time="2024-07-02T07:49:27.620570237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 07:49:27.622191 env[1204]: time="2024-07-02T07:49:27.621180191Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 07:49:27.622191 env[1204]: time="2024-07-02T07:49:27.621196131Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 07:49:27.622191 env[1204]: time="2024-07-02T07:49:27.621216199Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 07:49:27.622191 env[1204]: time="2024-07-02T07:49:27.621233912Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 07:49:27.622191 env[1204]: time="2024-07-02T07:49:27.621264389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 07:49:27.622344 env[1204]: time="2024-07-02T07:49:27.621423648Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 07:49:27.622344 env[1204]: time="2024-07-02T07:49:27.621473952Z" level=info msg="Connect containerd service" Jul 2 07:49:27.622344 env[1204]: time="2024-07-02T07:49:27.621499370Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 
07:49:27.622344 env[1204]: time="2024-07-02T07:49:27.622037940Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:49:27.623251 env[1204]: time="2024-07-02T07:49:27.623221339Z" level=info msg="Start subscribing containerd event" Jul 2 07:49:27.623754 env[1204]: time="2024-07-02T07:49:27.623731756Z" level=info msg="Start recovering state" Jul 2 07:49:27.623852 env[1204]: time="2024-07-02T07:49:27.623810684Z" level=info msg="Start event monitor" Jul 2 07:49:27.623852 env[1204]: time="2024-07-02T07:49:27.623847624Z" level=info msg="Start snapshots syncer" Jul 2 07:49:27.623918 env[1204]: time="2024-07-02T07:49:27.623855929Z" level=info msg="Start cni network conf syncer for default" Jul 2 07:49:27.623918 env[1204]: time="2024-07-02T07:49:27.623915511Z" level=info msg="Start streaming server" Jul 2 07:49:27.624039 env[1204]: time="2024-07-02T07:49:27.624022762Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 07:49:27.624144 env[1204]: time="2024-07-02T07:49:27.624127619Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 07:49:27.624310 systemd[1]: Started containerd.service. Jul 2 07:49:27.625492 env[1204]: time="2024-07-02T07:49:27.625471709Z" level=info msg="containerd successfully booted in 0.049998s" Jul 2 07:49:27.913862 tar[1201]: linux-amd64/LICENSE Jul 2 07:49:27.913983 tar[1201]: linux-amd64/README.md Jul 2 07:49:27.917666 systemd[1]: Finished prepare-helm.service. Jul 2 07:49:28.034762 systemd-networkd[1016]: eth0: Gained IPv6LL Jul 2 07:49:28.036349 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 07:49:28.037560 systemd[1]: Reached target network-online.target. Jul 2 07:49:28.039632 systemd[1]: Starting kubelet.service... Jul 2 07:49:28.312201 sshd_keygen[1196]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 07:49:28.330665 systemd[1]: Finished sshd-keygen.service. Jul 2 07:49:28.332976 systemd[1]: Starting issuegen.service... Jul 2 07:49:28.338542 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 07:49:28.338720 systemd[1]: Finished issuegen.service. Jul 2 07:49:28.340866 systemd[1]: Starting systemd-user-sessions.service... Jul 2 07:49:28.346350 systemd[1]: Finished systemd-user-sessions.service. Jul 2 07:49:28.348569 systemd[1]: Started getty@tty1.service. Jul 2 07:49:28.350470 systemd[1]: Started serial-getty@ttyS0.service. Jul 2 07:49:28.351536 systemd[1]: Reached target getty.target. Jul 2 07:49:28.590126 systemd[1]: Started kubelet.service. Jul 2 07:49:28.591444 systemd[1]: Reached target multi-user.target. Jul 2 07:49:28.593438 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 07:49:28.600680 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 07:49:28.600843 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 07:49:28.602034 systemd[1]: Startup finished in 602ms (kernel) + 5.924s (initrd) + 4.982s (userspace) = 11.510s. 
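The long CRI configuration dump above shows what containerd 1.6.16 booted with: the overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.6, and CNI configuration expected under /etc/cni/net.d (hence the "no network config found" error, which persists until a CNI plugin installs a config there). A hedged sketch of the conventional /etc/containerd/config.toml fragment for those options; this is not a copy of this host's actual file, so it is written to a scratch path for comparison rather than over the managed config:

# write an illustrative fragment matching the options seen in the dump above
cat <<'EOF' >/tmp/containerd-cri-fragment.toml
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# compare against the configuration the running daemon actually resolved
containerd config dump | grep -B2 -A1 SystemdCgroup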
Jul 2 07:49:29.067555 kubelet[1256]: E0702 07:49:29.067388 1256 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:49:29.069186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:49:29.069299 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:49:32.682529 systemd[1]: Created slice system-sshd.slice. Jul 2 07:49:32.683482 systemd[1]: Started sshd@0-10.0.0.99:22-10.0.0.1:60714.service. Jul 2 07:49:32.719593 sshd[1266]: Accepted publickey for core from 10.0.0.1 port 60714 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:32.720983 sshd[1266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:32.729651 systemd-logind[1187]: New session 1 of user core. Jul 2 07:49:32.730775 systemd[1]: Created slice user-500.slice. Jul 2 07:49:32.732045 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 07:49:32.739883 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 07:49:32.740999 systemd[1]: Starting user@500.service... Jul 2 07:49:32.743768 (systemd)[1269]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:32.809764 systemd[1269]: Queued start job for default target default.target. Jul 2 07:49:32.810176 systemd[1269]: Reached target paths.target. Jul 2 07:49:32.810203 systemd[1269]: Reached target sockets.target. Jul 2 07:49:32.810219 systemd[1269]: Reached target timers.target. Jul 2 07:49:32.810233 systemd[1269]: Reached target basic.target. Jul 2 07:49:32.810278 systemd[1269]: Reached target default.target. Jul 2 07:49:32.810310 systemd[1269]: Startup finished in 61ms. Jul 2 07:49:32.810369 systemd[1]: Started user@500.service. Jul 2 07:49:32.811252 systemd[1]: Started session-1.scope. Jul 2 07:49:32.862238 systemd[1]: Started sshd@1-10.0.0.99:22-10.0.0.1:60716.service. Jul 2 07:49:32.895624 sshd[1278]: Accepted publickey for core from 10.0.0.1 port 60716 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:32.896843 sshd[1278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:32.901080 systemd-logind[1187]: New session 2 of user core. Jul 2 07:49:32.901647 systemd[1]: Started session-2.scope. Jul 2 07:49:32.954095 sshd[1278]: pam_unix(sshd:session): session closed for user core Jul 2 07:49:32.957225 systemd[1]: Started sshd@2-10.0.0.99:22-10.0.0.1:60724.service. Jul 2 07:49:32.957731 systemd[1]: sshd@1-10.0.0.99:22-10.0.0.1:60716.service: Deactivated successfully. Jul 2 07:49:32.958288 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 07:49:32.958833 systemd-logind[1187]: Session 2 logged out. Waiting for processes to exit. Jul 2 07:49:32.959535 systemd-logind[1187]: Removed session 2. Jul 2 07:49:32.991321 sshd[1283]: Accepted publickey for core from 10.0.0.1 port 60724 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:32.992790 sshd[1283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:32.996512 systemd-logind[1187]: New session 3 of user core. Jul 2 07:49:32.997205 systemd[1]: Started session-3.scope. 
Jul 2 07:49:33.047310 sshd[1283]: pam_unix(sshd:session): session closed for user core Jul 2 07:49:33.049924 systemd[1]: sshd@2-10.0.0.99:22-10.0.0.1:60724.service: Deactivated successfully. Jul 2 07:49:33.050472 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 07:49:33.051149 systemd-logind[1187]: Session 3 logged out. Waiting for processes to exit. Jul 2 07:49:33.052304 systemd[1]: Started sshd@3-10.0.0.99:22-10.0.0.1:60728.service. Jul 2 07:49:33.053123 systemd-logind[1187]: Removed session 3. Jul 2 07:49:33.082772 sshd[1290]: Accepted publickey for core from 10.0.0.1 port 60728 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:33.083780 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:33.087418 systemd-logind[1187]: New session 4 of user core. Jul 2 07:49:33.088078 systemd[1]: Started session-4.scope. Jul 2 07:49:33.143032 sshd[1290]: pam_unix(sshd:session): session closed for user core Jul 2 07:49:33.145928 systemd[1]: sshd@3-10.0.0.99:22-10.0.0.1:60728.service: Deactivated successfully. Jul 2 07:49:33.146404 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 07:49:33.146870 systemd-logind[1187]: Session 4 logged out. Waiting for processes to exit. Jul 2 07:49:33.147831 systemd[1]: Started sshd@4-10.0.0.99:22-10.0.0.1:60732.service. Jul 2 07:49:33.148357 systemd-logind[1187]: Removed session 4. Jul 2 07:49:33.177799 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 60732 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:33.178917 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:33.181801 systemd-logind[1187]: New session 5 of user core. Jul 2 07:49:33.182459 systemd[1]: Started session-5.scope. Jul 2 07:49:33.236340 sudo[1299]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 07:49:33.236527 sudo[1299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:49:33.254910 systemd[1]: Starting docker.service... 
Jul 2 07:49:33.287645 env[1311]: time="2024-07-02T07:49:33.287588690Z" level=info msg="Starting up" Jul 2 07:49:33.288994 env[1311]: time="2024-07-02T07:49:33.288948109Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 07:49:33.288994 env[1311]: time="2024-07-02T07:49:33.288976022Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 07:49:33.288994 env[1311]: time="2024-07-02T07:49:33.289003142Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 07:49:33.289178 env[1311]: time="2024-07-02T07:49:33.289017109Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 07:49:33.292309 env[1311]: time="2024-07-02T07:49:33.292157849Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 07:49:33.292309 env[1311]: time="2024-07-02T07:49:33.292179279Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 07:49:33.292309 env[1311]: time="2024-07-02T07:49:33.292195780Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 07:49:33.292309 env[1311]: time="2024-07-02T07:49:33.292206149Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 07:49:33.777124 env[1311]: time="2024-07-02T07:49:33.777081774Z" level=info msg="Loading containers: start." Jul 2 07:49:33.897607 kernel: Initializing XFRM netlink socket Jul 2 07:49:33.926048 env[1311]: time="2024-07-02T07:49:33.925990190Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 07:49:33.970837 systemd-networkd[1016]: docker0: Link UP Jul 2 07:49:33.981528 env[1311]: time="2024-07-02T07:49:33.981487548Z" level=info msg="Loading containers: done." Jul 2 07:49:33.990010 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2153296402-merged.mount: Deactivated successfully. Jul 2 07:49:33.994015 env[1311]: time="2024-07-02T07:49:33.993955570Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 07:49:33.994184 env[1311]: time="2024-07-02T07:49:33.994158270Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 07:49:33.994288 env[1311]: time="2024-07-02T07:49:33.994267445Z" level=info msg="Daemon has completed initialization" Jul 2 07:49:34.011302 systemd[1]: Started docker.service. Jul 2 07:49:34.014609 env[1311]: time="2024-07-02T07:49:34.014565320Z" level=info msg="API listen on /run/docker.sock" Jul 2 07:49:34.612554 env[1204]: time="2024-07-02T07:49:34.612502226Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jul 2 07:49:35.460326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2577188342.mount: Deactivated successfully. 
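dockerd 20.10.23 above has come up with the overlay2 driver and created the default docker0 bridge on 172.17.0.0/16; its own log notes that --bip can override that address. One hedged way to pin the bridge on a host like this, via daemon.json instead of a command-line flag (the subnet below is purely an example):

# equivalent of the --bip flag mentioned in the daemon log; only needed if
# the default 172.17.0.0/16 bridge range clashes with the local network
cat <<'EOF' >/etc/docker/daemon.json
{
  "bip": "192.168.200.1/24"
}
EOF
systemctl restart docker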
Jul 2 07:49:37.147521 env[1204]: time="2024-07-02T07:49:37.147457221Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:37.149827 env[1204]: time="2024-07-02T07:49:37.149757054Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:37.151594 env[1204]: time="2024-07-02T07:49:37.151561889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:37.153214 env[1204]: time="2024-07-02T07:49:37.153195042Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:37.154018 env[1204]: time="2024-07-02T07:49:37.153979513Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\"" Jul 2 07:49:37.163704 env[1204]: time="2024-07-02T07:49:37.163659125Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jul 2 07:49:39.320055 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 07:49:39.320229 systemd[1]: Stopped kubelet.service. Jul 2 07:49:39.321450 systemd[1]: Starting kubelet.service... Jul 2 07:49:39.401324 systemd[1]: Started kubelet.service. Jul 2 07:49:39.446068 kubelet[1460]: E0702 07:49:39.446004 1460 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:49:39.449300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:49:39.449435 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
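The kubelet has now exited twice (at 07:49:29 and 07:49:39) for the same reason: /var/lib/kubelet/config.yaml does not exist yet, so the process fails fast and systemd schedules the next restart. That file is normally written during cluster bootstrap (kubeadm init or join generate it), so these failures are expected at this stage. Illustrative ways to watch the loop and confirm the missing file:

# the unit keeps restarting until the config file appears
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 20

# the file the kubelet is complaining about
ls -l /var/lib/kubelet/config.yaml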
Jul 2 07:49:41.351321 env[1204]: time="2024-07-02T07:49:41.351270862Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:41.355314 env[1204]: time="2024-07-02T07:49:41.355249343Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:41.358324 env[1204]: time="2024-07-02T07:49:41.358286649Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:41.361100 env[1204]: time="2024-07-02T07:49:41.361070931Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:41.362248 env[1204]: time="2024-07-02T07:49:41.362215918Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\"" Jul 2 07:49:41.371656 env[1204]: time="2024-07-02T07:49:41.371611478Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jul 2 07:49:44.470706 env[1204]: time="2024-07-02T07:49:44.470643299Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:44.572045 env[1204]: time="2024-07-02T07:49:44.571970874Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:44.668681 env[1204]: time="2024-07-02T07:49:44.668619445Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:44.707013 env[1204]: time="2024-07-02T07:49:44.706931664Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:44.707700 env[1204]: time="2024-07-02T07:49:44.707668927Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\"" Jul 2 07:49:44.719205 env[1204]: time="2024-07-02T07:49:44.719176217Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 07:49:46.726917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1296991594.mount: Deactivated successfully. 
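The PullImage entries show containerd fetching the v1.29.6 control-plane images (kube-apiserver and kube-controller-manager so far; the scheduler, kube-proxy, coredns, pause and etcd images follow below) through its CRI plugin. Pulling one of them by hand over the same endpoint would look roughly like this; the runtime-endpoint flag is spelled out here but may already be set in /etc/crictl.yaml:

# pull an image via containerd's CRI socket, the same path these log entries use
crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
  pull registry.k8s.io/kube-apiserver:v1.29.6

# list images known to the CRI runtime
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images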
Jul 2 07:49:47.747634 env[1204]: time="2024-07-02T07:49:47.747531724Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:47.750540 env[1204]: time="2024-07-02T07:49:47.750474493Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:47.752004 env[1204]: time="2024-07-02T07:49:47.751967132Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:47.754631 env[1204]: time="2024-07-02T07:49:47.754567830Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:47.755071 env[1204]: time="2024-07-02T07:49:47.755011802Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jul 2 07:49:47.770060 env[1204]: time="2024-07-02T07:49:47.770002365Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 07:49:48.325128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3875247656.mount: Deactivated successfully. Jul 2 07:49:49.476255 env[1204]: time="2024-07-02T07:49:49.476173144Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:49.478186 env[1204]: time="2024-07-02T07:49:49.478143910Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:49.480139 env[1204]: time="2024-07-02T07:49:49.480113874Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:49.483846 env[1204]: time="2024-07-02T07:49:49.483794978Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:49.484612 env[1204]: time="2024-07-02T07:49:49.484551016Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 07:49:49.502746 env[1204]: time="2024-07-02T07:49:49.502699931Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 07:49:49.700285 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 07:49:49.700460 systemd[1]: Stopped kubelet.service. Jul 2 07:49:49.701876 systemd[1]: Starting kubelet.service... Jul 2 07:49:49.771099 systemd[1]: Started kubelet.service. 
Jul 2 07:49:49.948130 kubelet[1500]: E0702 07:49:49.948078 1500 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:49:49.949978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:49:49.950095 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:49:50.196710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1873017866.mount: Deactivated successfully. Jul 2 07:49:50.202670 env[1204]: time="2024-07-02T07:49:50.202616926Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:50.204642 env[1204]: time="2024-07-02T07:49:50.204602569Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:50.206121 env[1204]: time="2024-07-02T07:49:50.206093455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:50.207467 env[1204]: time="2024-07-02T07:49:50.207427617Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:50.207938 env[1204]: time="2024-07-02T07:49:50.207912867Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 07:49:50.216520 env[1204]: time="2024-07-02T07:49:50.216482478Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 07:49:50.715196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1668014148.mount: Deactivated successfully. 
Jul 2 07:49:54.805333 env[1204]: time="2024-07-02T07:49:54.805267699Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:54.807250 env[1204]: time="2024-07-02T07:49:54.807220381Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:54.809265 env[1204]: time="2024-07-02T07:49:54.809210924Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:54.811087 env[1204]: time="2024-07-02T07:49:54.811033321Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:54.811840 env[1204]: time="2024-07-02T07:49:54.811803335Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 07:49:57.050637 systemd[1]: Stopped kubelet.service. Jul 2 07:49:57.052376 systemd[1]: Starting kubelet.service... Jul 2 07:49:57.069039 systemd[1]: Reloading. Jul 2 07:49:57.133144 /usr/lib/systemd/system-generators/torcx-generator[1616]: time="2024-07-02T07:49:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:49:57.133529 /usr/lib/systemd/system-generators/torcx-generator[1616]: time="2024-07-02T07:49:57Z" level=info msg="torcx already run" Jul 2 07:49:57.664691 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:49:57.664707 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:49:57.681254 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:49:57.756233 systemd[1]: Started kubelet.service. Jul 2 07:49:57.757701 systemd[1]: Stopping kubelet.service... Jul 2 07:49:57.758040 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:49:57.758205 systemd[1]: Stopped kubelet.service. Jul 2 07:49:57.759872 systemd[1]: Starting kubelet.service... Jul 2 07:49:57.832674 systemd[1]: Started kubelet.service. Jul 2 07:49:57.878401 kubelet[1664]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:49:57.878401 kubelet[1664]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 2 07:49:57.878401 kubelet[1664]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:49:57.878793 kubelet[1664]: I0702 07:49:57.878432 1664 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:49:58.207500 kubelet[1664]: I0702 07:49:58.207457 1664 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 07:49:58.207500 kubelet[1664]: I0702 07:49:58.207495 1664 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:49:58.207776 kubelet[1664]: I0702 07:49:58.207756 1664 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 07:49:58.227846 kubelet[1664]: E0702 07:49:58.227809 1664 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.99:6443: connect: connection refused Jul 2 07:49:58.228719 kubelet[1664]: I0702 07:49:58.228699 1664 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:49:58.238702 kubelet[1664]: I0702 07:49:58.238667 1664 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 07:49:58.239997 kubelet[1664]: I0702 07:49:58.239976 1664 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:49:58.240162 kubelet[1664]: I0702 07:49:58.240143 1664 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:49:58.240694 kubelet[1664]: I0702 07:49:58.240672 1664 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:49:58.240694 kubelet[1664]: I0702 07:49:58.240694 1664 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:49:58.242431 
kubelet[1664]: I0702 07:49:58.242406 1664 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:49:58.242533 kubelet[1664]: I0702 07:49:58.242513 1664 kubelet.go:396] "Attempting to sync node with API server" Jul 2 07:49:58.242564 kubelet[1664]: I0702 07:49:58.242537 1664 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:49:58.242564 kubelet[1664]: I0702 07:49:58.242563 1664 kubelet.go:312] "Adding apiserver pod source" Jul 2 07:49:58.242614 kubelet[1664]: I0702 07:49:58.242593 1664 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:49:58.243088 kubelet[1664]: W0702 07:49:58.243045 1664 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 2 07:49:58.243132 kubelet[1664]: E0702 07:49:58.243108 1664 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 2 07:49:58.243372 kubelet[1664]: W0702 07:49:58.243339 1664 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 2 07:49:58.243372 kubelet[1664]: E0702 07:49:58.243369 1664 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 2 07:49:58.243883 kubelet[1664]: I0702 07:49:58.243856 1664 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:49:58.247370 kubelet[1664]: I0702 07:49:58.247346 1664 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 07:49:58.252016 kubelet[1664]: W0702 07:49:58.251990 1664 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 07:49:58.254406 kubelet[1664]: I0702 07:49:58.252634 1664 server.go:1256] "Started kubelet" Jul 2 07:49:58.256910 kubelet[1664]: I0702 07:49:58.256895 1664 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:49:58.257661 kubelet[1664]: I0702 07:49:58.257629 1664 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 07:49:58.258079 kubelet[1664]: I0702 07:49:58.258058 1664 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:49:58.258160 kubelet[1664]: I0702 07:49:58.258143 1664 server.go:461] "Adding debug handlers to kubelet server" Jul 2 07:49:58.262852 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
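Once the config file is in place the kubelet (v1.29.2 here) starts cleanly, and the container-manager dump above shows what it resolved: the systemd cgroup driver, the default hard eviction thresholds, and static pods read from /etc/kubernetes/manifests. A hedged sketch of a KubeletConfiguration carrying those same settings; the values are taken from the dump, but the file below is illustrative and is written to a scratch path because the real /var/lib/kubelet/config.yaml on a bootstrapped node is managed by the bootstrap tooling:

# sketch of what /var/lib/kubelet/config.yaml typically carries on this node
cat <<'EOF' >/tmp/kubelet-config-sketch.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
EOF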
Jul 2 07:49:58.264095 kubelet[1664]: I0702 07:49:58.264066 1664 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:49:58.266251 kubelet[1664]: E0702 07:49:58.266222 1664 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.99:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de55f04bb27102 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 07:49:58.252605698 +0000 UTC m=+0.416463048,LastTimestamp:2024-07-02 07:49:58.252605698 +0000 UTC m=+0.416463048,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 2 07:49:58.267241 kubelet[1664]: I0702 07:49:58.267208 1664 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:49:58.268209 kubelet[1664]: I0702 07:49:58.268170 1664 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:49:58.268349 kubelet[1664]: I0702 07:49:58.268329 1664 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:49:58.268836 kubelet[1664]: W0702 07:49:58.268730 1664 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 2 07:49:58.268836 kubelet[1664]: E0702 07:49:58.268772 1664 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 2 07:49:58.268836 kubelet[1664]: E0702 07:49:58.268821 1664 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="200ms" Jul 2 07:49:58.269441 kubelet[1664]: I0702 07:49:58.269058 1664 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 07:49:58.270063 kubelet[1664]: E0702 07:49:58.269926 1664 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:49:58.323387 kubelet[1664]: I0702 07:49:58.323357 1664 factory.go:221] Registration of the containerd container factory successfully Jul 2 07:49:58.323387 kubelet[1664]: I0702 07:49:58.323379 1664 factory.go:221] Registration of the systemd container factory successfully Jul 2 07:49:58.334291 kubelet[1664]: I0702 07:49:58.333430 1664 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:49:58.334362 kubelet[1664]: I0702 07:49:58.334315 1664 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 07:49:58.334390 kubelet[1664]: I0702 07:49:58.334364 1664 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:49:58.334390 kubelet[1664]: I0702 07:49:58.334389 1664 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 07:49:58.334605 kubelet[1664]: E0702 07:49:58.334573 1664 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:49:58.335384 kubelet[1664]: W0702 07:49:58.335355 1664 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 2 07:49:58.335433 kubelet[1664]: E0702 07:49:58.335391 1664 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 2 07:49:58.337064 kubelet[1664]: I0702 07:49:58.337045 1664 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:49:58.337064 kubelet[1664]: I0702 07:49:58.337063 1664 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:49:58.337135 kubelet[1664]: I0702 07:49:58.337076 1664 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:49:58.368884 kubelet[1664]: I0702 07:49:58.368868 1664 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 07:49:58.369266 kubelet[1664]: E0702 07:49:58.369235 1664 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jul 2 07:49:58.435519 kubelet[1664]: E0702 07:49:58.435466 1664 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 07:49:58.470381 kubelet[1664]: E0702 07:49:58.470304 1664 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="400ms" Jul 2 07:49:58.570550 kubelet[1664]: I0702 07:49:58.570520 1664 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 07:49:58.570902 kubelet[1664]: E0702 07:49:58.570873 1664 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jul 2 07:49:58.604435 kubelet[1664]: I0702 07:49:58.604400 1664 policy_none.go:49] "None policy: Start" Jul 2 07:49:58.605125 kubelet[1664]: I0702 07:49:58.605104 1664 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 07:49:58.605125 kubelet[1664]: I0702 07:49:58.605125 1664 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:49:58.611145 systemd[1]: Created slice kubepods.slice. Jul 2 07:49:58.614945 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 07:49:58.617134 systemd[1]: Created slice kubepods-besteffort.slice. 
Jul 2 07:49:58.623208 kubelet[1664]: I0702 07:49:58.623185 1664 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:49:58.623398 kubelet[1664]: I0702 07:49:58.623385 1664 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:49:58.624339 kubelet[1664]: E0702 07:49:58.624322 1664 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 07:49:58.635750 kubelet[1664]: I0702 07:49:58.635721 1664 topology_manager.go:215] "Topology Admit Handler" podUID="10f99d1b9596a09bfaffce27d68b9d09" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 07:49:58.636802 kubelet[1664]: I0702 07:49:58.636779 1664 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 07:49:58.637737 kubelet[1664]: I0702 07:49:58.637709 1664 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 07:49:58.642174 systemd[1]: Created slice kubepods-burstable-pod10f99d1b9596a09bfaffce27d68b9d09.slice. Jul 2 07:49:58.657458 systemd[1]: Created slice kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice. Jul 2 07:49:58.660626 systemd[1]: Created slice kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice. Jul 2 07:49:58.669968 kubelet[1664]: I0702 07:49:58.669937 1664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10f99d1b9596a09bfaffce27d68b9d09-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"10f99d1b9596a09bfaffce27d68b9d09\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:49:58.670034 kubelet[1664]: I0702 07:49:58.669985 1664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10f99d1b9596a09bfaffce27d68b9d09-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"10f99d1b9596a09bfaffce27d68b9d09\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:49:58.670059 kubelet[1664]: I0702 07:49:58.670028 1664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10f99d1b9596a09bfaffce27d68b9d09-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"10f99d1b9596a09bfaffce27d68b9d09\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:49:58.670125 kubelet[1664]: I0702 07:49:58.670097 1664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:49:58.670152 kubelet[1664]: I0702 07:49:58.670133 1664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jul 2 07:49:58.670177 kubelet[1664]: I0702 07:49:58.670152 1664 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:49:58.670177 kubelet[1664]: I0702 07:49:58.670175 1664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:49:58.670220 kubelet[1664]: I0702 07:49:58.670205 1664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:49:58.670244 kubelet[1664]: I0702 07:49:58.670230 1664 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:49:58.871445 kubelet[1664]: E0702 07:49:58.871333 1664 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="800ms" Jul 2 07:49:58.960047 kubelet[1664]: E0702 07:49:58.960010 1664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:58.960376 kubelet[1664]: E0702 07:49:58.960355 1664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:58.960665 env[1204]: time="2024-07-02T07:49:58.960621009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,}" Jul 2 07:49:58.960901 env[1204]: time="2024-07-02T07:49:58.960860689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:10f99d1b9596a09bfaffce27d68b9d09,Namespace:kube-system,Attempt:0,}" Jul 2 07:49:58.962107 kubelet[1664]: E0702 07:49:58.962073 1664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:58.962367 env[1204]: time="2024-07-02T07:49:58.962331517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,}" Jul 2 07:49:58.972510 kubelet[1664]: I0702 07:49:58.972474 1664 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 07:49:58.972825 kubelet[1664]: E0702 07:49:58.972798 1664 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jul 2 07:49:59.081184 kubelet[1664]: W0702 07:49:59.081098 1664 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 2 07:49:59.081184 kubelet[1664]: E0702 07:49:59.081185 1664 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 2 07:49:59.453714 kubelet[1664]: W0702 07:49:59.453652 1664 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 2 07:49:59.453714 kubelet[1664]: E0702 07:49:59.453709 1664 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 2 07:49:59.672235 kubelet[1664]: E0702 07:49:59.672180 1664 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="1.6s" Jul 2 07:49:59.719857 kubelet[1664]: W0702 07:49:59.719740 1664 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 2 07:49:59.719857 kubelet[1664]: E0702 07:49:59.719795 1664 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 2 07:49:59.774317 kubelet[1664]: I0702 07:49:59.774279 1664 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 07:49:59.774690 kubelet[1664]: E0702 07:49:59.774663 1664 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Jul 2 07:49:59.848106 kubelet[1664]: W0702 07:49:59.848066 1664 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 2 07:49:59.848106 kubelet[1664]: E0702 07:49:59.848100 1664 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Jul 2 07:49:59.983162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount613294740.mount: Deactivated successfully. 
Jul 2 07:49:59.989259 env[1204]: time="2024-07-02T07:49:59.989219519Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:59.990818 env[1204]: time="2024-07-02T07:49:59.990791918Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:59.991781 env[1204]: time="2024-07-02T07:49:59.991719497Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:59.992528 env[1204]: time="2024-07-02T07:49:59.992500752Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:59.995095 env[1204]: time="2024-07-02T07:49:59.995052969Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:59.996076 env[1204]: time="2024-07-02T07:49:59.996041843Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:59.997209 env[1204]: time="2024-07-02T07:49:59.997176792Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:59.998243 env[1204]: time="2024-07-02T07:49:59.998205571Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:59.999492 env[1204]: time="2024-07-02T07:49:59.999452690Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:00.000599 env[1204]: time="2024-07-02T07:50:00.000548344Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:00.001628 env[1204]: time="2024-07-02T07:50:00.001594647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:00.003384 env[1204]: time="2024-07-02T07:50:00.003351983Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:00.032285 env[1204]: time="2024-07-02T07:50:00.032227191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:50:00.032415 env[1204]: time="2024-07-02T07:50:00.032295232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:50:00.032415 env[1204]: time="2024-07-02T07:50:00.032316623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:50:00.032766 env[1204]: time="2024-07-02T07:50:00.032705052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:50:00.032766 env[1204]: time="2024-07-02T07:50:00.032749287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:50:00.032766 env[1204]: time="2024-07-02T07:50:00.032694100Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/165c4ca22161f4667476e9af7b3702247b86fcfe7fb06da892f2b31b2a505194 pid=1723 runtime=io.containerd.runc.v2 Jul 2 07:50:00.032766 env[1204]: time="2024-07-02T07:50:00.032760749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:50:00.032941 env[1204]: time="2024-07-02T07:50:00.032898484Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4624e3ab53338839bbb15ebf9dd10e10fb38a7423b57c47d3e3548c156342f3 pid=1714 runtime=io.containerd.runc.v2 Jul 2 07:50:00.054144 systemd[1]: Started cri-containerd-165c4ca22161f4667476e9af7b3702247b86fcfe7fb06da892f2b31b2a505194.scope. Jul 2 07:50:00.062968 env[1204]: time="2024-07-02T07:50:00.062039404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:50:00.062968 env[1204]: time="2024-07-02T07:50:00.062115942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:50:00.062968 env[1204]: time="2024-07-02T07:50:00.062136682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:50:00.062968 env[1204]: time="2024-07-02T07:50:00.062331006Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ff14fc7bd5af8370b880b03d2d2863aa08cff1e2ea7797c08266868646f81ece pid=1718 runtime=io.containerd.runc.v2 Jul 2 07:50:00.067979 systemd[1]: Started cri-containerd-f4624e3ab53338839bbb15ebf9dd10e10fb38a7423b57c47d3e3548c156342f3.scope. Jul 2 07:50:00.082982 systemd[1]: Started cri-containerd-ff14fc7bd5af8370b880b03d2d2863aa08cff1e2ea7797c08266868646f81ece.scope. 
Jul 2 07:50:00.211755 env[1204]: time="2024-07-02T07:50:00.211698955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"165c4ca22161f4667476e9af7b3702247b86fcfe7fb06da892f2b31b2a505194\"" Jul 2 07:50:00.212031 env[1204]: time="2024-07-02T07:50:00.212005436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:10f99d1b9596a09bfaffce27d68b9d09,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4624e3ab53338839bbb15ebf9dd10e10fb38a7423b57c47d3e3548c156342f3\"" Jul 2 07:50:00.213113 kubelet[1664]: E0702 07:50:00.213080 1664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:00.213394 kubelet[1664]: E0702 07:50:00.213322 1664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:00.216522 env[1204]: time="2024-07-02T07:50:00.216485909Z" level=info msg="CreateContainer within sandbox \"f4624e3ab53338839bbb15ebf9dd10e10fb38a7423b57c47d3e3548c156342f3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 07:50:00.216663 env[1204]: time="2024-07-02T07:50:00.216632741Z" level=info msg="CreateContainer within sandbox \"165c4ca22161f4667476e9af7b3702247b86fcfe7fb06da892f2b31b2a505194\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 07:50:00.232284 env[1204]: time="2024-07-02T07:50:00.232233021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff14fc7bd5af8370b880b03d2d2863aa08cff1e2ea7797c08266868646f81ece\"" Jul 2 07:50:00.232976 kubelet[1664]: E0702 07:50:00.232944 1664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:00.234980 env[1204]: time="2024-07-02T07:50:00.234881815Z" level=info msg="CreateContainer within sandbox \"ff14fc7bd5af8370b880b03d2d2863aa08cff1e2ea7797c08266868646f81ece\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 07:50:00.239760 env[1204]: time="2024-07-02T07:50:00.239728193Z" level=info msg="CreateContainer within sandbox \"f4624e3ab53338839bbb15ebf9dd10e10fb38a7423b57c47d3e3548c156342f3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0496e47528506bbca7b5293d160c3435b52feba02661961ff62dc9f92a27c0cc\"" Jul 2 07:50:00.240433 env[1204]: time="2024-07-02T07:50:00.240396670Z" level=info msg="StartContainer for \"0496e47528506bbca7b5293d160c3435b52feba02661961ff62dc9f92a27c0cc\"" Jul 2 07:50:00.241404 env[1204]: time="2024-07-02T07:50:00.241348585Z" level=info msg="CreateContainer within sandbox \"165c4ca22161f4667476e9af7b3702247b86fcfe7fb06da892f2b31b2a505194\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7e0a0112c70246b8013bfe6bc4a2df05d24e408c738689e8293a5cc3453d4ac3\"" Jul 2 07:50:00.241766 env[1204]: time="2024-07-02T07:50:00.241738666Z" level=info msg="StartContainer for \"7e0a0112c70246b8013bfe6bc4a2df05d24e408c738689e8293a5cc3453d4ac3\"" Jul 2 07:50:00.252842 env[1204]: time="2024-07-02T07:50:00.252727630Z" level=info msg="CreateContainer 
within sandbox \"ff14fc7bd5af8370b880b03d2d2863aa08cff1e2ea7797c08266868646f81ece\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3ed5dbea3ceef28e431a94a461348f22ed98299891194bcd00f535d34ffb8316\"" Jul 2 07:50:00.253364 env[1204]: time="2024-07-02T07:50:00.253311826Z" level=info msg="StartContainer for \"3ed5dbea3ceef28e431a94a461348f22ed98299891194bcd00f535d34ffb8316\"" Jul 2 07:50:00.257820 systemd[1]: Started cri-containerd-0496e47528506bbca7b5293d160c3435b52feba02661961ff62dc9f92a27c0cc.scope. Jul 2 07:50:00.273675 systemd[1]: Started cri-containerd-7e0a0112c70246b8013bfe6bc4a2df05d24e408c738689e8293a5cc3453d4ac3.scope. Jul 2 07:50:00.286878 systemd[1]: Started cri-containerd-3ed5dbea3ceef28e431a94a461348f22ed98299891194bcd00f535d34ffb8316.scope. Jul 2 07:50:00.299336 kubelet[1664]: E0702 07:50:00.299302 1664 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.99:6443: connect: connection refused Jul 2 07:50:00.326259 env[1204]: time="2024-07-02T07:50:00.326209429Z" level=info msg="StartContainer for \"0496e47528506bbca7b5293d160c3435b52feba02661961ff62dc9f92a27c0cc\" returns successfully" Jul 2 07:50:00.331113 env[1204]: time="2024-07-02T07:50:00.331082228Z" level=info msg="StartContainer for \"7e0a0112c70246b8013bfe6bc4a2df05d24e408c738689e8293a5cc3453d4ac3\" returns successfully" Jul 2 07:50:00.352254 kubelet[1664]: E0702 07:50:00.352221 1664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:00.354216 kubelet[1664]: E0702 07:50:00.354189 1664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:00.357537 env[1204]: time="2024-07-02T07:50:00.357511153Z" level=info msg="StartContainer for \"3ed5dbea3ceef28e431a94a461348f22ed98299891194bcd00f535d34ffb8316\" returns successfully" Jul 2 07:50:01.344666 kubelet[1664]: E0702 07:50:01.344620 1664 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 07:50:01.359560 kubelet[1664]: E0702 07:50:01.359512 1664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:01.360115 kubelet[1664]: E0702 07:50:01.360085 1664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:01.376095 kubelet[1664]: I0702 07:50:01.376047 1664 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 07:50:01.382096 kubelet[1664]: I0702 07:50:01.382073 1664 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 07:50:01.388443 kubelet[1664]: E0702 07:50:01.388419 1664 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:50:01.488800 kubelet[1664]: E0702 07:50:01.488755 1664 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 
07:50:01.589412 kubelet[1664]: E0702 07:50:01.589373 1664 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:50:01.690531 kubelet[1664]: E0702 07:50:01.690392 1664 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:50:01.791129 kubelet[1664]: E0702 07:50:01.791091 1664 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:50:01.868805 kubelet[1664]: E0702 07:50:01.868771 1664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:01.892029 kubelet[1664]: E0702 07:50:01.891988 1664 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:50:01.992812 kubelet[1664]: E0702 07:50:01.992699 1664 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:50:02.093102 kubelet[1664]: E0702 07:50:02.093058 1664 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:50:02.194120 kubelet[1664]: E0702 07:50:02.194078 1664 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:50:02.295116 kubelet[1664]: E0702 07:50:02.295020 1664 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:50:02.442458 kubelet[1664]: E0702 07:50:02.442431 1664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:03.247227 kubelet[1664]: I0702 07:50:03.247198 1664 apiserver.go:52] "Watching apiserver" Jul 2 07:50:03.268972 kubelet[1664]: I0702 07:50:03.268933 1664 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:50:03.360590 kubelet[1664]: E0702 07:50:03.360543 1664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:03.773976 kubelet[1664]: E0702 07:50:03.773949 1664 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:03.923088 systemd[1]: Reloading. Jul 2 07:50:03.991456 /usr/lib/systemd/system-generators/torcx-generator[1960]: time="2024-07-02T07:50:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:50:03.991492 /usr/lib/systemd/system-generators/torcx-generator[1960]: time="2024-07-02T07:50:03Z" level=info msg="torcx already run" Jul 2 07:50:04.054021 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:50:04.054036 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Jul 2 07:50:04.070647 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:50:04.156204 systemd[1]: Stopping kubelet.service... Jul 2 07:50:04.156395 kubelet[1664]: I0702 07:50:04.156179 1664 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:50:04.171935 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:50:04.172087 systemd[1]: Stopped kubelet.service. Jul 2 07:50:04.173466 systemd[1]: Starting kubelet.service... Jul 2 07:50:04.244228 systemd[1]: Started kubelet.service. Jul 2 07:50:04.299895 kubelet[2005]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:50:04.299895 kubelet[2005]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:50:04.299895 kubelet[2005]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:50:04.300259 kubelet[2005]: I0702 07:50:04.299988 2005 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:50:04.305504 kubelet[2005]: I0702 07:50:04.305380 2005 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 07:50:04.305504 kubelet[2005]: I0702 07:50:04.305414 2005 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:50:04.305817 kubelet[2005]: I0702 07:50:04.305649 2005 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 07:50:04.307421 kubelet[2005]: I0702 07:50:04.307390 2005 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 07:50:04.309554 kubelet[2005]: I0702 07:50:04.309525 2005 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:50:04.312938 sudo[2019]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 07:50:04.313133 sudo[2019]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 07:50:04.317070 kubelet[2005]: I0702 07:50:04.317037 2005 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 07:50:04.317314 kubelet[2005]: I0702 07:50:04.317283 2005 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:50:04.317530 kubelet[2005]: I0702 07:50:04.317499 2005 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:50:04.317650 kubelet[2005]: I0702 07:50:04.317542 2005 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:50:04.317650 kubelet[2005]: I0702 07:50:04.317557 2005 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:50:04.317650 kubelet[2005]: I0702 07:50:04.317607 2005 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:50:04.317754 kubelet[2005]: I0702 07:50:04.317705 2005 kubelet.go:396] "Attempting to sync node with API server" Jul 2 07:50:04.317754 kubelet[2005]: I0702 07:50:04.317724 2005 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:50:04.317754 kubelet[2005]: I0702 07:50:04.317752 2005 kubelet.go:312] "Adding apiserver pod source" Jul 2 07:50:04.317846 kubelet[2005]: I0702 07:50:04.317771 2005 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:50:04.318841 kubelet[2005]: I0702 07:50:04.318813 2005 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:50:04.319055 kubelet[2005]: I0702 07:50:04.319027 2005 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 07:50:04.319534 kubelet[2005]: I0702 07:50:04.319508 2005 server.go:1256] "Started kubelet" Jul 2 07:50:04.319923 kubelet[2005]: I0702 07:50:04.319890 2005 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:50:04.321757 kubelet[2005]: I0702 07:50:04.321735 2005 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:50:04.337597 kubelet[2005]: I0702 07:50:04.322394 2005 server.go:461] "Adding debug handlers to kubelet server" Jul 2 07:50:04.338120 kubelet[2005]: I0702 07:50:04.322566 2005 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 
burstTokens=10 Jul 2 07:50:04.338456 kubelet[2005]: I0702 07:50:04.338438 2005 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:50:04.339617 kubelet[2005]: E0702 07:50:04.339570 2005 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:50:04.343496 kubelet[2005]: I0702 07:50:04.343463 2005 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:50:04.343612 kubelet[2005]: I0702 07:50:04.343598 2005 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:50:04.343728 kubelet[2005]: I0702 07:50:04.343703 2005 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:50:04.344672 kubelet[2005]: I0702 07:50:04.344628 2005 factory.go:221] Registration of the systemd container factory successfully Jul 2 07:50:04.344807 kubelet[2005]: I0702 07:50:04.344758 2005 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 07:50:04.346760 kubelet[2005]: I0702 07:50:04.346738 2005 factory.go:221] Registration of the containerd container factory successfully Jul 2 07:50:04.366029 kubelet[2005]: I0702 07:50:04.365981 2005 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:50:04.367721 kubelet[2005]: I0702 07:50:04.367700 2005 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 07:50:04.367793 kubelet[2005]: I0702 07:50:04.367728 2005 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:50:04.367793 kubelet[2005]: I0702 07:50:04.367745 2005 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 07:50:04.367981 kubelet[2005]: E0702 07:50:04.367960 2005 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:50:04.376840 kubelet[2005]: I0702 07:50:04.376806 2005 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:50:04.376840 kubelet[2005]: I0702 07:50:04.376839 2005 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:50:04.376935 kubelet[2005]: I0702 07:50:04.376852 2005 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:50:04.376993 kubelet[2005]: I0702 07:50:04.376978 2005 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 07:50:04.377030 kubelet[2005]: I0702 07:50:04.377008 2005 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 07:50:04.377030 kubelet[2005]: I0702 07:50:04.377015 2005 policy_none.go:49] "None policy: Start" Jul 2 07:50:04.377437 kubelet[2005]: I0702 07:50:04.377423 2005 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 07:50:04.377486 kubelet[2005]: I0702 07:50:04.377455 2005 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:50:04.377603 kubelet[2005]: I0702 07:50:04.377592 2005 state_mem.go:75] "Updated machine memory state" Jul 2 07:50:04.380492 kubelet[2005]: I0702 07:50:04.380476 2005 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:50:04.380831 kubelet[2005]: I0702 07:50:04.380816 2005 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:50:04.444435 kubelet[2005]: I0702 07:50:04.444405 2005 kubelet_node_status.go:73] 
"Attempting to register node" node="localhost" Jul 2 07:50:04.451781 kubelet[2005]: I0702 07:50:04.451189 2005 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jul 2 07:50:04.451781 kubelet[2005]: I0702 07:50:04.451266 2005 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 07:50:04.468535 kubelet[2005]: I0702 07:50:04.468496 2005 topology_manager.go:215] "Topology Admit Handler" podUID="10f99d1b9596a09bfaffce27d68b9d09" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 07:50:04.468724 kubelet[2005]: I0702 07:50:04.468618 2005 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 07:50:04.468724 kubelet[2005]: I0702 07:50:04.468700 2005 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 07:50:04.474456 kubelet[2005]: E0702 07:50:04.474425 2005 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 07:50:04.475446 kubelet[2005]: E0702 07:50:04.475412 2005 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 2 07:50:04.645714 kubelet[2005]: I0702 07:50:04.645558 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10f99d1b9596a09bfaffce27d68b9d09-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"10f99d1b9596a09bfaffce27d68b9d09\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:50:04.645714 kubelet[2005]: I0702 07:50:04.645618 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:50:04.645714 kubelet[2005]: I0702 07:50:04.645641 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:50:04.645714 kubelet[2005]: I0702 07:50:04.645706 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:50:04.647153 kubelet[2005]: I0702 07:50:04.645742 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10f99d1b9596a09bfaffce27d68b9d09-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"10f99d1b9596a09bfaffce27d68b9d09\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:50:04.647153 kubelet[2005]: I0702 07:50:04.645802 2005 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10f99d1b9596a09bfaffce27d68b9d09-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"10f99d1b9596a09bfaffce27d68b9d09\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:50:04.647153 kubelet[2005]: I0702 07:50:04.645827 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:50:04.647153 kubelet[2005]: I0702 07:50:04.645849 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:50:04.647153 kubelet[2005]: I0702 07:50:04.645874 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jul 2 07:50:04.768692 sudo[2019]: pam_unix(sudo:session): session closed for user root Jul 2 07:50:04.775438 kubelet[2005]: E0702 07:50:04.775403 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:04.775500 kubelet[2005]: E0702 07:50:04.775408 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:04.776441 kubelet[2005]: E0702 07:50:04.775826 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:05.318794 kubelet[2005]: I0702 07:50:05.318766 2005 apiserver.go:52] "Watching apiserver" Jul 2 07:50:05.344531 kubelet[2005]: I0702 07:50:05.344469 2005 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:50:05.376899 kubelet[2005]: E0702 07:50:05.376868 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:05.614532 kubelet[2005]: I0702 07:50:05.614430 2005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.614377841 podStartE2EDuration="1.614377841s" podCreationTimestamp="2024-07-02 07:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:50:05.614197566 +0000 UTC m=+1.364542256" watchObservedRunningTime="2024-07-02 07:50:05.614377841 +0000 UTC m=+1.364722511" Jul 2 07:50:05.631103 kubelet[2005]: E0702 07:50:05.631063 2005 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Jul 2 07:50:05.631540 kubelet[2005]: E0702 07:50:05.631513 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:05.632091 kubelet[2005]: E0702 07:50:05.632070 2005 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 2 07:50:05.632662 kubelet[2005]: E0702 07:50:05.632638 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:06.336105 kubelet[2005]: I0702 07:50:06.336070 2005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.33601757 podStartE2EDuration="4.33601757s" podCreationTimestamp="2024-07-02 07:50:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:50:06.052317463 +0000 UTC m=+1.802662143" watchObservedRunningTime="2024-07-02 07:50:06.33601757 +0000 UTC m=+2.086362250" Jul 2 07:50:06.336533 kubelet[2005]: I0702 07:50:06.336199 2005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.336182515 podStartE2EDuration="3.336182515s" podCreationTimestamp="2024-07-02 07:50:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:50:06.336146867 +0000 UTC m=+2.086491537" watchObservedRunningTime="2024-07-02 07:50:06.336182515 +0000 UTC m=+2.086527195" Jul 2 07:50:06.377904 kubelet[2005]: E0702 07:50:06.377865 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:06.378108 kubelet[2005]: E0702 07:50:06.378085 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:06.897494 sudo[1299]: pam_unix(sudo:session): session closed for user root Jul 2 07:50:06.898843 sshd[1296]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:06.901485 systemd[1]: sshd@4-10.0.0.99:22-10.0.0.1:60732.service: Deactivated successfully. Jul 2 07:50:06.902154 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 07:50:06.902287 systemd[1]: session-5.scope: Consumed 4.206s CPU time. Jul 2 07:50:06.902756 systemd-logind[1187]: Session 5 logged out. Waiting for processes to exit. Jul 2 07:50:06.903363 systemd-logind[1187]: Removed session 5. 
Jul 2 07:50:07.380232 kubelet[2005]: E0702 07:50:07.380124 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:10.522371 kubelet[2005]: E0702 07:50:10.522342 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:11.385951 kubelet[2005]: E0702 07:50:11.385911 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:13.069375 kubelet[2005]: E0702 07:50:13.069341 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:13.093689 update_engine[1192]: I0702 07:50:13.093633 1192 update_attempter.cc:509] Updating boot flags... Jul 2 07:50:13.388138 kubelet[2005]: E0702 07:50:13.388018 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:17.228336 kubelet[2005]: E0702 07:50:17.227783 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:19.088048 kubelet[2005]: I0702 07:50:19.088004 2005 topology_manager.go:215] "Topology Admit Handler" podUID="e20b27c3-c352-4ac4-99d1-5a4832748aef" podNamespace="kube-system" podName="kube-proxy-9vjww" Jul 2 07:50:19.093445 kubelet[2005]: I0702 07:50:19.093416 2005 topology_manager.go:215] "Topology Admit Handler" podUID="9d30ca27-38f5-45f1-beb5-3b5f55148966" podNamespace="kube-system" podName="cilium-k22j4" Jul 2 07:50:19.096096 systemd[1]: Created slice kubepods-besteffort-pode20b27c3_c352_4ac4_99d1_5a4832748aef.slice. Jul 2 07:50:19.097300 kubelet[2005]: W0702 07:50:19.097276 2005 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 07:50:19.097366 kubelet[2005]: E0702 07:50:19.097306 2005 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 07:50:19.105265 systemd[1]: Created slice kubepods-burstable-pod9d30ca27_38f5_45f1_beb5_3b5f55148966.slice. Jul 2 07:50:19.126463 kubelet[2005]: I0702 07:50:19.126437 2005 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 07:50:19.127030 env[1204]: time="2024-07-02T07:50:19.126937605Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 07:50:19.127360 kubelet[2005]: I0702 07:50:19.127348 2005 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 07:50:19.142776 kubelet[2005]: I0702 07:50:19.142742 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-host-proc-sys-kernel\") pod \"cilium-k22j4\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " pod="kube-system/cilium-k22j4" Jul 2 07:50:19.143013 kubelet[2005]: I0702 07:50:19.142997 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-cilium-run\") pod \"cilium-k22j4\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " pod="kube-system/cilium-k22j4" Jul 2 07:50:19.143110 kubelet[2005]: I0702 07:50:19.143096 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-etc-cni-netd\") pod \"cilium-k22j4\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " pod="kube-system/cilium-k22j4" Jul 2 07:50:19.143199 kubelet[2005]: I0702 07:50:19.143185 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-xtables-lock\") pod \"cilium-k22j4\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " pod="kube-system/cilium-k22j4" Jul 2 07:50:19.143292 kubelet[2005]: I0702 07:50:19.143278 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d30ca27-38f5-45f1-beb5-3b5f55148966-clustermesh-secrets\") pod \"cilium-k22j4\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " pod="kube-system/cilium-k22j4" Jul 2 07:50:19.143389 kubelet[2005]: I0702 07:50:19.143375 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e20b27c3-c352-4ac4-99d1-5a4832748aef-lib-modules\") pod \"kube-proxy-9vjww\" (UID: \"e20b27c3-c352-4ac4-99d1-5a4832748aef\") " pod="kube-system/kube-proxy-9vjww" Jul 2 07:50:19.143479 kubelet[2005]: I0702 07:50:19.143465 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-lib-modules\") pod \"cilium-k22j4\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " pod="kube-system/cilium-k22j4" Jul 2 07:50:19.143604 kubelet[2005]: I0702 07:50:19.143590 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5zpj\" (UniqueName: \"kubernetes.io/projected/9d30ca27-38f5-45f1-beb5-3b5f55148966-kube-api-access-d5zpj\") pod \"cilium-k22j4\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " pod="kube-system/cilium-k22j4" Jul 2 07:50:19.143708 kubelet[2005]: I0702 07:50:19.143694 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d30ca27-38f5-45f1-beb5-3b5f55148966-cilium-config-path\") pod \"cilium-k22j4\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " pod="kube-system/cilium-k22j4" Jul 2 
07:50:19.143802 kubelet[2005]: I0702 07:50:19.143788 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d30ca27-38f5-45f1-beb5-3b5f55148966-hubble-tls\") pod \"cilium-k22j4\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " pod="kube-system/cilium-k22j4" Jul 2 07:50:19.143894 kubelet[2005]: I0702 07:50:19.143880 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-cni-path\") pod \"cilium-k22j4\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " pod="kube-system/cilium-k22j4" Jul 2 07:50:19.144030 kubelet[2005]: I0702 07:50:19.143980 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e20b27c3-c352-4ac4-99d1-5a4832748aef-xtables-lock\") pod \"kube-proxy-9vjww\" (UID: \"e20b27c3-c352-4ac4-99d1-5a4832748aef\") " pod="kube-system/kube-proxy-9vjww" Jul 2 07:50:19.144030 kubelet[2005]: I0702 07:50:19.144013 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-hostproc\") pod \"cilium-k22j4\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " pod="kube-system/cilium-k22j4" Jul 2 07:50:19.144030 kubelet[2005]: I0702 07:50:19.144034 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-cilium-cgroup\") pod \"cilium-k22j4\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " pod="kube-system/cilium-k22j4" Jul 2 07:50:19.144235 kubelet[2005]: I0702 07:50:19.144050 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-bpf-maps\") pod \"cilium-k22j4\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " pod="kube-system/cilium-k22j4" Jul 2 07:50:19.144235 kubelet[2005]: I0702 07:50:19.144067 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e20b27c3-c352-4ac4-99d1-5a4832748aef-kube-proxy\") pod \"kube-proxy-9vjww\" (UID: \"e20b27c3-c352-4ac4-99d1-5a4832748aef\") " pod="kube-system/kube-proxy-9vjww" Jul 2 07:50:19.144235 kubelet[2005]: I0702 07:50:19.144085 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gswx7\" (UniqueName: \"kubernetes.io/projected/e20b27c3-c352-4ac4-99d1-5a4832748aef-kube-api-access-gswx7\") pod \"kube-proxy-9vjww\" (UID: \"e20b27c3-c352-4ac4-99d1-5a4832748aef\") " pod="kube-system/kube-proxy-9vjww" Jul 2 07:50:19.144235 kubelet[2005]: I0702 07:50:19.144105 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-host-proc-sys-net\") pod \"cilium-k22j4\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " pod="kube-system/cilium-k22j4" Jul 2 07:50:19.403303 kubelet[2005]: E0702 07:50:19.403191 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:19.403839 env[1204]: time="2024-07-02T07:50:19.403796477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9vjww,Uid:e20b27c3-c352-4ac4-99d1-5a4832748aef,Namespace:kube-system,Attempt:0,}" Jul 2 07:50:19.420200 kubelet[2005]: I0702 07:50:19.420145 2005 topology_manager.go:215] "Topology Admit Handler" podUID="3b9636fb-987b-44c2-b0b1-e48c5bf00423" podNamespace="kube-system" podName="cilium-operator-5cc964979-5g7k9" Jul 2 07:50:19.425443 systemd[1]: Created slice kubepods-besteffort-pod3b9636fb_987b_44c2_b0b1_e48c5bf00423.slice. Jul 2 07:50:19.446385 kubelet[2005]: I0702 07:50:19.446356 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b9636fb-987b-44c2-b0b1-e48c5bf00423-cilium-config-path\") pod \"cilium-operator-5cc964979-5g7k9\" (UID: \"3b9636fb-987b-44c2-b0b1-e48c5bf00423\") " pod="kube-system/cilium-operator-5cc964979-5g7k9" Jul 2 07:50:19.446456 kubelet[2005]: I0702 07:50:19.446395 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhlql\" (UniqueName: \"kubernetes.io/projected/3b9636fb-987b-44c2-b0b1-e48c5bf00423-kube-api-access-fhlql\") pod \"cilium-operator-5cc964979-5g7k9\" (UID: \"3b9636fb-987b-44c2-b0b1-e48c5bf00423\") " pod="kube-system/cilium-operator-5cc964979-5g7k9" Jul 2 07:50:19.584414 env[1204]: time="2024-07-02T07:50:19.584330510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:50:19.584414 env[1204]: time="2024-07-02T07:50:19.584376047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:50:19.584414 env[1204]: time="2024-07-02T07:50:19.584387979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:50:19.584667 env[1204]: time="2024-07-02T07:50:19.584624476Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f3e8ba1b5c68eeecd414f31a80378a014400b7a9ef6478e5fe4a7f7cdea2b931 pid=2113 runtime=io.containerd.runc.v2 Jul 2 07:50:19.595277 systemd[1]: Started cri-containerd-f3e8ba1b5c68eeecd414f31a80378a014400b7a9ef6478e5fe4a7f7cdea2b931.scope. 
Jul 2 07:50:19.614864 env[1204]: time="2024-07-02T07:50:19.614809120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9vjww,Uid:e20b27c3-c352-4ac4-99d1-5a4832748aef,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3e8ba1b5c68eeecd414f31a80378a014400b7a9ef6478e5fe4a7f7cdea2b931\"" Jul 2 07:50:19.615663 kubelet[2005]: E0702 07:50:19.615642 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:19.618136 env[1204]: time="2024-07-02T07:50:19.618084321Z" level=info msg="CreateContainer within sandbox \"f3e8ba1b5c68eeecd414f31a80378a014400b7a9ef6478e5fe4a7f7cdea2b931\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 07:50:19.634318 env[1204]: time="2024-07-02T07:50:19.634272995Z" level=info msg="CreateContainer within sandbox \"f3e8ba1b5c68eeecd414f31a80378a014400b7a9ef6478e5fe4a7f7cdea2b931\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"239d283ef19e1d467d20002a946d7ef8ad7ff4e4e977dffd61a757607f8221d8\"" Jul 2 07:50:19.634792 env[1204]: time="2024-07-02T07:50:19.634762941Z" level=info msg="StartContainer for \"239d283ef19e1d467d20002a946d7ef8ad7ff4e4e977dffd61a757607f8221d8\"" Jul 2 07:50:19.648457 systemd[1]: Started cri-containerd-239d283ef19e1d467d20002a946d7ef8ad7ff4e4e977dffd61a757607f8221d8.scope. Jul 2 07:50:19.676277 env[1204]: time="2024-07-02T07:50:19.676161333Z" level=info msg="StartContainer for \"239d283ef19e1d467d20002a946d7ef8ad7ff4e4e977dffd61a757607f8221d8\" returns successfully" Jul 2 07:50:19.728293 kubelet[2005]: E0702 07:50:19.728259 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:19.728935 env[1204]: time="2024-07-02T07:50:19.728907467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-5g7k9,Uid:3b9636fb-987b-44c2-b0b1-e48c5bf00423,Namespace:kube-system,Attempt:0,}" Jul 2 07:50:19.746337 env[1204]: time="2024-07-02T07:50:19.746262737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:50:19.746337 env[1204]: time="2024-07-02T07:50:19.746302934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:50:19.746569 env[1204]: time="2024-07-02T07:50:19.746521086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:50:19.746835 env[1204]: time="2024-07-02T07:50:19.746781779Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9de22c8d0f81ea7f9bf9a18fe2a7f96d592aed940968e92202d0865b10eaa30 pid=2221 runtime=io.containerd.runc.v2 Jul 2 07:50:19.756051 systemd[1]: Started cri-containerd-b9de22c8d0f81ea7f9bf9a18fe2a7f96d592aed940968e92202d0865b10eaa30.scope. 
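The shim lines above ("starting signal loop" ... namespace=k8s.io) and the cri-containerd-<id>.scope units are two views of the same sandboxes and containers living in containerd's k8s.io namespace. Below is a hedged sketch that lists them through containerd's Go client; the socket path matches the usual containerd default but is an assumption here.

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to containerd and list containers in the CRI ("k8s.io") namespace;
	// the IDs correspond to the cri-containerd-<id>.scope units started above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID())
	}
}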
Jul 2 07:50:19.794891 env[1204]: time="2024-07-02T07:50:19.794824572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-5g7k9,Uid:3b9636fb-987b-44c2-b0b1-e48c5bf00423,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9de22c8d0f81ea7f9bf9a18fe2a7f96d592aed940968e92202d0865b10eaa30\"" Jul 2 07:50:19.802638 kubelet[2005]: E0702 07:50:19.802604 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:19.803856 env[1204]: time="2024-07-02T07:50:19.803822973Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 07:50:20.246859 kubelet[2005]: E0702 07:50:20.246823 2005 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jul 2 07:50:20.246859 kubelet[2005]: E0702 07:50:20.246847 2005 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-k22j4: failed to sync secret cache: timed out waiting for the condition Jul 2 07:50:20.247232 kubelet[2005]: E0702 07:50:20.246909 2005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d30ca27-38f5-45f1-beb5-3b5f55148966-hubble-tls podName:9d30ca27-38f5-45f1-beb5-3b5f55148966 nodeName:}" failed. No retries permitted until 2024-07-02 07:50:20.746891446 +0000 UTC m=+16.497236126 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/9d30ca27-38f5-45f1-beb5-3b5f55148966-hubble-tls") pod "cilium-k22j4" (UID: "9d30ca27-38f5-45f1-beb5-3b5f55148966") : failed to sync secret cache: timed out waiting for the condition Jul 2 07:50:20.400644 kubelet[2005]: E0702 07:50:20.400599 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:20.907680 kubelet[2005]: E0702 07:50:20.907627 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:20.908125 env[1204]: time="2024-07-02T07:50:20.908080416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k22j4,Uid:9d30ca27-38f5-45f1-beb5-3b5f55148966,Namespace:kube-system,Attempt:0,}" Jul 2 07:50:21.938880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3015810113.mount: Deactivated successfully. Jul 2 07:50:21.952052 env[1204]: time="2024-07-02T07:50:21.951981016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:50:21.952052 env[1204]: time="2024-07-02T07:50:21.952026302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:50:21.952052 env[1204]: time="2024-07-02T07:50:21.952040959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
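The hubble-tls mount failure above is parked rather than retried immediately: the operation may not run again until the logged timestamp, 500ms after the failure, and the volume manager stretches that delay on consecutive failures. A toy version of such a backoff, assuming a doubling delay and a cap (the cap is not visible in this log).

package main

import (
	"fmt"
	"time"
)

// backoff tracks the growing durationBeforeRetry between attempts.
type backoff struct {
	delay time.Duration // starts at the 500ms seen in the log
	max   time.Duration // assumed cap, not shown in the log
}

// next returns the earliest time a retry is permitted after a failure
// and doubles the delay for the following attempt.
func (b *backoff) next(failedAt time.Time) time.Time {
	retryAt := failedAt.Add(b.delay)
	b.delay *= 2
	if b.delay > b.max {
		b.delay = b.max
	}
	return retryAt
}

func main() {
	b := &backoff{delay: 500 * time.Millisecond, max: 2 * time.Minute}
	failedAt := time.Date(2024, 7, 2, 7, 50, 20, 246_891_446, time.UTC)
	// Prints 2024-07-02 07:50:20.746891446 +0000 UTC, the "No retries
	// permitted until" timestamp from the log.
	fmt.Println("no retries permitted until", b.next(failedAt))
}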
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:50:21.952494 env[1204]: time="2024-07-02T07:50:21.952187556Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e pid=2351 runtime=io.containerd.runc.v2 Jul 2 07:50:21.964273 systemd[1]: Started cri-containerd-39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e.scope. Jul 2 07:50:21.982605 env[1204]: time="2024-07-02T07:50:21.982541303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k22j4,Uid:9d30ca27-38f5-45f1-beb5-3b5f55148966,Namespace:kube-system,Attempt:0,} returns sandbox id \"39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e\"" Jul 2 07:50:21.983328 kubelet[2005]: E0702 07:50:21.983289 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:22.713005 env[1204]: time="2024-07-02T07:50:22.712943335Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:22.714682 env[1204]: time="2024-07-02T07:50:22.714635710Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:22.716116 env[1204]: time="2024-07-02T07:50:22.716078764Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:22.716502 env[1204]: time="2024-07-02T07:50:22.716464703Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 07:50:22.717754 env[1204]: time="2024-07-02T07:50:22.716981177Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 07:50:22.717979 env[1204]: time="2024-07-02T07:50:22.717947632Z" level=info msg="CreateContainer within sandbox \"b9de22c8d0f81ea7f9bf9a18fe2a7f96d592aed940968e92202d0865b10eaa30\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 07:50:22.731474 env[1204]: time="2024-07-02T07:50:22.731411769Z" level=info msg="CreateContainer within sandbox \"b9de22c8d0f81ea7f9bf9a18fe2a7f96d592aed940968e92202d0865b10eaa30\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f\"" Jul 2 07:50:22.731849 env[1204]: time="2024-07-02T07:50:22.731812887Z" level=info msg="StartContainer for \"2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f\"" Jul 2 07:50:22.745908 systemd[1]: Started cri-containerd-2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f.scope. 
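The PullImage round trip above starts from a reference carrying both a tag and a digest (quay.io/cilium/operator-generic:v1.12.5@sha256:b296…) and returns the local image ID (sha256:ed355…). A rough illustration of how such a reference decomposes; real code would use the distribution reference parser rather than string surgery like this.

package main

import (
	"fmt"
	"strings"
)

// splitRef breaks "repo:tag@digest" into its parts. When a digest is
// present it is what the pull resolves against; the tag is only a
// human-readable hint.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		ref, tag = ref[:i], ref[i+1:]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, digest := splitRef("quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
	fmt.Println(repo, tag, digest)
}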
Jul 2 07:50:22.767840 env[1204]: time="2024-07-02T07:50:22.767799302Z" level=info msg="StartContainer for \"2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f\" returns successfully" Jul 2 07:50:23.408898 kubelet[2005]: E0702 07:50:23.408856 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:23.427449 kubelet[2005]: I0702 07:50:23.427254 2005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9vjww" podStartSLOduration=4.42718932 podStartE2EDuration="4.42718932s" podCreationTimestamp="2024-07-02 07:50:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:50:20.407604507 +0000 UTC m=+16.157949187" watchObservedRunningTime="2024-07-02 07:50:23.42718932 +0000 UTC m=+19.177534000" Jul 2 07:50:24.379309 kubelet[2005]: I0702 07:50:24.379183 2005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-5g7k9" podStartSLOduration=2.465847623 podStartE2EDuration="5.379140372s" podCreationTimestamp="2024-07-02 07:50:19 +0000 UTC" firstStartedPulling="2024-07-02 07:50:19.803449748 +0000 UTC m=+15.553794428" lastFinishedPulling="2024-07-02 07:50:22.716742497 +0000 UTC m=+18.467087177" observedRunningTime="2024-07-02 07:50:23.427868953 +0000 UTC m=+19.178213633" watchObservedRunningTime="2024-07-02 07:50:24.379140372 +0000 UTC m=+20.129485042" Jul 2 07:50:24.410928 kubelet[2005]: E0702 07:50:24.410616 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:24.411726 systemd[1]: Started sshd@5-10.0.0.99:22-10.0.0.1:39604.service. Jul 2 07:50:24.449259 sshd[2422]: Accepted publickey for core from 10.0.0.1 port 39604 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:50:24.451358 sshd[2422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:24.456056 systemd-logind[1187]: New session 6 of user core. Jul 2 07:50:24.456983 systemd[1]: Started session-6.scope. Jul 2 07:50:24.585072 sshd[2422]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:24.587701 systemd[1]: sshd@5-10.0.0.99:22-10.0.0.1:39604.service: Deactivated successfully. Jul 2 07:50:24.588461 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 07:50:24.589453 systemd-logind[1187]: Session 6 logged out. Waiting for processes to exit. Jul 2 07:50:24.590262 systemd-logind[1187]: Removed session 6. Jul 2 07:50:29.589609 systemd[1]: Started sshd@6-10.0.0.99:22-10.0.0.1:39614.service. Jul 2 07:50:29.625434 sshd[2436]: Accepted publickey for core from 10.0.0.1 port 39614 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:50:29.627047 sshd[2436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:29.631987 systemd-logind[1187]: New session 7 of user core. Jul 2 07:50:29.632705 systemd[1]: Started session-7.scope. Jul 2 07:50:29.771672 sshd[2436]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:29.774494 systemd[1]: sshd@6-10.0.0.99:22-10.0.0.1:39614.service: Deactivated successfully. Jul 2 07:50:29.775447 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 07:50:29.776102 systemd-logind[1187]: Session 7 logged out. 
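The pod_startup_latency_tracker lines above carry two figures: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). The arithmetic below reproduces the cilium-operator numbers from the log; the helper itself is only a sketch of that bookkeeping, not the tracker's real code.

package main

import (
	"fmt"
	"time"
)

// startupDurations recomputes the two durations reported by the
// kubelet's pod startup latency tracker.
func startupDurations(created, firstPull, lastPull, running time.Time) (e2e, slo time.Duration) {
	e2e = running.Sub(created)
	slo = e2e - lastPull.Sub(firstPull) // exclude time spent pulling images
	return e2e, slo
}

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	e2e, slo := startupDurations(
		mustParse("2024-07-02 07:50:19 +0000 UTC"),           // podCreationTimestamp
		mustParse("2024-07-02 07:50:19.803449748 +0000 UTC"), // firstStartedPulling
		mustParse("2024-07-02 07:50:22.716742497 +0000 UTC"), // lastFinishedPulling
		mustParse("2024-07-02 07:50:24.379140372 +0000 UTC"), // observedRunningTime
	)
	fmt.Println(e2e, slo) // 5.379140372s 2.465847623s, matching the log
}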
Waiting for processes to exit. Jul 2 07:50:29.777070 systemd-logind[1187]: Removed session 7. Jul 2 07:50:31.109396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1873187140.mount: Deactivated successfully. Jul 2 07:50:34.775329 systemd[1]: Started sshd@7-10.0.0.99:22-10.0.0.1:55652.service. Jul 2 07:50:34.806340 sshd[2452]: Accepted publickey for core from 10.0.0.1 port 55652 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:50:34.807464 sshd[2452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:34.810673 systemd-logind[1187]: New session 8 of user core. Jul 2 07:50:34.811358 systemd[1]: Started session-8.scope. Jul 2 07:50:34.911809 sshd[2452]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:34.914002 systemd[1]: sshd@7-10.0.0.99:22-10.0.0.1:55652.service: Deactivated successfully. Jul 2 07:50:34.914696 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 07:50:34.915788 systemd-logind[1187]: Session 8 logged out. Waiting for processes to exit. Jul 2 07:50:34.916512 systemd-logind[1187]: Removed session 8. Jul 2 07:50:36.606301 env[1204]: time="2024-07-02T07:50:36.606234833Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:36.609789 env[1204]: time="2024-07-02T07:50:36.609740736Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:36.611881 env[1204]: time="2024-07-02T07:50:36.611834011Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:36.612688 env[1204]: time="2024-07-02T07:50:36.612626942Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 07:50:36.614479 env[1204]: time="2024-07-02T07:50:36.614433479Z" level=info msg="CreateContainer within sandbox \"39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:50:36.628494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2114662919.mount: Deactivated successfully. Jul 2 07:50:36.629662 env[1204]: time="2024-07-02T07:50:36.629601572Z" level=info msg="CreateContainer within sandbox \"39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508\"" Jul 2 07:50:36.630119 env[1204]: time="2024-07-02T07:50:36.630090761Z" level=info msg="StartContainer for \"03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508\"" Jul 2 07:50:36.647359 systemd[1]: Started cri-containerd-03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508.scope. Jul 2 07:50:36.695873 systemd[1]: cri-containerd-03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508.scope: Deactivated successfully. 
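The transient mount unit named above, var-lib-containerd-tmpmounts-containerd\x2dmount1873187140.mount, shows systemd's path escaping at work: the leading slash is dropped, remaining slashes become dashes, and literal dashes in the path are hex-escaped as \x2d. A simplified version of that escaping (systemd-escape covers more corner cases, such as leading dots):

package main

import (
	"fmt"
	"strings"
)

// systemdEscapePath turns a filesystem path into the unit-name form
// used for .mount units, approximating `systemd-escape --path`.
func systemdEscapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for _, c := range []byte(p) {
		switch {
		case c == '/':
			b.WriteByte('-') // path separator becomes the unit separator
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9',
			c == '_', c == '.', c == ':':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // e.g. a literal '-' becomes \x2d
		}
	}
	return b.String()
}

func main() {
	fmt.Println(systemdEscapePath("/var/lib/containerd/tmpmounts/containerd-mount1873187140") + ".mount")
}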
Jul 2 07:50:36.851767 env[1204]: time="2024-07-02T07:50:36.851701258Z" level=info msg="StartContainer for \"03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508\" returns successfully" Jul 2 07:50:36.971032 env[1204]: time="2024-07-02T07:50:36.970852810Z" level=info msg="shim disconnected" id=03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508 Jul 2 07:50:36.971032 env[1204]: time="2024-07-02T07:50:36.970909877Z" level=warning msg="cleaning up after shim disconnected" id=03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508 namespace=k8s.io Jul 2 07:50:36.971032 env[1204]: time="2024-07-02T07:50:36.970920067Z" level=info msg="cleaning up dead shim" Jul 2 07:50:36.980893 env[1204]: time="2024-07-02T07:50:36.980850780Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:50:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2515 runtime=io.containerd.runc.v2\n" Jul 2 07:50:37.434192 kubelet[2005]: E0702 07:50:37.434165 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:37.436005 env[1204]: time="2024-07-02T07:50:37.435974739Z" level=info msg="CreateContainer within sandbox \"39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:50:37.625420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508-rootfs.mount: Deactivated successfully. Jul 2 07:50:37.698606 env[1204]: time="2024-07-02T07:50:37.698471465Z" level=info msg="CreateContainer within sandbox \"39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987\"" Jul 2 07:50:37.699193 env[1204]: time="2024-07-02T07:50:37.699153978Z" level=info msg="StartContainer for \"2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987\"" Jul 2 07:50:37.716377 systemd[1]: Started cri-containerd-2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987.scope. Jul 2 07:50:37.740805 env[1204]: time="2024-07-02T07:50:37.740754049Z" level=info msg="StartContainer for \"2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987\" returns successfully" Jul 2 07:50:37.751042 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:50:37.751301 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:50:37.751482 systemd[1]: Stopping systemd-sysctl.service... Jul 2 07:50:37.753020 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:50:37.753284 systemd[1]: cri-containerd-2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987.scope: Deactivated successfully. Jul 2 07:50:37.763853 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 07:50:37.781970 env[1204]: time="2024-07-02T07:50:37.781891641Z" level=info msg="shim disconnected" id=2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987 Jul 2 07:50:37.781970 env[1204]: time="2024-07-02T07:50:37.781941826Z" level=warning msg="cleaning up after shim disconnected" id=2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987 namespace=k8s.io Jul 2 07:50:37.781970 env[1204]: time="2024-07-02T07:50:37.781949650Z" level=info msg="cleaning up dead shim" Jul 2 07:50:37.788686 env[1204]: time="2024-07-02T07:50:37.788634078Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:50:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2577 runtime=io.containerd.runc.v2\n" Jul 2 07:50:38.436750 kubelet[2005]: E0702 07:50:38.436724 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:38.439132 env[1204]: time="2024-07-02T07:50:38.439094861Z" level=info msg="CreateContainer within sandbox \"39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 07:50:38.458332 env[1204]: time="2024-07-02T07:50:38.458265377Z" level=info msg="CreateContainer within sandbox \"39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7\"" Jul 2 07:50:38.458945 env[1204]: time="2024-07-02T07:50:38.458913956Z" level=info msg="StartContainer for \"986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7\"" Jul 2 07:50:38.473473 systemd[1]: Started cri-containerd-986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7.scope. Jul 2 07:50:38.500362 systemd[1]: cri-containerd-986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7.scope: Deactivated successfully. Jul 2 07:50:38.501088 env[1204]: time="2024-07-02T07:50:38.501020508Z" level=info msg="StartContainer for \"986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7\" returns successfully" Jul 2 07:50:38.521315 env[1204]: time="2024-07-02T07:50:38.521277186Z" level=info msg="shim disconnected" id=986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7 Jul 2 07:50:38.521446 env[1204]: time="2024-07-02T07:50:38.521316690Z" level=warning msg="cleaning up after shim disconnected" id=986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7 namespace=k8s.io Jul 2 07:50:38.521446 env[1204]: time="2024-07-02T07:50:38.521328993Z" level=info msg="cleaning up dead shim" Jul 2 07:50:38.527943 env[1204]: time="2024-07-02T07:50:38.527912228Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:50:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2634 runtime=io.containerd.runc.v2\n" Jul 2 07:50:38.625725 systemd[1]: run-containerd-runc-k8s.io-2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987-runc.Jsglm1.mount: Deactivated successfully. Jul 2 07:50:38.625827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987-rootfs.mount: Deactivated successfully. 
Jul 2 07:50:39.439805 kubelet[2005]: E0702 07:50:39.439774 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:39.443085 env[1204]: time="2024-07-02T07:50:39.443035194Z" level=info msg="CreateContainer within sandbox \"39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 07:50:39.459343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount924572085.mount: Deactivated successfully. Jul 2 07:50:39.462869 env[1204]: time="2024-07-02T07:50:39.462800434Z" level=info msg="CreateContainer within sandbox \"39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6\"" Jul 2 07:50:39.463388 env[1204]: time="2024-07-02T07:50:39.463329899Z" level=info msg="StartContainer for \"ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6\"" Jul 2 07:50:39.480132 systemd[1]: Started cri-containerd-ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6.scope. Jul 2 07:50:39.503190 systemd[1]: cri-containerd-ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6.scope: Deactivated successfully. Jul 2 07:50:39.506724 env[1204]: time="2024-07-02T07:50:39.506644939Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d30ca27_38f5_45f1_beb5_3b5f55148966.slice/cri-containerd-ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6.scope/memory.events\": no such file or directory" Jul 2 07:50:39.506860 env[1204]: time="2024-07-02T07:50:39.506823916Z" level=info msg="StartContainer for \"ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6\" returns successfully" Jul 2 07:50:39.526506 env[1204]: time="2024-07-02T07:50:39.526455805Z" level=info msg="shim disconnected" id=ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6 Jul 2 07:50:39.526506 env[1204]: time="2024-07-02T07:50:39.526508994Z" level=warning msg="cleaning up after shim disconnected" id=ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6 namespace=k8s.io Jul 2 07:50:39.526721 env[1204]: time="2024-07-02T07:50:39.526518172Z" level=info msg="cleaning up dead shim" Jul 2 07:50:39.533145 env[1204]: time="2024-07-02T07:50:39.533102557Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:50:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2688 runtime=io.containerd.runc.v2\n" Jul 2 07:50:39.625823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6-rootfs.mount: Deactivated successfully. Jul 2 07:50:39.916219 systemd[1]: Started sshd@8-10.0.0.99:22-10.0.0.1:55666.service. Jul 2 07:50:39.949031 sshd[2702]: Accepted publickey for core from 10.0.0.1 port 55666 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:50:39.950213 sshd[2702]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:39.953766 systemd-logind[1187]: New session 9 of user core. Jul 2 07:50:39.954502 systemd[1]: Started session-9.scope. 
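The *cgroupsv2.Manager.EventChan warning above fires because containerd tries to watch memory.events for a scope whose cgroup directory is already gone; the clean-cilium-state init container exits almost as soon as it starts. memory.events itself is a flat key/value file, so reading it looks roughly like the sketch below (the path in main is just the parent slice; any cgroup v2 directory works).

package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readMemoryEvents parses a cgroup v2 memory.events file, which holds
// lines such as "oom_kill 0". If the cgroup has already been removed,
// the open fails with "no such file or directory", as in the log entry
// above.
func readMemoryEvents(cgroupDir string) (map[string]uint64, error) {
	f, err := os.Open(cgroupDir + "/memory.events")
	if err != nil {
		return nil, err
	}
	defer f.Close()

	events := map[string]uint64{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) != 2 {
			continue
		}
		if n, err := strconv.ParseUint(fields[1], 10, 64); err == nil {
			events[fields[0]] = n
		}
	}
	return events, sc.Err()
}

func main() {
	ev, err := readMemoryEvents("/sys/fs/cgroup/kubepods.slice")
	fmt.Println(ev, err)
}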
Jul 2 07:50:40.057285 sshd[2702]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:40.059324 systemd[1]: sshd@8-10.0.0.99:22-10.0.0.1:55666.service: Deactivated successfully. Jul 2 07:50:40.060115 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 07:50:40.060810 systemd-logind[1187]: Session 9 logged out. Waiting for processes to exit. Jul 2 07:50:40.061521 systemd-logind[1187]: Removed session 9. Jul 2 07:50:40.442898 kubelet[2005]: E0702 07:50:40.442873 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:40.445034 env[1204]: time="2024-07-02T07:50:40.444998160Z" level=info msg="CreateContainer within sandbox \"39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 07:50:40.460254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount120702719.mount: Deactivated successfully. Jul 2 07:50:40.461842 env[1204]: time="2024-07-02T07:50:40.461792582Z" level=info msg="CreateContainer within sandbox \"39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c\"" Jul 2 07:50:40.462293 env[1204]: time="2024-07-02T07:50:40.462264389Z" level=info msg="StartContainer for \"b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c\"" Jul 2 07:50:40.476242 systemd[1]: Started cri-containerd-b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c.scope. Jul 2 07:50:40.501940 env[1204]: time="2024-07-02T07:50:40.501899963Z" level=info msg="StartContainer for \"b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c\" returns successfully" Jul 2 07:50:40.572983 kubelet[2005]: I0702 07:50:40.572787 2005 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 07:50:40.588193 kubelet[2005]: I0702 07:50:40.588157 2005 topology_manager.go:215] "Topology Admit Handler" podUID="5cbf2828-71ed-40ed-9ef4-35fc1f365c5a" podNamespace="kube-system" podName="coredns-76f75df574-88pdw" Jul 2 07:50:40.590662 kubelet[2005]: I0702 07:50:40.590608 2005 topology_manager.go:215] "Topology Admit Handler" podUID="55e63c16-2371-4da1-8d14-e178148dbc76" podNamespace="kube-system" podName="coredns-76f75df574-vk2wn" Jul 2 07:50:40.595211 systemd[1]: Created slice kubepods-burstable-pod5cbf2828_71ed_40ed_9ef4_35fc1f365c5a.slice. Jul 2 07:50:40.599910 systemd[1]: Created slice kubepods-burstable-pod55e63c16_2371_4da1_8d14_e178148dbc76.slice. Jul 2 07:50:40.627469 systemd[1]: run-containerd-runc-k8s.io-b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c-runc.VkIEYs.mount: Deactivated successfully. 
Jul 2 07:50:40.703216 kubelet[2005]: I0702 07:50:40.703119 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55e63c16-2371-4da1-8d14-e178148dbc76-config-volume\") pod \"coredns-76f75df574-vk2wn\" (UID: \"55e63c16-2371-4da1-8d14-e178148dbc76\") " pod="kube-system/coredns-76f75df574-vk2wn" Jul 2 07:50:40.703414 kubelet[2005]: I0702 07:50:40.703399 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwdqd\" (UniqueName: \"kubernetes.io/projected/55e63c16-2371-4da1-8d14-e178148dbc76-kube-api-access-kwdqd\") pod \"coredns-76f75df574-vk2wn\" (UID: \"55e63c16-2371-4da1-8d14-e178148dbc76\") " pod="kube-system/coredns-76f75df574-vk2wn" Jul 2 07:50:40.703510 kubelet[2005]: I0702 07:50:40.703495 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5cbf2828-71ed-40ed-9ef4-35fc1f365c5a-config-volume\") pod \"coredns-76f75df574-88pdw\" (UID: \"5cbf2828-71ed-40ed-9ef4-35fc1f365c5a\") " pod="kube-system/coredns-76f75df574-88pdw" Jul 2 07:50:40.703633 kubelet[2005]: I0702 07:50:40.703620 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gkft\" (UniqueName: \"kubernetes.io/projected/5cbf2828-71ed-40ed-9ef4-35fc1f365c5a-kube-api-access-9gkft\") pod \"coredns-76f75df574-88pdw\" (UID: \"5cbf2828-71ed-40ed-9ef4-35fc1f365c5a\") " pod="kube-system/coredns-76f75df574-88pdw" Jul 2 07:50:40.897929 kubelet[2005]: E0702 07:50:40.897878 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:40.898600 env[1204]: time="2024-07-02T07:50:40.898546011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-88pdw,Uid:5cbf2828-71ed-40ed-9ef4-35fc1f365c5a,Namespace:kube-system,Attempt:0,}" Jul 2 07:50:40.904020 kubelet[2005]: E0702 07:50:40.903987 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:40.904545 env[1204]: time="2024-07-02T07:50:40.904492878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vk2wn,Uid:55e63c16-2371-4da1-8d14-e178148dbc76,Namespace:kube-system,Attempt:0,}" Jul 2 07:50:41.447175 kubelet[2005]: E0702 07:50:41.447120 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:41.458593 kubelet[2005]: I0702 07:50:41.458520 2005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-k22j4" podStartSLOduration=7.829283971 podStartE2EDuration="22.45847074s" podCreationTimestamp="2024-07-02 07:50:19 +0000 UTC" firstStartedPulling="2024-07-02 07:50:21.983724819 +0000 UTC m=+17.734069499" lastFinishedPulling="2024-07-02 07:50:36.612911578 +0000 UTC m=+32.363256268" observedRunningTime="2024-07-02 07:50:41.458088973 +0000 UTC m=+37.208433653" watchObservedRunningTime="2024-07-02 07:50:41.45847074 +0000 UTC m=+37.208815430" Jul 2 07:50:42.434648 systemd-networkd[1016]: cilium_host: Link UP Jul 2 07:50:42.434767 systemd-networkd[1016]: cilium_net: Link UP Jul 2 07:50:42.434769 
systemd-networkd[1016]: cilium_net: Gained carrier Jul 2 07:50:42.435204 systemd-networkd[1016]: cilium_host: Gained carrier Jul 2 07:50:42.440770 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 07:50:42.438522 systemd-networkd[1016]: cilium_host: Gained IPv6LL Jul 2 07:50:42.449153 kubelet[2005]: E0702 07:50:42.449118 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:42.502022 systemd-networkd[1016]: cilium_vxlan: Link UP Jul 2 07:50:42.502031 systemd-networkd[1016]: cilium_vxlan: Gained carrier Jul 2 07:50:42.682605 kernel: NET: Registered PF_ALG protocol family Jul 2 07:50:43.221482 systemd-networkd[1016]: lxc_health: Link UP Jul 2 07:50:43.549605 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 07:50:43.550906 systemd-networkd[1016]: lxc_health: Gained carrier Jul 2 07:50:43.551897 kubelet[2005]: E0702 07:50:43.551355 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:43.557627 systemd-networkd[1016]: cilium_net: Gained IPv6LL Jul 2 07:50:43.566808 systemd-networkd[1016]: lxcca78a4b71618: Link UP Jul 2 07:50:43.583317 systemd-networkd[1016]: lxc563589d1804a: Link UP Jul 2 07:50:43.586672 kernel: eth0: renamed from tmp843dd Jul 2 07:50:43.593675 kernel: eth0: renamed from tmp4780a Jul 2 07:50:43.599726 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:50:43.599780 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcca78a4b71618: link becomes ready Jul 2 07:50:43.599369 systemd-networkd[1016]: lxcca78a4b71618: Gained carrier Jul 2 07:50:43.603972 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:50:43.604014 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc563589d1804a: link becomes ready Jul 2 07:50:43.604641 systemd-networkd[1016]: lxc563589d1804a: Gained carrier Jul 2 07:50:43.618720 systemd-networkd[1016]: cilium_vxlan: Gained IPv6LL Jul 2 07:50:44.909793 kubelet[2005]: E0702 07:50:44.909756 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:45.061898 systemd[1]: Started sshd@9-10.0.0.99:22-10.0.0.1:40060.service. Jul 2 07:50:45.094328 sshd[3252]: Accepted publickey for core from 10.0.0.1 port 40060 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:50:45.096013 sshd[3252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:45.099844 systemd-logind[1187]: New session 10 of user core. Jul 2 07:50:45.100609 systemd[1]: Started session-10.scope. Jul 2 07:50:45.223552 systemd[1]: Started sshd@10-10.0.0.99:22-10.0.0.1:40068.service. Jul 2 07:50:45.224081 sshd[3252]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:45.227369 systemd[1]: sshd@9-10.0.0.99:22-10.0.0.1:40060.service: Deactivated successfully. Jul 2 07:50:45.228342 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 07:50:45.229246 systemd-logind[1187]: Session 10 logged out. Waiting for processes to exit. Jul 2 07:50:45.230308 systemd-logind[1187]: Removed session 10. 
Jul 2 07:50:45.258061 sshd[3265]: Accepted publickey for core from 10.0.0.1 port 40068 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:50:45.259476 sshd[3265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:45.263757 systemd-logind[1187]: New session 11 of user core. Jul 2 07:50:45.264465 systemd[1]: Started session-11.scope. Jul 2 07:50:45.410733 systemd-networkd[1016]: lxc_health: Gained IPv6LL Jul 2 07:50:45.411950 systemd-networkd[1016]: lxcca78a4b71618: Gained IPv6LL Jul 2 07:50:45.412120 systemd-networkd[1016]: lxc563589d1804a: Gained IPv6LL Jul 2 07:50:45.419791 sshd[3265]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:45.422698 systemd[1]: Started sshd@11-10.0.0.99:22-10.0.0.1:40076.service. Jul 2 07:50:45.429111 systemd-logind[1187]: Session 11 logged out. Waiting for processes to exit. Jul 2 07:50:45.434061 systemd[1]: sshd@10-10.0.0.99:22-10.0.0.1:40068.service: Deactivated successfully. Jul 2 07:50:45.435045 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 07:50:45.440024 systemd-logind[1187]: Removed session 11. Jul 2 07:50:45.457103 sshd[3276]: Accepted publickey for core from 10.0.0.1 port 40076 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:50:45.458297 sshd[3276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:45.463768 systemd[1]: Started session-12.scope. Jul 2 07:50:45.465570 systemd-logind[1187]: New session 12 of user core. Jul 2 07:50:45.589900 sshd[3276]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:45.592772 systemd[1]: sshd@11-10.0.0.99:22-10.0.0.1:40076.service: Deactivated successfully. Jul 2 07:50:45.593447 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 07:50:45.594051 systemd-logind[1187]: Session 12 logged out. Waiting for processes to exit. Jul 2 07:50:45.594937 systemd-logind[1187]: Removed session 12. Jul 2 07:50:47.123604 env[1204]: time="2024-07-02T07:50:47.123519763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:50:47.123951 env[1204]: time="2024-07-02T07:50:47.123567503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:50:47.123951 env[1204]: time="2024-07-02T07:50:47.123639288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:50:47.123951 env[1204]: time="2024-07-02T07:50:47.123775083Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4780a7a7d23533f6744b286b7702211b353285ec76953f5d21d1b1a01de61678 pid=3309 runtime=io.containerd.runc.v2 Jul 2 07:50:47.124818 env[1204]: time="2024-07-02T07:50:47.124768839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:50:47.124932 env[1204]: time="2024-07-02T07:50:47.124909684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:50:47.125041 env[1204]: time="2024-07-02T07:50:47.125018849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:50:47.125294 env[1204]: time="2024-07-02T07:50:47.125238170Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/843dd3eb01f6e910a7eab5db20fab73c733e362121d8abb99d10093415e2fdf7 pid=3315 runtime=io.containerd.runc.v2 Jul 2 07:50:47.143371 systemd[1]: run-containerd-runc-k8s.io-843dd3eb01f6e910a7eab5db20fab73c733e362121d8abb99d10093415e2fdf7-runc.pV67DQ.mount: Deactivated successfully. Jul 2 07:50:47.149024 systemd[1]: Started cri-containerd-843dd3eb01f6e910a7eab5db20fab73c733e362121d8abb99d10093415e2fdf7.scope. Jul 2 07:50:47.151896 systemd[1]: Started cri-containerd-4780a7a7d23533f6744b286b7702211b353285ec76953f5d21d1b1a01de61678.scope. Jul 2 07:50:47.163770 systemd-resolved[1136]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:50:47.165865 systemd-resolved[1136]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:50:47.194717 env[1204]: time="2024-07-02T07:50:47.194669351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vk2wn,Uid:55e63c16-2371-4da1-8d14-e178148dbc76,Namespace:kube-system,Attempt:0,} returns sandbox id \"4780a7a7d23533f6744b286b7702211b353285ec76953f5d21d1b1a01de61678\"" Jul 2 07:50:47.197030 kubelet[2005]: E0702 07:50:47.196989 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:47.198366 env[1204]: time="2024-07-02T07:50:47.198322110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-88pdw,Uid:5cbf2828-71ed-40ed-9ef4-35fc1f365c5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"843dd3eb01f6e910a7eab5db20fab73c733e362121d8abb99d10093415e2fdf7\"" Jul 2 07:50:47.200197 env[1204]: time="2024-07-02T07:50:47.200166384Z" level=info msg="CreateContainer within sandbox \"4780a7a7d23533f6744b286b7702211b353285ec76953f5d21d1b1a01de61678\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 07:50:47.200849 kubelet[2005]: E0702 07:50:47.200694 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:47.202470 env[1204]: time="2024-07-02T07:50:47.202419695Z" level=info msg="CreateContainer within sandbox \"843dd3eb01f6e910a7eab5db20fab73c733e362121d8abb99d10093415e2fdf7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 07:50:47.227971 env[1204]: time="2024-07-02T07:50:47.227904962Z" level=info msg="CreateContainer within sandbox \"843dd3eb01f6e910a7eab5db20fab73c733e362121d8abb99d10093415e2fdf7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cbfa0aa3d40ecda44bc65a7589b8c53beb1e247c7b9ce97013c038c7e29d645c\"" Jul 2 07:50:47.228511 env[1204]: time="2024-07-02T07:50:47.228456668Z" level=info msg="StartContainer for \"cbfa0aa3d40ecda44bc65a7589b8c53beb1e247c7b9ce97013c038c7e29d645c\"" Jul 2 07:50:47.232841 env[1204]: time="2024-07-02T07:50:47.232747997Z" level=info msg="CreateContainer within sandbox \"4780a7a7d23533f6744b286b7702211b353285ec76953f5d21d1b1a01de61678\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c350e88517ac255a13517fb3262fab32512afe307625d40d4ef5edd04bfd4a03\"" Jul 2 07:50:47.234332 env[1204]: 
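Every containerd entry in this log is logfmt: whitespace-separated key=value pairs whose values may be double-quoted with escaped quotes inside. A small parser that copes with the lines shown here; anything serious should use a proper logfmt library instead.

package main

import (
	"fmt"
	"strings"
)

// parseLogfmt extracts key=value pairs from one containerd log line,
// honouring double-quoted values with \" escapes.
func parseLogfmt(line string) map[string]string {
	out := map[string]string{}
	for i := 0; i < len(line); {
		eq := strings.IndexByte(line[i:], '=')
		if eq < 0 {
			break
		}
		key := strings.TrimSpace(line[i : i+eq])
		i += eq + 1
		var val string
		if i < len(line) && line[i] == '"' {
			j := i + 1
			for j < len(line) && !(line[j] == '"' && line[j-1] != '\\') {
				j++
			}
			val = strings.ReplaceAll(line[i+1:j], `\"`, `"`)
			i = j + 1
		} else {
			j := strings.IndexByte(line[i:], ' ')
			if j < 0 {
				j = len(line) - i
			}
			val = line[i : i+j]
			i += j
		}
		out[key] = val
	}
	return out
}

func main() {
	m := parseLogfmt(`time="2024-07-02T07:50:47.125238170Z" level=info msg="starting signal loop" namespace=k8s.io pid=3315 runtime=io.containerd.runc.v2`)
	fmt.Println(m["msg"], m["pid"], m["runtime"])
}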
time="2024-07-02T07:50:47.233546435Z" level=info msg="StartContainer for \"c350e88517ac255a13517fb3262fab32512afe307625d40d4ef5edd04bfd4a03\"" Jul 2 07:50:47.242511 systemd[1]: Started cri-containerd-cbfa0aa3d40ecda44bc65a7589b8c53beb1e247c7b9ce97013c038c7e29d645c.scope. Jul 2 07:50:47.256418 systemd[1]: Started cri-containerd-c350e88517ac255a13517fb3262fab32512afe307625d40d4ef5edd04bfd4a03.scope. Jul 2 07:50:47.391329 env[1204]: time="2024-07-02T07:50:47.391197028Z" level=info msg="StartContainer for \"cbfa0aa3d40ecda44bc65a7589b8c53beb1e247c7b9ce97013c038c7e29d645c\" returns successfully" Jul 2 07:50:47.466871 env[1204]: time="2024-07-02T07:50:47.466784589Z" level=info msg="StartContainer for \"c350e88517ac255a13517fb3262fab32512afe307625d40d4ef5edd04bfd4a03\" returns successfully" Jul 2 07:50:47.560216 kubelet[2005]: E0702 07:50:47.560178 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:47.562845 kubelet[2005]: E0702 07:50:47.562818 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:47.657465 kubelet[2005]: I0702 07:50:47.657119 2005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-88pdw" podStartSLOduration=28.657069623 podStartE2EDuration="28.657069623s" podCreationTimestamp="2024-07-02 07:50:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:50:47.656281834 +0000 UTC m=+43.406626524" watchObservedRunningTime="2024-07-02 07:50:47.657069623 +0000 UTC m=+43.407414304" Jul 2 07:50:47.657465 kubelet[2005]: I0702 07:50:47.657266 2005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-vk2wn" podStartSLOduration=28.657235605 podStartE2EDuration="28.657235605s" podCreationTimestamp="2024-07-02 07:50:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:50:47.570334258 +0000 UTC m=+43.320678938" watchObservedRunningTime="2024-07-02 07:50:47.657235605 +0000 UTC m=+43.407580305" Jul 2 07:50:48.564505 kubelet[2005]: E0702 07:50:48.564464 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:48.564871 kubelet[2005]: E0702 07:50:48.564638 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:48.884785 kubelet[2005]: I0702 07:50:48.884656 2005 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 07:50:48.885423 kubelet[2005]: E0702 07:50:48.885407 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:49.566027 kubelet[2005]: E0702 07:50:49.566001 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:49.566363 kubelet[2005]: E0702 07:50:49.566116 2005 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:49.566363 kubelet[2005]: E0702 07:50:49.566136 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:50:50.594957 systemd[1]: Started sshd@12-10.0.0.99:22-10.0.0.1:40092.service. Jul 2 07:50:50.627461 sshd[3467]: Accepted publickey for core from 10.0.0.1 port 40092 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:50:50.629020 sshd[3467]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:50.633069 systemd-logind[1187]: New session 13 of user core. Jul 2 07:50:50.633938 systemd[1]: Started session-13.scope. Jul 2 07:50:50.748303 sshd[3467]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:50.751270 systemd[1]: sshd@12-10.0.0.99:22-10.0.0.1:40092.service: Deactivated successfully. Jul 2 07:50:50.752097 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 07:50:50.752878 systemd-logind[1187]: Session 13 logged out. Waiting for processes to exit. Jul 2 07:50:50.753536 systemd-logind[1187]: Removed session 13. Jul 2 07:50:55.752385 systemd[1]: Started sshd@13-10.0.0.99:22-10.0.0.1:54076.service. Jul 2 07:50:55.782936 sshd[3480]: Accepted publickey for core from 10.0.0.1 port 54076 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:50:55.784129 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:55.787273 systemd-logind[1187]: New session 14 of user core. Jul 2 07:50:55.788220 systemd[1]: Started session-14.scope. Jul 2 07:50:55.889419 sshd[3480]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:55.892476 systemd[1]: sshd@13-10.0.0.99:22-10.0.0.1:54076.service: Deactivated successfully. Jul 2 07:50:55.893105 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 07:50:55.893661 systemd-logind[1187]: Session 14 logged out. Waiting for processes to exit. Jul 2 07:50:55.894865 systemd[1]: Started sshd@14-10.0.0.99:22-10.0.0.1:54084.service. Jul 2 07:50:55.895699 systemd-logind[1187]: Removed session 14. Jul 2 07:50:55.924523 sshd[3493]: Accepted publickey for core from 10.0.0.1 port 54084 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:50:55.925559 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:55.928770 systemd-logind[1187]: New session 15 of user core. Jul 2 07:50:55.929608 systemd[1]: Started session-15.scope. Jul 2 07:50:56.086128 sshd[3493]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:56.088877 systemd[1]: sshd@14-10.0.0.99:22-10.0.0.1:54084.service: Deactivated successfully. Jul 2 07:50:56.089461 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 07:50:56.090001 systemd-logind[1187]: Session 15 logged out. Waiting for processes to exit. Jul 2 07:50:56.091030 systemd[1]: Started sshd@15-10.0.0.99:22-10.0.0.1:54086.service. Jul 2 07:50:56.091804 systemd-logind[1187]: Removed session 15. Jul 2 07:50:56.123876 sshd[3504]: Accepted publickey for core from 10.0.0.1 port 54086 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:50:56.124992 sshd[3504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:56.128270 systemd-logind[1187]: New session 16 of user core. 
Jul 2 07:50:56.128990 systemd[1]: Started session-16.scope. Jul 2 07:50:57.462637 sshd[3504]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:57.465607 systemd[1]: sshd@15-10.0.0.99:22-10.0.0.1:54086.service: Deactivated successfully. Jul 2 07:50:57.466263 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 07:50:57.467786 systemd-logind[1187]: Session 16 logged out. Waiting for processes to exit. Jul 2 07:50:57.468999 systemd[1]: Started sshd@16-10.0.0.99:22-10.0.0.1:54102.service. Jul 2 07:50:57.471543 systemd-logind[1187]: Removed session 16. Jul 2 07:50:57.506330 sshd[3525]: Accepted publickey for core from 10.0.0.1 port 54102 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:50:57.507808 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:57.511288 systemd-logind[1187]: New session 17 of user core. Jul 2 07:50:57.512108 systemd[1]: Started session-17.scope. Jul 2 07:50:57.725992 sshd[3525]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:57.728766 systemd[1]: sshd@16-10.0.0.99:22-10.0.0.1:54102.service: Deactivated successfully. Jul 2 07:50:57.729333 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 07:50:57.730035 systemd-logind[1187]: Session 17 logged out. Waiting for processes to exit. Jul 2 07:50:57.731119 systemd[1]: Started sshd@17-10.0.0.99:22-10.0.0.1:54118.service. Jul 2 07:50:57.732024 systemd-logind[1187]: Removed session 17. Jul 2 07:50:57.765028 sshd[3536]: Accepted publickey for core from 10.0.0.1 port 54118 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:50:57.766438 sshd[3536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:57.770040 systemd-logind[1187]: New session 18 of user core. Jul 2 07:50:57.770798 systemd[1]: Started session-18.scope. Jul 2 07:50:57.878655 sshd[3536]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:57.880805 systemd[1]: sshd@17-10.0.0.99:22-10.0.0.1:54118.service: Deactivated successfully. Jul 2 07:50:57.881526 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 07:50:57.882210 systemd-logind[1187]: Session 18 logged out. Waiting for processes to exit. Jul 2 07:50:57.883039 systemd-logind[1187]: Removed session 18. Jul 2 07:51:02.883724 systemd[1]: Started sshd@18-10.0.0.99:22-10.0.0.1:36354.service. Jul 2 07:51:02.915376 sshd[3550]: Accepted publickey for core from 10.0.0.1 port 36354 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:51:02.916830 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:51:02.920739 systemd-logind[1187]: New session 19 of user core. Jul 2 07:51:02.921867 systemd[1]: Started session-19.scope. Jul 2 07:51:03.031400 sshd[3550]: pam_unix(sshd:session): session closed for user core Jul 2 07:51:03.034397 systemd[1]: sshd@18-10.0.0.99:22-10.0.0.1:36354.service: Deactivated successfully. Jul 2 07:51:03.035108 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 07:51:03.035602 systemd-logind[1187]: Session 19 logged out. Waiting for processes to exit. Jul 2 07:51:03.036499 systemd-logind[1187]: Removed session 19. Jul 2 07:51:08.035994 systemd[1]: Started sshd@19-10.0.0.99:22-10.0.0.1:36368.service. 
Jul 2 07:51:08.067172 sshd[3568]: Accepted publickey for core from 10.0.0.1 port 36368 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:51:08.068361 sshd[3568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:51:08.071708 systemd-logind[1187]: New session 20 of user core. Jul 2 07:51:08.072668 systemd[1]: Started session-20.scope. Jul 2 07:51:08.170636 sshd[3568]: pam_unix(sshd:session): session closed for user core Jul 2 07:51:08.172698 systemd[1]: sshd@19-10.0.0.99:22-10.0.0.1:36368.service: Deactivated successfully. Jul 2 07:51:08.173369 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 07:51:08.174125 systemd-logind[1187]: Session 20 logged out. Waiting for processes to exit. Jul 2 07:51:08.174882 systemd-logind[1187]: Removed session 20. Jul 2 07:51:13.175716 systemd[1]: Started sshd@20-10.0.0.99:22-10.0.0.1:43650.service. Jul 2 07:51:13.205388 sshd[3581]: Accepted publickey for core from 10.0.0.1 port 43650 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:51:13.206380 sshd[3581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:51:13.209900 systemd-logind[1187]: New session 21 of user core. Jul 2 07:51:13.210980 systemd[1]: Started session-21.scope. Jul 2 07:51:13.310725 sshd[3581]: pam_unix(sshd:session): session closed for user core Jul 2 07:51:13.313194 systemd[1]: sshd@20-10.0.0.99:22-10.0.0.1:43650.service: Deactivated successfully. Jul 2 07:51:13.314063 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 07:51:13.314762 systemd-logind[1187]: Session 21 logged out. Waiting for processes to exit. Jul 2 07:51:13.315548 systemd-logind[1187]: Removed session 21. Jul 2 07:51:18.314965 systemd[1]: Started sshd@21-10.0.0.99:22-10.0.0.1:43658.service. Jul 2 07:51:18.346570 sshd[3594]: Accepted publickey for core from 10.0.0.1 port 43658 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:51:18.347505 sshd[3594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:51:18.350540 systemd-logind[1187]: New session 22 of user core. Jul 2 07:51:18.351533 systemd[1]: Started session-22.scope. Jul 2 07:51:18.368977 kubelet[2005]: E0702 07:51:18.368921 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:51:18.453255 sshd[3594]: pam_unix(sshd:session): session closed for user core Jul 2 07:51:18.456727 systemd[1]: Started sshd@22-10.0.0.99:22-10.0.0.1:43666.service. Jul 2 07:51:18.457111 systemd[1]: sshd@21-10.0.0.99:22-10.0.0.1:43658.service: Deactivated successfully. Jul 2 07:51:18.457659 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 07:51:18.458236 systemd-logind[1187]: Session 22 logged out. Waiting for processes to exit. Jul 2 07:51:18.459140 systemd-logind[1187]: Removed session 22. Jul 2 07:51:18.488038 sshd[3606]: Accepted publickey for core from 10.0.0.1 port 43666 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:51:18.489340 sshd[3606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:51:18.492558 systemd-logind[1187]: New session 23 of user core. Jul 2 07:51:18.493328 systemd[1]: Started session-23.scope. 
Jul 2 07:51:19.878151 env[1204]: time="2024-07-02T07:51:19.878106938Z" level=info msg="StopContainer for \"2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f\" with timeout 30 (s)" Jul 2 07:51:19.879043 env[1204]: time="2024-07-02T07:51:19.879020188Z" level=info msg="Stop container \"2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f\" with signal terminated" Jul 2 07:51:19.889211 systemd[1]: cri-containerd-2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f.scope: Deactivated successfully. Jul 2 07:51:19.900061 env[1204]: time="2024-07-02T07:51:19.899988536Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:51:19.906205 env[1204]: time="2024-07-02T07:51:19.906170076Z" level=info msg="StopContainer for \"b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c\" with timeout 2 (s)" Jul 2 07:51:19.906411 env[1204]: time="2024-07-02T07:51:19.906385309Z" level=info msg="Stop container \"b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c\" with signal terminated" Jul 2 07:51:19.912691 systemd-networkd[1016]: lxc_health: Link DOWN Jul 2 07:51:19.912703 systemd-networkd[1016]: lxc_health: Lost carrier Jul 2 07:51:19.913248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f-rootfs.mount: Deactivated successfully. Jul 2 07:51:19.923513 env[1204]: time="2024-07-02T07:51:19.923465742Z" level=info msg="shim disconnected" id=2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f Jul 2 07:51:19.923513 env[1204]: time="2024-07-02T07:51:19.923510138Z" level=warning msg="cleaning up after shim disconnected" id=2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f namespace=k8s.io Jul 2 07:51:19.923513 env[1204]: time="2024-07-02T07:51:19.923518603Z" level=info msg="cleaning up dead shim" Jul 2 07:51:19.930528 env[1204]: time="2024-07-02T07:51:19.930477264Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3664 runtime=io.containerd.runc.v2\n" Jul 2 07:51:19.934285 env[1204]: time="2024-07-02T07:51:19.934237051Z" level=info msg="StopContainer for \"2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f\" returns successfully" Jul 2 07:51:19.935070 env[1204]: time="2024-07-02T07:51:19.934986648Z" level=info msg="StopPodSandbox for \"b9de22c8d0f81ea7f9bf9a18fe2a7f96d592aed940968e92202d0865b10eaa30\"" Jul 2 07:51:19.935070 env[1204]: time="2024-07-02T07:51:19.935072022Z" level=info msg="Container to stop \"2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:51:19.936992 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9de22c8d0f81ea7f9bf9a18fe2a7f96d592aed940968e92202d0865b10eaa30-shm.mount: Deactivated successfully. Jul 2 07:51:19.938960 systemd[1]: cri-containerd-b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c.scope: Deactivated successfully. Jul 2 07:51:19.939236 systemd[1]: cri-containerd-b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c.scope: Consumed 6.398s CPU time. 
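The "failed to reload cni configuration" error above is the flip side of a directory watch: when 05-cilium.conf is removed from /etc/cni/net.d during teardown, the reload runs and finds no network config left to load. A minimal sketch of such a watch loop using fsnotify; containerd's actual CRI plugin does considerably more than this (validation, status propagation), so treat this purely as an illustration of the mechanism.

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Remove) != 0 {
				// Re-list *.conf / *.conflist here; an empty directory is
				// what produces "no network config found in /etc/cni/net.d".
				log.Printf("cni config change %s: reloading", ev)
			}
		case err := <-w.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}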
Jul 2 07:51:19.951062 systemd[1]: cri-containerd-b9de22c8d0f81ea7f9bf9a18fe2a7f96d592aed940968e92202d0865b10eaa30.scope: Deactivated successfully. Jul 2 07:51:19.959739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c-rootfs.mount: Deactivated successfully. Jul 2 07:51:19.965858 env[1204]: time="2024-07-02T07:51:19.965809186Z" level=info msg="shim disconnected" id=b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c Jul 2 07:51:19.966265 env[1204]: time="2024-07-02T07:51:19.966243729Z" level=warning msg="cleaning up after shim disconnected" id=b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c namespace=k8s.io Jul 2 07:51:19.966367 env[1204]: time="2024-07-02T07:51:19.966344894Z" level=info msg="cleaning up dead shim" Jul 2 07:51:19.970370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9de22c8d0f81ea7f9bf9a18fe2a7f96d592aed940968e92202d0865b10eaa30-rootfs.mount: Deactivated successfully. Jul 2 07:51:19.971568 env[1204]: time="2024-07-02T07:51:19.971510586Z" level=info msg="shim disconnected" id=b9de22c8d0f81ea7f9bf9a18fe2a7f96d592aed940968e92202d0865b10eaa30 Jul 2 07:51:19.971568 env[1204]: time="2024-07-02T07:51:19.971556153Z" level=warning msg="cleaning up after shim disconnected" id=b9de22c8d0f81ea7f9bf9a18fe2a7f96d592aed940968e92202d0865b10eaa30 namespace=k8s.io Jul 2 07:51:19.971568 env[1204]: time="2024-07-02T07:51:19.971564330Z" level=info msg="cleaning up dead shim" Jul 2 07:51:19.975006 env[1204]: time="2024-07-02T07:51:19.974946633Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3710 runtime=io.containerd.runc.v2\n" Jul 2 07:51:19.977696 env[1204]: time="2024-07-02T07:51:19.977658121Z" level=info msg="StopContainer for \"b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c\" returns successfully" Jul 2 07:51:19.978376 env[1204]: time="2024-07-02T07:51:19.978330009Z" level=info msg="StopPodSandbox for \"39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e\"" Jul 2 07:51:19.978436 env[1204]: time="2024-07-02T07:51:19.978402258Z" level=info msg="Container to stop \"2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:51:19.978436 env[1204]: time="2024-07-02T07:51:19.978415073Z" level=info msg="Container to stop \"b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:51:19.978436 env[1204]: time="2024-07-02T07:51:19.978425312Z" level=info msg="Container to stop \"03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:51:19.978436 env[1204]: time="2024-07-02T07:51:19.978434320Z" level=info msg="Container to stop \"986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:51:19.978622 env[1204]: time="2024-07-02T07:51:19.978443056Z" level=info msg="Container to stop \"ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:51:19.979242 env[1204]: time="2024-07-02T07:51:19.979216278Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:19Z\" level=info msg=\"starting signal loop\" 
namespace=k8s.io pid=3718 runtime=io.containerd.runc.v2\n" Jul 2 07:51:19.979769 env[1204]: time="2024-07-02T07:51:19.979742687Z" level=info msg="TearDown network for sandbox \"b9de22c8d0f81ea7f9bf9a18fe2a7f96d592aed940968e92202d0865b10eaa30\" successfully" Jul 2 07:51:19.979877 env[1204]: time="2024-07-02T07:51:19.979849232Z" level=info msg="StopPodSandbox for \"b9de22c8d0f81ea7f9bf9a18fe2a7f96d592aed940968e92202d0865b10eaa30\" returns successfully" Jul 2 07:51:19.983345 systemd[1]: cri-containerd-39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e.scope: Deactivated successfully. Jul 2 07:51:20.011030 env[1204]: time="2024-07-02T07:51:20.010977003Z" level=info msg="shim disconnected" id=39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e Jul 2 07:51:20.011345 env[1204]: time="2024-07-02T07:51:20.011304071Z" level=warning msg="cleaning up after shim disconnected" id=39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e namespace=k8s.io Jul 2 07:51:20.011345 env[1204]: time="2024-07-02T07:51:20.011324379Z" level=info msg="cleaning up dead shim" Jul 2 07:51:20.021122 env[1204]: time="2024-07-02T07:51:20.021071865Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3753 runtime=io.containerd.runc.v2\n" Jul 2 07:51:20.021404 env[1204]: time="2024-07-02T07:51:20.021378312Z" level=info msg="TearDown network for sandbox \"39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e\" successfully" Jul 2 07:51:20.021404 env[1204]: time="2024-07-02T07:51:20.021400945Z" level=info msg="StopPodSandbox for \"39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e\" returns successfully" Jul 2 07:51:20.144352 kubelet[2005]: I0702 07:51:20.144215 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-cni-path\") pod \"9d30ca27-38f5-45f1-beb5-3b5f55148966\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " Jul 2 07:51:20.144352 kubelet[2005]: I0702 07:51:20.144318 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-cilium-cgroup\") pod \"9d30ca27-38f5-45f1-beb5-3b5f55148966\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " Jul 2 07:51:20.144866 kubelet[2005]: I0702 07:51:20.144367 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-cilium-run\") pod \"9d30ca27-38f5-45f1-beb5-3b5f55148966\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " Jul 2 07:51:20.144866 kubelet[2005]: I0702 07:51:20.144408 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d30ca27-38f5-45f1-beb5-3b5f55148966-hubble-tls\") pod \"9d30ca27-38f5-45f1-beb5-3b5f55148966\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " Jul 2 07:51:20.144866 kubelet[2005]: I0702 07:51:20.144436 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-xtables-lock\") pod \"9d30ca27-38f5-45f1-beb5-3b5f55148966\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " Jul 2 07:51:20.144866 kubelet[2005]: I0702 07:51:20.144457 2005 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-lib-modules\") pod \"9d30ca27-38f5-45f1-beb5-3b5f55148966\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " Jul 2 07:51:20.144866 kubelet[2005]: I0702 07:51:20.144475 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-host-proc-sys-kernel\") pod \"9d30ca27-38f5-45f1-beb5-3b5f55148966\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " Jul 2 07:51:20.144866 kubelet[2005]: I0702 07:51:20.144496 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b9636fb-987b-44c2-b0b1-e48c5bf00423-cilium-config-path\") pod \"3b9636fb-987b-44c2-b0b1-e48c5bf00423\" (UID: \"3b9636fb-987b-44c2-b0b1-e48c5bf00423\") " Jul 2 07:51:20.145023 kubelet[2005]: I0702 07:51:20.144514 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5zpj\" (UniqueName: \"kubernetes.io/projected/9d30ca27-38f5-45f1-beb5-3b5f55148966-kube-api-access-d5zpj\") pod \"9d30ca27-38f5-45f1-beb5-3b5f55148966\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " Jul 2 07:51:20.145023 kubelet[2005]: I0702 07:51:20.144542 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-hostproc\") pod \"9d30ca27-38f5-45f1-beb5-3b5f55148966\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " Jul 2 07:51:20.145023 kubelet[2005]: I0702 07:51:20.144558 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-bpf-maps\") pod \"9d30ca27-38f5-45f1-beb5-3b5f55148966\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " Jul 2 07:51:20.145023 kubelet[2005]: I0702 07:51:20.144593 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d30ca27-38f5-45f1-beb5-3b5f55148966-clustermesh-secrets\") pod \"9d30ca27-38f5-45f1-beb5-3b5f55148966\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " Jul 2 07:51:20.145023 kubelet[2005]: I0702 07:51:20.144621 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d30ca27-38f5-45f1-beb5-3b5f55148966-cilium-config-path\") pod \"9d30ca27-38f5-45f1-beb5-3b5f55148966\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " Jul 2 07:51:20.145023 kubelet[2005]: I0702 07:51:20.144640 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-host-proc-sys-net\") pod \"9d30ca27-38f5-45f1-beb5-3b5f55148966\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " Jul 2 07:51:20.145233 kubelet[2005]: I0702 07:51:20.144660 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-etc-cni-netd\") pod \"9d30ca27-38f5-45f1-beb5-3b5f55148966\" (UID: \"9d30ca27-38f5-45f1-beb5-3b5f55148966\") " Jul 2 07:51:20.145233 kubelet[2005]: I0702 07:51:20.144682 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-fhlql\" (UniqueName: \"kubernetes.io/projected/3b9636fb-987b-44c2-b0b1-e48c5bf00423-kube-api-access-fhlql\") pod \"3b9636fb-987b-44c2-b0b1-e48c5bf00423\" (UID: \"3b9636fb-987b-44c2-b0b1-e48c5bf00423\") " Jul 2 07:51:20.147632 kubelet[2005]: I0702 07:51:20.146806 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-hostproc" (OuterVolumeSpecName: "hostproc") pod "9d30ca27-38f5-45f1-beb5-3b5f55148966" (UID: "9d30ca27-38f5-45f1-beb5-3b5f55148966"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:20.147632 kubelet[2005]: I0702 07:51:20.146855 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9d30ca27-38f5-45f1-beb5-3b5f55148966" (UID: "9d30ca27-38f5-45f1-beb5-3b5f55148966"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:20.147632 kubelet[2005]: I0702 07:51:20.146874 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9d30ca27-38f5-45f1-beb5-3b5f55148966" (UID: "9d30ca27-38f5-45f1-beb5-3b5f55148966"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:20.148226 kubelet[2005]: I0702 07:51:20.144226 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-cni-path" (OuterVolumeSpecName: "cni-path") pod "9d30ca27-38f5-45f1-beb5-3b5f55148966" (UID: "9d30ca27-38f5-45f1-beb5-3b5f55148966"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:20.148272 kubelet[2005]: I0702 07:51:20.148240 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9d30ca27-38f5-45f1-beb5-3b5f55148966" (UID: "9d30ca27-38f5-45f1-beb5-3b5f55148966"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:20.148307 kubelet[2005]: I0702 07:51:20.148274 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9d30ca27-38f5-45f1-beb5-3b5f55148966" (UID: "9d30ca27-38f5-45f1-beb5-3b5f55148966"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:20.148891 kubelet[2005]: I0702 07:51:20.148858 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9d30ca27-38f5-45f1-beb5-3b5f55148966" (UID: "9d30ca27-38f5-45f1-beb5-3b5f55148966"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:20.148944 kubelet[2005]: I0702 07:51:20.148905 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9d30ca27-38f5-45f1-beb5-3b5f55148966" (UID: "9d30ca27-38f5-45f1-beb5-3b5f55148966"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:20.148944 kubelet[2005]: I0702 07:51:20.148934 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9d30ca27-38f5-45f1-beb5-3b5f55148966" (UID: "9d30ca27-38f5-45f1-beb5-3b5f55148966"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:20.149049 kubelet[2005]: I0702 07:51:20.149017 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d30ca27-38f5-45f1-beb5-3b5f55148966-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9d30ca27-38f5-45f1-beb5-3b5f55148966" (UID: "9d30ca27-38f5-45f1-beb5-3b5f55148966"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:51:20.149102 kubelet[2005]: I0702 07:51:20.149071 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9d30ca27-38f5-45f1-beb5-3b5f55148966" (UID: "9d30ca27-38f5-45f1-beb5-3b5f55148966"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:20.149189 kubelet[2005]: I0702 07:51:20.149142 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b9636fb-987b-44c2-b0b1-e48c5bf00423-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3b9636fb-987b-44c2-b0b1-e48c5bf00423" (UID: "3b9636fb-987b-44c2-b0b1-e48c5bf00423"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:51:20.149470 kubelet[2005]: I0702 07:51:20.149443 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d30ca27-38f5-45f1-beb5-3b5f55148966-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9d30ca27-38f5-45f1-beb5-3b5f55148966" (UID: "9d30ca27-38f5-45f1-beb5-3b5f55148966"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:51:20.149730 kubelet[2005]: I0702 07:51:20.149699 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d30ca27-38f5-45f1-beb5-3b5f55148966-kube-api-access-d5zpj" (OuterVolumeSpecName: "kube-api-access-d5zpj") pod "9d30ca27-38f5-45f1-beb5-3b5f55148966" (UID: "9d30ca27-38f5-45f1-beb5-3b5f55148966"). InnerVolumeSpecName "kube-api-access-d5zpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:51:20.149857 kubelet[2005]: I0702 07:51:20.149794 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d30ca27-38f5-45f1-beb5-3b5f55148966-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9d30ca27-38f5-45f1-beb5-3b5f55148966" (UID: "9d30ca27-38f5-45f1-beb5-3b5f55148966"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:51:20.150052 kubelet[2005]: I0702 07:51:20.150030 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b9636fb-987b-44c2-b0b1-e48c5bf00423-kube-api-access-fhlql" (OuterVolumeSpecName: "kube-api-access-fhlql") pod "3b9636fb-987b-44c2-b0b1-e48c5bf00423" (UID: "3b9636fb-987b-44c2-b0b1-e48c5bf00423"). InnerVolumeSpecName "kube-api-access-fhlql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:51:20.245403 kubelet[2005]: I0702 07:51:20.245329 2005 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:20.245403 kubelet[2005]: I0702 07:51:20.245378 2005 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d30ca27-38f5-45f1-beb5-3b5f55148966-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:20.245403 kubelet[2005]: I0702 07:51:20.245394 2005 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:20.245403 kubelet[2005]: I0702 07:51:20.245406 2005 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:20.245665 kubelet[2005]: I0702 07:51:20.245420 2005 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:20.245665 kubelet[2005]: I0702 07:51:20.245433 2005 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b9636fb-987b-44c2-b0b1-e48c5bf00423-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:20.245665 kubelet[2005]: I0702 07:51:20.245445 2005 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-d5zpj\" (UniqueName: \"kubernetes.io/projected/9d30ca27-38f5-45f1-beb5-3b5f55148966-kube-api-access-d5zpj\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:20.245665 kubelet[2005]: I0702 07:51:20.245458 2005 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:20.245665 kubelet[2005]: I0702 07:51:20.245472 2005 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:20.245665 kubelet[2005]: I0702 07:51:20.245484 2005 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d30ca27-38f5-45f1-beb5-3b5f55148966-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:20.245665 kubelet[2005]: I0702 07:51:20.245497 2005 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d30ca27-38f5-45f1-beb5-3b5f55148966-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:20.245665 kubelet[2005]: I0702 07:51:20.245511 2005 
reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:20.245838 kubelet[2005]: I0702 07:51:20.245524 2005 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:20.245838 kubelet[2005]: I0702 07:51:20.245535 2005 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fhlql\" (UniqueName: \"kubernetes.io/projected/3b9636fb-987b-44c2-b0b1-e48c5bf00423-kube-api-access-fhlql\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:20.245838 kubelet[2005]: I0702 07:51:20.245545 2005 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:20.245838 kubelet[2005]: I0702 07:51:20.245558 2005 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d30ca27-38f5-45f1-beb5-3b5f55148966-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:20.374393 systemd[1]: Removed slice kubepods-burstable-pod9d30ca27_38f5_45f1_beb5_3b5f55148966.slice. Jul 2 07:51:20.374496 systemd[1]: kubepods-burstable-pod9d30ca27_38f5_45f1_beb5_3b5f55148966.slice: Consumed 6.491s CPU time. Jul 2 07:51:20.375545 systemd[1]: Removed slice kubepods-besteffort-pod3b9636fb_987b_44c2_b0b1_e48c5bf00423.slice. Jul 2 07:51:20.622491 kubelet[2005]: I0702 07:51:20.622450 2005 scope.go:117] "RemoveContainer" containerID="2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f" Jul 2 07:51:20.624066 env[1204]: time="2024-07-02T07:51:20.623717805Z" level=info msg="RemoveContainer for \"2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f\"" Jul 2 07:51:20.629729 env[1204]: time="2024-07-02T07:51:20.629686702Z" level=info msg="RemoveContainer for \"2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f\" returns successfully" Jul 2 07:51:20.629901 kubelet[2005]: I0702 07:51:20.629878 2005 scope.go:117] "RemoveContainer" containerID="2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f" Jul 2 07:51:20.630193 env[1204]: time="2024-07-02T07:51:20.630102007Z" level=error msg="ContainerStatus for \"2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f\": not found" Jul 2 07:51:20.630450 kubelet[2005]: E0702 07:51:20.630429 2005 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f\": not found" containerID="2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f" Jul 2 07:51:20.630530 kubelet[2005]: I0702 07:51:20.630515 2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f"} err="failed to get container status \"2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"2b13d98c15a2aada60650c100815762a1901608db18ed11d57834846d30e773f\": not found" Jul 2 07:51:20.630530 kubelet[2005]: I0702 07:51:20.630524 2005 scope.go:117] "RemoveContainer" containerID="b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c" Jul 2 07:51:20.631973 env[1204]: time="2024-07-02T07:51:20.631938036Z" level=info msg="RemoveContainer for \"b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c\"" Jul 2 07:51:20.635368 env[1204]: time="2024-07-02T07:51:20.635327530Z" level=info msg="RemoveContainer for \"b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c\" returns successfully" Jul 2 07:51:20.635494 kubelet[2005]: I0702 07:51:20.635477 2005 scope.go:117] "RemoveContainer" containerID="ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6" Jul 2 07:51:20.636441 env[1204]: time="2024-07-02T07:51:20.636412408Z" level=info msg="RemoveContainer for \"ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6\"" Jul 2 07:51:20.639649 env[1204]: time="2024-07-02T07:51:20.639615335Z" level=info msg="RemoveContainer for \"ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6\" returns successfully" Jul 2 07:51:20.639771 kubelet[2005]: I0702 07:51:20.639748 2005 scope.go:117] "RemoveContainer" containerID="986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7" Jul 2 07:51:20.641018 env[1204]: time="2024-07-02T07:51:20.640989908Z" level=info msg="RemoveContainer for \"986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7\"" Jul 2 07:51:20.643887 env[1204]: time="2024-07-02T07:51:20.643854486Z" level=info msg="RemoveContainer for \"986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7\" returns successfully" Jul 2 07:51:20.644030 kubelet[2005]: I0702 07:51:20.644002 2005 scope.go:117] "RemoveContainer" containerID="2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987" Jul 2 07:51:20.644826 env[1204]: time="2024-07-02T07:51:20.644804877Z" level=info msg="RemoveContainer for \"2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987\"" Jul 2 07:51:20.647399 env[1204]: time="2024-07-02T07:51:20.647369541Z" level=info msg="RemoveContainer for \"2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987\" returns successfully" Jul 2 07:51:20.647520 kubelet[2005]: I0702 07:51:20.647494 2005 scope.go:117] "RemoveContainer" containerID="03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508" Jul 2 07:51:20.648246 env[1204]: time="2024-07-02T07:51:20.648214190Z" level=info msg="RemoveContainer for \"03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508\"" Jul 2 07:51:20.650928 env[1204]: time="2024-07-02T07:51:20.650896790Z" level=info msg="RemoveContainer for \"03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508\" returns successfully" Jul 2 07:51:20.651041 kubelet[2005]: I0702 07:51:20.651016 2005 scope.go:117] "RemoveContainer" containerID="b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c" Jul 2 07:51:20.651226 env[1204]: time="2024-07-02T07:51:20.651167378Z" level=error msg="ContainerStatus for \"b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c\": not found" Jul 2 07:51:20.651354 kubelet[2005]: E0702 07:51:20.651314 2005 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when 
try to find container \"b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c\": not found" containerID="b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c" Jul 2 07:51:20.651414 kubelet[2005]: I0702 07:51:20.651365 2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c"} err="failed to get container status \"b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c\": not found" Jul 2 07:51:20.651414 kubelet[2005]: I0702 07:51:20.651376 2005 scope.go:117] "RemoveContainer" containerID="ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6" Jul 2 07:51:20.651525 env[1204]: time="2024-07-02T07:51:20.651490457Z" level=error msg="ContainerStatus for \"ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6\": not found" Jul 2 07:51:20.651628 kubelet[2005]: E0702 07:51:20.651614 2005 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6\": not found" containerID="ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6" Jul 2 07:51:20.651658 kubelet[2005]: I0702 07:51:20.651638 2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6"} err="failed to get container status \"ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce213205449a88655f476c297ceb552f29228de24b313f4fa3c5517358de40f6\": not found" Jul 2 07:51:20.651658 kubelet[2005]: I0702 07:51:20.651650 2005 scope.go:117] "RemoveContainer" containerID="986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7" Jul 2 07:51:20.651852 env[1204]: time="2024-07-02T07:51:20.651799198Z" level=error msg="ContainerStatus for \"986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7\": not found" Jul 2 07:51:20.651950 kubelet[2005]: E0702 07:51:20.651933 2005 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7\": not found" containerID="986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7" Jul 2 07:51:20.652005 kubelet[2005]: I0702 07:51:20.651957 2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7"} err="failed to get container status \"986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"986ae99c45b6556aa2af5bd405fbe78ade2b1117e435a9f7a3c96dbc076289e7\": not found" Jul 2 07:51:20.652005 kubelet[2005]: I0702 07:51:20.651966 2005 
scope.go:117] "RemoveContainer" containerID="2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987" Jul 2 07:51:20.652173 env[1204]: time="2024-07-02T07:51:20.652122408Z" level=error msg="ContainerStatus for \"2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987\": not found" Jul 2 07:51:20.652258 kubelet[2005]: E0702 07:51:20.652243 2005 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987\": not found" containerID="2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987" Jul 2 07:51:20.652294 kubelet[2005]: I0702 07:51:20.652264 2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987"} err="failed to get container status \"2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d5342d8e2f9fb579504950823429d9e6ed5dcedc1b1f6321b09f8d797a0e987\": not found" Jul 2 07:51:20.652294 kubelet[2005]: I0702 07:51:20.652273 2005 scope.go:117] "RemoveContainer" containerID="03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508" Jul 2 07:51:20.652472 env[1204]: time="2024-07-02T07:51:20.652417563Z" level=error msg="ContainerStatus for \"03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508\": not found" Jul 2 07:51:20.652540 kubelet[2005]: E0702 07:51:20.652513 2005 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508\": not found" containerID="03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508" Jul 2 07:51:20.652540 kubelet[2005]: I0702 07:51:20.652532 2005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508"} err="failed to get container status \"03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508\": rpc error: code = NotFound desc = an error occurred when try to find container \"03382a20f7581c77891aff8763ac626f6cc676639a3e41af68600357c4e13508\": not found" Jul 2 07:51:20.883171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e-rootfs.mount: Deactivated successfully. Jul 2 07:51:20.883251 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-39775a1e6b5ec27b514d8c5e9b204a84e7651f3d2a137cf6f6fd002f6d7aed2e-shm.mount: Deactivated successfully. Jul 2 07:51:20.883299 systemd[1]: var-lib-kubelet-pods-9d30ca27\x2d38f5\x2d45f1\x2dbeb5\x2d3b5f55148966-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:51:20.883357 systemd[1]: var-lib-kubelet-pods-3b9636fb\x2d987b\x2d44c2\x2db0b1\x2de48c5bf00423-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfhlql.mount: Deactivated successfully. 
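
In the RemoveContainer / ContainerStatus entries above, the kubelet deletes the containers and then treats the runtime's NotFound responses for already-removed IDs as expected rather than fatal. Below is a hedged sketch of that tolerant cleanup against the CRI gRPC API; the endpoint, timeout, and wiring are assumptions, and the real kubelet code path differs.

```go
// cleanup.go - hedged sketch: remove a container via CRI and treat NotFound
// as "already gone", mirroring the kubelet entries above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func removeAndConfirm(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
	if _, err := rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: id}); err != nil &&
		status.Code(err) != codes.NotFound {
		return err
	}
	// Confirm it is gone; NotFound here is the expected outcome (compare the
	// "ContainerStatus from runtime service failed ... NotFound" lines above).
	_, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		return nil
	}
	if err != nil {
		return err
	}
	return fmt.Errorf("container %s still present after removal", id)
}

func main() {
	// Assumed CRI endpoint for containerd.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	// Container ID copied from the RemoveContainer entries above.
	id := "b8e460b6bdb8a1c4d5a4cb712eb33275f26ce9d064fe651267d7f20a046b5c2c"
	if err := removeAndConfirm(ctx, client, id); err != nil {
		log.Fatal(err)
	}
}
```
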
Jul 2 07:51:20.883414 systemd[1]: var-lib-kubelet-pods-9d30ca27\x2d38f5\x2d45f1\x2dbeb5\x2d3b5f55148966-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd5zpj.mount: Deactivated successfully. Jul 2 07:51:20.883460 systemd[1]: var-lib-kubelet-pods-9d30ca27\x2d38f5\x2d45f1\x2dbeb5\x2d3b5f55148966-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:51:21.845786 sshd[3606]: pam_unix(sshd:session): session closed for user core Jul 2 07:51:21.848990 systemd[1]: Started sshd@23-10.0.0.99:22-10.0.0.1:43680.service. Jul 2 07:51:21.849406 systemd[1]: sshd@22-10.0.0.99:22-10.0.0.1:43666.service: Deactivated successfully. Jul 2 07:51:21.849993 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 07:51:21.850551 systemd-logind[1187]: Session 23 logged out. Waiting for processes to exit. Jul 2 07:51:21.851548 systemd-logind[1187]: Removed session 23. Jul 2 07:51:21.883184 sshd[3772]: Accepted publickey for core from 10.0.0.1 port 43680 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:51:21.884464 sshd[3772]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:51:21.887737 systemd-logind[1187]: New session 24 of user core. Jul 2 07:51:21.888494 systemd[1]: Started session-24.scope. Jul 2 07:51:22.206253 sshd[3772]: pam_unix(sshd:session): session closed for user core Jul 2 07:51:22.208093 systemd[1]: Started sshd@24-10.0.0.99:22-10.0.0.1:43690.service. Jul 2 07:51:22.208907 kubelet[2005]: I0702 07:51:22.208873 2005 topology_manager.go:215] "Topology Admit Handler" podUID="3c884a9f-6ac1-4dca-8625-4aa9301bc1ea" podNamespace="kube-system" podName="cilium-k4sbm" Jul 2 07:51:22.209290 kubelet[2005]: E0702 07:51:22.209275 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d30ca27-38f5-45f1-beb5-3b5f55148966" containerName="apply-sysctl-overwrites" Jul 2 07:51:22.209398 kubelet[2005]: E0702 07:51:22.209350 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d30ca27-38f5-45f1-beb5-3b5f55148966" containerName="mount-bpf-fs" Jul 2 07:51:22.209472 kubelet[2005]: E0702 07:51:22.209458 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d30ca27-38f5-45f1-beb5-3b5f55148966" containerName="clean-cilium-state" Jul 2 07:51:22.209561 kubelet[2005]: E0702 07:51:22.209547 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3b9636fb-987b-44c2-b0b1-e48c5bf00423" containerName="cilium-operator" Jul 2 07:51:22.209660 kubelet[2005]: E0702 07:51:22.209646 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d30ca27-38f5-45f1-beb5-3b5f55148966" containerName="mount-cgroup" Jul 2 07:51:22.209732 kubelet[2005]: E0702 07:51:22.209717 2005 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d30ca27-38f5-45f1-beb5-3b5f55148966" containerName="cilium-agent" Jul 2 07:51:22.209820 kubelet[2005]: I0702 07:51:22.209805 2005 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9636fb-987b-44c2-b0b1-e48c5bf00423" containerName="cilium-operator" Jul 2 07:51:22.209894 kubelet[2005]: I0702 07:51:22.209880 2005 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d30ca27-38f5-45f1-beb5-3b5f55148966" containerName="cilium-agent" Jul 2 07:51:22.210526 systemd[1]: sshd@23-10.0.0.99:22-10.0.0.1:43680.service: Deactivated successfully. Jul 2 07:51:22.211284 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 07:51:22.211848 systemd-logind[1187]: Session 24 logged out. 
Waiting for processes to exit. Jul 2 07:51:22.213420 systemd-logind[1187]: Removed session 24. Jul 2 07:51:22.216085 systemd[1]: Created slice kubepods-burstable-pod3c884a9f_6ac1_4dca_8625_4aa9301bc1ea.slice. Jul 2 07:51:22.241876 sshd[3784]: Accepted publickey for core from 10.0.0.1 port 43690 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:51:22.243291 sshd[3784]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:51:22.247763 systemd[1]: Started session-25.scope. Jul 2 07:51:22.248667 systemd-logind[1187]: New session 25 of user core. Jul 2 07:51:22.356912 kubelet[2005]: I0702 07:51:22.356864 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-etc-cni-netd\") pod \"cilium-k4sbm\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " pod="kube-system/cilium-k4sbm" Jul 2 07:51:22.356912 kubelet[2005]: I0702 07:51:22.356902 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cilium-config-path\") pod \"cilium-k4sbm\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " pod="kube-system/cilium-k4sbm" Jul 2 07:51:22.356912 kubelet[2005]: I0702 07:51:22.356923 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-host-proc-sys-net\") pod \"cilium-k4sbm\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " pod="kube-system/cilium-k4sbm" Jul 2 07:51:22.357179 kubelet[2005]: I0702 07:51:22.356941 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cilium-run\") pod \"cilium-k4sbm\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " pod="kube-system/cilium-k4sbm" Jul 2 07:51:22.357179 kubelet[2005]: I0702 07:51:22.357013 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-hostproc\") pod \"cilium-k4sbm\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " pod="kube-system/cilium-k4sbm" Jul 2 07:51:22.357179 kubelet[2005]: I0702 07:51:22.357051 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-host-proc-sys-kernel\") pod \"cilium-k4sbm\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " pod="kube-system/cilium-k4sbm" Jul 2 07:51:22.357179 kubelet[2005]: I0702 07:51:22.357091 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-hubble-tls\") pod \"cilium-k4sbm\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " pod="kube-system/cilium-k4sbm" Jul 2 07:51:22.357179 kubelet[2005]: I0702 07:51:22.357124 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-bpf-maps\") pod \"cilium-k4sbm\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " 
pod="kube-system/cilium-k4sbm" Jul 2 07:51:22.357179 kubelet[2005]: I0702 07:51:22.357144 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-lib-modules\") pod \"cilium-k4sbm\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " pod="kube-system/cilium-k4sbm" Jul 2 07:51:22.357513 kubelet[2005]: I0702 07:51:22.357162 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-xtables-lock\") pod \"cilium-k4sbm\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " pod="kube-system/cilium-k4sbm" Jul 2 07:51:22.357513 kubelet[2005]: I0702 07:51:22.357187 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-clustermesh-secrets\") pod \"cilium-k4sbm\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " pod="kube-system/cilium-k4sbm" Jul 2 07:51:22.357513 kubelet[2005]: I0702 07:51:22.357206 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cilium-cgroup\") pod \"cilium-k4sbm\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " pod="kube-system/cilium-k4sbm" Jul 2 07:51:22.357513 kubelet[2005]: I0702 07:51:22.357227 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cni-path\") pod \"cilium-k4sbm\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " pod="kube-system/cilium-k4sbm" Jul 2 07:51:22.357513 kubelet[2005]: I0702 07:51:22.357266 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cilium-ipsec-secrets\") pod \"cilium-k4sbm\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " pod="kube-system/cilium-k4sbm" Jul 2 07:51:22.357513 kubelet[2005]: I0702 07:51:22.357286 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z29zn\" (UniqueName: \"kubernetes.io/projected/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-kube-api-access-z29zn\") pod \"cilium-k4sbm\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " pod="kube-system/cilium-k4sbm" Jul 2 07:51:22.370299 kubelet[2005]: I0702 07:51:22.370265 2005 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3b9636fb-987b-44c2-b0b1-e48c5bf00423" path="/var/lib/kubelet/pods/3b9636fb-987b-44c2-b0b1-e48c5bf00423/volumes" Jul 2 07:51:22.370666 kubelet[2005]: I0702 07:51:22.370652 2005 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9d30ca27-38f5-45f1-beb5-3b5f55148966" path="/var/lib/kubelet/pods/9d30ca27-38f5-45f1-beb5-3b5f55148966/volumes" Jul 2 07:51:22.377805 sshd[3784]: pam_unix(sshd:session): session closed for user core Jul 2 07:51:22.381346 systemd[1]: Started sshd@25-10.0.0.99:22-10.0.0.1:43696.service. Jul 2 07:51:22.381811 systemd[1]: sshd@24-10.0.0.99:22-10.0.0.1:43690.service: Deactivated successfully. Jul 2 07:51:22.382520 systemd[1]: session-25.scope: Deactivated successfully. 
Jul 2 07:51:22.385273 kubelet[2005]: E0702 07:51:22.385245 2005 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-z29zn lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-k4sbm" podUID="3c884a9f-6ac1-4dca-8625-4aa9301bc1ea" Jul 2 07:51:22.385501 systemd-logind[1187]: Session 25 logged out. Waiting for processes to exit. Jul 2 07:51:22.387203 systemd-logind[1187]: Removed session 25. Jul 2 07:51:22.412700 sshd[3799]: Accepted publickey for core from 10.0.0.1 port 43696 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:51:22.413931 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:51:22.417145 systemd-logind[1187]: New session 26 of user core. Jul 2 07:51:22.417947 systemd[1]: Started session-26.scope. Jul 2 07:51:22.759430 kubelet[2005]: I0702 07:51:22.759379 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-hubble-tls\") pod \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " Jul 2 07:51:22.759430 kubelet[2005]: I0702 07:51:22.759420 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cni-path\") pod \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " Jul 2 07:51:22.759655 kubelet[2005]: I0702 07:51:22.759445 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z29zn\" (UniqueName: \"kubernetes.io/projected/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-kube-api-access-z29zn\") pod \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " Jul 2 07:51:22.759655 kubelet[2005]: I0702 07:51:22.759465 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-etc-cni-netd\") pod \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " Jul 2 07:51:22.759655 kubelet[2005]: I0702 07:51:22.759491 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cilium-cgroup\") pod \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " Jul 2 07:51:22.759655 kubelet[2005]: I0702 07:51:22.759512 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-clustermesh-secrets\") pod \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " Jul 2 07:51:22.759655 kubelet[2005]: I0702 07:51:22.759528 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-host-proc-sys-net\") pod \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " Jul 2 07:51:22.759655 kubelet[2005]: I0702 
07:51:22.759514 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cni-path" (OuterVolumeSpecName: "cni-path") pod "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea" (UID: "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:22.759792 kubelet[2005]: I0702 07:51:22.759545 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cilium-run\") pod \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " Jul 2 07:51:22.759792 kubelet[2005]: I0702 07:51:22.759569 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea" (UID: "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:22.759792 kubelet[2005]: I0702 07:51:22.759620 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-bpf-maps\") pod \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " Jul 2 07:51:22.759792 kubelet[2005]: I0702 07:51:22.759643 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-lib-modules\") pod \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " Jul 2 07:51:22.759792 kubelet[2005]: I0702 07:51:22.759667 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cilium-ipsec-secrets\") pod \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " Jul 2 07:51:22.759792 kubelet[2005]: I0702 07:51:22.759687 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cilium-config-path\") pod \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " Jul 2 07:51:22.759928 kubelet[2005]: I0702 07:51:22.759704 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-hostproc\") pod \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " Jul 2 07:51:22.759928 kubelet[2005]: I0702 07:51:22.759721 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-xtables-lock\") pod \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\" (UID: \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " Jul 2 07:51:22.759928 kubelet[2005]: I0702 07:51:22.759745 2005 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-host-proc-sys-kernel\") pod \"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\" (UID: 
\"3c884a9f-6ac1-4dca-8625-4aa9301bc1ea\") " Jul 2 07:51:22.759928 kubelet[2005]: I0702 07:51:22.759794 2005 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:22.759928 kubelet[2005]: I0702 07:51:22.759806 2005 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:22.759928 kubelet[2005]: I0702 07:51:22.759823 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea" (UID: "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:22.760063 kubelet[2005]: I0702 07:51:22.759841 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea" (UID: "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:22.760063 kubelet[2005]: I0702 07:51:22.759855 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea" (UID: "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:22.760171 kubelet[2005]: I0702 07:51:22.760147 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea" (UID: "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:22.760218 kubelet[2005]: I0702 07:51:22.760175 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea" (UID: "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:22.760493 kubelet[2005]: I0702 07:51:22.760455 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-hostproc" (OuterVolumeSpecName: "hostproc") pod "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea" (UID: "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:22.760602 kubelet[2005]: I0702 07:51:22.760550 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea" (UID: "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:22.760699 kubelet[2005]: I0702 07:51:22.760618 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea" (UID: "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:22.763185 kubelet[2005]: I0702 07:51:22.763165 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea" (UID: "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:51:22.763311 systemd[1]: var-lib-kubelet-pods-3c884a9f\x2d6ac1\x2d4dca\x2d8625\x2d4aa9301bc1ea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz29zn.mount: Deactivated successfully. Jul 2 07:51:22.763441 kubelet[2005]: I0702 07:51:22.763363 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea" (UID: "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:51:22.764010 kubelet[2005]: I0702 07:51:22.763993 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea" (UID: "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:51:22.765030 kubelet[2005]: I0702 07:51:22.764993 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea" (UID: "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:51:22.765152 systemd[1]: var-lib-kubelet-pods-3c884a9f\x2d6ac1\x2d4dca\x2d8625\x2d4aa9301bc1ea-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 07:51:22.765225 systemd[1]: var-lib-kubelet-pods-3c884a9f\x2d6ac1\x2d4dca\x2d8625\x2d4aa9301bc1ea-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:51:22.765889 kubelet[2005]: I0702 07:51:22.765712 2005 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-kube-api-access-z29zn" (OuterVolumeSpecName: "kube-api-access-z29zn") pod "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea" (UID: "3c884a9f-6ac1-4dca-8625-4aa9301bc1ea"). InnerVolumeSpecName "kube-api-access-z29zn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:51:22.859980 kubelet[2005]: I0702 07:51:22.859934 2005 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:22.859980 kubelet[2005]: I0702 07:51:22.859968 2005 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:22.859980 kubelet[2005]: I0702 07:51:22.859979 2005 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:22.859980 kubelet[2005]: I0702 07:51:22.859989 2005 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:22.860204 kubelet[2005]: I0702 07:51:22.859998 2005 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:22.860204 kubelet[2005]: I0702 07:51:22.860012 2005 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:22.860204 kubelet[2005]: I0702 07:51:22.860022 2005 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:22.860204 kubelet[2005]: I0702 07:51:22.860032 2005 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:22.860204 kubelet[2005]: I0702 07:51:22.860042 2005 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:22.860204 kubelet[2005]: I0702 07:51:22.860051 2005 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:22.860204 kubelet[2005]: I0702 07:51:22.860059 2005 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:22.860204 kubelet[2005]: I0702 07:51:22.860068 2005 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 07:51:22.860408 kubelet[2005]: I0702 07:51:22.860077 2005 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-z29zn\" (UniqueName: \"kubernetes.io/projected/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea-kube-api-access-z29zn\") on node 
\"localhost\" DevicePath \"\"" Jul 2 07:51:23.462340 systemd[1]: var-lib-kubelet-pods-3c884a9f\x2d6ac1\x2d4dca\x2d8625\x2d4aa9301bc1ea-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:51:23.636679 systemd[1]: Removed slice kubepods-burstable-pod3c884a9f_6ac1_4dca_8625_4aa9301bc1ea.slice. Jul 2 07:51:23.731320 kubelet[2005]: I0702 07:51:23.731200 2005 topology_manager.go:215] "Topology Admit Handler" podUID="5aa019f2-b67d-41ea-b430-3bd041afe88f" podNamespace="kube-system" podName="cilium-624rl" Jul 2 07:51:23.738415 systemd[1]: Created slice kubepods-burstable-pod5aa019f2_b67d_41ea_b430_3bd041afe88f.slice. Jul 2 07:51:23.766063 kubelet[2005]: I0702 07:51:23.766020 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5aa019f2-b67d-41ea-b430-3bd041afe88f-cilium-run\") pod \"cilium-624rl\" (UID: \"5aa019f2-b67d-41ea-b430-3bd041afe88f\") " pod="kube-system/cilium-624rl" Jul 2 07:51:23.766248 kubelet[2005]: I0702 07:51:23.766091 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5aa019f2-b67d-41ea-b430-3bd041afe88f-hostproc\") pod \"cilium-624rl\" (UID: \"5aa019f2-b67d-41ea-b430-3bd041afe88f\") " pod="kube-system/cilium-624rl" Jul 2 07:51:23.766248 kubelet[2005]: I0702 07:51:23.766122 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5aa019f2-b67d-41ea-b430-3bd041afe88f-cni-path\") pod \"cilium-624rl\" (UID: \"5aa019f2-b67d-41ea-b430-3bd041afe88f\") " pod="kube-system/cilium-624rl" Jul 2 07:51:23.766248 kubelet[2005]: I0702 07:51:23.766151 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5aa019f2-b67d-41ea-b430-3bd041afe88f-etc-cni-netd\") pod \"cilium-624rl\" (UID: \"5aa019f2-b67d-41ea-b430-3bd041afe88f\") " pod="kube-system/cilium-624rl" Jul 2 07:51:23.766248 kubelet[2005]: I0702 07:51:23.766186 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5aa019f2-b67d-41ea-b430-3bd041afe88f-lib-modules\") pod \"cilium-624rl\" (UID: \"5aa019f2-b67d-41ea-b430-3bd041afe88f\") " pod="kube-system/cilium-624rl" Jul 2 07:51:23.766248 kubelet[2005]: I0702 07:51:23.766209 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5aa019f2-b67d-41ea-b430-3bd041afe88f-xtables-lock\") pod \"cilium-624rl\" (UID: \"5aa019f2-b67d-41ea-b430-3bd041afe88f\") " pod="kube-system/cilium-624rl" Jul 2 07:51:23.766443 kubelet[2005]: I0702 07:51:23.766256 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlfqd\" (UniqueName: \"kubernetes.io/projected/5aa019f2-b67d-41ea-b430-3bd041afe88f-kube-api-access-tlfqd\") pod \"cilium-624rl\" (UID: \"5aa019f2-b67d-41ea-b430-3bd041afe88f\") " pod="kube-system/cilium-624rl" Jul 2 07:51:23.766443 kubelet[2005]: I0702 07:51:23.766333 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5aa019f2-b67d-41ea-b430-3bd041afe88f-cilium-cgroup\") pod \"cilium-624rl\" (UID: 
\"5aa019f2-b67d-41ea-b430-3bd041afe88f\") " pod="kube-system/cilium-624rl" Jul 2 07:51:23.766443 kubelet[2005]: I0702 07:51:23.766398 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5aa019f2-b67d-41ea-b430-3bd041afe88f-host-proc-sys-kernel\") pod \"cilium-624rl\" (UID: \"5aa019f2-b67d-41ea-b430-3bd041afe88f\") " pod="kube-system/cilium-624rl" Jul 2 07:51:23.766549 kubelet[2005]: I0702 07:51:23.766484 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5aa019f2-b67d-41ea-b430-3bd041afe88f-cilium-config-path\") pod \"cilium-624rl\" (UID: \"5aa019f2-b67d-41ea-b430-3bd041afe88f\") " pod="kube-system/cilium-624rl" Jul 2 07:51:23.766549 kubelet[2005]: I0702 07:51:23.766543 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5aa019f2-b67d-41ea-b430-3bd041afe88f-hubble-tls\") pod \"cilium-624rl\" (UID: \"5aa019f2-b67d-41ea-b430-3bd041afe88f\") " pod="kube-system/cilium-624rl" Jul 2 07:51:23.766633 kubelet[2005]: I0702 07:51:23.766571 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5aa019f2-b67d-41ea-b430-3bd041afe88f-bpf-maps\") pod \"cilium-624rl\" (UID: \"5aa019f2-b67d-41ea-b430-3bd041afe88f\") " pod="kube-system/cilium-624rl" Jul 2 07:51:23.766633 kubelet[2005]: I0702 07:51:23.766609 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5aa019f2-b67d-41ea-b430-3bd041afe88f-clustermesh-secrets\") pod \"cilium-624rl\" (UID: \"5aa019f2-b67d-41ea-b430-3bd041afe88f\") " pod="kube-system/cilium-624rl" Jul 2 07:51:23.766716 kubelet[2005]: I0702 07:51:23.766635 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5aa019f2-b67d-41ea-b430-3bd041afe88f-cilium-ipsec-secrets\") pod \"cilium-624rl\" (UID: \"5aa019f2-b67d-41ea-b430-3bd041afe88f\") " pod="kube-system/cilium-624rl" Jul 2 07:51:23.766716 kubelet[2005]: I0702 07:51:23.766660 2005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5aa019f2-b67d-41ea-b430-3bd041afe88f-host-proc-sys-net\") pod \"cilium-624rl\" (UID: \"5aa019f2-b67d-41ea-b430-3bd041afe88f\") " pod="kube-system/cilium-624rl" Jul 2 07:51:24.040653 kubelet[2005]: E0702 07:51:24.040622 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:51:24.041226 env[1204]: time="2024-07-02T07:51:24.041187481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-624rl,Uid:5aa019f2-b67d-41ea-b430-3bd041afe88f,Namespace:kube-system,Attempt:0,}" Jul 2 07:51:24.053552 env[1204]: time="2024-07-02T07:51:24.053480946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:51:24.053552 env[1204]: time="2024-07-02T07:51:24.053521142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:51:24.053552 env[1204]: time="2024-07-02T07:51:24.053533316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:51:24.053794 env[1204]: time="2024-07-02T07:51:24.053674426Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a869508b0ebcd69d33224cfc85e033f6c51de1bc1ae5c391483db127c66a5d71 pid=3829 runtime=io.containerd.runc.v2 Jul 2 07:51:24.065325 systemd[1]: Started cri-containerd-a869508b0ebcd69d33224cfc85e033f6c51de1bc1ae5c391483db127c66a5d71.scope. Jul 2 07:51:24.084078 env[1204]: time="2024-07-02T07:51:24.084032038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-624rl,Uid:5aa019f2-b67d-41ea-b430-3bd041afe88f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a869508b0ebcd69d33224cfc85e033f6c51de1bc1ae5c391483db127c66a5d71\"" Jul 2 07:51:24.084776 kubelet[2005]: E0702 07:51:24.084743 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:51:24.087278 env[1204]: time="2024-07-02T07:51:24.087237877Z" level=info msg="CreateContainer within sandbox \"a869508b0ebcd69d33224cfc85e033f6c51de1bc1ae5c391483db127c66a5d71\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:51:24.121596 env[1204]: time="2024-07-02T07:51:24.121524972Z" level=info msg="CreateContainer within sandbox \"a869508b0ebcd69d33224cfc85e033f6c51de1bc1ae5c391483db127c66a5d71\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fcf0f244733ff0d9d50bf976983fcf47d395d8eb1abe138e5809f64b6819be5b\"" Jul 2 07:51:24.122192 env[1204]: time="2024-07-02T07:51:24.122141641Z" level=info msg="StartContainer for \"fcf0f244733ff0d9d50bf976983fcf47d395d8eb1abe138e5809f64b6819be5b\"" Jul 2 07:51:24.135194 systemd[1]: Started cri-containerd-fcf0f244733ff0d9d50bf976983fcf47d395d8eb1abe138e5809f64b6819be5b.scope. Jul 2 07:51:24.160806 env[1204]: time="2024-07-02T07:51:24.157970303Z" level=info msg="StartContainer for \"fcf0f244733ff0d9d50bf976983fcf47d395d8eb1abe138e5809f64b6819be5b\" returns successfully" Jul 2 07:51:24.165294 systemd[1]: cri-containerd-fcf0f244733ff0d9d50bf976983fcf47d395d8eb1abe138e5809f64b6819be5b.scope: Deactivated successfully. 
Jul 2 07:51:24.192148 env[1204]: time="2024-07-02T07:51:24.192101200Z" level=info msg="shim disconnected" id=fcf0f244733ff0d9d50bf976983fcf47d395d8eb1abe138e5809f64b6819be5b Jul 2 07:51:24.192148 env[1204]: time="2024-07-02T07:51:24.192143600Z" level=warning msg="cleaning up after shim disconnected" id=fcf0f244733ff0d9d50bf976983fcf47d395d8eb1abe138e5809f64b6819be5b namespace=k8s.io Jul 2 07:51:24.192148 env[1204]: time="2024-07-02T07:51:24.192152288Z" level=info msg="cleaning up dead shim" Jul 2 07:51:24.199200 env[1204]: time="2024-07-02T07:51:24.199154785Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3910 runtime=io.containerd.runc.v2\n" Jul 2 07:51:24.370512 kubelet[2005]: I0702 07:51:24.370411 2005 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3c884a9f-6ac1-4dca-8625-4aa9301bc1ea" path="/var/lib/kubelet/pods/3c884a9f-6ac1-4dca-8625-4aa9301bc1ea/volumes" Jul 2 07:51:24.394850 kubelet[2005]: E0702 07:51:24.394820 2005 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:51:24.637436 kubelet[2005]: E0702 07:51:24.637140 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:51:24.638713 env[1204]: time="2024-07-02T07:51:24.638681557Z" level=info msg="CreateContainer within sandbox \"a869508b0ebcd69d33224cfc85e033f6c51de1bc1ae5c391483db127c66a5d71\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:51:24.907137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1351776077.mount: Deactivated successfully. Jul 2 07:51:25.059373 env[1204]: time="2024-07-02T07:51:25.059310641Z" level=info msg="CreateContainer within sandbox \"a869508b0ebcd69d33224cfc85e033f6c51de1bc1ae5c391483db127c66a5d71\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8f80936883820f38296547d730be602399f432c596c95ae6cab771c43b0eacde\"" Jul 2 07:51:25.060280 env[1204]: time="2024-07-02T07:51:25.060215962Z" level=info msg="StartContainer for \"8f80936883820f38296547d730be602399f432c596c95ae6cab771c43b0eacde\"" Jul 2 07:51:25.075610 systemd[1]: Started cri-containerd-8f80936883820f38296547d730be602399f432c596c95ae6cab771c43b0eacde.scope. Jul 2 07:51:25.097364 env[1204]: time="2024-07-02T07:51:25.097317063Z" level=info msg="StartContainer for \"8f80936883820f38296547d730be602399f432c596c95ae6cab771c43b0eacde\" returns successfully" Jul 2 07:51:25.102530 systemd[1]: cri-containerd-8f80936883820f38296547d730be602399f432c596c95ae6cab771c43b0eacde.scope: Deactivated successfully. 
Jul 2 07:51:25.179657 env[1204]: time="2024-07-02T07:51:25.179512380Z" level=info msg="shim disconnected" id=8f80936883820f38296547d730be602399f432c596c95ae6cab771c43b0eacde Jul 2 07:51:25.179657 env[1204]: time="2024-07-02T07:51:25.179565161Z" level=warning msg="cleaning up after shim disconnected" id=8f80936883820f38296547d730be602399f432c596c95ae6cab771c43b0eacde namespace=k8s.io Jul 2 07:51:25.179657 env[1204]: time="2024-07-02T07:51:25.179591200Z" level=info msg="cleaning up dead shim" Jul 2 07:51:25.185515 env[1204]: time="2024-07-02T07:51:25.185476557Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3971 runtime=io.containerd.runc.v2\n" Jul 2 07:51:25.462382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f80936883820f38296547d730be602399f432c596c95ae6cab771c43b0eacde-rootfs.mount: Deactivated successfully. Jul 2 07:51:25.639780 kubelet[2005]: E0702 07:51:25.639752 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:51:25.645128 env[1204]: time="2024-07-02T07:51:25.645060080Z" level=info msg="CreateContainer within sandbox \"a869508b0ebcd69d33224cfc85e033f6c51de1bc1ae5c391483db127c66a5d71\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 07:51:25.659431 env[1204]: time="2024-07-02T07:51:25.659376177Z" level=info msg="CreateContainer within sandbox \"a869508b0ebcd69d33224cfc85e033f6c51de1bc1ae5c391483db127c66a5d71\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0e324471365a6db133e3c669403d824b0112bb859af211391357cdf80aed81fc\"" Jul 2 07:51:25.659891 env[1204]: time="2024-07-02T07:51:25.659867647Z" level=info msg="StartContainer for \"0e324471365a6db133e3c669403d824b0112bb859af211391357cdf80aed81fc\"" Jul 2 07:51:25.676310 systemd[1]: Started cri-containerd-0e324471365a6db133e3c669403d824b0112bb859af211391357cdf80aed81fc.scope. Jul 2 07:51:25.701058 env[1204]: time="2024-07-02T07:51:25.701023015Z" level=info msg="StartContainer for \"0e324471365a6db133e3c669403d824b0112bb859af211391357cdf80aed81fc\" returns successfully" Jul 2 07:51:25.702704 systemd[1]: cri-containerd-0e324471365a6db133e3c669403d824b0112bb859af211391357cdf80aed81fc.scope: Deactivated successfully. 
Jul 2 07:51:25.726753 env[1204]: time="2024-07-02T07:51:25.726390753Z" level=info msg="shim disconnected" id=0e324471365a6db133e3c669403d824b0112bb859af211391357cdf80aed81fc Jul 2 07:51:25.727081 env[1204]: time="2024-07-02T07:51:25.727028552Z" level=warning msg="cleaning up after shim disconnected" id=0e324471365a6db133e3c669403d824b0112bb859af211391357cdf80aed81fc namespace=k8s.io Jul 2 07:51:25.727194 env[1204]: time="2024-07-02T07:51:25.727173780Z" level=info msg="cleaning up dead shim" Jul 2 07:51:25.738125 env[1204]: time="2024-07-02T07:51:25.738066577Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4029 runtime=io.containerd.runc.v2\n" Jul 2 07:51:26.324070 kubelet[2005]: I0702 07:51:26.323156 2005 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T07:51:26Z","lastTransitionTime":"2024-07-02T07:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 07:51:26.462476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e324471365a6db133e3c669403d824b0112bb859af211391357cdf80aed81fc-rootfs.mount: Deactivated successfully. Jul 2 07:51:26.643343 kubelet[2005]: E0702 07:51:26.643239 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:51:26.645313 env[1204]: time="2024-07-02T07:51:26.645268238Z" level=info msg="CreateContainer within sandbox \"a869508b0ebcd69d33224cfc85e033f6c51de1bc1ae5c391483db127c66a5d71\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 07:51:26.817400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2002956661.mount: Deactivated successfully. Jul 2 07:51:26.823981 env[1204]: time="2024-07-02T07:51:26.823937058Z" level=info msg="CreateContainer within sandbox \"a869508b0ebcd69d33224cfc85e033f6c51de1bc1ae5c391483db127c66a5d71\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f94d8016cdfdbd3a291db7eddf2a9eb755af25bcceac09a0423a8ebb8365e9e5\"" Jul 2 07:51:26.824483 env[1204]: time="2024-07-02T07:51:26.824464325Z" level=info msg="StartContainer for \"f94d8016cdfdbd3a291db7eddf2a9eb755af25bcceac09a0423a8ebb8365e9e5\"" Jul 2 07:51:26.840728 systemd[1]: Started cri-containerd-f94d8016cdfdbd3a291db7eddf2a9eb755af25bcceac09a0423a8ebb8365e9e5.scope. Jul 2 07:51:26.861387 systemd[1]: cri-containerd-f94d8016cdfdbd3a291db7eddf2a9eb755af25bcceac09a0423a8ebb8365e9e5.scope: Deactivated successfully. 
Jul 2 07:51:26.863088 env[1204]: time="2024-07-02T07:51:26.863017126Z" level=info msg="StartContainer for \"f94d8016cdfdbd3a291db7eddf2a9eb755af25bcceac09a0423a8ebb8365e9e5\" returns successfully" Jul 2 07:51:26.881767 env[1204]: time="2024-07-02T07:51:26.881722134Z" level=info msg="shim disconnected" id=f94d8016cdfdbd3a291db7eddf2a9eb755af25bcceac09a0423a8ebb8365e9e5 Jul 2 07:51:26.881767 env[1204]: time="2024-07-02T07:51:26.881767411Z" level=warning msg="cleaning up after shim disconnected" id=f94d8016cdfdbd3a291db7eddf2a9eb755af25bcceac09a0423a8ebb8365e9e5 namespace=k8s.io Jul 2 07:51:26.882010 env[1204]: time="2024-07-02T07:51:26.881775336Z" level=info msg="cleaning up dead shim" Jul 2 07:51:26.887768 env[1204]: time="2024-07-02T07:51:26.887707076Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4083 runtime=io.containerd.runc.v2\n" Jul 2 07:51:27.462524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f94d8016cdfdbd3a291db7eddf2a9eb755af25bcceac09a0423a8ebb8365e9e5-rootfs.mount: Deactivated successfully. Jul 2 07:51:27.646600 kubelet[2005]: E0702 07:51:27.646549 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:51:27.651216 env[1204]: time="2024-07-02T07:51:27.651163388Z" level=info msg="CreateContainer within sandbox \"a869508b0ebcd69d33224cfc85e033f6c51de1bc1ae5c391483db127c66a5d71\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 07:51:27.967624 env[1204]: time="2024-07-02T07:51:27.967554658Z" level=info msg="CreateContainer within sandbox \"a869508b0ebcd69d33224cfc85e033f6c51de1bc1ae5c391483db127c66a5d71\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e7f1d4549b7f5099e83c8d0f7e38cc076e652f96d88938f0bc4b757e8fd709ff\"" Jul 2 07:51:27.968263 env[1204]: time="2024-07-02T07:51:27.968238193Z" level=info msg="StartContainer for \"e7f1d4549b7f5099e83c8d0f7e38cc076e652f96d88938f0bc4b757e8fd709ff\"" Jul 2 07:51:27.983339 systemd[1]: Started cri-containerd-e7f1d4549b7f5099e83c8d0f7e38cc076e652f96d88938f0bc4b757e8fd709ff.scope. 
Jul 2 07:51:28.069997 env[1204]: time="2024-07-02T07:51:28.069949338Z" level=info msg="StartContainer for \"e7f1d4549b7f5099e83c8d0f7e38cc076e652f96d88938f0bc4b757e8fd709ff\" returns successfully" Jul 2 07:51:28.260613 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 2 07:51:28.650781 kubelet[2005]: E0702 07:51:28.650676 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:51:30.041738 kubelet[2005]: E0702 07:51:30.041702 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:51:30.786896 systemd-networkd[1016]: lxc_health: Link UP Jul 2 07:51:30.810394 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 07:51:30.810973 systemd-networkd[1016]: lxc_health: Gained carrier Jul 2 07:51:32.042804 kubelet[2005]: E0702 07:51:32.042767 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:51:32.089121 kubelet[2005]: I0702 07:51:32.089069 2005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-624rl" podStartSLOduration=9.089025175 podStartE2EDuration="9.089025175s" podCreationTimestamp="2024-07-02 07:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:51:28.663316028 +0000 UTC m=+84.413660728" watchObservedRunningTime="2024-07-02 07:51:32.089025175 +0000 UTC m=+87.839369855" Jul 2 07:51:32.642830 systemd-networkd[1016]: lxc_health: Gained IPv6LL Jul 2 07:51:32.656409 kubelet[2005]: E0702 07:51:32.656374 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:51:32.749112 systemd[1]: run-containerd-runc-k8s.io-e7f1d4549b7f5099e83c8d0f7e38cc076e652f96d88938f0bc4b757e8fd709ff-runc.PFqD1C.mount: Deactivated successfully. Jul 2 07:51:36.910763 systemd[1]: run-containerd-runc-k8s.io-e7f1d4549b7f5099e83c8d0f7e38cc076e652f96d88938f0bc4b757e8fd709ff-runc.57nUm3.mount: Deactivated successfully. Jul 2 07:51:36.949797 kubelet[2005]: E0702 07:51:36.949745 2005 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42596->127.0.0.1:34683: write tcp 127.0.0.1:42596->127.0.0.1:34683: write: broken pipe Jul 2 07:51:36.953073 sshd[3799]: pam_unix(sshd:session): session closed for user core Jul 2 07:51:36.956155 systemd[1]: sshd@25-10.0.0.99:22-10.0.0.1:43696.service: Deactivated successfully. Jul 2 07:51:36.956859 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 07:51:36.957423 systemd-logind[1187]: Session 26 logged out. Waiting for processes to exit. Jul 2 07:51:36.958064 systemd-logind[1187]: Removed session 26. 
Jul 2 07:51:37.369299 kubelet[2005]: E0702 07:51:37.369260 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:51:38.369199 kubelet[2005]: E0702 07:51:38.369123 2005 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"